This removes the case where we draw into a Cairo surface and then copy
the results into a GdkPixmap. Now, if we've got a GdkPixmap, we just
draw into it directly using Cairo. This vastly reduces the number of
CopyArea operations needed to draw on the screen.
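Roughly, the new shape of the drawing code is this (a sketch assuming the GTK 2 API; the function and fill are illustrative, not PuTTY's actual code):

    #include <gdk/gdk.h>

    /* Draw straight into an existing GdkPixmap with Cairo, instead of
     * rendering to a separate Cairo surface and copying the result across. */
    static void fill_pixmap(GdkPixmap *pixmap, int w, int h)
    {
        cairo_t *cr = gdk_cairo_create(GDK_DRAWABLE(pixmap));
        cairo_set_source_rgb(cr, 0, 0, 0);
        cairo_rectangle(cr, 0, 0, w, h);
        cairo_fill(cr);
        cairo_destroy(cr);
    }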
I just found cairo_paint() while wandering the Cairo documentation. It
simply fills the entire clip region with data from the current source,
which is precisely what draw_area() wants to do. This is simpler for us
than requesting the bounding rectangle of the clipping region and then
filling it, and as far as I can tell the clipping rectangle generally
covers the whole window anyway.
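For illustration, a hedged sketch of what an expose handler can now do (the names are assumed, not PuTTY's real ones): whatever clip region the expose event set up, cairo_paint() fills all of it from the current source.

    #include <cairo.h>

    /* Repaint the exposed area: no need to query the clip's bounding
     * rectangle; cairo_paint() fills the whole current clip region. */
    static void draw_area(cairo_t *cr, cairo_surface_t *backing)
    {
        cairo_set_source_surface(cr, backing, 0, 0);
        cairo_paint(cr);
    }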
Rather than constructing a transformation matrix piece by piece (with
very branchy code), draw_stretch_before now just calls cairo_translate()
and cairo_scale() with values that are almost-obviously correct.
Also, rather than stashing and restoring the transformation matrix
ourselves, it seems simpler to use cairo_save() and cairo_restore().
That requires that draw_stretch_before() and draw_stretch_after() be
called strictly in pairs, but they are, so that's OK.
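A hedged sketch of the pairing (the real functions take PuTTY's own drawing-context type rather than a bare cairo_t, and the parameter names here are illustrative):

    #include <cairo.h>

    static void draw_stretch_before(cairo_t *cr, double x, double y,
                                    double wfactor, double hfactor)
    {
        cairo_save(cr);
        cairo_translate(cr, x, y);          /* scale about the cell's origin */
        cairo_scale(cr, wfactor, hfactor);  /* e.g. 2,1 for double-width */
        cairo_translate(cr, -x, -y);
    }

    static void draw_stretch_after(cairo_t *cr)
    {
        cairo_restore(cr);   /* must be called exactly once per _before() */
    }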
According to the X specs, WhitePixel and BlackPixel refer to permanent
entries in the default colourmap. This means that they're not
necessarily appropriate for use with a Drawable with a different depth
than the root window. When drawing to a Pixmap that will be used as a
1-bit alpha mask by Cairo, the correct values are simply 0
(transparent) and 1 (opaque).
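A sketch of the idea in plain Xlib (the helper and its name are made up for illustration):

    #include <X11/Xlib.h>

    /* Prepare a depth-1 Pixmap destined to be a Cairo alpha mask: use
     * literal 0 (transparent) and 1 (opaque), not BlackPixel()/WhitePixel(),
     * which only describe entries in the default colourmap. */
    static Pixmap make_glyph_mask(Display *dpy, Drawable parent, int w, int h)
    {
        Pixmap pm = XCreatePixmap(dpy, parent, w, h, 1 /* depth */);
        GC gc = XCreateGC(dpy, pm, 0, NULL);

        XSetForeground(dpy, gc, 0);               /* clear to transparent */
        XFillRectangle(dpy, pm, gc, 0, 0, w, h);
        XSetForeground(dpy, gc, 1);               /* draw the glyph opaque */
        /* ... XDrawString / XFillRectangle calls for the glyph go here ... */

        XFreeGC(dpy, gc);
        return pm;
    }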
On Windows, when a blinking cursor is enabled, PuTTY uses the system
default blink time from GetCaretBlinkTime(), which can be configured in
Control Panel.
Control Panel allows caret blinking to be disabled entirely, in which
case GetCaretBlinkTime() returns INFINITE. PuTTY wasn't handling this
case; if cursor blinking was enabled in PuTTY but disabled at the system
level, the terminal window would hang, blinking the cursor madly.
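The fix is roughly this shape (a hedged sketch; the timer arrangement is only described in a comment and is not PuTTY's real code):

    #include <windows.h>

    static void schedule_cursor_blink(void)
    {
        UINT blinktime = GetCaretBlinkTime();
        if (blinktime == INFINITE)
            return;   /* blinking disabled at the system level: don't blink */
        /* otherwise arrange a timer callback in 'blinktime' milliseconds */
    }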
This commit fixes a problem that Simon observed when using an X bitmap
font with Cairo and making a line double-width or double-size. When
using Cairo, PuTTY implements double-width and double-size by just
asking Cairo to scale all its drawing operations. This works fine
with outline fonts, but when using a bitmap font the results are a bit
fuzzy. This appears to be because Cairo's default is to use bilinear
interpolation when scaling an image, which is fine for photos but not
so good for fonts.
In this commit, I decompose PuTTY's cairo_mask_surface() call into its
component parts so that I can set the mask pattern's filter to
CAIRO_FILTER_NEAREST before using it. That solves the problem, but it
suggests that maybe we should be caching the pattern rather than the
surface.
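Concretely, the decomposition looks something like this (a sketch, assuming the mask lives in a cairo_surface_t as it currently does):

    #include <cairo.h>

    /* Equivalent of cairo_mask_surface(cr, mask, x, y), but with the
     * pattern's filter forced to nearest-neighbour for crisp scaling. */
    static void mask_surface_nearest(cairo_t *cr, cairo_surface_t *mask,
                                     double x, double y)
    {
        cairo_pattern_t *pattern = cairo_pattern_create_for_surface(mask);
        cairo_matrix_t shift;

        cairo_pattern_set_filter(pattern, CAIRO_FILTER_NEAREST);
        cairo_matrix_init_translate(&shift, -x, -y);
        cairo_pattern_set_matrix(pattern, &shift);
        cairo_mask(cr, pattern);
        cairo_pattern_destroy(pattern);
    }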
When using an X server-side font with Cairo rendering, PuTTY takes the
rather horrible approach of rendering each glyph it uses into a depth-1
Pixmap and then copying the result into a Cairo surface that it uses
every time it wants to display that glyph.
Heretofore, the conversion of the Pixmap into a Cairo surface was done
by downloading it using XGetImage() and then manually re-arranging the
bits into a suitable form for Cairo. But Cairo has a way of turning an
X Drawable (including a Pixmap) into a surface, and then it's just a
case of copying one surface to another using cairo_paint(). So that's
what PuTTY does now and the process is a little less unpleasant than it
was.
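In outline, the new conversion is something like this (a hedged sketch; the real code also has the glyph cache's bookkeeping to deal with):

    #include <cairo.h>
    #include <cairo-xlib.h>
    #include <X11/Xlib.h>

    /* Wrap a depth-1 glyph Pixmap as a Cairo surface and copy it into the
     * cached surface, replacing the XGetImage()-and-rearrange approach. */
    static void copy_bitmap_to_cache(Display *dpy, Screen *scr, Pixmap glyph,
                                     int w, int h, cairo_surface_t *cache)
    {
        cairo_surface_t *src =
            cairo_xlib_surface_create_for_bitmap(dpy, glyph, scr, w, h);
        cairo_t *cr = cairo_create(cache);

        cairo_set_source_surface(cr, src, 0, 0);
        cairo_paint(cr);

        cairo_destroy(cr);
        cairo_surface_destroy(src);
    }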
The Cairo documentation is clear that cairo_set_antialias() only
affects shape drawing and not text rendering. To change
anti-aliasing settings for font rendering you need
cairo_font_options_set_antialias() instead. Therefore the comment
can be a bit more certain than just describing what Cairo "appears"
to do.
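For reference, the font-specific call looks like this (a minimal sketch, not PuTTY's actual configuration code):

    #include <cairo.h>

    /* cairo_set_antialias() only affects shape drawing; text rendering
     * takes its anti-aliasing setting from the font options instead. */
    static void set_text_antialiasing(cairo_t *cr, int enabled)
    {
        cairo_font_options_t *fo = cairo_font_options_create();
        cairo_font_options_set_antialias(
            fo, enabled ? CAIRO_ANTIALIAS_GRAY : CAIRO_ANTIALIAS_NONE);
        cairo_set_font_options(cr, fo);
        cairo_font_options_destroy(fo);
    }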
A user reports that our top-level .gitignore ignores several files
that are actually part of the real git repository. This is
inconvenient if you start from a downloaded tarball or zip file, and
try to make it _back_ into a git repository to work with it.
The blanket rule to ignore files called "Makefile" (on the theory that
they're autogenerated by cmake, or in the pre-cmake days, by
autotools) was also excluding two handwritten Makefiles, in 'icons'
and in 'contrib/cygtermd'. And the rule about doc/*.txt, intended to
exclude Halibut's plain-text output, also excluded doc/CMakeLists.txt.
With these exclusions in place, if you download a PuTTY source
.tar.gz, unpack it, change into the unpacked subdirectory, and run
'git init', 'git add .' and 'git commit', then 'git status --ignored'
to see what files in the tarball weren't added to the repo, you'll
find that the remaining ones are all in the 'doc' directory, and
really _are_ Halibut outputs: all the man pages (putty.1 etc), the
Windows help file putty.chm, and the plain text puttydoc.txt.
Currently, we display the cursor at the right-hand end of the pre-edit
text. That looks good for block and underline cursors, but it's a bit
weird for the vertical line, which naturally appears at the left edge
of the rightmost character. Setting ATTR_RIGHTCURS fixes this and
means that the vertical-line cursor appears at the right edge of the
rightmost character.
That also corrects a weirdness where ATTR_RIGHTCURS was leaking
through from the underlying pending-wrap state of the terminal, which
was definitely wrong.
That's the more logical location for a string more than one character
long. GTK does actually tell us where it thinks the cursor should be,
but we don't yet pay attention to that.
This involves repeatedly resizing it as we decode characters. That's a
bit inefficient (at least with the current implementation of
resizeline()), but it makes it much easier to be certain that the line
is actually the right length.
I think supporting combining characters in pre-edit text will be simpler
if I can use add_cc, which operates on termlines. Also, we have code for
resizing termlines, which means I might not need to count the width of
the pre-edit string accurately before allocating it.
If a character cell under the pre-edit text has a combining character,
it shouldn't be combined with a character from the pre-edit text, but
should be hidden instead. This also means that the pre-edit text
could contain combining characters if I implemented a way to put them
into it.
Now the pre-edit text is converted into a dynamically-allocated array of
termchars in term_set_preedit_text(), which slightly simplifies
do_paint(). This means that the long pre-edit generated by Ctrl+Shift+U
in GNOME now displays more or less properly. I may need a better plan
for what to do about cursor positioning, though.
Now we can cope with a single wide or narrow pre-edit character, which
is good enough for the input methods that I use. When rendering the
line that contains the cursor we set up a little array of termchars
that contains the pre-edit text and work out where it should be
displayed. Then when rendering the screen we switch between
displaying text from the real terminal and from the pre-edit string as
necessary.
Ideally, we should support longer strings, combining characters, and
setting attributes. I think the current architecture should make all
of those possible, but not entirely easy.
This is approximately how it should work: term_set_preedit_text stashes
data in the terminal structure and then do_paint() renders it in place
of what's in the terminal buffer. Currently this only works for a
single narrow character, and it copies the existing attributes under the
cursor, but this might actually be enough for the UK keyboard layout in
GNOME.
We simply pass each character to term_display_graphic_char and then
put the cursor back where we found it. This works in simple cases,
but is fundamentally wrong. Really we should do this in a way that
doesn't touch the terminal state and just gets rendered on top of it
somehow.
The terminal code doesn't yet do anything with the text other than feed
it to a debugging printf. The call uses UTF-8 and expects the terminal
to copy the string because that's compatible with
gtk_im_context_get_preedit_string().
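On the GTK side, the retrieval looks roughly like this (a sketch of a preedit-changed handler; for now the text just goes to a debugging printf, as described above):

    #include <gtk/gtk.h>
    #include <stdio.h>

    static void preedit_changed(GtkIMContext *imc, gpointer user_data)
    {
        gchar *text;
        PangoAttrList *attrs;
        gint cursor_pos;

        gtk_im_context_get_preedit_string(imc, &text, &attrs, &cursor_pos);
        /* The string is UTF-8 and owned by us, so the terminal must take
         * its own copy; here it just gets printed. */
        printf("preedit: \"%s\" (cursor at %d)\n", text, cursor_pos);
        pango_attr_list_unref(attrs);
        g_free(text);
        (void)user_data;
    }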
There was a confusion between bytes and array elements in the code that
prints out the input and output Unicode strings when a test fails. It
used a loop whose index variable 'pos' was used as an array index but
incremented by sizeof(a character) each time, leading to only every
fourth character actually being printed.
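For illustration only (this is not the real test code), the bug was of this shape:

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    /* 'pos' indexes array elements, so it must advance by 1 per character;
     * the broken version advanced it by sizeof(uint32_t), printing only
     * every fourth character. */
    static void dump_chars(const uint32_t *chars, size_t nchars)
    {
        for (size_t pos = 0; pos < nchars; pos++)
            printf("U+%04X ", (unsigned)chars[pos]);
        printf("\n");
    }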
I assume this is leftover confusion from when I hadn't quite decided
whether to abuse the char-based strbuf for these Unicode character
buffers, or make a specialist type.
Discovered when I reached for this test program just now in order to
manually decompose a Unicode string. It doesn't have a convenient CLI
for that, but it was a thing I already knew where to find!
When telling front ends to paint the screen, the terminal code treats
the cursor as an attribute applied to the character cell(s) it appears
in. do_paint() detects changes to most such attributes by storing what
it last sent to the front end in term->disptext and comparing that with
what it thinks should be displayed in the window. However, before this
commit the cursor was special: its last-drawn position was recorded in
special structure members, and parts of the display were invalidated
based on those. The cursor attributes were treated as "temporary attributes" and
were not saved in term->disptext.
This commit regularizes this and turns the cursor attributes into normal
attributes that are stored in term->disptext. This removes a bunch of
special-case code in do_paint() because now the normal update code
handles the cursor properly, and also removes some members from the
Terminal structure. I hope it will also make future cursor-handling
changes (for instance for input method pre-editing) simpler.
This commit makes the required semantic changes but doesn't make the
rather more pervasive change of actually renaming the attributes from
TATTR_ to ATTR_. That will be in the next commit.
Up-to-date trunk clang has introduced a built-in operator called
_Countof, which is like the 'lenof' macro in this code (returns the
number of elements in a statically-declared array object) but with the
safety advantage that it provokes a compile error if you accidentally
use it on a pointer. In this commit I add a cmake-time check for it,
and conditional on that, switch over the definition of lenof.
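The switch-over amounts to something like this (a sketch; HAVE_C_COUNTOF stands in for whatever symbol the cmake check actually defines):

    #if defined HAVE_C_COUNTOF
    #define lenof(x) _Countof(x)   /* compile error if x is really a pointer */
    #else
    #define lenof(x) (sizeof((x)) / sizeof(*(x)))
    #endif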
This should add a safety check for accidental uses of lenof(pointer).
When I tested it with new clang, this whole code base compiled cleanly
with the new setting, so there aren't currently any such accidents.
clang cites C2y as the source for _Countof: WG14 document N3369
initially proposed it under a different name, and then there was a big
internet survey about naming (in which of course I voted for lenof!),
and document N3469 summarises the results, which show that the name
_Countof and/or countof won. Links:
https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3369.pdf
https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3469.htm
My reading of N3469 seems to say that there will _either_ be _Countof
by itself, _or_ lowercase 'countof' as a new keyword, but they don't
say which. They say they _don't_ intend to do the same equivocation we
had with _Complex and _Bool, where you have a _Countof keyword and an
optional header file defining a lowercase non-underscore macro
wrapping it. But there hasn't been a new whole draft published since
N3469 yet, so I don't know what will end up in it when there is.
However, as of now, _Countof exists in at least one compiler, and that
seems like enough reason to implement it here. If it becomes 'countof'
in the real standard, then we can always change over later. (And in
that case it would probably make sense to rename the macro throughout
the code base to align with what will become the new standard usage.)
GtkIMContext has focus_in and focus_out methods for telling it when the
corresponding widget gains or loses keyboard focus. It's not obvious to
me why these are necessary, but PuTTY now calls them when it sees
focus-in and focus-out events for the terminal window. Somehow, this
has caused Hangul input to start working in PuTTY. I can't yet
see what I'm typing for lack of proper preedit support, though.
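The wiring is roughly this (a sketch; in the real code the GtkIMContext lives in PuTTY's per-window structure rather than arriving as user_data):

    #include <gtk/gtk.h>

    static gboolean focus_in_event(GtkWidget *widget, GdkEventFocus *event,
                                   gpointer user_data)
    {
        gtk_im_context_focus_in(GTK_IM_CONTEXT(user_data));
        return FALSE;   /* allow other handlers to run as well */
    }

    static gboolean focus_out_event(GtkWidget *widget, GdkEventFocus *event,
                                    gpointer user_data)
    {
        gtk_im_context_focus_out(GTK_IM_CONTEXT(user_data));
        return FALSE;
    }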
[ECMA-48] section 8.3.27 specifies the format of Device Control String
(DCS) commands which are used for XTGETTCAP and other sequences.
We don't currently parse DCS commands, which causes the following
command to wrongly output some characters:
printf '\033P+q616d\033\\'
Fix that by parsing DCS commands just like other OSC-like commands.
(Apart from the initial characters, DCS has the same format as OSC.)
We also allow 0x07 as a sequence terminator, which does not seem to be
specified, but which a lot of people use with OSC; that's fine because
0x07 is not allowed in the OSC/DCS payload.
[ECMA-48]: https://www.ecma-international.org/wp-content/uploads/ECMA-48_2nd_edition_august_1979.pdf
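As a standalone illustration of the terminator rule (not PuTTY's actual state machine), a parser can treat both ST and BEL as ending an OSC or DCS payload:

    #include <stdbool.h>

    /* 'prev' is the previous byte of the control string, 'c' the current one. */
    static bool is_osc_dcs_terminator(char prev, char c)
    {
        if (c == 0x07)                    /* BEL: unofficial but widely used */
            return true;
        if (prev == 0x1b && c == '\\')    /* ESC \ is the official ST */
            return true;
        return false;
    }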
It was a bit far to the right, looking as if it was at risk of falling
off. Now I've moved it as far left as it will go without the top right
corner of the
computer monitor peeking out from behind it.
If I'm going to use this as a means of generating bitmap icons at
large sizes, I want it to support all the same modes as the existing
bitmap script. So this adds a mode to the SVG generator that produces
the same black and white colour scheme as the existing monochrome
bitmap icons.
(Plus, who knows, the black and white SVGs might come in useful for
other purposes. Printing as a logo on black-and-white printers springs
to mind.)
The existing monochrome icons aren't greyscale: all colours are
literally either black or white, except for the cardboard box in the
installer icon, which is halftoned. Here I've rendered that box as
mid-grey. When I convert the rendered SVG output to an actual
1-bit (plus alpha) image, I'll have to redo that halftoning.
It looked nasty that the back corner of the monitor didn't line up
exactly with the outline of the system box behind it. Now I choose the
y offset between the two components to ensure it does. Also adjusted
the monitor's depth so that it fits better with the new alignment.
We weren't building _all_ the icons in true-colour mode, because most
don't change anyway. The installer ones do, so let's build them; that
works better with the preview page.
A user reported recently that if you connect to a Telnet server via a
proxy that requires authentication, and enter the auth details
manually in the PuTTY terminal window, then the entire Telnet session
is shown with trust sigils to its left.
This happens because telnet.c calls seat_set_trust_status(false) as
soon as it's called new_connection() to make the Socket. But at that
point, the interactive proxy authentication dialogue hasn't happened
yet. So the proxy resets the trust status to true and asks for a
username and password, and then nothing ever resets it to false,
because telnet.c thought it had already done that.
The solution is to defer the Telnet backend's change of trust status
to when we get the notification that the socket is properly connected,
which arrives via plug_log(PLUGLOG_CONNECT_SUCCESS).
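As a self-contained mock of the ordering (the real Plug log callback has a richer signature; only PLUGLOG_CONNECT_SUCCESS and seat_set_trust_status are names taken from this commit, the rest is illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    enum { PLUGLOG_CONNECT_SUCCESS = 1 };  /* other PlugLogType values omitted */

    static void seat_set_trust_status(bool trusted)
    {
        printf("seat trust status -> %s\n", trusted ? "trusted" : "untrusted");
    }

    /* Called by the network layer; only reset the trust status once the
     * connection (including any interactive proxy auth) has succeeded. */
    static void telnet_plug_log(int type)
    {
        if (type == PLUGLOG_CONNECT_SUCCESS)
            seat_set_trust_status(false);
    }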
The same bug occurs in raw.c and supdup.c, but not in rlogin.c,
because Rlogin has an initial authentication exchange known to the
protocol, and already delays resetting the trust status until after
that has concluded.
Colin Watson reported that a build failure occurred in the AArch64
Debian build of PuTTY 0.83:
gcc now defaults to enabling branch protection using AArch64 pointer
authentication, if the target architecture version supports it.
Debian's base supported architecture does not, but Armv8.4-A does. So
when I changed the compile flags for enable_dit.c to add
-march=armv8.4-a, it didn't _just_ allow me to write the 'msr dit, %0'
instruction in my asm statement; it also unexpectedly turned on
pointer authentication in the containing function, which caused a
SIGILL when running on a pre-Armv8.4-A CPU, because although the code
correctly skipped the instruction that set DIT, it was already inside
enable_dit() at that point and couldn't avoid going through the
unsupported 'retaa' instruction which tries to check an auth code on
the return address.
An obvious approach would be to add -mbranch-protection=none to the
compile flags for enable_dit.c. Another approach is to leave the
_compiler_ flags alone, and change the architecture in the assembler,
either via a fiddly -Wa,... option or by putting a .arch directive
inside the asm statement. But both have downsides. Turning off branch
protection is fine for the Debian build, but has the unwanted side
effect of turning it off (in that one function) even in builds
targeting a later architecture which _did_ want branch protection. And
changing the assembler's architecture risks changing it _down_ instead
of up, again perhaps invalidating other instructions generated by the
compiler (like if some later security feature is introduced that gcc
also wants to turn on by default).
So instead I've taken the much simpler approach of not bothering to
change the target architecture at all, and instead generating the move
into DIT by hardcoding its actual instruction encoding. This meant I
also had to force the input value into a specific register, but I
don't think that does any harm (not _even_ wasting an extra
instruction in codegen). Now we should avoid interfering with any
security features the compiler wants to turn on or off: all of that
should be independent of the instruction I really wanted.
We used to have a practice of \IM-ing every command-line option for the
index, but haven't kept it up.
Add these for all existing indexed command-line options, plus some
related tidying.
(That's Halibut's non-breaking hyphen.)
Triggered by noticing that the changes in 54f6fefe61 happened to come
out badly in the text-only rendering; it then turned out there were many
more instances in the main docs where non-breaking hyphens would help.