A user reports that our top-level .gitignore ignores several files
that are actually part of the real git repository. This is
inconvenient if you start from a downloaded tarball or zip file, and
try to make it _back_ into a git repository to work with it.
The blanket rule to ignore files called "Makefile" (on the theory that
they're autogenerated by cmake, or in the pre-cmake days, by
autotools) was also excluding two handwritten Makefiles, in 'icons'
and in 'contrib/cygtermd'. And the rule about doc/*.txt, intended to
exclude Halibut's plain-text output, also excluded doc/CMakeLists.txt.
With these exclusions in place, if you download a PuTTY source
.tar.gz, unpack it, change into the unpacked subdirectory, and run
'git init', 'git add .' and 'git commit', then 'git status --ignored'
to see what files in the tarball weren't added to the repo, you'll
find that the remaining ones are all in the 'doc' directory, and
really _are_ Halibut outputs: all the man pages (putty.1 etc), the
Windows help file putty.chm, and the plain text puttydoc.txt.
Currently, we display the cursor at the right-hand end of the pre-edit
text. That looks good for block and underline cursors, but it's a bit
weird for the vertical line, which naturally appears at the left edge
of the rightmost character. Setting ATTR_RIGHTCURS fixes this and
means that the vertical-line cursor appears at the right edge of the
rightmost character.
That also corrects a weirdness where ATTR_RIGHTCURS was leaking
through from the underlying pending-wrap state of the terminal, which
was definitely wrong.
That's the more logical location in a string more than one character
long. GTK does actually tell us where it thinks the cursor should be,
but we don't yet pay attention to that.
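In code terms the ATTR_RIGHTCURS change amounts to roughly the following
fragment; the variable names are mine, and only ATTR_RIGHTCURS is the real
attribute flag:

    /* Mark the last pre-edit cell so that a vertical-line cursor is
     * drawn at its right-hand edge rather than its left. */
    if (preedit_len > 0)
        preedit_chars[preedit_len - 1].attr |= ATTR_RIGHTCURS;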
This involves repeatedly resizing it as we decode characters. That's a
bit inefficient (at least with the current implementation of
resizeline()), but it makes it much easier to be certain that the line
is actually the right length.
I think supporting combining characters in pre-edit text will be simpler
if I can use add_cc, which operated on termlines. Also we have code for
resizing termlines, which means I might not need to count the width of
the pre-edit string accurately before allocating it.
If a character cell under the pre-edit text has a combining character,
it shouldn't be combined with a character from the pre-edit text, but
should be hidden instead. This also means that the pre-edit text
could contain combining characters if I implemented a way to put them
into it.
Now the pre-edit text is converted into a dynamically-allocated array of
termchars in term_set_preedit_text(), which slightly simplifies
do_paint(). This means that the long pre-edit generated by Ctrl+Shift+U
in GNOME now displays more or less properly. I may need a better plan
for what to do about cursor positioning, though.
Now we can cope with a single wide or narrow pre-edit character, which
is good enough for the input methods that I use. When rendering the
line that contains the cursor we set up a little array of termchars
that contains the pre-edit text and work out where it should be
displayed. Then when rendering the screen we switch between
displaying text from the real terminal and from the pre-edit string as
necessary.
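As an illustration only (this is not the real do_paint() code, and the
preedit_* names are invented), the switch during rendering amounts to
something like:

    for (int x = 0; x < cols; x++) {
        termchar d;
        if (x >= preedit_x && x < preedit_x + preedit_len)
            d = preedit_chars[x - preedit_x];   /* cell from the pre-edit text */
        else
            d = line->chars[x];                 /* cell from the real terminal */
        /* ...then compare against disptext and draw as usual... */
    }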
Ideally, we should support longer strings, combining characters, and
setting attributes. I think the current architecture should make all
of those possible, but not entirely easy.
This is approximately how it should work: term_set_preedit_text stashes
data in the terminal structure and then do_paint() renders it in place
of what's in the terminal buffer. Currently this only works for a
single narrow character, and it copies the existing attributes under the
cursor, but this might actually be enough for the UK keyboard layout in
GNOME.
We simply pass each character to term_display_graphic_char and then
put the cursor back where we found it. This works in simple cases,
but is fundamentally wrong. Really we should do this in a way that
doesn't touch the terminal state and just gets rendered on top of it
somehow.
The terminal code doesn't yet do anything with the text other than feed
it to a debugging printf. The call uses UTF-8 and expects the terminal
to copy the string because that's compatible with
gtk_im_context_get_preedit_string().
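A sketch of what the GTK side of that call can look like; the gui_data
struct, the handler wiring and the exact prototype of
term_set_preedit_text are assumptions here, not necessarily what PuTTY
really uses:

    #include <gtk/gtk.h>

    typedef struct Terminal Terminal;                  /* opaque, for the sketch */
    void term_set_preedit_text(Terminal *, const char *utf8);  /* assumed prototype */

    struct gui_data { Terminal *term; GtkIMContext *imc; };

    static void preedit_changed(GtkIMContext *imc, gpointer data)
    {
        struct gui_data *inst = (struct gui_data *)data;
        gchar *str;
        PangoAttrList *attrs;
        gint cursor_pos;

        gtk_im_context_get_preedit_string(imc, &str, &attrs, &cursor_pos);
        term_set_preedit_text(inst->term, str);   /* terminal makes its own copy */
        pango_attr_list_unref(attrs);
        g_free(str);
    }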
There was a bytes / array elements confusion in the code that prints
out the input and output Unicode strings when a test fails. It was
using a loop with index variable 'pos', which was used as an array
index, but incremented by sizeof(a character) each time, leading to
only every fourth character actually being printed.
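The bug is easy to reproduce in miniature (this is an illustrative
program, not the actual test code):

    #include <stdio.h>

    int main(void)
    {
        /* A byte-oriented buffer being reused to hold 32-bit code points. */
        unsigned chars[] = { 0x48, 0x65, 0x6C, 0x6C, 0x6F, 0x2603 };
        size_t nchars = sizeof(chars) / sizeof(*chars);

        /* Buggy: 'pos' is used as an element index but advanced by
         * sizeof(a character), so only every fourth element is printed. */
        for (size_t pos = 0; pos < nchars; pos += sizeof(unsigned))
            printf(" U+%04X", chars[pos]);
        printf("\n");

        /* Fixed: advance one array element at a time. */
        for (size_t pos = 0; pos < nchars; pos++)
            printf(" U+%04X", chars[pos]);
        printf("\n");
        return 0;
    }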
I assume this is leftover confusion from when I hadn't quite decided
whether to abuse the char-based strbuf for these Unicode character
buffers, or make a specialist type.
Discovered when I reached for this test program just now in order to
manually decompose a Unicode string. It doesn't have a convenient CLI
for that, but it was a thing I already knew where to find!
When telling front ends to paint the screen, the terminal code treats
the cursor as an attribute applied to the character cell(s) it appears
in. do_paint() detects changes to most such attributes by storing what
it last sent to the front end in term->disptext and comparing that with
what it thinks should be displayed in the window. However, before this
commit the cursor was special. Its last-drawn position was recorded in
special structure members, and parts of the display were invalidated
based on those. The cursor attributes were treated as "temporary
attributes" and
were not saved in term->disptext.
This commit regularizes this and turns the cursor attributes into normal
attributes that are stored in term->disptext. This removes a bunch of
special-case code in do_paint() because now the normal update code
handles the cursor properly, and also removes some members from the
Terminal structure. I hope it will also make future cursor-handling
changes (for instance for input method pre-editing) simpler.
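Schematically, the update loop now treats the cursor like any other
attribute. This is a model of the behaviour rather than the real
do_paint(), and draw_cell() plus the line/coordinate names are made-up
stand-ins:

    termchar want = newline->chars[x];
    if (x == curs_x && y == curs_y && cursor_visible)
        want.attr |= TATTR_ACTCURS;          /* cursor as an ordinary attribute */
    if (want.chr != displine->chars[x].chr ||
        want.attr != displine->chars[x].attr) {
        draw_cell(x, y, &want);              /* repaint only what changed */
        displine->chars[x] = want;           /* record what the window now shows */
    }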
This commit makes the required semantic changes but doesn't make the
rather more pervasive change of actually renaming the attributes from
TATTR_ to ATTR_. That will be in the next commit.
Up-to-date trunk clang has introduced a built-in operator called
_Countof, which is like the 'lenof' macro in this code (returns the
number of elements in a statically-declared array object) but with the
safety advantage that it provokes a compile error if you accidentally
use it on a pointer. In this commit I add a cmake-time check for it,
and conditional on that, switch over the definition of lenof.
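The shape of the change is roughly this; HAVE_C_COUNTOF stands in for
whatever symbol the cmake-time check really defines:

    #if defined HAVE_C_COUNTOF
    #define lenof(x) _Countof(x)      /* compile error if x is really a pointer */
    #else
    #define lenof(x) (sizeof((x)) / sizeof(*(x)))  /* silently 'works' on pointers */
    #endif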
This should add a safety check for accidental uses of lenof(pointer).
When I tested it with new clang, this whole code base compiled cleanly
with the new setting, so there aren't currently any such accidents.
clang cites C2y as the source for _Countof: WG14 document N3369
initially proposed it under a different name, and then there was a big
internet survey about naming (in which of course I voted for lenof!),
and document N3469 summarises the results, which show that the name
_Countof and/or countof won. Links:
https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3369.pdf
https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3469.htm
My reading of N3469 is that there will _either_ be _Countof
by itself, _or_ lowercase 'countof' as a new keyword, but they don't
say which. They say they _don't_ intend to do the same equivocation we
had with _Complex and _Bool, where you have a _Countof keyword and an
optional header file defining a lowercase non-underscore macro
wrapping it. But there hasn't been a whole new draft published since
N3469 yet, so I don't know what will end up in it when there is.
However, as of now, _Countof exists in at least one compiler, and that
seems like enough reason to implement it here. If it becomes 'countof'
in the real standard, then we can always change over later. (And in
that case it would probably make sense to rename the macro throughout
the code base to align with what will become the new standard usage.)
GtkIMContext has focus_in and focus_out methods for telling it when the
corresponding widget gains or loses keyboard focus. It's not obvious to
me why these are necessary, but PuTTY now calls them when it sees
focus-in and focus-out events for the terminal window. Somehow, this
has caused Hangul input to start working in PuTTY. I can't yet
see what I'm typing for lack of proper preedit support, though.
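Roughly, the new calls sit in the terminal window's focus event handler,
something like this (the struct and handler shown are illustrative
rather than PuTTY's exact code):

    #include <gtk/gtk.h>

    struct gui_data { GtkIMContext *imc; };

    static gboolean focus_event(GtkWidget *widget, GdkEventFocus *event,
                                gpointer data)
    {
        struct gui_data *inst = (struct gui_data *)data;

        if (event->in)
            gtk_im_context_focus_in(inst->imc);    /* we gained keyboard focus */
        else
            gtk_im_context_focus_out(inst->imc);   /* we lost it again */

        return FALSE;           /* let other handlers see the event too */
    }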
[ECMA-48] section 8.3.27 specifies the format of Device Control String
(DCS) commands, which are used for XTGETTCAP and other sequences.
We don't parse DCS commands, which causes the following command to
output some spurious characters:
printf '\033P+q616d\033\\'
Fix that by parsing DCS commands just like other OSC-like commands.
(Apart from the initial characters, DCS has the same format as OSC.)
We also allow 0x07 (BEL) as a sequence terminator, which does not seem
to be specified, but a lot of people use it with OSC; this is safe
because 0x07 is not allowed in the OSC/DCS payload.
[ECMA-48]: https://www.ecma-international.org/wp-content/uploads/ECMA-48_2nd_edition_august_1979.pdf
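For illustration only, here is a toy recogniser for the framing being
added; it is not PuTTY's terminal state machine, but it shows the
effect: everything between ESC P and ST (ESC \), or a BEL, is swallowed
instead of being echoed.

    #include <stdio.h>

    int main(void)
    {
        enum { NORMAL, SAW_ESC, IN_DCS, IN_DCS_ESC } state = NORMAL;
        int c;

        while ((c = getchar()) != EOF) {
            switch (state) {
              case NORMAL:
                if (c == 0x1B) state = SAW_ESC;
                else putchar(c);                     /* ordinary output */
                break;
              case SAW_ESC:
                if (c == 'P') state = IN_DCS;        /* start of a DCS payload */
                else { putchar(0x1B); putchar(c); state = NORMAL; }
                break;
              case IN_DCS:
                if (c == 0x1B) state = IN_DCS_ESC;
                else if (c == 0x07) state = NORMAL;  /* BEL terminator, as with OSC */
                break;                               /* otherwise discard payload */
              case IN_DCS_ESC:
                state = (c == '\\') ? NORMAL : IN_DCS;  /* ESC \ is ST */
                break;
            }
        }
        return 0;
    }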
It was a bit far to the right, looking as if it was at risk of falling
off. Now I've moved it as far left as it will go without the top right
corner of the computer monitor peeking out from behind it.
If I'm going to use this as a means of generating bitmap icons at
large sizes, I want it to support all the same modes as the existing
bitmap script. So this adds a mode to the SVG generator that produces
the same black and white colour scheme as the existing monochrome
bitmap icons.
(Plus, who knows, the black and white SVGs might come in useful for
other purposes. Printing as a logo on black-and-white printers springs
to mind.)
The existing monochrome icons aren't greyscale: all colours are
literally either black or white, except for the cardboard box in the
installer icon, which is halftoned. Here I've rendered that box as
mid-grey. When I convert the rendered SVG output to an actual
1-bit (plus alpha) image, I'll have to redo that halftoning.
It looked nasty that the back corner of the monitor didn't line up
exactly with the outline of the system box behind it. Now I choose the
y offset between the two components to ensure it does. Also adjusted
the monitor's depth so that it fits better with the new alignment.
We weren't building _all_ the icons in true-colour mode, because most
don't change anyway. The installer ones do, so let's build them; that
works better with the preview page.
A user reported recently that if you connect to a Telnet server via a
proxy that requires authentication, and enter the auth details
manually in the PuTTY terminal window, then the entire Telnet session
is shown with trust sigils to its left.
This happens because telnet.c calls seat_set_trust_status(false) as
soon as it's called new_connection() to make the Socket. But the
interactive proxy authentication dialogue hasn't happened yet, at that
point. So the proxy resets the trust status to true and asks for a
username and password, and then nothing ever resets it to false,
because telnet.c thought it had already done that.
The solution is to defer the Telnet backend's change of trust status
to when we get the notification that the socket is properly connected,
which arrives via plug_log(PLUGLOG_CONNECT_SUCCESS).
The same bug occurs in raw.c and supdup.c, but not in rlogin.c,
because Rlogin has an initial authentication exchange known to the
protocol, and already delays resetting the trust status until after
that has concluded.
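In outline, the fix moves the call into the backend's Plug log callback;
the parameter list here is from memory and the variable names are
illustrative, so treat it as a sketch rather than the exact diff:

    static void telnet_log(Plug *plug, PlugLogType type, SockAddr *addr,
                           int port, const char *error_msg, int error_code)
    {
        Telnet *telnet = container_of(plug, Telnet, plug);
        /* ...existing logging behaviour... */
        if (type == PLUGLOG_CONNECT_SUCCESS) {
            /* The proxy's interactive prompts are over by now, so this
             * is the first safe moment to mark output as untrusted. */
            seat_set_trust_status(telnet->seat, false);
        }
    }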
Colin Watson reported a build failure in the AArch64 Debian build of
PuTTY 0.83:
gcc now defaults to enabling branch protection using AArch64 pointer
authentication, if the target architecture version supports it.
Debian's base supported architecture does not, but Armv8.4-A does. So
when I changed the compile flags for enable_dit.c to add
-march=armv8.4-a, it didn't _just_ allow me to write the 'msr dit, %0'
instruction in my asm statement; it also unexpectedly turned on
pointer authentication in the containing function. That caused a
SIGILL when running on a pre-Armv8.4-A CPU: although the code
correctly skipped the instruction that set DIT, it was already inside
enable_dit() at that point, and couldn't avoid going through the
unsupported 'retaa' instruction, which tries to check an auth code on
the return address.
An obvious approach would be to add -mbranch-protection=none to the
compile flags for enable_dit.c. Another approach is to leave the
_compiler_ flags alone, and change the architecture in the assembler,
either via a fiddly -Wa,... option or by putting a .arch directive
inside the asm statement. But both have downsides. Turning off branch
protection is fine for the Debian build, but has the unwanted side
effect of turning it off (in that one function) even in builds
targeting a later architecture which _did_ want branch protection. And
changing the assembler's architecture risks changing it _down_ instead
of up, again perhaps invalidating other instructions generated by the
compiler (for instance, if some later security feature is introduced that gcc
also wants to turn on by default).
So instead I've taken the much simpler approach of not bothering to
change the target architecture at all, and instead generating the move
into DIT by hardcoding its actual instruction encoding. This meant I
also had to force the input value into a specific register, but I
don't think that does any harm (not _even_ wasting an extra
instruction in codegen). Now we should avoid interfering with any
security features the compiler wants to turn on or off: all of that
should be independent of the instruction I really wanted.
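A minimal sketch of the trick, assuming x0 as the pinned register and
bit 24 as the DIT field of the operand (the real enable_dit.c may
choose differently): 0xd51b42a0 is the encoding of 'msr dit, x0', so
the compiler and assembler never see an Armv8.4-A-only mnemonic.

    #include <stdint.h>

    static inline void set_dit_sketch(void)
    {
    #if defined(__aarch64__)
        register uint64_t dit_operand __asm__("x0") = (uint64_t)1 << 24;
        __asm__ volatile (".inst 0xd51b42a0"   /* msr dit, x0 */
                          : /* no outputs */
                          : "r" (dit_operand));
    #endif
    }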
We used to have a practice of \IM-ing every command-line option for the
index, but haven't kept it up.
Add \IM entries for all existing indexed command-line options, plus
some related tidying.
(That's Halibut's non-breaking hyphen.)
Triggered by noticing that the changes in 54f6fefe61 happened to come
out badly in the text-only rendering, but I noticed there were many more
instances in the main docs where non-breaking hyphens would help.
After all these years, this checklist is _still_ hard for me to get
right. In the 0.83 runup this month, I prepared everything about the
RC build in advance, but nothing about the announcements, website
updates etc, and had to do all of that on release day.
So I've completely removed the section "Preparing to make the
release", which was ambiguous about whether it's done in advance or on
the day. Now all the text parts (website, wishlist, announcements) are
folded into the "make a release candidate" section, in the hope that
I'll remember to do them all at the same time, which should mean:
- people have a few days to review the text _and_ test the RC build
- because they go together, I also remember to revise the text if a
new RC build is needed (e.g. mention whatever extra fix it has).
The "actual release procedure" section is now down to _only_ the
things I have to do on the day, which is basically uploading
everything, going live, and communicating the release.
Spotted by Coverity: we've just allocated a strbuf to hold the output
of the classical half of the hybrid key exchange, but if that output
isn't generated due to some kind of failure, we forgot to free the
strbuf on exit.
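In outline the fix is just to release the buffer on the failure path
too; the surrounding names here are invented, and only strbuf_new and
strbuf_free are the real API:

    strbuf *classical_out = strbuf_new();
    if (!run_classical_kex(classical_out)) {   /* hypothetical failure case */
        strbuf_free(classical_out);            /* previously leaked here */
        return false;
    }
    /* ...otherwise use classical_out and free it later, as before... */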
This was introduced in commit e7acb9f6968d482, as a side effect of
also wanting to call wmemchr to find a NUL wide character in the
buffer returned from GetDlgItemTextW. But the previous commit has
superseded that code, so now we don't use wmemchr in this code base
any more. Remove the machinery that provides it, saving a now-useless
cmake configure-time check.
This rewrite, due to SATO Kentaro, uses GetWindowTextLength (which I
hadn't known about) to find the correct size to allocate the buffer
the first time, avoiding the need to keep growing it until a call to
GetDlgItemText doesn't have to truncate the result.
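A sketch of the approach, using plain malloc rather than PuTTY's
allocators and an invented helper name:

    #include <windows.h>
    #include <stdlib.h>

    /* Ask for the length up front, so the buffer is right first time. */
    static wchar_t *get_dlg_item_text_alloc(HWND hwnd, int id)
    {
        int len = GetWindowTextLengthW(GetDlgItem(hwnd, id));
        wchar_t *ret = malloc((len + 1) * sizeof(wchar_t));   /* +1 for the NUL */
        if (ret)
            GetDlgItemTextW(hwnd, id, ret, len + 1);
        return ret;
    }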