A KDE user observed that if you 'dock' a GTK PuTTY window to the side
of the screen (by dragging it to the right-hand edge, causing it to
half-maximise over the right-hand half of the display, similarly to
Windows), and then send a terminal resize sequence, PuTTY fails the
assertion in term_resize_request_completed() which expects an
unacknowledged resize request to be currently in flight.
When drawing_area_setup() calls term_resize_request_completed() in
response to the inst->term_resize_notification_required flag, it
resets the inst->win_resize_pending flag, but doesn't reset
inst->term_resize_notification_required. As a result, the _next_ call
to drawing_area_setup will find that flag still set, and make a
duplicate call to term_resize_request_completed, after the terminal no
longer believes it's waiting for a response to a resize request. And
in this 'docked to the right-hand side of the display' state, KDE
apparently triggers two calls to drawing_area_setup() in quick
succession, making this bug manifest.
I could fix this by clearing inst->term_resize_notification_required.
But inspecting all the other call sites, it seems clear to me that my
original intention was for inst->term_resize_notification_required to
be a flag that's only meaningful if inst->win_resize_pending is set.
So I think a better fix is to conditionalise the check in
drawing_area_setup so that we don't even check
inst->term_resize_notification_required if !inst->win_resize_pending.
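A minimal sketch of the shape I mean (the surrounding details of
drawing_area_setup(), including the exact call arguments, are assumed
here rather than quoted):

    /* only consult the notification flag while a resize is pending */
    if (inst->win_resize_pending) {
        if (inst->term_resize_notification_required)
            term_resize_request_completed(inst->term);
        inst->win_resize_pending = false;
    }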
From https://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h3-Any-event-tracking:
Any-event mode is the same as button-event mode, except that all motion
events are reported, even if no mouse button is down. It is enabled by
specifying 1003 to DECSET.
Normally the front ends only report mouse events when buttons are
pressed, so we introduce a MA_MOVE event with MBT_NOTHING set to
indicate such a mouse movement.
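For reference, the mode is turned on by the sequence ESC [ ? 1003 h,
and the resulting front-end call would look something like this (a
sketch only; term_mouse()'s exact argument order is assumed):

    /* report pointer motion with no button held down */
    term_mouse(term, MBT_NOTHING, MBT_NOTHING, MA_MOVE,
               col, row, shift, ctrl, alt);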
Unlike clang, VS didn't like me using the value of one 'static const'
integer variable to compute the value of another, and complained
'initializer is not a constant'. Replaced all those variables with an
enum, which should also more reliably ensure that even an
unsophisticated compiler doesn't actually reserve data-section space
for them.
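An illustrative (made-up) example of the pattern in question:

    /* VS rejects this: 'initializer is not a constant' */
    static const int FIRST_THING = 10;
    static const int SECOND_THING = FIRST_THING + 1;

    /* accepted everywhere, and no data-section space even from an
     * unsophisticated compiler */
    enum { FIRST_THING = 10, SECOND_THING = FIRST_THING + 1 };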
A new module in 'utils' computes NFC and NFD, via a new set of data
tables generated by read_ucd.py.
The new module comes with a new test program, which can read the
NormalizationTest.txt that appears in the Unicode Character Database.
All the tests pass, as of Unicode 15.
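As a quick illustration of the two forms (standard Unicode behaviour,
nothing specific to this module): for e-acute,

    NFC  =  U+00E9                 (precomposed character)
    NFD  =  U+0065 U+0301          (e + combining acute accent)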
Now that I have a script I can easily re-run, there's no reason not to do
just that! This updates all of the new generated header files for the
UCD.zip that comes with Unicode 15.0.0.
I've re-run my bidi test suite against 15.0.0's file of test cases,
and confirmed they all pass.
The initial outputs were all deliberately inconsistent with each
other, so that each one exactly matched the existing table I was
trying to replace.
Now that I've done that check, I can clean them up. Normalised spacing and
case to be consistent; removed pointless indentation (these are now
include files, so they don't have to be indented to the same level as
the array declaration surrounding each one's #include); added a header
comment in each autogenerated file, saying that it's autogenerated,
what it's for, and who it's used by.
The currently supported version number of Unicode is also exposed in a
header file, so that I can put it in diagnostics.
This will replace the various pieces of Perl scattered throughout the
code base in comments above long boring data tables. The idea is that
those long boring tables will move into header files in the new
'unicode' directory, and will be #included from the source files that
use the tables.
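So a source file will pull in its table with something along these
lines (illustrative array and file names, not the real ones):

    static const struct unicode_range wide_ranges[] = {
        #include "unicode/wide_ranges.h"
    };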
One benefit is that I won't have to page tediously past the tables to
get to the actual code I want to edit. But more importantly, it should
now become easy to update to a new version of Unicode, by re-running
just one script and committing the changed versions of all the headers
in the 'unicode' subdir.
This version of the script regenerates six Unicode-derived tables in
the existing source code in a byte-for-byte identical form. In the
next commits I'll clean it up, commit the output, and delete the
tables from their previous locations.
(One table I _haven't_ incorporated into this system is the Arabic
shaping table in bidi.c, because my attempt to regenerate it came out
not matching the original at all. That _might_ be because the table is
based on an old Unicode standard and desperately needs updating, but
it might also be because I misunderstood how it works. So I'll leave
sorting that out for another time.)
This enables it to handle data that isn't presented as a
NUL-terminated string.
In particular, the NUL byte can appear _within_ the string and be
correctly translated to the NUL wide character. So I've been able to
remove the awkwardness in the test rig of having to include the
terminating NUL in every test to ensure NUL has been tested, and
instead insert a single explicit test for it.
Similarly to the previous commit, the simplification at the (one) call
site gives me a strong feeling of 'this is what the API should have
been all along'!
The test in question was supposed to contain the spurious UTF-8
encoding that 0xD800 would have if it were not a surrogate. But the
final continuation byte 0x80 was instead 0x00.
The test passed anyway, because ED A0 was regarded as a truncated
sequence, instead of ED A0 80 being regarded as an illegal encoding of
a surrogate, and both return the same output!
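For reference, the way those bytes arise (ordinary UTF-8 arithmetic):

    U+D800  =  1101 1000 0000 0000
           ->  1110_1101 10_100000 10_000000
            =  ED A0 80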
Previously it output to an ordinary char buffer, and returned the
number of bytes it had written. But three out of the four call sites
immediately chucked the resulting bytes into a BinarySink anyway. The
fourth, in windows/unicode.c, really is writing into successive
locations of a fixed-size buffer - but we can make that into a
BinarySink too, using the buffer_sink added in the previous commit.
So now encode_utf8() is renamed put_utf8_char, and the call sites all
look simpler than they started out.
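Roughly the shape of the change at a typical call site (a sketch, not
the exact old signature):

    /* before: encode into a temporary buffer, then hand it on */
    char buf[6];
    size_t len = encode_utf8(buf, ch);
    put_data(bs, buf, len);

    /* after: write straight into the BinarySink */
    put_utf8_char(bs, ch);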
This is one of marshal.c's small collection of handy BinarySink
adapters to existing kinds of thing, alongside stdio_sink and
bufchain_sink. It writes into a fixed-size buffer, discarding all
writes after the buffer fills up, and sets a flag to let you know if
it overflowed.
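Intended usage looks something like this (a sketch; I'm assuming the
init function and the overflow flag are spelled as shown):

    char buf[256];
    buffer_sink bs;
    buffer_sink_init(&bs, buf, sizeof(buf));
    put_byte(BinarySink_UPCAST(&bs), 0x41);
    if (bs.overflowed) {
        /* the data didn't fit: writes past the end were discarded */
    }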
There was one of these in Windows Pageant a while back, under the name
'struct PageantReply' (introduced in commit b6cbad89fc, removed
again in 98538caa39 when the named-pipe revamp made it
unnecessary). This is the same idea but centralised for reusability.
Like 5f3b743eb0, specifically reassure the user that taking the
add-to-cache action will not cause the CA that signed the key to be
trusted in any wider context, in the case where there was no previous
certified key cached. (I don't know why I missed this out before.)
When a host certificate was used outside its valid date range, we were
displaying the current time where we meant to show the relevant bound of
the validity range.
A server attempt to resize the window (for instance via DECCOLM) when
"When window is resized" was set to "Forbid resizing completely" would
cause all terminal output to be suspended, due to the resize attempt
never being acknowledged.
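For example, this is the DECCOLM request for 132-column mode, which
entails a window resize:

    ESC [ ? 3 h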
(There are other code paths like this, which I've fixed for
completeness, but I don't think they have any effect: the terminal
filters out resize attempts to the current size before this point, and
even if a server can get such a request through the SUPDUP protocol, the
test for that is wrong and will never fire -- this needs fixing
separately.)
I just happened to notice ARG1 and ARGN in the code that builds the
dispatch table in process_line(), which aren't used at all, because
they date from a previous version of the testcrypt-func.h macro
system. They were supposed to be replaced everywhere with the unified
ARG.
So why didn't the missing definition of ARG break anything? Because
ARG only ever appears in the variadic part of a FUNC_INNER call - and
for this particular trawl of testcrypt-func.h, the variadic part isn't
ever used in the macro expansion in the first place. So there's no
need to define ARG and VOID to anything at all, not even the empty
string.
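The kind of trawl in question looks roughly like this (a simplified
sketch with hypothetical helper names, not the real macro definitions);
the variadic part is never expanded at all:

    /* build the dispatch table: only the function name matters here */
    #define FUNC_INNER(outtype, fname, realname, ...) \
        add_to_dispatch_table(#fname, handle_##fname);
    #include "testcrypt-func.h"
    #undef FUNC_INNER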
We still don't build or ship a PDF PuTTY manual by default, but we may
as well conveniently expose Halibut's ability to do so.
(I don't guarantee the resulting PDF is particularly pretty -- some of
our overlong code lines do go off the right margin currently.)
The setup code for CTRL_FILESELECT and CTRL_FONTSELECT is shared,
which means it's a mistake to test ctrl->fileselect.just_button in it
without first checking which control type we're actually dealing with.
UBsan picks this up by complaining that the just_button field contains
some byte value that's illegal for a boolean. I think it's also the
cause of an intermittent assertion failure reported recently, in which
dlg_fontsel_set finds that uc->entry is NULL when it never ought to
be. If the byte from the wrong union branch happened to be 0 by sheer
bad luck, that could give rise to exactly that failure.
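The shape of the guard is roughly this (a sketch; whether the type
field is really spelled ctrl->generic.type is an assumption):

    /* only trust the fileselect branch of the union when the control
     * really is a file selector */
    bool just_button = (ctrl->generic.type == CTRL_FILESELECT &&
                        ctrl->fileselect.just_button);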
Python 3's stderr was fully-buffered when non-interactive, unlike
Python 2 and more or less everything else, until 3.9 in 2020(!):
https://bugs.python.org/issue13601
(It would be less faff to sys.stderr.reconfigure(line_buffering=True)
at the start, but that was only added in 3.7, whereas the 'flush'
argument to print() dates back to 3.3, so I chose that to minimise
the risk of version dependencies getting in the way of using this as
a working example.)
Jacob spotted that an unused -pwfile input can be accidentally used as
the answer to Plink's antispoof 'press Return to begin session'
prompt, which is unintended and confusing.
To fix that, I've made the use of a command-line password conditional
on p->to_server, the flag in a prompts_t that indicates whether the
results of the prompts are going to be sent directly to the server or
consumed locally by PuTTY. (And I've also corrected the setting of
to_server in the antispoof prompt, which was true when it should have
been false.)
A side effect of this is that -pwfile will no longer work to provide a
private-key passphrase, if you're using public-key authentication
without Pageant. This is deliberate, because if you're doing that on
purpose then Pageant is a better way to achieve the same thing (or
else just store the key unencrypted, which is no worse); but in the
case of a server that sequentially demands public-key _and_ password
authentication, the new behaviour makes -pwfile apply to the right one
of the two prompts, i.e. the actual password.
Removed 'try cmake 3.7 on Windows': I think that's not really
necessary, because Windows doesn't have the concept of an old overall
distro that makes it hard to upgrade a particular build tool.
On the other hand, added a big pile of other things I'd like not to
forget.
Those versions of GTK (or rather, GDK) don't support the directional
GDK_WINDOW_STATE_TOP_TILED family of constants; they only support the
non-directional GDK_WINDOW_STATE_TILED. And GTK < 3.10.0 doesn't even
support that.
All those constants were under #ifdef already; I've just made the
ifdefs a bit more precise.
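The sharpened conditionals have roughly this shape (a sketch; the
3.22.23 threshold for the directional flags, and the macro name, are my
assumptions rather than quoted from the code):

    #if GTK_CHECK_VERSION(3,22,23)
    #define TILING_MASK (GDK_WINDOW_STATE_TILED | GDK_WINDOW_STATE_TOP_TILED)
    #elif GTK_CHECK_VERSION(3,10,0)
    #define TILING_MASK GDK_WINDOW_STATE_TILED
    #else
    #define TILING_MASK 0
    #endif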
The "Cancel" button's keyboard shortcut was accidentally removed by
f1c8298000, having only just reinstated it in a77040afa1.
(Also, fix a couple of blatantly fibbing "accelerators used" comments.)
Mainly to try to clarify that if you're sat at this warning dialog/
prompt, no response you make to it will cause a new CA to be trusted for
signing arbitrary host keys.
The 'certified host key' variant of the host key warning always comes
with a scary 'POTENTIAL SECURITY BREACH!' message. So the error message
section with the scary title should acknowledge that variant, and
the section about that variant should mention the scary warning.