This imports the following options from command-line PuTTYgen, which
all correspond to controls in Windows PuTTYgen's GUI, and let you set
the GUI controls to initial values of your choice:
-t <key type>
-b <bits>
-E <fingerprint type>
--primes <prime gen policy>
--strong-rsa
--ppk-param <KDF parameters or PPK version etc>
The idea is that if someone generates a lot of keys and has standard
non-default preferences, they can make a shortcut that passes those
preferences on the command line.
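For example (an illustrative shortcut target, not taken from the
commit itself), a shortcut could invoke:

    puttygen.exe -t rsa -b 4096 --strong-rsa -E sha256

and the GUI would come up with RSA selected at 4096 bits, the
strong-primes option enabled, and SHA-256 fingerprints displayed.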
These two tools had ad-hoc command-line parsing loops with similar
options, and I want to extend both (in particular, in a way that
introduces options with arguments). So I've started by throwing
together some common code
to do all the tedious bits like finding option arguments wherever they
might be, throwing errors, handling "--" and so on.
Should be no functional change to the existing command-line syntax,
except that now all long options are recognised in both "-foo" and
"--foo" form.
It looks as if I broke this some time around commit cfc9023616,
when I stopped proactively calling term_size() in advance of resizing
the window. A side effect was that I also stopped calling it at all in
the case where we're _not_ resizing the window (because changing the
size of the terminal means adapting the font size to fit a different
amount of stuff in the existing window).
Fixed by moving all the new machinery inside the 'actually resize the
window' branch of the if statement, and restoring the previous
behaviour in the other branch, this time with a comment that will
hopefully stop me making the same mistake again.
Some pointing devices (e.g. gaming mice) can be set to send
mouse-movement events at an extremely high sample rate like 1kHz. This
apparently translates into Windows genuinely sending WM_MOUSEMOVE
messages at that rate. So if you're using one of those mice, then
PuTTYgen's mouse-based entropy collection system will fill its buffer
almost immediately, and give you no perceptible time to actually wave
the mouse around.
I think that in that situation, there's also likely to be a strong
correlation between the contents of successive movement events,
because human-controlled mouse movements aren't fractals - the more
you zoom in on a little piece of one, the more it looks like something
moving in a straight line at constant speed, because the deviations
from that happen on a larger time scale than the one you're seeing.
So this commit adds a rate limit, not on the _collection_ of the data
(we'll still take all the bits we can get, thanks!), but on the rate
at which we increment the _counter_ for how much entropy we think
we've collected so far. That way, the user still has to spend time
wiggling the mouse back and forth in a way that varies with muscle
motions and introduces randomness.
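In outline (a sketch with invented names and constants, not the real
PuTTYgen code), the counter update looks like this:

    #include <stdint.h>

    /* Hypothetical bookkeeping for mouse-based entropy collection. */
    typedef struct EntropyCounter {
        uint32_t last_credit_ms;   /* time of last counter credit */
        unsigned bits_collected;   /* entropy we claim to have so far */
    } EntropyCounter;

    enum {
        CREDIT_INTERVAL_MS = 10,   /* illustrative rate limit */
        BITS_PER_EVENT = 2,        /* illustrative per-event estimate */
    };

    /* Called per mouse-move event. The event data itself is mixed
     * into the pool unconditionally elsewhere; this only decides
     * whether to credit the progress counter. */
    void entropy_credit(EntropyCounter *ec, uint32_t event_time_ms)
    {
        if ((uint32_t)(event_time_ms - ec->last_credit_ms) >=
            CREDIT_INTERVAL_MS) {
            ec->last_credit_ms = event_time_ms;
            ec->bits_collected += BITS_PER_EVENT;
        }
    }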
There's no point having a separate boolean flag. All we have to do is
remember to NULL out the strbuf pointer state->entropy when we free the
strbuf (which is a good idea in any case!), and then we can use the
non-NULL-ness of that pointer as the indicator that we're currently
collecting entropy.
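In outline (using PuTTY's strbuf idiom; not the literal code here):

    /* When entropy collection finishes or is abandoned: */
    strbuf_free(state->entropy);
    state->entropy = NULL;     /* now also means 'not collecting' */

    /* And everywhere that used to test the boolean flag: */
    if (state->entropy)        /* still collecting? */
        put_data(state->entropy, data, len);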
This offloads the memory management into centralised code which is
better at it than the ad-hoc code here. Now I don't have to predict in
advance how much memory the entropy will consume, and can change that
decision on the fly.
In the previous state of the code, we first tested agent_exists() to
decide whether to be the long-running Pageant server or a short-lived
client; then, during the command-line parsing loop, we prompted for
passphrases to add keys presented on the command line (to ourself or
the server, respectively); *then* we set up the named pipe and
WM_COPYDATA receiver window to actually start functioning as a server,
if we decided that was our role.
A consequence is that if a user started up two Pageants each with an
encrypted key on the command line, there would be a race condition:
each one would decide that it was _going_ to be the server, then
prompt for a passphrase, and then try to set itself up as the server
once the passphrase is entered. So whichever one's passphrase prompt
was answered second would add its key to its own internal data
structures, then fail to set up the server's named pipe, terminate
with an error, and end up not having added its key to the _surviving_
server.
This change reorders the setup steps so that the command-line parsing
loop does not add the keys immediately; instead it merely caches the
key filenames provided. Then we make the decision about whether we're
the server, and set up both the named pipe and WM_COPYDATA window if
we are; and finally, we go back to our list of key filenames and
actually add them, either to ourself (if we're the server) or to some
other Pageant (if we're a client).
Moreover, the decision about whether to be the server is now wrapped
in an interprocess mutex similar to the one used in connection
sharing, which means that even if two or more Pageants are started up
as close to simultaneously as possible, they should avoid a race
condition and successfully manage to agree on exactly one of
themselves to be the server. For example, a user reported that this
could occur if you put shortcuts to multiple private key files in your
Windows Startup folder, so that they were all launched simultaneously
at startup.
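In raw Win32 terms (a sketch only - agent_exists() is the real test
described above, and the setup function is a stand-in for creating the
named pipe and WM_COPYDATA window), the shape of the decision is:

    #include <windows.h>
    #include <stdbool.h>

    bool agent_exists(void);                 /* existing check */
    void set_up_named_pipe_and_window(void); /* stand-in for setup */

    /* Hold a named mutex across the test-and-setup sequence, so two
     * simultaneously started Pageants can't both become the server. */
    static bool decide_to_be_server(void)
    {
        HANDLE mutex = CreateMutexW(NULL, FALSE,
                                    L"Local\\example-pageant-startup");
        bool server = false;
        if (mutex) {
            WaitForSingleObject(mutex, INFINITE);
            if (!agent_exists()) {
                set_up_named_pipe_and_window();
                server = true;
            }
            ReleaseMutex(mutex);
            CloseHandle(mutex);
        }
        return server;
    }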
One slightly odd behaviour that remains: if the server Pageant has to
prompt for private key passphrases at startup, then it won't actually
start _servicing_ requests from other Pageants until it's finished
dealing with its own prompts. As a result, if you do start up two
Pageants at once each with an encrypted key file on its command line,
the second one won't even manage to present its passphrase prompt
until the first one's prompt is dismissed, because it will block
waiting for the initial check of the key list. But it will get there
in the end, so that's only a cosmetic oddity.
It would be nice to arrange that Pageant GUI operations don't block
unrelated agent requests (e.g. by having the GUI and the agent code
run in separate threads). But that's a bigger problem, not specific to
startup time - the same thing happens if you interactively load a key
via Pageant's file dialog. And it would require a major reorganisation
to fix that fully, because currently the GUI code depends on being
able to call _internal_ Pageant query functions like
pageant_count_ssh2_keys() that don't work by constructing an agent
request at all.
This update covers several recently added features: SSH proxying, HTTP
Digest proxy auth, and interactive prompting for proxy auth in general.
Also, downplayed the use of 'plink -nc' as a Local-type proxy command.
It still works, but it's no longer the recommended way of tunnelling
SSH over SSH, so there's no need to explain it quite so
enthusiastically.
I was just writing the documentation for the new proxy type, which
caused me to realise that the thing I obviously wanted to write in the
documentation was not actually true. Let's make it true, and then I
can document it!
I have _no_ idea how I managed to leave this out of the list of
examples when I first wrote this man page. It should have been the
very first one I thought of, since Cygwin was the platform I wrote
cygtermd for, and one of psusan's primary purposes was to be a
productised and improved replacement for cygtermd!
Oh well, better late than never.
All the seat functions that present an interactive prompt of some kind
to the user - both the main seat_get_userpass_input and the various
confirmation dialogs for things like host keys - were using a simple
int return value, with the general semantics of 0 = "fail", 1 =
"proceed" (and in the case of seat_get_userpass_input, answers to the
prompts were provided), and -1 = "request in progress, wait for a
callback".
In this commit I change all those functions' return types to a new
struct called SeatPromptResult, whose primary field is an enum
replacing those simple integer values.
The main purpose is that the enum has not three but _four_ values: the
"fail" result has been split into 'user abort' and 'software abort'.
The distinction is that a user abort occurs as a result of an
interactive UI action, such as the user clicking 'cancel' in a dialog
box or hitting ^D or ^C at a terminal password prompt - and therefore,
there's no need to display an error message telling the user that the
interactive operation has failed, because the user already knows,
because they _did_ it. 'Software abort' is from any other cause, where
PuTTY is the first to know there was a problem, and has to tell the
user.
We already had this 'user abort' vs 'software abort' distinction in
other parts of the code - the SSH backend has separate termination
functions which protocol layers can call. But we assumed that any
failure from an interactive prompt request fell into the 'user abort'
category, which is not true. A couple of examples: if you configure a
host key fingerprint in your saved session via the SSH > Host keys
pane, and the server presents a host key that doesn't match it, then
verify_ssh_host_key would report that the user had aborted the
connection, and feel no need to tell the user what had gone wrong!
Similarly, if a password provided on the command line was not
accepted, then (after I fixed the semantics of that in the previous
commit) the same wrong handling would occur.
So now, those Seat prompt functions too can communicate whether the
user or the software originated a connection abort. And in the latter
case, we also provide an error message to present to the user. Result:
in those two example cases (and others), error messages should no
longer go missing.
Implementation note: to avoid the hassle of having the error message
in a SeatPromptResult being a dynamically allocated string (and hence,
every recipient of one must always check whether it's non-NULL and
free it on every exit path, plus being careful about copying the
struct around), I've instead arranged that the structure contains a
function pointer and a couple of parameters, so that the string form
of the message can be constructed on demand. That way, the only users
who need to free it are the ones who actually _asked_ for it in the
first place, which is a much smaller set.
(This is one of the rare occasions that I regret not having C++'s
extra features available in this code base - a unique_ptr or
shared_ptr to a string would have been just the thing here, and the
compiler would have done all the hard work for me of remembering where
to insert the frees!)
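Schematically, the result type looks something like this (a simplified
sketch of the shape, not the exact declarations):

    typedef struct BinarySink BinarySink;  /* PuTTY's output sink */

    typedef enum SeatPromptResultKind {
        SPRK_INCOMPLETE,  /* in progress; wait for a callback */
        SPRK_USER_ABORT,  /* user cancelled; they already know */
        SPRK_SW_ABORT,    /* software failure; must tell the user */
        SPRK_OK,          /* prompt completed successfully */
    } SeatPromptResultKind;

    typedef struct SeatPromptResult {
        SeatPromptResultKind kind;
        /* For SPRK_SW_ABORT: write the message text on demand into a
         * BinarySink, so the struct can be copied freely and only
         * callers who actually want the string allocate anything. */
        void (*errfn)(struct SeatPromptResult *spr, BinarySink *bs);
        const char *errdata;   /* parameter for simple messages */
    } SeatPromptResult;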
In cmdline_get_passwd_input(), there's a boolean 'tried_once' which is
set the first time the function returns the password set by -pw or
-pwfile. The idea, as clearly commented in the code, was that if
cmdline_get_passwd_input asks for that password _again_, we return
failure, as if the user had refused to make a second attempt.
But that wasn't actually what happened, because when we set tried_once
to true, we also set cmdline_password to NULL, which causes the second
call to the function to behave as if no password was ever provided at
all. So after the -pw password failed, we'd fall back to asking
interactively.
This change moves the check of cmdline_password to after the check of
tried_once, restoring the originally intended behaviour: password
authentication will now _only_ be done via the pre-set password, if
there is one.
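In outline, the function body now reads (simplified; 'use_password' is
a stand-in for filling in the prompt's answer):

    if (tried_once)
        return 0;          /* the pre-set password already failed
                            * once: behave as if the user gave up */
    if (!cmdline_password)
        return -1;         /* no -pw/-pwfile given: fall back to
                            * normal interactive prompting */
    use_password(p, cmdline_password);
    smemclr(cmdline_password, strlen(cmdline_password));
    cmdline_password = NULL;
    tried_once = true;
    return 1;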
This seems like an xkcd #1172 kind of change: now that it's been wrong
for a while, _someone_ has probably found the unintended behaviour
useful, and started relying on it. So it may become necessary to add
an option to set the behaviour either way. But for the moment, let's
try it the way I originally intended it.
In SSH-1 there's a function that takes a void * that it casts to the
state of the login layer. The corresponding function in SSH-2 casts it
to the state of a differently named layer, but I had still called the
parameter 'loginv'.
I happened to notice in passing that this function doesn't have any
tests (although it will have been at least somewhat tested by the
cmdgen interop test system).
This involved writing a wrapper that passes the passphrase and salt as
ptrlens, and I decided it made more sense to make the same change to
the original function too and adjust the call sites appropriately.
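So the hash function's interface is now shaped roughly like this
(illustrative; 'kdf_passphrase' is a stand-in name):

    /* PuTTY's ptrlen: a pointer/length pair passed by value. */
    typedef struct ptrlen { const void *ptr; size_t len; } ptrlen;

    /* Before: separate pointer and length arguments per input.
     * After: one ptrlen per input, so call sites can pass strings,
     * strbufs etc. uniformly. */
    void kdf_passphrase(ptrlen passphrase, ptrlen salt,
                        unsigned char *out, size_t outlen);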
I derived a test case by getting OpenSSH itself to make an encrypted
key file, and then using the inputs and output from the password hash
operation that decrypted it again.
This fills in the remaining gap in the interactive prompt rework of
the proxy system in general. If you used the Telnet proxy with a
command containing %user or %pass, and hadn't filled in those
variables in the PuTTY config, then proxy/telnet.c would prompt you at
run time to enter the proxy auth details. But the local proxy command,
which uses the same format_telnet_command function, would not do that.
Now it does!
I've implemented this by moving the formatting of the proxy command
into a new module proxy/local.c, shared between both the Unix and
Windows local-proxy implementations. That module implements a
DeferredSocketOpener, which constructs the proxy command (prompting
first if necessary), and once it's constructed, hands it to a
per-platform function platform_setup_local_proxy().
So each platform-specific proxy function, instead of starting a
subprocess there and then and passing its details to make_fd_socket or
make_handle_socket, now returns a _deferred_ version of one of those
sockets, with the DeferredSocketOpener being the thing in
proxy/local.c. When that calls back to platform_setup_local_proxy(),
we actually start the subprocess and pass the resulting fds/handles to
the deferred socket to un-defer it.
A side effect of the rewrite is that when proxy commands are logged in
the Event Log, they now get the same amenities as in the Telnet proxy
type: the proxy password is sanitised out, and any difficult
characters are escaped.
Previously, a setup function returning one of these socket types (such
as platform_new_connection) had to do all its setup synchronously,
because if it was going to call make_fd_socket or make_handle_socket,
it had to have the actual fds or HANDLEs ready-made. If some kind of
asynchronous operation were needed before those fds became available,
there would be no way the function could achieve it, except by
becoming a whole extra permanent Socket wrapper layer.
Now there is, because you can make an FdSocket when you don't yet have
the fds, or a HandleSocket without the HANDLEs. Instead, you provide
an instance of the new trait 'DeferredSocketOpener', which is
responsible for setting in motion whatever asynchronous setup
procedure it needs, and when that finishes, calling back to
setup_fd_socket / setup_handle_socket to provide the missing pieces.
In the meantime, the FdSocket or HandleSocket will sit there inertly,
buffering any data the client might eagerly hand it via sk_write(),
and waiting for its setup to finish. When it does finish, buffered
data will be released.
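In PuTTY's usual trait idiom, the new trait itself is tiny - something
like this sketch (details may differ from the real header):

    typedef struct DeferredSocketOpener DeferredSocketOpener;
    typedef struct DeferredSocketOpenerVtable {
        /* Called if the socket is closed before setup completes, so
         * the opener can cancel its asynchronous work. */
        void (*free)(DeferredSocketOpener *opener);
    } DeferredSocketOpenerVtable;
    struct DeferredSocketOpener {
        const DeferredSocketOpenerVtable *vt;
    };

A concrete opener embeds that struct in its own state, and when its
asynchronous setup finishes, calls setup_fd_socket() or
setup_handle_socket() as described above.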
In FdSocket, this is easy enough, because we were doing our own
buffering anyway - we called the uxsel system to find out when the fds
were readable/writable, and then wrote to them from our own bufchain.
So more or less all I had to do was make the try_send function do
nothing if the setup phase wasn't finished yet.
In HandleSocket, on the other hand, we're passing all our data to the
underlying handle-io.c system, and making _that_ deferrable in the
same way would be much more painful, because that's the place where
the scary threads live. So instead I've arranged it by replacing the
whole vtable, so that a deferred HandleSocket and a normal
HandleSocket are effectively separate trait implementations that can
share their state structure. And in fact that state struct itself now
contains a big anonymous union, with one branch to go with each
vtable.
Nothing yet uses this system, but the next commit will do so.
This will mean that platform-specific proxy types will also be able to
set themselves up as child Interactors and prompt the user
interactively for passwords and the like.
NFC: nothing uses the new parameter yet.
Now it's done using the same code as in write_c_string_literal(), by
means of factoring the latter into a version that targets any old
BinarySink and a convenience wrapper taking a FILE *.
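I.e. the factoring is shaped like this (a sketch - the core function's
name here is illustrative, though stdio_sink and BinarySink_UPCAST are
PuTTY's existing FILE-adapter machinery):

    /* Core version: emit the escaped literal to any BinarySink. */
    void put_c_string_literal(BinarySink *bs, ptrlen str);

    /* Convenience wrapper keeping the old FILE * interface. */
    void write_c_string_literal(FILE *fp, ptrlen str)
    {
        stdio_sink ss;
        stdio_sink_init(&ss, fp);
        put_c_string_literal(BinarySink_UPCAST(&ss), str);
    }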
Now, when you resize the Event Log window, the list box grows in both
directions. Previously, as a side effect of the Columns-based layout,
it grew only horizontally.
I've arranged this by adding an extra wrinkle to the Columns layout
system, which allows you to tag _exactly one_ widget in the whole
container as the 'vexpand' widget. When the Columns is size-allocated
taller than its minimum height, the vexpand widget is given all the
extra space.
This technique ports naturally across all versions of GTK (since the
hard part is done in my own code). But it's limited: you can't set
more than one widget to be vexpand (which saves having to figure out
whether they're side by side and can expand in parallel, or vertically
separated and each have to get half the available extra space, etc).
And in complex layouts where the widget you really want to expand is
in a sub-Columns, there's no system for recursively searching down to
find it.
In other words, this is a one-shot bodge for the Event Log, and it
will want more work if we ever plan to extend it to list boxes in the
main config dialog.
Colin points out that in the migration to cmake, I accidentally
stopped putting some of the pre-built docs in the tarball - only the
man pages are still there.
This is a piece I forgot in the initial implementation of HTTP Digest:
an HTTP server can send _more than one_ authentication request header
(WWW-Authenticate for normal servers, Proxy-Authenticate for proxies),
and if it does, they're supposed to be treated as alternatives to each
other, so that the client chooses one to reply to.
I suppose that technically we were 'complying' with that spec already,
in that HttpProxyNegotiator would have read each new header and
overwritten all the fields set by the previous one, so we'd always
have gone with the last header presented by the server. But that seems
inelegant: better to choose the one we actually like best.
So now we do that. All the details of an auth header are moved out of
the main HttpProxyNegotiator struct into a sub-struct we can have
multiple copies of. Each new header is parsed into a fresh struct of
that kind, and then we can compare it with the previous one and decide
which we prefer.
The preference order, naturally, is 'more secure is better': Digest
beats Basic, and between two Digest headers, SHA-256 beats MD5. (And
anything beats a header we can't make sense of at all.)
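The comparison can be as simple as an ordered classification
(illustrative sketch, not the actual code):

    /* Higher value = more preferred. */
    typedef enum AuthClass {
        AUTH_UNRECOGNISED = 0,  /* couldn't parse the header at all */
        AUTH_BASIC,
        AUTH_DIGEST_MD5,
        AUTH_DIGEST_SHA256,
    } AuthClass;

    typedef struct AuthHeader {
        AuthClass class;
        /* ... parsed parameters: realm, nonce, ... */
    } AuthHeader;

    /* On each new header: keep whichever classifies higher. */
    static void consider(AuthHeader *best, const AuthHeader *latest)
    {
        if (latest->class > best->class)
            *best = *latest;    /* struct copy: the new header wins */
    }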
Another side effect of this change is that a 407 response which
contains _no_ Proxy-Authenticate headers will trigger an error message
saying so, instead of just going with whatever happened to be left in
the relevant variables from the previous attempt.
While re-testing on Wayland after all this churn of the window
resizing code, I discovered that the window constantly came out a few
pixels too small, losing a character cell in width and height. This
stopped happening once I experimentally stopped setting geometry
hints.
Source-diving in GTK, it turns out that this is because the GDK
Wayland backend is applying the geometry hints to the size of the
window including 'margins', which are a very large extra space around
a window beyond even the visible 'non-client-area' furniture like the
title bar. And I have no idea how you find out the size of those
margins, so I can't allow for them in the geometry hints.
I also noticed that gtk_window_set_geometry_hints is removed in GTK 4,
which suggests that GTK upstream are probably not interested in
fiddling with them until they work more usefully (even if they would
agree with me that this is a bug in the first place, which I have no
idea whether they would). A simpler workaround is to avoid setting
geometry hints at all
on any GDK backend other than X11.
So, that's what this commit does. On Wayland (or other backends), the
window can now be resized a pixel at a time, and if its size doesn't
work out to a whole number of character cells, then you just get some
dead space at the edges. Not especially nice, but better than the
alternatives I can see.
One other job the geometry hints were doing was to forbid resizing if
the backend sets the BACKEND_RESIZE_FORBIDDEN flag (which SUPDUP
does). That's now done at window creation time, via
gtk_window_set_resizable.
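That's a one-line call at window creation time (a real GTK API; the
flag test here is schematic):

    /* Make the window fixed-size if the backend forbids resizing. */
    gtk_window_set_resizable(GTK_WINDOW(window),
                             !(flags & BACKEND_RESIZE_FORBIDDEN));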
The window size set in the geometry hints when the backend has the
BACKEND_RESIZE_FORBIDDEN flag was computed in a simplistic way that
forgot to take account of window furniture like scrollbars and menu
bars. Now it's computed based on the rest of the geometry hints, which
are more accurate.
If the Seat that the pty backend is talking to starts to back up, then
we ought to temporarily stop reading from the pty device, to pass that
back-pressure on to whatever's running in the terminal.
Previously, this didn't matter because a Seat running a GUI terminal
never backed up anyway. But now it does, so we should support it all
the way through the system.
Normally, the GTK code runs toplevel callbacks from a GTK 'idle
function'. But those mean what they say: they are considered
low-priority, to be run _only_ when the system is idle - so they can
fail to run at all under a steady stream of higher-priority work,
e.g. when something is throwing data at the application so fast that
every main-loop iteration finds a readable fd.
And that's not good, because _we_ don't think our callbacks are
low-priority: they do a lot of really important work like redrawing
the window. So if they never get round to happening, PuTTY or pterm
can appear to lock up.
Simple solution to that one: whenever we process a select notification
on any fd, we _also_ call run_toplevel_callbacks(). Then our callbacks
are bound to happen reasonably regularly.
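Schematically, the fd-watch callback now ends like this (shape only,
not the literal code):

    static void fd_input_func(int fd, int event_type, void *data)
    {
        select_result(fd, event_type);  /* handle the fd event */

        /* Don't let a busy fd starve our own queued work: flush
         * toplevel callbacks on every select notification too. */
        run_toplevel_callbacks();
    }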
The return value of term_data() is used as the return value from the
GUI-terminal versions of the Seat output method, which means backends
will take it to be the amount of standard-output data currently
buffered, and exert back-pressure on the remote peer if it gets too
big (e.g. by ceasing to extend the window in that particular SSH-2
channel).
Historically, as a comment in term_data() explained, we always just
returned 0 from that function, on the basis that we were processing
all the terminal data through our terminal emulation code immediately,
and never retained any of it in the buffer at all. If the terminal
emulation code were to start running slowly, then it would slow down
the _whole_ PuTTY system, due to single-threadedness, and
back-pressure of a sort would be exerted on the remote by it simply
failing to get round to reading from the network socket. But by the
time we got back to the top level of term_data(), we'd have finished
reading all the data we had, so it was still appropriate to return 0.
That comment is still correct if you're thinking about the limiting
factor on terminal data processing being the CPU usage in term_out().
But now that's no longer the whole story, because sometimes we leave
data in term->inbuf without having processed it: during drag-selects
in the terminal window, and (just introduced) while waiting for the
response to a pending window resize request. For both those reasons,
we _don't_ always have a buffer size of zero when we return from
term_data().
So now that hole in our buffer size management is filled in:
term_data() returns the true size of the remaining unprocessed
terminal output, so that back-pressure will be exerted if the terminal
is currently not consuming it. And when processing resumes and we
start to clear our backlog, we call backend_unthrottle to let the
backend know it can relax the back-pressure if necessary.
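So the tail end of term_data() becomes, schematically (signature
simplified):

    size_t term_data(Terminal *term, const void *data, size_t len)
    {
        bufchain_add(&term->inbuf, data, len);
        term_out(term);   /* may stop early, leaving a backlog */

        /* Report the true backlog so the backend can throttle. */
        return bufchain_size(&term->inbuf);
    }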
Previously, the Windows implementation of win_request_resize would
call term_size() to tell the terminal about its new size, _before_
calling SetWindowPos to actually change the window size.
If SetWindowPos ends up not getting the exact window size it asks for,
Windows notifies the application by sending back a WM_SIZE message -
synchronously, by calling back to the window procedure from within
SetWindowPos. So after the first over-optimistic term_size(), the
WM_SIZE handler would trigger a second one, resetting the terminal
again to the _actual_ size.
This was more or less harmless, since current handling of terminal
resizes is such that no terminal data gets too confused: any lines
pushed into the scrollback by the first term_size will be pulled back
out by the second one if needed (or vice versa), and since commit
271de3e4ec, individual termlines are not eagerly truncated by
resizing them twice.
However, it's more work than we really wanted to be doing, and it
seems presumptuous to call term_size() before we know the size is
right! So now we call term_size() after SetWindowPos returns, unless
the call has already happened via a re-entrant invocation of the
WM_SIZE handler _before_ SetWindowPos returned. That way, the terminal
should get exactly one
term_size() call in response to win_request_resize(), whether it got
the size it was expecting or not.
This is the payoff from the last few commits of refactoring. It fixes
the following race-condition bug in terminal application redraw:
* server sends a window-resizing escape sequence
* terminal requests a window resize from the front end
* server sends further escape sequences to perform a redraw of some
full-screen application, which assume that the window resize has
occurred and the window is already its new size
* terminal processes all those sequences in the context of the old
window size, while the front end is still thinking
* window resize completes in the front end and term_size() tells the
terminal it now has its new size, but it's too late, the screen
redraw has made a total mess.
(Perhaps the server might even send its window resize + followup
redraw all in one SSH packet, so that it's all queued in term->inbuf
in one go.)
As far as I can see, handling of this case has been broken more or
less forever in the GTK frontend (where window resizes are inherently
asynchronous due to the way X11 works, and we've never done anything
to compensate for that). On Windows, where window size is changed via
SetWindowPos which is synchronous, it used to work, but broke in
commit d74308e90e (i.e. between 0.74 and 0.75), which made all
the ancillary window updates run on the same delayed-action timer as
ordinary text display.
So, it's time to fix it, and I think now I should be able to fix it in
GTK as well as on Windows.
Now, as soon as we've set the term->win_resize_pending flag (in
response to a resize escape sequence), the next return to the top of
the main loop in term_out will terminate output processing early,
leaving any further terminal data still in the term->inbuf bufchain.
Once we get a term_size() callback from the front end telling us our
new size, we reset term->win_resize_pending, which unblocks output
processing again, and we also queue a toplevel callback to have
another try at term_out() so that it will be unblocked promptly.
To implement this I've changed term->win_resize_pending from a bool
into a three-state enumeration, so that we can tell the difference
between 'pending' in the sense of not yet having sent our resize
request to the frontend, and in the sense of waiting for the frontend
to reply. That way, a window resize from the GUI user at least won't
be mistaken for the response to our resize request if it arrives in
the former state. (It can still be mistaken for one in the latter
case, but if the user is resizing the window at the same time as the
server-side application is doing critically size-dependent redrawing,
I don't think there can be any reasonable expectation of nothing going
wrong.)
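I.e. the flag becomes something like (names illustrative):

    typedef enum {
        WIN_RESIZE_NO,          /* no resize in flight */
        WIN_RESIZE_NEED_SEND,   /* decided to resize, but haven't yet
                                 * asked the front end */
        WIN_RESIZE_AWAIT_REPLY, /* request sent; the next term_size()
                                 * is the answer to it */
    } WinResizePending;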
As mentioned in the previous commit, some failure modes under X11 (in
particular the window manager process getting wedged in some way) can
result in no response being received to a ConfigureWindow request. In
that situation, it seems to me that we really _shouldn't_ sit there
waiting forever - perhaps it's technically the WM's fault and not
ours, but what kind of X window are you most likely to want to use to
do emergency WM repair? A terminal window, of course, so it would be
exceptionally unhelpful to make any terminal window stop working
completely in this situation! Hence, there's a fallback timeout in
terminal.c, so that if we don't receive a response in _too_ long,
we'll assume one is not forthcoming, and resume processing terminal
data at the old window size. The fallback timeout is set to 5 seconds,
following existing practice in libXt (DEFAULT_WM_TIMEOUT).
When the terminal asks its TermWin for a resize, the resize operation
happens asynchronously (or can do), and sooner or later, the terminal
will see a term_size() telling it the resize has actually taken
effect.
If the resize _doesn't_ take effect for any reason - e.g. because the
window is maximised, or because the X11 window manager is a tiling one
which will refuse requests to change the window size anyway - then the
terminal never got any explicit notification of refusal to resize. Now
it should, in as many cases as I can arrange.
One obvious case of this is the early exit I recently added to
gtkwin_request_resize() when the window is known to be in a maximised
or tiled state preventing a resize: in that situation, when our own
code knows we're not even attempting the resize, we also queue a
toplevel callback to tell the terminal so.
The more interesting case is when the request is refused for a reason
GTK _didn't_ know in advance, e.g. because the user is running an X11
tiling window manager such as i3, which generally refuses windows'
resize requests. In X11, if a window manager refuses an attempt to
change the window's size via ConfigureWindow, ICCCM says it should
respond by sending a synthetic ConfigureNotify event restating the
same size. Such no-op configure events do reach the "configure_event"
handler in a GTK program, but they weren't previously getting as far
as term_size(), because the call to term_size() was triggered from the
GTK "size_allocate" event on the GtkDrawingArea inside the window (via
drawing_area_setup()), so GTK would detect that nothing had changed.
Now we queue a last-ditch toplevel callback which ensures that if the
configure event doesn't also cause a size_allocate and a call to
drawing_area_setup(), then we cause one of our own once the dust has
settled. And drawing_area_setup(), in turn, now unconditionally calls
term_size() even if the size is the same as it was last time, instead
of taking an early exit. (It still does take an early exit to avoid
unnecessary faffing with Cairo surfaces etc., but _after_
term_size().)
This won't be 100% reliable, because it's the window manager's
responsibility to send those synthetic ConfigureNotify events, and a
window manager is a fallible process which could get into a stuck
state. But it covers all the cases I know of that _can_ be sensibly
covered - so now, when terminal.c asks the front end to resize the
window, it ought to find out in a timely manner whether or not that
has happened, in _almost_ all cases.
There's no actual need to copy the data from term->inbuf into a local
variable inside term_out(). We can simply store a pointer and length,
and use the data _in situ_ - as long as we remember how much of it
we've used, and bufchain_consume() it when the routine exits.
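So the processing loop takes this shape (simplified, with
'process_some' standing in for the terminal emulation step):

    while (bufchain_size(&term->inbuf) > 0) {
        ptrlen data = bufchain_prefix(&term->inbuf);
        size_t used = process_some(term, data);
        bufchain_consume(&term->inbuf, used);
        if (used < data.len)
            break;   /* processing suspended: rest stays queued */
    }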
Getting rid of that awkward and size-limited local array should
marginally improve performance. But also, it opens up the possibility
to suddenly suspend handling of terminal data and leave everything not
yet processed in the bufchain, because now we never remove anything
from the bufchain until _after_ it's been processed, so there's no
need to awkwardly push the unused segment of localbuf[] back on to the
front of the bufchain if we need to do that.
NFC, but as usual, I have a plan to use the new capability in a
followup commit.
This tiny refactoring makes a convenient function for setting all the
'pending' flags and triggering a callback for the next window update.
This saves a bit of code, but that's not really the main point (or
else I'd have done the same to all the other similar things like
window moves). The point is that in a future commit I'm going to want
to do an extra thing on every server-controlled window resize, and
this refactoring gives me a single place to put that extra action.
This tiny refactoring replaces three identical checks at call sites,
not all as well commented as each other, with a check in just one
place with the best of the three comments.
This is another thing that seems harmless on X11 but causes window
redraws to semipermanently stop happening on Wayland: if we try to
gtk_window_resize() a window that is maximised at the time, then
something mysterious goes wrong and we stop ever getting "draw" events.
The handler for this event should return a 'gboolean', indicating
whether the event needs propagating any further. We were returning
void, which meant that the default handling might be accidentally
suppressed.
On Wayland, this had the particularly nasty effect that window redraws
would stop completely if you maximised the terminal window.
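The handler therefore needs the standard GTK shape (this is the real
signal signature; the body is schematic):

    static gboolean window_state_event(GtkWidget *widget,
                                       GdkEventWindowState *event,
                                       gpointer user_data)
    {
        /* ... react to maximise/minimise changes as before ... */

        return FALSE;   /* let GTK's default handler run too */
    }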
(By trial and error I found that this stopped happening if I removed
GDK_HINT_RESIZE_INC from the geometry hints, from which I guess that
the default window-state-event handler is doing something important
relating to that hint, or would have been if we hadn't accidentally
suppressed it. But the bug is clearly here.)