When term_input_data_from_charset receives data in a specified
character set (i.e. the charset argument isn't a negative number
indicating "just send binary"), it was translating from that
character set into wide-
character Unicode, but then forgetting to translate back again into
the terminal's configured character set.
Introduced by the rewrite in commit 4f756d2a4d. Affected the
answerback string sent in response to ^E, and I think potentially some
paths through term_keyinput too, although I haven't quite worked out
which ones would have been affected.
The previous mb_to_wc and wc_to_mb had horrible and also buggy APIs.
This commit introduces a fresh pair of functions to replace them,
which generate output by writing to a BinarySink. So it's now up to
the caller to decide whether it wants the output written to a
fixed-size buffer with overflow checking (via buffer_sink), or
dynamically allocated, or even written directly to some other output
channel.
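For illustration, a caller wanting a fixed-size buffer with overflow
checking might do something like this (a sketch only: the parameter
order of put_wc_to_mb and the buffer_sink field names here are
assumptions, not quotations from the real headers):

    /* Sketch; exact signatures may differ. */
    char outbuf[256];
    buffer_sink bs[1];
    buffer_sink_init(bs, outbuf, sizeof(outbuf));
    put_wc_to_mb(bs, codepage, wcstr, wclen, "");  /* defchr "", never NULL */
    if (!bs->overflowed) {
        /* the conversion fitted into outbuf */
    }

    /* Or, for dynamic allocation, write to a strbuf instead: */
    strbuf *sb = strbuf_new();
    put_wc_to_mb(sb, codepage, wcstr, wclen, "");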
Nothing uses the new functions yet. I plan to migrate things over in
upcoming commits.
What was wrong with the old APIs: they had that awkward undocumented
Windows-specific 'flags' parameter that I described in the previous
commit and took out of the dup_X_to_Y wrappers. But much worse, the
semantics for buffer overflow were not just undocumented but actually
inconsistent. dup_wc_to_mb() in utils assumed that the underlying
wc_to_mb would fill the buffer nearly full and return the size of data
it wrote. In fact, this was untrue in the case where wc_to_mb called
WideCharToMultiByte: that returns straight-up failure, setting the
Windows error code to ERROR_INSUFFICIENT_BUFFER. It _does_ partially
fill the output buffer, but doesn't tell you how much it wrote!
What's wrong with the new API: it's a bit awkward to write a sequence
of wchar_t in native byte order to a byte-oriented BinarySink, so
people using put_mb_to_wc directly have to do some annoying pointer
casting. But I think that's less horrible than the previous APIs.
Another change: in the new API for wc_to_mb, defchr can be "", but not
NULL.
This #if completely replaced the actually useful version of clipme(),
so I think it must have been used early in development before the
useful version was even written.
Since then it's bit-rotted to the point where it doesn't even make
sense: the "#endif" is nested inside a while loop that the "#if 0" and
"#else" are outside, so that if the condition were changed to evaluate
true, you'd get syntactic nonsense out of the preprocessor.
And I can't see any use for it now, even if it did compile! Get rid of
it.
term_do_paste and term_input_data_from_* were still taking int, which
should be size_t. Not that I expect those functions to get >2^31 bytes
of input in one go, but I noticed it while about to modify the same
functions for another reason.
I've had more than one conversation recently in which users have
mentioned finding this mode inconvenient. I don't know whether any of
them would want to turn it off completely, but it seems likely that
_somebody_ will, sooner or later. So here's an option to do that.
A user sent a transcript from a curses-based tool 'ncmpc', which
carefully disables terminal autowrap when printing a character in the
bottom right corner of the display, and then turns it back on again.
After that, it expects that sending the backspace character really
moves the cursor back a space, instead of clearing the wrapnext flag.
But in PuTTY, we set the wrapnext flag even if we're not in wrapping
mode - it just doesn't _do_ anything when the next character is sent.
But it remains set, and still affects backspace. So the display is
corrupted by this change of expectation.
(Specifically, ncmpc is printing a time display [m:ss] in the very
bottom right, so it disables wrap in order to print the final ']'.
Then the next thing it needs to do is to update the low-order digit of
the seconds field, so it sends \b as the simplest way to get to that
character. The effect on the display is that the updated seconds digit
appears where the ] was, instead of overwriting the old seconds digit.)
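In terms of control sequences, the interaction looks roughly like
this (an illustration of the idea, not a literal ncmpc transcript):

    ESC[?7l    disable auto-wrap (DECAWM reset)
    ]          print the final character into the bottom-right cell
    ESC[?7h    re-enable auto-wrap (DECAWM set)
    \b         later: backspace, expecting the cursor to move left one cell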
This is a tradeoff in desirable behaviours. The point of having a
backspace operation cancel the wrapnext flag and not actually move the
cursor is to preserve the invariant that sending 'x', backspace, 'y'
causes the y to overprint the x, even if that happens near the end of
the terminal's line length. In non-wrapping mode that invariant was
bound to break _eventually_, but with this change, it breaks one
character earlier than before. However, I think that's less bad than
breaking the expectations of curses-based full-screen applications,
especially since the _main_ need for that invariant arises from naïve
applications that don't want to have to think about the terminal width
at all - and those applications generally run in _wrapping_ mode,
where it's possible to continue the invariant across multiple lines in
any case.
These were deliberately thrown away in our UTF-8 decoder, with a
comment apparently introduced by RDB in the 2001 big Unicode patch.
The purpose of this character range has changed completely since then,
and now they act as modifier characters on top of U+1F3F4 to construct
a space of flags (the standard examples being those of England,
Scotland and Wales). We were failing to display those flags, and even
pasting out of the terminal didn't give back the right Unicode.
This is the only one of the newly added cases in test/utf8.txt which I
can (try to) fix unilaterally just by changing PuTTY's display code,
because it doesn't change the number of character cells occupied by
the text, only the appearance of those cells.
In this commit I make the necessary changes in terminal.c, which makes
flags start working in GTK PuTTY and pterm, but not on Windows.
The system of encoding flags in Unicode is that there's a space of 26
regional-indicator letter code points (U+1F1E6 to U+1F1FF inclusive)
corresponding to the unaccented Latin alphabet, and an adjacent pair
of those letters represents the flag associated with that two-letter
code (usually a nation, although at least one non-nation pair exists,
namely EU).
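For example (a worked illustration of the encoding, not code from the
patch):

    U+1F1E6 + ('G' - 'A') = U+1F1EC    regional indicator letter G
    U+1F1E6 + ('B' - 'A') = U+1F1E7    regional indicator letter B
    U+1F1EC U+1F1E7                    together: the flag for "GB"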
There are two plausible ways we could handle this in terminal.c:
(a) leave the regional indicators as they are in the internal data
model, so that each RI letter occupies its own character cell,
and at display time have do_paint() spot adjacent pairs of them
and send each pair to the frontend as a combined glyph.
(b) combine the pairs _in_ the internal data model, by
special-casing them in term_display_graphic_char().
This choice makes a semantic difference. What if a flag is displayed
in the terminal and something overprints one of its two character
cells? With option (a), overprinting one cell of an RI pair with a
different RI letter would change it into a different flag; with
option (b), flags behave like any other wide character, in that
overprinting one of the two cells blanks the other as a side effect.
I think we need (a), because not all terminal redraw systems
(curses-style libraries) will understand the Unicode flag glyph system
at all. So if a full-screen application genuinely wants to do a screen
redraw in which a flag changes to a different flag while keeping one
of its constituent letters the same (say, swapping between BA and CA,
or between AC and AD), then the redraw library might very well
implement that screen update by redrawing only the changed letter, and
we need not to corrupt the flag.
All of this is now implemented in terminal.c. The effect is that pairs
of RI characters are passed to the TermWin draw_text() method as if
they were a wide character with a combining mark: that is, you get a
two-character (or four-surrogate) string, with TATTR_COMBINING
indicating that it represents a single glyph, and ATTR_WIDE indicating
that that glyph occupies two character cells rather than one.
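For instance, assuming a 16-bit wchar_t as on Windows, the pair
U+1F1EC U+1F1E7 (region code "GB") would reach draw_text() as
something like this sketch:

    wchar_t text[4] = { 0xD83C, 0xDDEC, 0xD83C, 0xDDE7 };
    /* UTF-16 for U+1F1EC U+1F1E7, passed with TATTR_COMBINING
     * (single glyph) and ATTR_WIDE (two cells) both set */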
In GTK, that's enough to make flag display Just Work. But on
Windows (at least the Win10 machine I have to test on), that doesn't
make flags start working all by itself. But then, the rest of the new
emoji tests also look a bit confused on Windows too. Help would be
welcome from someone who knows how Windows emoji display is supposed
to work!
I encountered an instance of this sequence in the log files from a
clang CI build. The payload text inside the wrapper was
"bk;t=1697630539879"; I don't know what the "bk" stood for, but the
second half appears to be a timestamp in milliseconds since the Unix
epoch.
I don't think there's anything we can (or should) actually _do_ with
this sequence, but I think it's useful to at least recognise it, so
that it can be conveniently discarded.
Shortly after the previous commit I spotted another definitely missing
display update: if you send the byte 0x7F, aka 'destructive
backspace', then the display didn't update immediately.
That was two in a row, so I did an eyeball review of the whole
terminal state machine to the best of my ability. Found a couple more
borderline ones, but also, found that the entire VT52 sub-state-
machine had a blanket seen_disp_event which really _shouldn't_ have
been there, because half the VT52 sequences aren't actually display-
modifying updates.
To make this _slightly_ less error-prone, I've sunk a number of
seen_disp_event calls into subroutines that aren't the top-level
term_out(). For example, erase_lots(), scroll(), move() and
swap_screen() now all call seen_disp_event within themselves, so
their call sites don't all have to remember to.
There are probably further bugs after this upheaval, but I think it's
moving in generally the right direction.
These escape sequences immediately change the display of the line
they're invoked on, so they need to trigger a display update. But they
weren't, and I suppose we must have never noticed before due to the
complete confusion fixed in commit bdbd5f429c.
Recently I encountered a CLI tool that took tens of seconds to run,
and produced no _visible_ output, but wrote ESC[0m to the terminal a
few times during its operation. (Probably by mistake. In other modes
it does print colourful messages, so I expect a 'reset colour' call
was accidentally outside the 'if' statement containing the rest of the
diagnostic it followed. Or something along those lines.)
I noticed this because every ESC[0m reset my pterm scrollback to the
bottom, which wasn't very helpful, and was unintentional on pterm's
part (as _well_ as on the part of the tool). But I can fix pterm!
At first glance the code _looked_ sensible: terminal.c contains calls
to seen_disp_event(term) whenever terminal output does something that
requires a redraw of the terminal window. Those are also the updates
that should count as 'reset scrollback on display activity'. And
ESC[0m, along with the rest of the SGR handler, correctly contained no
such call. So how did a display update happen at all?
The code was confusingly tangled up with the code that responds to
terminal activity by resetting the phase of the blinking cursor (if
any). term_reset_cblink() was calling seen_disp_event() (when surely
it should be the other way round!), and also, term_reset_cblink() was
called whenever _any_ terminal output data arrived. That combination
meant that any byte output to the terminal at all turned out to count
as display activity, whether or not it changed the screen contents.
Additionally, the other scrollback-reset flag, 'reset scrollback on
keypress', was handled by calling seen_disp_event() from the keyboard
handler. But display events and keyboard events are supposed to be
_independent_ potential causes of scrollback resets - it doesn't make
any sense to handle one by treating it as the other!
So I've reorganised the code completely:
- the seen_disp_event *flag* is now gone. Instead, the
seen_disp_event function tests the scroll_on_disp flag, and if set,
resets the scroll position immediately and sets the general
'scrollbar needs updating' flag.
- keyboard input is handled by doing exactly the same thing except
testing the scroll_on_key flag, so the two systems are properly
independent. That code calls term_schedule_update so that the
terminal will be redrawn as a result of the scroll, but doesn't
also call seen_disp_event() for the rest of the full treatment.
- the term_update code that does the scrollbar update is much
simpler, since now it only needs to test that one flag.
- I also had to set that flag explicitly in scroll() so that the
scrollbar would still be updated as a result of the scrollback size
changing. I think that must have been happening entirely by
accident before.
- term_reset_cblink is subsumed into seen_disp_event, so that only
_substantive_ display updates cause the cursor blink phase to reset
to the start of the solid period.
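Concretely, the reorganised seen_disp_event() ends up with roughly
this shape (a sketch: the helper and field names are approximate
rather than copied from the real code):

    void seen_disp_event(Terminal *term)
    {
        if (term->scroll_on_disp) {
            term->disptop = 0;        /* scroll back to the live screen */
            term->win_scrollbar_update_pending = true;
        }
        term_schedule_cblink(term);   /* restart blinking from the solid phase */
        term_schedule_update(term);
    }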
Result: if programs output no-op sequences like ESC[0m, or if you
press keys that don't echo, then the cursor will carry on blinking
normally, and (if you don't also have scroll_on_key set) the
scrollback won't be reset. And the code is slightly shorter than it
was before, and hopefully more sensible too.
(However, other classes of no-op activity _will_ still cause a cursor
blink phase change and a scrollback reset, such as sending a
cursor-positioning sequence that puts the cursor in the same place it
was already - even something as simple as ^M when already at the start
of the line. It might be nice to fix that, but it's much more
difficult: you'd have to either put a complicated and error-prone test
at every seen_disp_event call site, or else expensively diff the
entire visible terminal state against how it was before. And to avoid
a nondeterministic dependency on the terminal update cooldown, that
diff would have to be done at the granularity of individual control
sequences rather than a bounded number of times a second. I'd rather
not!)
Just ran across this, which I think was a casualty of GNU indent in
the distant past not really knowing what to do with nested array
declarations.
It's also possible that that array should be replaced with something
mechanically generated from the current Unicode standard. I don't have
time to do that right now, but I can at least make the existing
version not amazingly ugly.
This avoids any further problems with uninitialised (and hence
probably non-NULL) pointers, and also allows me to throw out a lot of
boilerplate initialisation of pointers, integers and booleans to
NULL, 0 or false respectively. (Of
course, all the initialisations to _other_ values have to stay.)
In live uses of Terminal, we always give it an ldisc almost
immediately after creating it. But the new test_terminal program
doesn't, which allows it to be an uninitialised pointer. Apparently I
got lucky while testing this the first few times.
Another bug turned up by writing tests. The code that spots that the
character won't fit, and wraps it to the next line setting
LATTR_WRAPPED2, was not checking that wrap mode was _enabled_ before
doing that. So if you printed a DW character in the rightmost column
while the terminal was in non-auto-wrap mode, you'd get an unwanted
wrap.
Other terminals disagree on what to do here. xterm leaves the cursor
in the same place and doesn't print any character at all.
gnome-terminal, on the other hand, backspaces by a character so that
it _can_ print the requested DW character, in the rightmost _two_
columns.
I think I don't much like either of those, so instead I'm using the
same fallback we use for displaying a DW character when the whole
terminal is only one column wide: if there is physically no room to
print the requested character, turn it into U+FFFD REPLACEMENT
CHARACTER.
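In outline, the fix looks something like this (a sketch of the logic,
not the exact terminal.c code):

    if (width == 2 && term->curs.x + width > term->cols) {
        if (term->wrap) {
            /* as before: wrap early to the next line, LATTR_WRAPPED2 */
        } else {
            /* no room, and not allowed to wrap: degrade gracefully */
            c = 0xFFFD;    /* U+FFFD REPLACEMENT CHARACTER */
            width = 1;
        }
    }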
This is the first bug found as a direct result of writing that
terminal test program - I added some tests for things I expected to
work already, and some of them didn't, proving immediately that it was
a good idea!
If the terminal is one column wide, and you've printed a
character (hence, set the wrapnext flag), what should backspace do?
Surely it should behave like any other backspace with wrapnext set,
i.e. clear the wrapnext flag, returning the cursor's _logical_
position to the location of the most recently printed character. But
in fact it was anti-wrapping to the previous line, because I'd got the
cases in the wrong order in the if-else chain that forms the backspace
handler. So the handler for 'we're in column 0, wrapping time' was
coming before 'wrapnext is set, just clear it'.
Now wrapnext is checked _first_, before checking anything at all. Any
time we can just clear that, we should.
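So the backspace handler's if-else chain now begins, in outline
(sketch only):

    if (term->wrapnext) {
        term->wrapnext = false;   /* checked first: just cancel the deferred wrap */
    } else if (term->curs.x == 0) {
        /* possibly anti-wrap to the end of the previous line,
         * subject to the usual conditions */
    } else {
        term->curs.x--;           /* ordinary backspace */
    }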
Suppose an application tries to print a double-width character
starting in the rightmost column of the screen, so that we apply our
emergency fix of wrapping to the next line immediately and printing
the character in the first two columns. Suppose they then backspace
twice, taking the cursor to the RHS and then the LHS of that
character. What should happen if they backspace a third time?
Our previous behaviour was to completely ignore the unusual situation,
and do the same thing we'd do in any other backspace from column 0:
anti-wrap the cursor to the last column of the previous line, leaving
it in the empty character cell that was skipped when the DW char
couldn't be printed in it.
But I think this isn't the best response, because it breaks the
invariant that printing N columns' worth of graphic characters and
then backspacing N times should leave the cursor on the first of those
characters. If I print "a가" (for example) and then backspace three
times, I want the cursor on the a, _even_ if weird line wrapping
behaviour happened somewhere in that sequence.
(Rationale: this helps naïve terminal applications which don't even
know what the terminal width is, and aren't tracking their absolute x
position. In particular, the simplistic line-based input systems that
appear in OS kernels and our own lineedit.c will want to emit a fixed
number of backspace-space-backspace sequences to delete characters
previously entered on to the line by the user. They still need to
check the wcwidth of the characters they're emitting, so that they can
BSB twice for a DW character or 0 times for a combining one, but it
would be *hugely* more awkward for them to ask the terminal where the
cursor is so that they can take account of difficult line wraps!)
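A minimal sketch of what such a simple line editor has to do when
deleting the most recently entered character, using the standard
wcwidth(3):

    #include <stdio.h>
    #include <wchar.h>

    /* Erase the character 'ch', just entered, from the display. */
    void erase_last_char(wchar_t ch)
    {
        int width = wcwidth(ch);    /* 2 for double-width, 0 for combining */
        for (int i = 0; i < width; i++)
            fputs("\b \b", stdout); /* backspace, space, backspace */
    }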
We already have the ability to _recognise_ this situation: on a line
that was wrapped in this unusual way, we set the LATTR_WRAPPED2 line
attribute flag, to prevent the empty rightmost column from injecting
an unwanted space into copy-pastes from the terminal. Now we also use
the same flag to cause the backspace control character to do something
interesting.
This was the fix that inspired me to start writing test_terminal,
because I knew it was touching a delicate area. However, in the course
of writing this fix and its tests, I encountered two (!) further bugs,
which I'll fix in followup commits!
It wasn't used for anything except in an assert statement, which was
triggered by the use of the scrlineptr() macro wrapper. That check
has now moved into scrlineptr() itself, via a helper function that
passes the line number of the scrlineptr() call site.
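The mechanism is roughly this (a sketch: the checking helper's real
name and argument list will differ):

    /* The macro forwards its own call site's source line to the
     * checker, so a failure can report where the bad request came from. */
    #define scrlineptr(y) lineptr_checked(term, (y), __LINE__)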
(Yes, this is introducing another modalfatalbox in terminal.c, much
like the dreaded line==NULL one that caused us so many headaches in
past decades. But the check in question was being done _already_ by
the assert in lineptr(), so this change shouldn't make it go off in
any _more_ circumstances - and now, if it does, it will at least give
us slightly more useful information about where the problem is!)
This continues the programme of UTF-8 support in authentication, begun
in commit f4519b6533 which arranged for console userpass prompts
to function in UTF-8 when the prompts_t asked them to. Since the new
line editing setup works properly when it _is_ in UTF-8 mode, I can
now also arrange that it puts the terminal into UTF-8 mode in the
right circumstances.
I've extended the applicability of the '-legacy-charset-handling' flag
introduced by the commit mentioned above, so that now it's not
specific to the console front end. Now you can give it to GUI PuTTY as
well, which restores the previous (wrong) behaviour of accepting
username and password prompt input in the main session's configured
character set. So if this change breaks someone's workflow, they
should be able to have it back.
This takes over from both the implementation in ldisc.c and the one in
term_get_userpass_input, which were imperfectly duplicating each
other's functionality. The new version should be more consistent
between the two already, and also, it means further improvements can
now be made in just one place.
In the course of this, I've restructured the inside of ldisc.c by
moving the input_queue bufchain to the other side of the translation
code in ldisc_send. Previously, ldisc_send received a string, an
optional 'dedicated key' indication (bodgily signalled by a negative
length) and an 'interactive' flag, translated that somehow into a
combination of raw backend output and specials, and saved the latter
in input_queue. Now it saves the original (string, dedicated flag,
interactive flag) data in input_queue, and doesn't do the translation
until the data is pulled back _out_ of the queue. That's because the
new line editing system expects to receive something much closer to
the original data format.
The term_get_userpass_input system is also substantially restructured.
Instead of ldisc.c handing each individual keystroke to terminal.c so
that it can do line editing on it, terminal.c now just gives the Ldisc
a pointer to its instance of the new TermLineEditor object - and then
ldisc.c can put keystrokes straight into that, in the same way it
would put them into its own TermLineEditor, without having to go via
terminal.c at all. So the term_get_userpass_input edifice is only
called back when the line editor actually delivers the answer to a
username or password prompt.
(I considered not _even_ having a separate TermLineEditor for password
prompts, and just letting ldisc.c use its own. But the problem is that
some of the behaviour differences between the two line editors are
deliberate, for example the use of ^D to signal 'abort this prompt',
and the use of Escape as an alternative line-clearing command. So
TermLineEditor has a flags word that allows ldisc and terminal to set
it up differently. Also this lets me give the two TermLineEditors a
different vtable of callback functions, which is a convenient way for
terminal.c to get notified when a prompt has been answered.)
The new line editor still passes all the tests I wrote for the old
one. But it already has a couple of important improvements, both in
the area of UTF-8 handling:
Firstly, when we display a UTF-8 character on the terminal, we check
with the terminal how many character cells it occupied, and then if
the user deletes it again from the editing buffer, we can emit the
right number of backspace-space-backspace sequences. (The old ldisc
line editor incorrectly assumed all Unicode characters had terminal
width 1, partly because its buffer was byte- rather than character-
oriented and so it was more than enough work just finding where the
character _start_ was.)
Secondly, terminal.c's userpass line editor would never emit a byte in
the 80-BF range to the terminal at all, which meant that nontrivial
UTF-8 characters always came out as U+FFFD blobs!
Horizontal scroll events aren't generated by the traditional mouse
wheel, but they can be generated by trackpad gestures, though this
isn't always configured on.
The cross-platform and Windows parts of this patch are due to
Christopher Plewright; I added the GTK support.
This was requested by a downstream of the code, who wanted to change
the time/space tradeoff in the terminal. I currently have no plans to
change this setting for upstream PuTTY, although there is a cmake
option for it just to make testing it easy.
To avoid sprinkling ifdefs over the whole terminal code, the strategy
is to keep the separate type 'compressed_scrollback_line', and turn it
into a typedef for a 'termline *'. So compressline() becomes almost
trivial, and decompressline() even more so.
Memory management is the fiddly part. To make this work sensibly on
both sides, I've broken up each of compressline() and decompressline()
into two versions, one of which takes ownership of (and logically
speaking frees) its input, and the other doesn't. So at call sites
where a function was followed by a free, it's now calling the
'and_free' version of the function, and where the input object was
reused afterwards, it's calling the 'no_free' version. This means that
in different branches of the #if, I can make one function call the
other or vice versa, and no call site is stuck with having to do
things in a more roundabout way than necessary.
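For example, in the branch where compression really happens, the
relationship between the two variants is simply this (a sketch;
function names follow the description above rather than the real
source):

    compressed_scrollback_line *compressline_and_free(termline *line)
    {
        compressed_scrollback_line *cline = compressline_no_free(line);
        freetermline(line);    /* the _and_free variant owns its input */
        return cline;
    }

while in the no-compression branch it's more natural to define the
_no_free variant in terms of the other (e.g. by duplicating its input
first).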
The freeing of the _return_ value from decompressline() is handled for
us, because termlines already have a 'temporary' flag which is set
when they're returned from the decompressor, and anyone receiving a
termline from lineptr() calls unlineptr() when they're finished with
it, which will _conditionally_ free it, depending on that 'temporary'
flag. So in the new mode, 'temporary' is never set at all, and all
those unlineptr() calls do nothing.
However, we also still need to free compressed lines properly when
they're actually being thrown away (scrolled off the top of the
scrollback, or cleaned up in term_free), and for that, I've made a new
special-purpose free_compressed_line() function.
From https://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h3-Any-event-tracking:
Any-event mode is the same as button-event mode, except that all motion
events are reported, even if no mouse button is down. It is enabled by
specifying 1003 to DECSET.
Normally the front ends only report mouse events when buttons are
pressed, so we introduce a MA_MOVE event with MBT_NOTHING set to
indicate such a mouse movement.
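So an application turns this mode on and off with the standard
DECSET/DECRST pair:

    ESC[?1003h    enable any-event mouse tracking
    ESC[?1003l    disable it again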
Previously, encode_utf8() wrote its output into an ordinary char
buffer, and returned the
number of bytes it had written. But three out of the four call sites
immediately chucked the resulting bytes into a BinarySink anyway. The
fourth, in windows/unicode.c, really is writing into successive
locations of a fixed-size buffer - but we can make that into a
BinarySink too, using the buffer_sink added in the previous commit.
So now encode_utf8() is renamed put_utf8_char, and the call sites all
look simpler than they started out.
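The typical shape of the change at a call site is something like this
(illustrative, not a literal diff hunk):

    /* before: encode into a temporary array, then shovel it onwards */
    char buf[6];
    int len = encode_utf8(buf, wc);
    put_data(bs, buf, len);

    /* after: write straight into the BinarySink */
    put_utf8_char(bs, wc);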
In the course of recent refactorings I noticed a couple of cases where
we were doing old-fashioned preallocation of a char array with some
conservative maximum size, then writing into it via *p++ or similar
and hoping we got the calculation right.
Now we have strbuf and dupcat, so we shouldn't ever have to do that.
Fixed as many cases as I could find by searching for allocations of
the form 'snewn(foo, char)'.
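The typical transformation looks like this (an illustrative example
rather than one of the actual hunks):

    /* before: guess a big-enough size and hope */
    char *buf = snewn(strlen(user) + strlen(host) + 2, char);
    sprintf(buf, "%s@%s", user, host);

    /* after: let dupcat do the allocation */
    char *buf = dupcat(user, "@", host);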
Particularly worth a mention was the Windows GSSAPI setup code, which
was directly using the Win32 Registry API, and looks much more legible
using the windows/utils/registry.c wrappers. (But that was why I had
to enhance them in the previous commit so as to be able to open
registry keys read-only: without that, the open operation would
actually fail on this key, which is not user-writable.)
Also unix/askpass.c, which was doing a careful reallocation of its
buffer to avoid secrets being left behind in the vacated memory -
which is now just a matter of ensuring we called strbuf_new_nm().
This patch fixes a few other whitespace and formatting issues which
were pointed out by the bulk-reindent or which I spotted in passing,
some involving manual editing to break lines more nicely.
I think the weirdest hunk in here is the one in windows/window.c
TranslateKey() where _half_ of an assignment statement inside an 'if'
was on the same line as the trailing paren of the if condition. No
idea at all how that one managed to happen!
I think a lot of these were inserted by a prior run through GNU indent
many years ago. I noticed in a more recent experiment that that tool
doesn't always correctly distinguish which instances of 'id * id' are
pointer variable declarations and which are multiplications, so it
spaces some of the former as if they were the latter.
My bulk indentation check also turned up a lot of cases where a run-on
function call or if statement didn't have its later lines aligned
correctly relative to the open paren.
I think this is quite easy to do by getting things out of
sync (editing the first line of the function call and forgetting to
update the rest, perhaps even because you never _saw_ the rest during
a search-replace). But a few didn't quite fit into that pattern, in
particular an outright misleading case in unix/askpass.c where the
second line of a call was aligned neatly below the _wrong_ one of the
open parens on the opening line.
Restored as many alignments as I could easily find.
In several pieces of development recently I've run across the
occasional code block in the middle of a function which suddenly
switched to 2-space indent from this code base's usual 4. I decided I
was tired of it, so I ran the whole code base through a re-indenter,
which made a huge mess, and then manually sifted out the changes that
actually made sense from that pass.
Indeed, this caught quite a few large sections with 2-space indent
level, a couple with 8, and a handful of even weirder things like 3
spaces or 12. This commit fixes them all.
In the 'xterm 216+' function key mode, a function key pressed with a
combination of Shift, Ctrl and Alt has its usual sequence like
ESC[n~ (for some integer n) turned into ESC[n;m~ where m-1 is a 3-bit
bitmap of currently pressed modifier keys.
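For example, with the usual xterm encoding (Shift=1, Alt=2, Ctrl=4,
so m = 1 + bitmap):

    F5 on its own:    ESC[15~
    Ctrl+Shift+F5:    ESC[15;6~    (m = 1 + (1|4) = 6)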
This mode now also applies to the keys on the small keypad above the
arrow keys (Ins, Home, PgUp etc). If xterm 216+ mode is selected,
those keys are modified in the same way as the function keys.
As with the function keys, this doesn't guarantee that PuTTY will
_receive_ any particular shifted key of this kind, and not repurpose
it. Just as Alt+F4 still closes the window (at least on Windows)
rather than sending a modified F4 sequence, Shift+Ins will still
perform a paste action rather than sending a modified Ins sequence,
Shift-PgUp will still scroll the scrollback, etc. But the keys not
already used by PuTTY for other purposes should now have their
modern-xterm behaviour in modern-xterm mode.
Thanks to H.Merijn Brand for developing and testing a version of this
patch.
Instead of maintaining a single sparse table mapping Unicode to the
currently selected code page, we now maintain a collection of such
tables mapping Unicode to any code page we've so far found a need to
work with, and we add code pages to that list as necessary, and never
throw them away (since there are a limited number of them).
This means that the wc_to_mb family of functions are effectively
stateless: they no longer depend on a 'struct unicode_data'
corresponding to the current terminal settings. So I've removed that
parameter from all of them.
This fills in the missing piece of yesterday's commit a216d86106:
now wc_to_mb too should be able to handle internally-implemented
character sets, by hastily making their reverse mapping table if it
doesn't already have it.
(That was only a _latent_ bug, because the only use of wc_to_mb in the
cross-platform or Windows code _did_ want to convert to the currently
selected code page, so the old strategy worked in that case. But there
was no protection against an unworkable use of it being added later.)
In the changes around commit 420fe75552, I made the terminal
suspend output processing while it waited for a term_size() callback
in response to a resize request. Because on X11 there are unusual
circumstances in which you never receive that callback, I also added a
last-ditch 5-second timeout, so that eventually we'll resume terminal
output processing regardless.
But the timeout lives in terminal.c, in the cross-platform code. This
is pointless on Windows (where resize processing is synchronous, so we
always finish it before the timer code next gets called anyway), but I
decided it was easier to keep the whole mechanism in terminal.c in the
absence of a good reason not to.
Now I've found that reason. We _also_ generate window resizes locally
to the GTK front end, in response to the key combinations that change
the font size, and _those_ still have an asynchrony problem.
So, to begin with, I'm refactoring the request_resize system so that
now there's an explicit callback from the frontend to the terminal to
say 'Your resize request has now been processed, whether or not you've
received a term_size() call'. On Windows, this simplifies matters
greatly because we always know exactly when to call that, and don't
have to keep a 'have we called term_size() already?' flag. On GTK, the
timing complexity previously in terminal.c has moved into window.c.
No functional change (I hope). The payoff will be in the next commit.
A user reports that the xterm OSC 112 sequence (reset cursor colour)
is sometimes sent as simply OSC 112 BEL, rather than OSC 112 ; BEL.
When xterm parses this, the BEL still acts as an OSC terminator, even
though it appears before the separating semicolon that shifts into the
'absorb the notional command string' state.
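In byte terms, the two forms in question are:

    ESC ] 112 ; BEL    i.e. "\033]112;\007"  (the usual form)
    ESC ] 112 BEL      i.e. "\033]112\007"   (the abbreviated form)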
PuTTY doesn't support that sequence at all. But currently, the way it
doesn't support it is by treating the BEL completely normally, so that
you get an annoying beep when a client application sends that
abbreviated sequence.
Now we recognise all the OSC terminator sequences even in the OSC
setup termstates, as well as the final OSC_STRING state. That goes
equally for BEL, ST in the form of ESC \, ST in the form of
single-byte 0x9C, and ST in the UTF-8 encoding.
I got a pterm into a stuck state this morning by an accidental mouse
action. I'd intended to press Ctrl + right-click to pop up the context
menu, but I accidentally pressed down the left button first, starting
a selection drag, and then while the left button was still held down,
pressed down the right button as well, triggering the menu.
The effect was that the context menu appeared while term->selstate was
set to DRAGGING, in which state terminal output is suppressed, and
which is only unset by a mouse-button release event. But then that
release event went to the popup menu, and the terminal window never
got it. So the terminal stayed stuck forever - or rather, until I
guessed the cause and did another selection drag to reset it.
This happened to me on GTK, but once I knew how I'd done it, I found I
could reproduce the same misbehaviour on Windows by the same method.
Added a simplistic fix, on both platforms, that cancels a selection
drag if the popup menu is summoned part way through it.
A user points out that this has regressed since 0.76, probably when I
reorganised the keyboard control-sequence formatting into centralised
helper functions in terminal.c.
The SCO function keys should behave differently when you press Shift
or Ctrl or both. For example, F1 should generate ESC[M bare, ESC[Y
with Shift, ESC[k with Ctrl, ESC[w with Shift+Ctrl. But in fact, Shift
was having no effect, so those tests would give ESC[M twice and ESC[k
twice.
That was because I was setting 'shift = false' for all function key
types except FUNKY_XTERM_216, after modifying the derived 'index'
value. But the SCO branch of the code doesn't use 'index' (it wouldn't
have the right value in any case), so the sole effect was to forget
about Shift. Easily fixed by disabling that branch for FUNKY_SCO too.
(cherry picked from commit aa01530488)
If term_get_userpass_input is called with term->ldisc not yet set up,
then we had a special-case handler that returns an error message - but
it does it via the same subroutine that returns normal results, which
also turns off the prompt callback in term->ldisc! Need an extra NULL
check in that subroutine. Thanks Coverity.
All the seat functions that request an interactive prompt of some kind
to the user - both the main seat_get_userpass_input and the various
confirmation dialogs for things like host keys - were using a simple
int return value, with the general semantics of 0 = "fail", 1 =
"proceed" (and in the case of seat_get_userpass_input, answers to the
prompts were provided), and -1 = "request in progress, wait for a
callback".
In this commit I change all those functions' return types to a new
struct called SeatPromptResult, whose primary field is an enum
replacing those simple integer values.
The main purpose is that the enum has not three but _four_ values: the
"fail" result has been split into 'user abort' and 'software abort'.
The distinction is that a user abort occurs as a result of an
interactive UI action, such as the user clicking 'cancel' in a dialog
box or hitting ^D or ^C at a terminal password prompt - and therefore,
there's no need to display an error message telling the user that the
interactive operation has failed, because the user already knows,
because they _did_ it. 'Software abort' is from any other cause, where
PuTTY is the first to know there was a problem, and has to tell the
user.
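So the enum ends up looking something like this (a sketch; the real
declaration may differ in detail):

    typedef enum SeatPromptResultKind {
        SPRK_INCOMPLETE,    /* still in progress; wait for a callback */
        SPRK_USER_ABORT,    /* user cancelled; no error message needed */
        SPRK_SW_ABORT,      /* software failure; error message available */
        SPRK_OK,            /* success; any prompt answers are filled in */
    } SeatPromptResultKind;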
We already had this 'user abort' vs 'software abort' distinction in
other parts of the code - the SSH backend has separate termination
functions which protocol layers can call. But we assumed that any
failure from an interactive prompt request fell into the 'user abort'
category, which is not true. A couple of examples: if you configure a
host key fingerprint in your saved session via the SSH > Host keys
pane, and the server presents a host key that doesn't match it, then
verify_ssh_host_key would report that the user had aborted the
connection, and feel no need to tell the user what had gone wrong!
Similarly, if a password provided on the command line was not
accepted, then (after I fixed the semantics of that in the previous
commit) the same wrong handling would occur.
So now, those Seat prompt functions too can communicate whether the
user or the software originated a connection abort. And in the latter
case, we also provide an error message to present to the user. Result:
in those two example cases (and others), error messages should no
longer go missing.
Implementation note: to avoid the hassle of having the error message
in a SeatPromptResult being a dynamically allocated string (and hence,
every recipient of one must always check whether it's non-NULL and
free it on every exit path, plus being careful about copying the
struct around), I've instead arranged that the structure contains a
function pointer and a couple of parameters, so that the string form
of the message can be constructed on demand. That way, the only users
who need to free it are the ones who actually _asked_ for it in the
first place, which is a much smaller set.
(This is one of the rare occasions that I regret not having C++'s
extra features available in this code base - a unique_ptr or
shared_ptr to a string would have been just the thing here, and the
compiler would have done all the hard work for me of remembering where
to insert the frees!)
The return value of term_data() is used as the return value from the
GUI-terminal versions of the Seat output method, which means backends
will take it to be the amount of standard-output data currently
buffered, and exert back-pressure on the remote peer if it gets too
big (e.g. by ceasing to extend the window in that particular SSH-2
channel).
Historically, as a comment in term_data() explained, we always just
returned 0 from that function, on the basis that we were processing
all the terminal data through our terminal emulation code immediately,
and never retained any of it in the buffer at all. If the terminal
emulation code were to start running slowly, then it would slow down
the _whole_ PuTTY system, due to single-threadedness, and
back-pressure of a sort would be exerted on the remote by it simply
failing to get round to reading from the network socket. But by the
time we got back to the top level of term_data(), we'd have finished
reading all the data we had, so it was still appropriate to return 0.
That comment is still correct if you're thinking about the limiting
factor on terminal data processing being the CPU usage in term_out().
But now that's no longer the whole story, because sometimes we leave
data in term->inbuf without having processed it: during drag-selects
in the terminal window, and (just introduced) while waiting for the
response to a pending window resize request. For both those reasons,
we _don't_ always have a buffer size of zero when we return from
term_data().
So now that hole in our buffer size management is filled in:
term_data() returns the true size of the remaining unprocessed
terminal output, so that back-pressure will be exerted if the terminal
is currently not consuming it. And when processing resumes and we
start to clear our backlog, we call backend_unthrottle to let the
backend know it can relax the back-pressure if necessary.
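So term_data() itself becomes very simple, roughly like this (a
sketch, with the parameter list simplified and the helper that kicks
off processing given an illustrative name):

    size_t term_data(Terminal *term, const void *data, size_t len)
    {
        bufchain_add(&term->inbuf, data, len);
        term_added_data(term);  /* queue a callback to run term_out() */
        return bufchain_size(&term->inbuf);  /* honest backlog size */
    }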
This is the payoff from the last few commits of refactoring. It fixes
the following race-condition bug in terminal application redraw:
* server sends a window-resizing escape sequence
* terminal requests a window resize from the front end
* server sends further escape sequences to perform a redraw of some
full-screen application, which assume that the window resize has
occurred and the window is already its new size
* terminal processes all those sequences in the context of the old
window size, while the front end is still thinking
* window resize completes in the front end and term_size() tells the
terminal it now has its new size, but it's too late, the screen
redraw has made a total mess.
(Perhaps the server might even send its window resize + followup
redraw all in one SSH packet, so that it's all queued in term->inbuf
in one go.)
As far as I can see, handling of this case has been broken more or
less forever in the GTK frontend (where window resizes are inherently
asynchronous due to the way X11 works, and we've never done anything
to compensate for that). On Windows, where window size is changed via
SetWindowPos which is synchronous, it used to work, but broke in
commit d74308e90e (i.e. between 0.74 and 0.75), which made all
the ancillary window updates run on the same delayed-action timer as
ordinary text display.
So, it's time to fix it, and I think now I should be able to fix it in
GTK as well as on Windows.
Now, as soon as we've set the term->win_resize_pending flag (in
response to a resize escape sequence), the next return to the top of
the main loop in term_out will terminate output processing early,
leaving any further terminal data still in the term->inbuf bufchain.
Once we get a term_size() callback from the front end telling us our
new size, we reset term->win_resize_pending, which unblocks output
processing again, and we also queue a toplevel callback to have
another try at term_out() so that it will be unblocked promptly.
To implement this I've changed term->win_resize_pending from a bool
into a three-state enumeration, so that we can tell the difference
between 'pending' in the sense of not yet having sent our resize
request to the frontend, and in the sense of waiting for the frontend
to reply. That way, a window resize from the GUI user at least won't
be mistaken for the response to our resize request if it arrives in
the former state. (It can still be mistaken for one in the latter
case, but if the user is resizing the window at the same time as the
server-side application is doing critically size-dependent redrawing,
I don't think there can be any reasonable expectation of nothing going
wrong.)
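So the flag becomes a small three-state enum, along these lines
(sketch; names approximate):

    typedef enum {
        WIN_RESIZE_NO,           /* no resize request in flight */
        WIN_RESIZE_NEED_SEND,    /* resize wanted, not yet sent to the frontend */
        WIN_RESIZE_AWAIT_REPLY,  /* sent; waiting for the frontend's callback */
    } WinResizePending;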
As mentioned in the previous commit, some failure modes under X11 (in
particular the window manager process getting wedged in some way) can
result in no response being received to a ConfigureWindow request. In
that situation, it seems to me that we really _shouldn't_ sit there
waiting forever - perhaps it's technically the WM's fault and not
ours, but what kind of X window are you most likely to want to use to
do emergency WM repair? A terminal window, of course, so it would be
exceptionally unhelpful to make any terminal window stop working
completely in this situation! Hence, there's a fallback timeout in
terminal.c, so that if we don't receive a response in _too_ long,
we'll assume one is not forthcoming, and resume processing terminal
data at the old window size. The fallback timeout is set to 5 seconds,
following existing practice in libXt (DEFAULT_WM_TIMEOUT).
There's no actual need to copy the data from term->inbuf into a local
variable inside term_out(). We can simply store a pointer and length,
and use the data _in situ_ - as long as we remember how much of it
we've used, and bufchain_consume() it when the routine exits.
Getting rid of that awkward and size-limited local array should
marginally improve performance. But also, it opens up the possibility
to suddenly suspend handling of terminal data and leave everything not
yet processed in the bufchain, because now we never remove anything
from the bufchain until _after_ it's been processed, so there's no
need to awkwardly push the unused segment of localbuf[] back on to the
front of the bufchain if we need to do that.
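The outer loop of term_out() therefore takes on roughly this shape
(a sketch; the exact bufchain 'peek' call is paraphrased):

    while (bufchain_size(&term->inbuf) > 0) {
        void *data;
        size_t len, used = 0;
        bufchain_prefix(&term->inbuf, &data, &len);  /* peek, no copy */

        /* ... process bytes from data[0..len), incrementing 'used' ... */

        bufchain_consume(&term->inbuf, used);  /* remove only what we used */

        /* and if something (e.g. a pending resize) wants processing
         * suspended, we can simply break here, leaving the rest queued */
    }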
NFC, but as usual, I have a plan to use the new capability in a
followup commit.
This tiny refactoring makes a convenient function for setting all the
'pending' flags and triggering a callback for the next window update.
This saves a bit of code, but that's not really the main point (or
else I'd have done the same to all the other similar things like
window moves). The point is that in a future commit I'm going to want
to do an extra thing on every server-controlled window resize, and
this refactoring gives me a single place to put that extra action.
This tiny refactoring replaces three identical checks at call sites,
not all as well commented as each other, with a check in just one
place with the best of the three comments.