We weren't building _all_ the icons in true-colour mode, because most
don't change anyway. The installer ones do, so let's build them. Works
better with the preview page.
A user reported recently that if you connect to a Telnet server via a
proxy that requires authentication, and enter the auth details
manually in the PuTTY terminal window, then the entire Telnet session
is shown with trust sigils to its left.
This happens because telnet.c calls seat_set_trust_status(false) as
soon as it's called new_connection() to make the Socket. But the
interactive proxy authentication dialogue hasn't happened yet, at that
point. So the proxy resets the trust status to true and asks for a
username and password, and then nothing ever resets it to false,
because telnet.c thought it had already done that.
The solution is to defer the Telnet backend's change of trust status
to when we get the notification that the socket is properly connected,
which arrives via plug_log(PLUGLOG_CONNECT_SUCCESS).
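As a rough sketch of the shape of the fix (with stand-in names rather
than the real Plug/Seat signatures from the PuTTY source), the
trust-status change now lives in the connection-event handler instead
of right after new_connection():

    #include <stdbool.h>

    typedef struct Seat Seat;               /* opaque, as in PuTTY */
    typedef enum { CONN_TRYING, CONN_FAILED, CONN_SUCCESS } ConnEvent;
    void mark_seat_untrusted(Seat *seat);   /* stand-in for seat_set_trust_status(seat, false) */

    static void backend_connection_event(Seat *seat, ConnEvent ev)
    {
        if (ev == CONN_SUCCESS) {
            /* Any interactive proxy prompt has finished by now, so
             * nothing can flip the trust status back to true behind
             * our back after this point. */
            mark_seat_untrusted(seat);
        }
    }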
The same bug occurs in raw.c and supdup.c, but not in rlogin.c,
because Rlogin has an initial authentication exchange known to the
protocol, and already delays resetting the trust status until after
that has concluded.
Colin Watson reported that a build failure occurred in the AArch64
Debian build of PuTTY 0.83:
gcc now defaults to enabling branch protection using AArch64 pointer
authentication, if the target architecture version supports it.
Debian's base supported architecture does not, but Armv8.4-A does. So
when I changed the compile flags for enable_dit.c to add
-march=armv8.4-a, it didn't _just_ allow me to write the 'msr dit, %0'
instruction in my asm statement; it also unexpectedly turned on
pointer authentication in the containing function, which caused a
SIGILL when running on a pre-Armv8.4-A CPU, because although the code
correctly skipped the instruction that set DIT, it was already inside
enable_dit() at that point and couldn't avoid going through the
unsupported 'retaa' instruction which tries to check an auth code on
the return address.
An obvious approach would be to add -mbranch-protection=none to the
compile flags for enable_dit.c. Another approach is to leave the
_compiler_ flags alone, and change the architecture in the assembler,
either via a fiddly -Wa,... option or by putting a .arch directive
inside the asm statement. But both have downsides. Turning off branch
protection is fine for the Debian build, but has the unwanted side
effect of turning it off (in that one function) even in builds
targeting a later architecture which _did_ want branch protection. And
changing the assembler's architecture risks changing it _down_ instead
of up, again perhaps invalidating other instructions generated by the
compiler (like if some later security feature is introduced that gcc
also wants to turn on by default).
So instead I've taken the much simpler approach of not bothering to
change the target architecture at all, and instead generating the move
into DIT by hardcoding its actual instruction encoding. This meant I
also had to force the input value into a specific register, but I
don't think that does any harm (not _even_ wasting an extra
instruction in codegen). Now we should avoid interfering with any
security features the compiler wants to turn on or off: all of that
should be independent of the instruction I really wanted.
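For illustration only (a sketch of the technique, not the verbatim
enable_dit.c), the result looks something like this: the .inst
directive emits the fixed encoding of 'msr dit, x0', and the register
constraint pins the operand into x0 so that the hardcoded encoding is
the right one. Bit 24 being the DIT bit in that register's layout is
my reading of the architecture, not something taken from the commit.

    #include <stdint.h>

    static inline void set_dit_by_encoding(void)
    {
    #if defined(__aarch64__)
        /* 0xd51b42a0 is the encoding of 'msr dit, x0'. No -march bump
         * is needed, so the compiler gets no chance to add pointer
         * authentication to this function. */
        register uint64_t dit_bit __asm__("x0") = (uint64_t)1 << 24;
        __asm__ volatile(".inst 0xd51b42a0" : : "r"(dit_bit));
    #endif
    }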
We used to have a practice of \IM-ing every command-line option for the
index, but haven't kept it up.
Add these for all existing indexed command-line options, plus some
related tidying.
(That's Halibut's non-breaking hyphen.)
Triggered by noticing that the changes in 54f6fefe61 happened to come
out badly in the text-only rendering; it turned out there were many
more instances in the main docs where non-breaking hyphens would help.
After all these years, this checklist is _still_ hard for me to get
right. In the 0.83 runup this month, I prepared everything about the
RC build in advance, but nothing about the announcements, website
updates etc, and had to do all of that on release day.
So I've completely removed the section "Preparing to make the
release", which was ambiguous about whether it's done in advance or on
the day. Now all the text parts (website, wishlist, announcements) are
folded into the "make a release candidate" section, in the hope that
I'll remember to do them all at the same time, which should mean
- people have a few days to review the text _and_ test the RC build
- because they go together, I also remember to revise the text if a
new RC build is needed (e.g. mention whatever extra fix it has).
The "actual release procedure" section is now down to _only_ the
things I have to do on the day, which is basically uploading
everything, going live, and communicating the release.
Spotted by Coverity: we've just allocated a strbuf to hold the output
of the classical half of the hybrid key exchange, but if that output
isn't generated due to some kind of failure, we forgot to free the
strbuf on exit.
The wmemchr machinery was introduced in commit e7acb9f6968d482, as a
side effect of that commit wanting to call wmemchr to find a NUL wide
character in the buffer returned from GetDlgItemTextW. But the
previous commit has
superseded that code, so now we don't use wmemchr in this code base
any more. Remove the machinery that provides it, saving a now-useless
cmake configure-time check.
This rewrite, due to SATO Kentaro, uses GetWindowTextLength (which I
hadn't known about) to find the correct size to allocate the buffer
the first time, avoiding the need to keep growing it until a call to
GetDlgItemText doesn't have to truncate the result.
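A simplified sketch of the idea (not the exact PuTTY helper, which has
more error handling and uses PuTTY's own allocation wrappers):

    #include <windows.h>
    #include <stdlib.h>

    /* Ask the control for its text length up front, allocate once,
     * and fetch the text with a single GetDlgItemTextW call. */
    static wchar_t *get_dlgitem_text_alloc(HWND dlg, int id)
    {
        int len = GetWindowTextLengthW(GetDlgItem(dlg, id));
        wchar_t *buf = malloc(((size_t)len + 1) * sizeof(wchar_t));
        if (buf)
            GetDlgItemTextW(dlg, id, buf, len + 1);
        return buf;   /* caller frees */
    }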
[SGT: the helper function do_filereq_w expects its filename size to be
in characters, not bytes, because it's used both as an index into the
wchar_t buffer and as nMaxFile in the OPENFILENAMEW structure, which
is documented as being measured in characters. So at the call
site it should be measured via lenof rather than sizeof. This patch
has done the same with the char version, which makes no functional
difference but keeps the code more consistent.]
A user reported that the following sequence of events leads to Pageant
crashing:
- load an encrypted key into Pageant for decryption later
- attempt to use the key, so that Pageant prompts for the passphrase
- before entering the passphrase, abort the attempt to use the
key (e.g. by closing the PuTTY that was trying to use it)
- now enter the passphrase at the Pageant prompt, once the need for
it has gone away.
Once the key is decrypted, unblock_requests_for_key() goes through the
linked list of blocked PageantSignOp attached to the private key
record it's just decrypted, and tries to unblock them. The
PageantSignOp belonging to the aborted Pageant request is still linked
on that list, which it shouldn't be, because it's also been freed by
pageant_unregister_client when that traversed the separate linked list
of PageantAsyncOp associated with that client connection. So the
private key's list of blocked requests contained a stale pointer.
Now PageantSignOp's implementation of the PageantAsyncOp free method
makes sure to unlink the signop from any list it's on before freeing
it.
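The shape of that fix, with stand-in structures (the real types are
PageantAsyncOp and PageantSignOp in pageant.c), is roughly:

    #include <stdlib.h>

    typedef struct BlockedNode BlockedNode;
    struct BlockedNode { BlockedNode *prev, *next; };

    typedef struct SignOp {
        BlockedNode node;  /* links this op onto a key's blocked list */
        /* ... the rest of the request state ... */
    } SignOp;

    static void signop_free(SignOp *so)
    {
        /* The missing step: take the op off whatever list it is still
         * on, so unblock_requests_for_key() can never walk on to
         * freed memory. */
        if (so->node.prev)
            so->node.prev->next = so->node.next;
        if (so->node.next)
            so->node.next->prev = so->node.prev;
        so->node.prev = so->node.next = NULL;
        free(so);
    }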
A zero-length send to ldisc can come up, for example, if the terminal
receives a ^E character and has an empty answerback string configured;
this change adds an early return for that case.
Without this early return, we append zero bytes to ldisc's ordinary
bufchain input_queue, which is harmless; but we also append a
zero-length record to ldisc's list of (type, length) chunks describing
which parts of the input bufchain should be treated as interactive or
as coming from special dedicated keystrokes (e.g. telling Return apart
from ^M).
That zero-length record is not _immediately_ harmful, but when the
user next presses a key, it will have a different type from the empty
answerback data, so that another chunk record is appended to the list
after the zero-length one. And then ldisc_input_queue_callback goes
into a tight loop, because it keeps trying to consume bytes from the
start of the input bufchain but bounding the size at the length of the
first (type, length) chunk, which is zero. So it consumes 0 bytes,
finds the bufchain still isn't empty, and loops round again.
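In outline, the fix is just an early return at the top of ldisc's
input entry point (helper names here are stand-ins for the two appends
described above):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Ldisc Ldisc;  /* opaque, as in PuTTY */
    void append_to_input_queue(Ldisc *, const void *, size_t);  /* stand-in */
    void append_chunk_record(Ldisc *, size_t, bool);            /* stand-in */

    void ldisc_send_outline(Ldisc *ldisc, const void *buf, size_t len,
                            bool interactive)
    {
        if (len == 0)
            return;  /* e.g. ^E with an empty answerback: nothing to record */
        append_to_input_queue(ldisc, buf, len);
        append_chunk_record(ldisc, len, interactive);
    }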
When retrieving Unicode text from an edit box in the GUI configurer,
we were using plain memchr() to look for a terminating NUL. But of
course you have to use wmemchr() to look for a UTF-16 NUL, or else
memchr() will generate a false positive on the UTF-16 version of (at
least) any ASCII character!
(I also have to provide a fallback implementation of wmemchr for the
w32old builds, which don't have it in the libc they build against.
It's as simple as possible, and we use the libc version where
possible.)
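The fallback is nothing more exciting than a loop along these lines
(a sketch, not necessarily the exact code):

    #include <stddef.h>
    #include <wchar.h>

    /* Minimal wmemchr workalike for libcs that lack it. */
    wchar_t *wmemchr_fallback(const wchar_t *s, wchar_t c, size_t n)
    {
        for (; n > 0; s++, n--)
            if (*s == c)
                return (wchar_t *)s;
        return NULL;
    }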
This is a combined cherry-pick of three consecutive commits from main:
b088d77d580b8f7 GTK: hard-code some last-ditch fallback fonts.
7f4cccde2ae53c0 GTK: fixes to the previous font fallback patch.
6155365076c47a8 GTK: switch the default to client-side fonts.
The combined effect is that now PuTTY's built-in default font is
client-side rather than server-side (advantaging Wayland and
disadvantaging legacy GTK1 builds, which seems like a sensible
tradeoff these days), and also, if the configured main font can't be
found, we'll try falling back to either the client- or server-side
default (whichever is available) before giving up completely and
whinging on standard error.
"server:fixed" was a good default when GTK1 was common and non-X11
environments were rare. Now it's the other way round - Wayland is very
common and the GTK1 configuration of PuTTY is legacy - so it's time to
make the default GTK font a client-side one.
Of course, anyone with an existing saved session (including Default
Settings) won't be affected by this change; it only helps new users
without an existing ~/.putty at all. That's why we _also_ need the
fallbacks introduced by the previous couple of commits. But we can at
least start making it sensible for new users.
(I considered keeping the #if, and switching it round so that it tests
GTK_CHECK_VERSION(2,0,0) rather than NOT_X_WINDOWS, i.e. selects the
client-side default whenever client-side fonts _are_ available,
instead of only when server-side fonts _aren't_. That way, in GTK1
builds, the Conf default font would _still_ be "server:fixed". But I
think this is firstly too marginal to worry about, and secondly, it's
more futureproof to make the default the same everywhere: if anyone
still stuck on a GTK1 environment later manages to update it, then
their saved settings are less likely to have had a legacy thing
written into them. And the GTK1 build will still run out of the box
because of the last-ditch fallback mechanism I've just added.)
I'd forgotten that I'd already chosen a default client-side font, for
NOT_X_WINDOWS builds. I should have made the two defaults match! Now
both default font names are defined in the header file.
If the user's choice of fonts can't be instantiated during initial
terminal-window setup, then we now automatically try two fallback
options, "client:Monospace 10" and "server:fixed", and only give a
fatal error if _no_ option allows us to open a terminal window.
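In outline, the fallback amounts to trying a short fixed list after
the configured font (a sketch with a stand-in instantiation function;
the real code runs during the terminal-window setup described above):

    #include <stddef.h>

    typedef struct Font Font;
    Font *try_instantiate_font(const char *name);   /* stand-in */

    static Font *open_terminal_font(const char *configured)
    {
        static const char *const fallbacks[] = {
            "client:Monospace 10", "server:fixed",
        };
        Font *f = try_instantiate_font(configured);
        for (size_t i = 0; !f && i < sizeof(fallbacks)/sizeof(*fallbacks); i++)
            f = try_instantiate_font(fallbacks[i]);
        return f;  /* NULL here is the only case that becomes a fatal error */
    }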
Previously, on Wayland, PuTTY and pterm with default configuration
would completely fail to open a terminal window at all, because our
default font configuration is still the trad X11 "server:fixed", and
on Wayland, X11-style server-side bitmap fonts don't exist at all.
Conversely, in the GTK1 build of PuTTY which we're still supporting,
_client-side_ fonts aren't supported, so if a user had configured one
in their normal PuTTY configuration, then the GTK1 version would
similarly fail to launch.
Now both of those cases should work, because the fallbacks include a
client-side font _and_ a server-side one, and I hope that any usable
Pango system will make "Monospace" map to _some_ locally available
font, and that any remotely sensible X server has 'fixed'.
I think it would be even better if there was a mechanism for the Conf
to specify a fallback list of fonts. For example, this might be
specified via a new multifont prefix along the lines of
choices:client:Monospace 10:server:fixed
with the semantics that the "choices:" prefix means that the rest of
the string is split up at every other colon to find a list of fonts to
try to make. Then we could not only set PuTTY's default to a list of
possibilities likely to find a usable font everywhere, but also, users
could configure their own list of preferred fallbacks.
But I haven't thought of a good answer to the design question of what
should happen if a Conf font setting looks like that and the user
triggers the GUI font selector! (Also, you'd need to figure out what
happened if a 'choices:' string had another 'choices' in it...)
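To make the 'split at every other colon' semantics concrete, here is a
tiny illustration of how a hypothetical 'choices:' value (with that
prefix already stripped) might be broken into font names. None of this
exists in PuTTY today:

    #include <stdio.h>
    #include <string.h>

    /* Each entry is "prefix:value", so every second colon separates
     * one font name from the next. */
    static void list_choices(const char *spec)
    {
        const char *p = spec;
        while (*p) {
            const char *first = strchr(p, ':');
            const char *second = first ? strchr(first + 1, ':') : NULL;
            size_t len = second ? (size_t)(second - p) : strlen(p);
            printf("font option: %.*s\n", (int)len, p);
            p = second ? second + 1 : p + len;
        }
    }

    int main(void)
    {
        list_choices("client:Monospace 10:server:fixed");
        /* prints "client:Monospace 10", then "server:fixed" */
        return 0;
    }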
The cmake script that determines the current git head commit, in order
to bake it into the binaries for development builds from a git
checkout, wasn't working reliably on Windows: sometimes it reported
that the source directory didn't seem to be a git repository, when in
fact it was.
This occurred because gitcommit.cmake starts by trying to work out
_whether_ the source directory is the top level of a git worktree at
all, by the technique of running `git rev-parse --show-toplevel` (to
print the top-level path of the git worktree _containing_ $PWD, if
any) and comparing it with ${CMAKE_SOURCE_DIR}. But the comparison was
done as a plain string, which leads to problems if more than one
string can represent the same actual directory.
On Windows, this can occur for two reasons that I know of. One reason
is related to Windows itself: if you map a network file server path to
a local drive letter, then the same directory is accessible as a UNC
path (e.g. \\hostname\share\subdir) and via the drive letter (e.g.
N:\subdir). And it can happen that CMAKE_SOURCE_DIR and git's output
disagree on which representation to use, causing the string comparison
to return a false negative.
(This can also affect filesystems in a WSL instance, accessed from
native Windows via \\wsl$\instance\path, because Windows implements
that as a network file share even though the network in question is
purely in the imagination of that one machine.)
The other reason is related more specifically to git, because some
versions of Windows git are built on top of MSYS or MINGW or that kind
of shim layer, and model Windows drive letters as subdirectories of a
Unixlike VFS root. So you might also find that the two strings
disagree on whether you're in C:\Users\alice\src\putty or
/c/Users/alice/src/putty.
I think this commit should work around both of these problems. Reading
the man page for `git rev-parse` more carefully, it has an option
`--show-cdup`, which returns a _relative_ path from $PWD to the
top-level directory of the worktree: that is, it will be a sequence of
`../../..` as long as necessary, including length zero. So you can use
that to directly query whether you're at the top of a git worktree: if
`git rev-parse --show-cdup` returns the empty string and a success
status, then you are. (If you're deeper in a worktree it will return a
non-empty string, and if you're not in a worktree at all it will
return a failure status and print an error message to stderr.)
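So the check reduces to something like the following cmake fragment
(a sketch of the technique, not the full gitcommit.cmake):

    execute_process(
        COMMAND git rev-parse --show-cdup
        WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
        RESULT_VARIABLE git_result
        OUTPUT_VARIABLE git_cdup
        ERROR_QUIET
        OUTPUT_STRIP_TRAILING_WHITESPACE)
    if(git_result EQUAL 0 AND git_cdup STREQUAL "")
        # We really are at the top of a git worktree, no matter how
        # many different strings can spell this directory's name.
        execute_process(
            COMMAND git rev-parse HEAD
            WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
            OUTPUT_VARIABLE git_commit
            OUTPUT_STRIP_TRAILING_WHITESPACE)
    endif()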
This is a cherry-pick of Stefan Tauner's patch from main, but without
my followup refactoring, since the refactoring seemed to me to have a
(small but easily avoidable) chance of introducing a bug in 0.83.
The only downside of the original patch is that it contains a variable
name telling a lie: 'osc_is_apc' should really read 'this isn't an OSC
but one of APC, SOS and PM'. But we don't actually treat those three
things differently, so the functionality is fine.
(cherry picked from commit b6b95f23e563211437e51322edc9118b63a3ca40)
I didn't like the previous patch setting a flag claiming that two
kinds of thing were APC which aren't in fact APC. So I've made a
little enum to distinguish them in the code. There's an outside chance
we might want to handle some case of these in future, in which case
this makes it easier, but more likely it's just making me feel less
wrong about it.
But I've also folded the two existing flags osc_is_apc and osc_w into
that single enum field, which I think is an improvement.
SOS (Start of string) and PM (privacy message) are opening delimiters defined
in ECMA-48 similar to APC. With this change they are treated exactly like APC,
i.e., like fake OSC sequences that are ignored and not printed.
Signed-off-by: Stefan Tauner <stefan.tauner@artech.at>
DIT, for 'Data-Independent Timing', is a bit you can set in the
processor state on sufficiently new Arm CPUs, which promises that a
long list of instructions will deliberately avoid varying their timing
based on the input register values. Just what you want for keeping
your constant-time crypto primitives constant-time.
As far as I'm aware, no CPU has _yet_ implemented any data-dependent
optimisations, so DIT is a safety precaution against them doing so in
future. It would be embarrassing to be caught without it if a future
CPU does do that, so we now turn on DIT in the PuTTY process state.
I've put a call to the new enable_dit() function at the start of every
main() and WinMain() belonging to a program that might do
cryptography (even testcrypt, in case someone uses it for something!),
and in case I missed one there, also added a second call at the first
moment that any cryptography-using part of the code looks as if it
might become active: when an instance of the SSH protocol object is
configured, when the system PRNG is initialised, and when selecting
any cryptographic authentication protocol in an HTTP or SOCKS proxy
connection. With any luck those precautions between them should ensure
it's on whenever we need it.
Arm's own recommendation is that you should carefully choose the
granularity at which you enable and disable DIT: there's a potential
time cost to turning it on and off (I'm not sure what, but plausibly
something of the order of a pipeline flush), so it's a performance hit
to do it _inside_ each individual crypto function, but if CPUs start
supporting significant data-dependent optimisation in future, then it
will also become a noticeable performance hit to just leave it on
across the whole process. So you'd like to do it somewhere in the
middle: for example, you might turn on DIT once around the whole
process of verifying and decrypting an SSH packet, instead of once for
decryption and once for MAC.
With all respect to that recommendation as a strategy for maximum
performance, I'm not following it here. I turn on DIT at the start of
the PuTTY process, and then leave it on. Rationale:
1. PuTTY is not otherwise a performance-critical application: it's
not likely to max out your CPU for any purpose _other_ than
cryptography. The most CPU-intensive non-cryptographic thing I can
imagine a PuTTY process doing is the complicated computation of
font rendering in the terminal, and that will normally be cached
(you don't recompute each glyph from its outline and hints for
every time you display it).
2. I think a bigger risk lies in accidental side channels from having
DIT turned off when it should have been on. I can imagine lots of
causes for that. Missing a crypto operation in some unswept corner
of the code; confusing control flow (like my coroutine macros)
jumping with DIT clear into the middle of a region of code that
expected DIT to have been set at the beginning; having a reference
counter of DIT requests and getting it out of sync.
In a more sophisticated programming language, it might be possible to
avoid the risk in #2 by cleverness with the type system. For example,
in Rust, you could have a zero-sized type that acts as a proof token
for DIT being enabled (it would be constructed by a function that also
sets DIT, have a Drop implementation that clears DIT, and be !Send so
you couldn't use it in a thread other than the one where DIT was set),
and then you could require all the actual crypto functions to take a
DitToken as an extra parameter, at zero runtime cost. Then "oops I
forgot to set DIT around this piece of crypto" would become a compile
error. Even so, you'd have to take some care with coroutine-structured
code (what happens if a Rust async function yields while holding a DIT
token?) and with nesting (if you have two DIT tokens, you don't want
dropping the inner one to clear DIT while the outer one is still there
to wrongly convince callees that it's set). Maybe in Rust you could
get this all to work reliably. But not in C!
DIT is an optional feature of the Arm architecture, so we must first
test to see if it's supported. This is done the same way as we already
do for the various Arm crypto accelerators: on ELF-based systems,
check the appropriate bit in the 'hwcap' words in the ELF aux vector;
on Mac, look for an appropriate sysctl flag.
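In sketch form, that detection looks something like this (the constant
and sysctl names are my best understanding, not copied from the PuTTY
source):

    #include <stdbool.h>

    #if defined(__linux__) && defined(__aarch64__)
    #include <sys/auxv.h>
    #include <asm/hwcap.h>
    static bool dit_available(void)
    {
    #ifdef HWCAP_DIT
        return (getauxval(AT_HWCAP) & HWCAP_DIT) != 0;
    #else
        return false;
    #endif
    }
    #elif defined(__APPLE__)
    #include <sys/types.h>
    #include <sys/sysctl.h>
    static bool dit_available(void)
    {
        int val = 0;
        size_t len = sizeof(val);
        if (sysctlbyname("hw.optional.arm.FEAT_DIT", &val, &len, NULL, 0))
            return false;
        return val != 0;
    }
    #else
    static bool dit_available(void) { return false; }
    #endif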
On Windows I don't know of a way to query the DIT feature, _or_ of a
way to write the necessary enabling instruction in an MSVC-compatible
way. I've _heard_ that it might not be necessary, because Windows
might just turn on DIT unconditionally and leave it on, in an even
more extreme version of my own strategy. I don't have a source for
that - I heard it by word of mouth - but I _hope_ it's true, because
that would suit me very well! Certainly I can't write code to enable
DIT without knowing (a) how to do it, and (b) how to tell whether it's
safe.
Nonetheless, I've put the enable_dit() call in all the right places in
the Windows main programs as well as the Unix and cross-platform code,
so that if I later find out that I _can_ put in an explicit enable of
DIT in some way, I'll only have to arrange to set HAVE_ARM_DIT and
compile the enable_dit() function appropriately.
I'm not sure why these have never bothered me before, but a test build
I just made for a completely different reason complained about them!
findtest() did a binary search using a while loop, and then used
variables set in the loop body, which gcc objected to on the grounds
that the body might have run 0 times and not initialised those
variables. Also in the same function gcc objected to the idea that
findrelpos234() might have returned NULL and not set 'index'. I think
neither of these things can actually have _happened_, but let's stop
the compiler complaining anyway.
In 0.81 and before, we put an application manifest (XML-formatted
Windows resource) into all the GUI tools on purpose, and the CLI tools
like Plink didn't have one. But in 0.82, the CLI tools do have one,
and it's a small default one we didn't write ourselves, inserted by
some combination of cmake and clang-imitating-MSVC (I haven't checked
which of those is the cause).
This appears to have happened as a side effect of a build-tools
update, not on purpose. And its effect is that Windows XP now objects
to our plink.exe, because it's very picky about manifest format (we
have an old 'xp-wont-run' bug record about that).
Since it seemed to work fine to not have a manifest at all in 0.81,
let's go back to that. We were already passing /manifest:no to inhibit
the default manifest in the GUI tools, to stop it fighting with our
custom one; now I've moved /manifest:no into the global linker flags,
so it's applied to _all_ binaries, whether we're putting our own
manifest in or not.
In protocols other than PROT_RAW, the new line editing system differed
from the old one in not considering ^M or ^J (typed using the actual
Ctrl key, so distinct from pressing Return) to mean "I've finished
editing this line, please send it". This commit reinstates that
behaviour.
It turned out that a third-party tool (namely PuTTY Connection Manager),
which automatically answers prompts for the user, was terminating them
by sending ^J in place of the Return key. We don't know why it did
that (the tool is now unmaintained), but it did. So this change should
make that tool start working again.
I exclude PROT_RAW above because in that protocol the line editing has
much weirder handling for ^M and ^J, which lineedit replicated
faithfully from the old code: either control character by itself is
treated literally (displaying as "^M" or "^J" in the terminal), but if
you type the two in sequence in that order, then the ^J deletes the ^M
from the edit buffer and enters the line, so that the sequence CR LF
acts as a newline overall. I haven't changed that behaviour here, but
I have added a regression test of it to test_lineedit.
A user reported that when a PuTTY window is resized by the
'FancyZones' tool included in Microsoft PowerToys, the terminal itself
knows the new size ('stty' showed that it had sent a correct SIGWINCH
to the SSH server), but the next invocation of the Change Settings
dialog box still has the old size entered in it, leading to confusing
behaviour when you press Apply.
Inside PuTTY, this must mean that we updated the actual terminal's
size, but didn't update the main Conf object to match it, which is
where Change Settings populates its initial dialog state from.
It looks as if this is because FancyZones resizes the window by
sending it one single WM_SIZE, without wrapping it in the
WM_ENTERSIZEMOVE and WM_EXITSIZEMOVE messages that signal the start
and end of an interactive dragging resize operation. And the update of
Conf in wm_size_resize_term was in only one branch of the if statement
that checks whether we're in an interactive resize. Now it's outside
the if, so Conf will be updated in both cases.
I was about to try to debug a window resizing issue, and it looked as
if this patch had a plausible set of diagnostics already. But in fact
when I turned on this #ifdef it failed to compile, so I'm getting rid
of it.
Perhaps there is a use for having types of diagnostic living
permanently in the source code and easy to enable in themed sets, but
if so, I think they'd be better compiled in and enabled by an option,
than compiled out and enabled by #ifdef. That way they're less likely
to rot, and also, you can ask a user to turn one on really easily and
get extra logs for whatever is bothering them!
The new(ish) "3.7...3.28" syntax means: cmake will give up with a
fatal error if you try to build with a version older than 3.7, but
also, it won't turn on any new behaviour introduced after 3.28 (which
is the cmake version in Ubuntu 24.04, where I'm currently doing both
my development and production builds).
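Concretely, that's just the version-range form of the usual
declaration:

    cmake_minimum_required(VERSION 3.7...3.28)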
Without this, cmake 3.31 (found on Debian sid) will give a warning at
configure time: "Compatibility with CMake < 3.10 will be removed from
a future version of CMake." I guess the point is that they're planning
to make breaking changes that arrange that you _can't_ make the same
CMakeLists work with both 3.7 and this potential newer version. So by
specifying 3.28 as the "max" version, we avoid those breaking changes
affecting us, for the moment.
Our "old distro support" policy is currently that we still want to be
able to (and indeed I actually test it before each release) build on
Debian stretch, which is still in support, albeit a very marginal
paid-LTS kind of support. So we do still need to support cmake 3.7.
This seems to be a plausible way to get that to carry on working,
while not provoking annoying warnings from cmake 3.31, or risking the
actual breaking change when it comes, whatever it is.
(Fun fact: cmake 3.7 doesn't actually _understand_ this 3.7...3.28
syntax! That syntax itself was introduced in 3.12. But the cmake
manual explains that it's harmless to earlier versions, which will
interpret the extra dots as separating additional version components,
and ignore them. :-)
A user helpfully figured this out for us after the changes to Plink's
password prompt handling had disrupted their previous workflow. So it
seems worth documenting in case anyone else needs this fix.
(I think it is a fix, not a workaround: anyone needing this option now
probably _should_ have been doing it all along, because with the old
behaviour, Plink would have been sending a password prompt to Git, and
maybe even interpreting some of Git's protocol output as a password!
-batch would have been a more sensible way to abort the connection
even before the changes.)
When running on Wayland, gdk_display_get_name() can return things like
"wayland-0" rather than valid X display names. PuTTY nonetheless
treated them as X display names, meaning that when running under
Wayland, pterm would set DISPLAY to "wayland-0" in subprocesses, and
PuTTY's X forwarding wouldn't work properly.
To fix this, places that call gdk_display_get_name() now only do so on
displays for which GDK_IS_X_DISPLAY() is true. As with
GDK_IS_X_WINDOW(), this requires some backward-compatibility for GDK
versions where everything is implicitly running on X.
To make this work usefully, pterm now also won't unset DISPLAY if it
can't get an X display name and instead will pass through whatever value
of DISPLAY it received. I think that's better behaviour anyway.
There are two separate parts of PuTTY that call gdk_display_get_name().
platform_get_x_display() in unix/putty.c is used for X forwarding, while
gtk_seat_get_x_display() in unix/window.c is used for setting DISPLAY
and recording in utmp. I've updated both of them.
When the user pressed Return at the end of a line, we were calling the
TermLineEditor's receiver function once for each character in the line
buffer. A Telnet user reported from looking at packet traces that this
leads to each character being sent in its own TCP segment, which is
wasteful and silly, and a regression in 0.82 compared to 0.81.
You can see the SSH version of the phenomenon even more easily in
PuTTY's own SSH logs, without having to look at the TCP layer at all:
you get a separate SSH2_MSG_CHANNEL_DATA per character when sending a
line that you entered via local editing in the GUI terminal.
The fix in this commit makes lineedit_send_line() collect keystrokes
into a temporary bufchain and pass them on to the backend in chunks
the size of a bufchain block.
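The shape of the change, with stand-in types (the real fix collects
the keystrokes into a temporary bufchain and hands the chunks to the
backend's send function):

    #include <stddef.h>

    #define BLOCK 4096   /* stand-in for the bufchain block granularity */

    typedef void (*send_fn)(void *backend, const char *data, size_t len);

    static void send_line_batched(void *backend, send_fn send,
                                  const char *line, size_t len)
    {
        while (len > 0) {
            size_t n = len < BLOCK ? len : BLOCK;
            send(backend, line, n);  /* one call per block, not per key */
            line += n;
            len -= n;
        }
    }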
This is better, but still not completely ideal: lineedit_send_line()
is often followed by a call to lineedit_send_newline(), and there's no
buffering done between _those_ functions. So you'll still see a
separate SSH message / Telnet TCP segment for the newline after the
line.
I haven't fixed that in this commit, for two reasons. First, unlike
the character-by-character sending of the line content, it's not a
regression in 0.82: previous versions also sent the newline in a
separate packet and nobody complained about that. Second, it's much
more difficult, because newlines are handled specially - in particular
by the Telnet backend, which sometimes turns them into a wire sequence
CR LF that can't be generated by passing any literal byte to
backend_send. So you'd need to violate a load of layers, or else have
multiple parts of the system buffer up output and then arrange to
release it on a toplevel callback or some such. Much more code, more
risk of bugs, and less gain.
This occurred if the SSH server closed the connection for any
reason (in practice usually a timeout, but reproducible more easily by
manually killing a test server process) while the user was in the
middle of any kind of interactive prompt-based login in the GUI PuTTY
terminal (be it simple password, k-i, private key passphrase,
whatever).
The problem was that term->userpass_state wasn't cleaned up when the
connection died, and then if you started a fresh SSH session in the
same terminal, the attempt to create a new term->userpass_state would
find there was one already there.
The simplest place to insert the missing cleanup is the call to
term_provide_backend(), because that's a terminal API function which
is already called to notify the terminal that one backend has gone
away and the next one has turned up.
(In fact, it's called twice, once to set term->backend to NULL when
the first session closes, and again when the session is restarted. I
see no harm in making the cleanup unconditional, not bothering to tell
the difference between the two cases.)
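In outline, the change amounts to a cleanup at the top of that
function (struct layout and the free-function name here are stand-ins,
not the real terminal.c definitions):

    typedef struct Backend Backend;                    /* opaque, as in PuTTY */
    typedef struct TermUserpassState TermUserpassState;
    typedef struct Terminal {
        Backend *backend;
        TermUserpassState *userpass_state;
        /* ... many other fields in the real Terminal ... */
    } Terminal;

    void free_userpass_state(TermUserpassState *s);    /* stand-in */

    void term_provide_backend_outline(Terminal *term, Backend *backend)
    {
        /* Discard any interactive userpass state left over from the
         * previous connection, whether we're being told the backend
         * has gone away or that a new one has arrived. */
        if (term->userpass_state) {
            free_userpass_state(term->userpass_state);
            term->userpass_state = NULL;
        }
        term->backend = backend;
    }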