I'd forgotten that the text-only branch of seat_antispoof_msg()
constructs a string from its input in the expectation that it's a
one-line message. So it was a mistake to put a \n at the start of the
string in interactor_announce() to get a blank line first.
Now interactor_announce() makes an extra call to seat_antispoof_msg to
show its blank line, and seat_antispoof_msg itself handles the
blank-line case specially.
This is generated when setup of a network connection is cancelled by
deliberate user action, namely, pressing ^C or ^D or the like at a
get_userpass_input prompt presented during proxy setup.
It's handled just like normal socket setup errors, except that it
omits the call to seat_connection_fatal, on the grounds that in this
one case of connection-setup failure, the user doesn't need to be
_informed_ that the connection failed - they already know, because
they caused the failure themselves, on purpose.
Passing an operating-system-specific error code to plug_closing(),
such as errno or GetLastError(), was always a bit weird, given that it
generally had to be handled by cross-platform receiving code in
backends. I had the platform.h implementations #define any error
values that the cross-platform code would have to handle specially,
but that's still not a great system, because it also doesn't leave
freedom to invent error representations of my own that don't
correspond to any OS code. (For example, the ones I just removed from
proxy.h.)
So now, the OS error code is gone from the plug_closing API, and in
its place is a custom enumeration of closure types: normal, error, and
the special case BROKEN_PIPE which is the only OS error code we have
so far needed to handle specially. (All others just mean 'abandon the
connection and print the textual message'.)
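As a rough sketch, the reshaped API looks something like this (the
exact enumerator and parameter names here are illustrative rather than
authoritative):

    /* Closure types, replacing the old OS error code parameter. */
    typedef enum PlugCloseType {
        PLUGCLOSE_NORMAL,       /* clean EOF, no error at all */
        PLUGCLOSE_ERROR,        /* abandon the connection, print the message */
        PLUGCLOSE_BROKEN_PIPE,  /* the one OS error we treat specially */
    } PlugCloseType;

    void plug_closing(Plug *plug, PlugCloseType type, const char *error_msg);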
Having already centralised the handling of OS error codes in the
previous commit, we've now got a convenient place to add any further
type codes for errors needing special handling: each of Unix
plug_closing_errno(), Windows plug_closing_system_error(), and Windows
plug_closing_winsock_error() can easily grow extra special cases if
need be, and each one will only have to live in one place.
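For instance, the Unix-side helper can now do all of its errno
translation in one place, along these lines (a sketch assuming the
closure-type enumeration described above, not the literal code):

    #include <errno.h>
    #include <string.h>

    /* Sketch: translate an errno value once, centrally. Any future
     * errno needing special treatment grows its own branch here and
     * nowhere else. */
    void plug_closing_errno(Plug *plug, int error)
    {
        const char *msg = strerror(error);
        if (error == EPIPE)
            plug_closing(plug, PLUGCLOSE_BROKEN_PIPE, msg);
        else
            plug_closing(plug, PLUGCLOSE_ERROR, msg);
    }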
Having a single plug_closing() function covering various kinds of
closure is reasonably convenient from the point of view of Plug
implementations, but it's annoying for callers, who all have to fill
in pointless NULL and 0 parameters in the cases where they're not
used.
Added some inline helper functions in network.h alongside the main
plug_closing() dispatch wrappers, so that each kind of connection
closure can present a separate API for the Socket side of the
interface, without complicating the vtable for the Plug side.
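The wrappers are tiny; something like this (a sketch only - at this
point plug_closing() itself still took its old parameters, so the
details below are indicative rather than exact):

    /* Each kind of closure gets its own Socket-side entry point, so
     * callers stop filling in unused NULL/0 arguments by hand. */
    static inline void plug_closing_normal(Plug *plug)
    {
        plug_closing(plug, NULL, 0);    /* clean close, no error */
    }
    static inline void plug_closing_error(Plug *plug, const char *msg)
    {
        plug_closing(plug, msg, 0);     /* abandon with just a message */
    }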
Also, added OS-specific extra helpers in the Unix and Windows
directories, which centralise the job of taking an OS error code (of
whatever kind) and translating it into its error message.
In passing, this removes the horrible ad-hoc made-up error codes in
proxy.h, which is OK, because nothing checked for them anyway, and
also I'm about to do an API change to plug_closing proper that removes
the need for them.
Now we always respond to backend disconnection or connection_fatal by
calling plug_closing. And we always do it in a toplevel callback, so
that when the Plug responds by calling our Socket close method (which
frees us), nothing re-entrant happens.
Also, the handling of notify_remote_disconnect is brought into line
with the spec in putty.h, which says it can be sent redundantly (when
already disconnected) or spuriously (when not even disconnected at
all), so the toplevel callback queued by that method will check first.
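The pattern in question looks roughly like this (a simplified sketch;
the struct fields and the helper's name are invented for illustration,
though queue_toplevel_callback() is the existing mechanism):

    /* Sketch: queue the notification instead of calling plug_closing()
     * re-entrantly, so that when the Plug responds by closing (and
     * freeing) our Socket, we're not still on the stack underneath it. */
    static void sshproxy_notify_closing(void *vctx)
    {
        SshProxy *sp = (SshProxy *)vctx;
        if (sp->plug)        /* may be redundant or spurious; check first */
            plug_closing_error(sp->plug, sp->errmsg);
    }

    /* ...at the point where the backend reports disconnection: */
    queue_toplevel_callback(sshproxy_notify_closing, sp);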
After this change, failures during connection_setup are now handled
_mostly_ sensibly: if the proxy connection fails, then the main
connection gets enough information to pass a sensible connection_fatal
on to the real front end.
This also fixes the assertion failure mentioned in the TODO comment,
replacing it with a reasonably sensible connection_fatal() - although
I still think that in that situation it might be better not to have a
dialog box at all.
In interactor_return_seat, I wrote a comment saying that we should
call interactor_announce when handing over to the next Interactor in
the chain, *if* any Interactor had already made any kind of
announcement.
But, having written that comment, I didn't actually *implement* the
'if' clause, and called interactor_announce unconditionally! Now
fixed.
Removed the FIXMEs in various Seat passthrough functions that were
there to remind me to say which SSH connection they referred to: that
is now done by the interactor_announce mechanism, as of commit
215b9d1775. (Though not by modifying the actual passthrough
functions, as it turned out, which is why I didn't find and remove the
FIXMEs when I did all that.)
Also, added a comment in wrap() explaining *why* it's allowed to (in
fact, must) cheat by making an InteractionReadySeat without going
through interactor_announce().
Since it's a manually-enabled bug compatibility mode, AUTO isn't one of
the available UI options, so defaulting to it was causing Windows PuTTY
to display a blank entry in the drop-down for "Discards data sent
before its greeting".
(It is possible that this unhelpful default has escaped into saved
sessions of snapshot users, which would have the same effect, but since
the actual using code can cope with it, I've not done anything to clean
that up.)
Finally, the payoff from all of this refactoring: now, when a proxy
prompts interactively during connection setup, you get a message in
advance telling you which Interactor is originating the following
messages.
To achieve this, I've arranged to link Interactors together into a
list, so that any Interactor created by a proxy has a 'parent' pointer
pointing to the Interactor its client passed to new_connection().
This allows interactor_announce() to do two things. First, it can
follow the links back up the chain and count them, so that it knows
whether it's a primary connection, or a proxy, or a proxy-for-a-proxy,
or more generally an nth-order proxy, and can include that in its
announcement.
And secondly, once interactor_announce() reaches the top of the chain,
it can use that as a storage location agreed on by all Interactors in
the whole setup, to tell each other which one of them was the last to
do anything interactive. Then, whenever there's a change of
Interactor, a message can be printed to indicate it to the user; and
when the same Interactor does multiple things in succession, you don't
get a slew of pointless messages in between them all.
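In outline, the walk up the chain looks something like this (a sketch
with invented accessor and field names, just to show the idea):

    /* Sketch: find the top of the Interactor chain, counting levels of
     * proxying on the way up, and use the top as shared storage for
     * 'who spoke last'. */
    Interactor *itr = this_interactor, *top = itr;
    unsigned level = 0;   /* 0 = primary, 1 = proxy, 2 = proxy-for-proxy... */
    while (interactor_parent(top)) {
        top = interactor_parent(top);
        level++;
    }
    if (interactor_last_to_talk(top) != itr) {
        announce_to_user(itr, level);       /* change of speaker: say so */
        interactor_set_last_to_talk(top, itr);
    }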
Previously, SshProxy dealt with creating a TempSeat to wrap the one it
was borrowing from its client, and then each client in turn dealt with
detecting when it had had its seat borrowed and finishing up with the
TempSeat. The latter involved a lot of code duplication; the former
didn't involve code duplication _yet_ (since SshProxy was the only
thing doing this job), but would have once we started wanting to do
interactive password prompting for other types of network proxy.
Now all of that functionality is centralised into two new Interactor
helper functions: interactor_borrow_seat and interactor_return_seat.
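So a proxy implementation's life cycle now looks something like this
(a sketch; the exact signatures are paraphrased from memory):

    /* Sketch: borrow the client's real Seat (which hands the client a
     * TempSeat wrapper in the meantime), do the interactive setup, then
     * give the Seat back and let the helper deal with the TempSeat. */
    sp->clientseat = interactor_borrow_seat(clientitr);
    /* ... present proxy prompts via sp->clientseat ... */
    interactor_return_seat(clientitr);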
All this Interactor business has been gradually working towards being
able to inform the user _which_ network connection is currently
presenting them with a password prompt (or whatever), in situations
where more than one of them might be, such as an SSH connection being
used as a proxy for another SSH connection when neither one has
one-touch login configured.
At some point, we have to arrange that any attempt to do a user
interaction during connection setup - be it a password prompt, a host
key confirmation dialog, or just displaying an SSH login banner -
makes it clear which host it's come from. That's going to mean calling
some kind of announcement function before doing any of those things.
But there are several of those functions in the Seat API, and calls to
them are scattered far and wide across the SSH backend. (And not even
just there - the Rlogin backend also uses seat_get_userpass_input).
How can we possibly make sure we don't forget a vital call site on
some obscure little-tested code path, and leave the user confused in
just that one case which nobody might notice for years?
Today I thought of a trick to solve that problem. We can use the C
type system to enforce it for us!
The plan is: we invent a new struct type which contains nothing but a
'Seat *'. Then, for every Seat method which does a thing that ought to
be clearly identified as relating to a particular Interactor, we
adjust the API for that function to take the new struct type where it
previously took a plain 'Seat *'. Or rather - doing less violence to
the existing code - we only need to adjust the API of the dispatch
functions inline in putty.h.
How does that help? Because the way you _get_ one of these
struct-wrapped Seat pointers is by calling interactor_announce() on
your Interactor, which will in turn call interactor_get_seat(), and
wrap the returned pointer into one of these structs.
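Concretely, the trick amounts to something like this (a sketch; the
real declarations in putty.h may differ in detail):

    /* A Seat pointer you can only legitimately obtain by announcing
     * yourself first. The struct has no other members: it exists purely
     * so the compiler can tell the two kinds of pointer apart. */
    typedef struct InteractionReadySeat {
        Seat *seat;
    } InteractionReadySeat;

    /* The only intended source of one of these: */
    InteractionReadySeat interactor_announce(Interactor *itr);

    /* And each interactive dispatch function in putty.h (userpass
     * prompts, host key confirmation, banners, ...) now takes an
     * InteractionReadySeat where it previously took a plain Seat *. */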
The effect is that whenever the SSH (or Rlogin) code wants to call one
of those particular Seat methods, it _has_ to call
interactor_announce() just beforehand, which (once I finish all of
this) will make sure the user is aware of who is presenting the prompt
or banner or whatever. And you can't forget to call it, because if you
don't call it, then you just don't have a struct of the right type to
give to the Seat method you wanted to call!
(Of course, there's nothing stopping code from _deliberately_ taking a
Seat * it already has and wrapping it into the new struct. In fact
SshProxy has to do that, in order to forward these requests up the
chain of Seats. But the point is that you can't do it _by accident_,
just by forgetting to make a vital function call - when you do that,
you _know_ you're doing it on purpose.)
No functional change: the new interactor_announce() function exists,
and the type-system trick ensures it's called in all the right places,
but it doesn't actually _do_ anything yet.
Thanks to the previous commit, this new parameter can replace two of
the existing ones: instead of passing a LogPolicy and a Seat, we now
pass just an Interactor, from which any proxy implementation can
extract the LogPolicy and the Seat anyway if they need it.
This trait will be implemented by anything that wants to display
interactive prompts or notifications to the user in the course of
setting up a network connection, _or_ anything that wants to make a
network connection whose proxy setup might in turn need to do that.
To begin with, that means every Backend that makes network connections
at all must be an Interactor, because any of those network connections
might be proxied via an SSH jump host which might need to interact
with the user.
I'll fill in the contents of this trait over the next few commits, to
keep the patches comprehensible. For the moment, I've just introduced
the trait, set up implementations of it in the five network backends,
and given it a single 'description' method.
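The trait follows PuTTY's usual vtable pattern; roughly this (a sketch,
with only the one method so far):

    typedef struct Interactor Interactor;
    typedef struct InteractorVtable InteractorVtable;

    struct InteractorVtable {
        /* Returns a dynamically allocated, user-oriented description of
         * the network connection this Interactor is setting up. */
        char *(*description)(Interactor *itr);
    };

    struct Interactor {
        const InteractorVtable *vt;
    };

    static inline char *interactor_description(Interactor *itr)
    { return itr->vt->description(itr); }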
The previous 'description' methods of Backend and Plug are now
removed, and their work is done by the new Interactor method instead.
(I changed my mind since last week about where that should best live.)
This isn't too much of an upheaval, fortunately, because I hadn't got
round yet to committing anything that used those methods!
Previously, SSH authentication banners were displayed by calling the
ordinary seat_output function, and passing it a special value in the
SeatOutputType enumeration indicating an auth banner.
The awkwardness of this was already showing a little in SshProxy's
implementation of seat_output, where it had to check for that special
value and do totally different things for SEAT_OUTPUT_AUTH_BANNER and
everything else. Further work in that area is going to make it more
and more awkward if I keep the two output systems unified.
So let's split them up. Now, Seat has separate output() and banner()
methods, which each implementation can override differently if it
wants to.
All the 'end user' Seat implementations use the centralised
implementation function nullseat_banner_to_stderr(), which turns
banner text straight back into SEAT_OUTPUT_STDERR and passes it on to
seat_output. So I didn't have to tediously implement a boring version
of this function in GTK, Windows GUI, consoles, file transfer etc.
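That centralised helper is essentially trivial; something like this
(a sketch):

    /* Sketch: route banner text back into the ordinary output channel,
     * so most Seat implementations need no banner code of their own. */
    size_t nullseat_banner_to_stderr(Seat *seat, const void *data, size_t len)
    {
        return seat_output(seat, SEAT_OUTPUT_STDERR, data, len);
    }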
There are quite a few proxy-related source files already, and I'm about
to make another one, so let's start with a bit of tidying up.
The CMake build organisation is unchanged: I haven't put the proxy
object files into a separate library, just moved the locations of the
source files. (Organising proxying as a library would be tricky
anyway, because of the various overrides for tools that want to avoid
cryptography.)
Now every struct that implements the Backend trait is completely
cleared before we start initialising any of its fields. This will mean
I can add new fields that default to 0 or NULL, without having to mess
around initialising them explicitly everywhere.
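In other words, where a backend constructor previously went straight
into field-by-field setup, it now begins with something like this
(illustrative only; 'Raw' stands in for any backend's implementation
struct):

    /* Sketch: allocate and zero the whole struct first, so any field
     * not mentioned afterwards is already 0 / NULL. */
    Raw *raw = snew(Raw);
    memset(raw, 0, sizeof(*raw));
    raw->backend.vt = vt;
    /* ... only fields needing non-zero values follow ... */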
Previously, when we scrolled the terminal, the newly exposed line at
the bottom would be immediately allocated a trust status corresponding
to the current state of the terminal. So if you're in trusted mode and
you print a newline, then the line scrolled on at the bottom
immediately gets a trust sigil, whether you subsequently print
anything on it or not.
Up until now, that hasn't mattered, because we always _do_ print
something on it. But if you don't - if you send \r\n\r\n to
deliberately leave a blank line - then it turns out that's not what we
want after all, because if the screen _doesn't_ scroll, the
passed-over line remains completely blank, whereas if it does scroll
the blank line gets a trust sigil, which is inconsistent.
Now, terminal lines newly exposed by a scroll have untrusted status,
just the same as terminal lines that were present in the initial blank
screen. They only become trusted if you actually print at least one
character on them (whereupon check_trust_status will re-clear them
just in case). And this is now independent of whether the terminal has
scrolled or not.
After this change, the cmake setup now works even on Debian stretch
(oldoldstable), which runs cmake 3.7.
In order to support a version that early I had to:
- write a fallback implementation of 'add_compile_definitions' for
older cmakes, which is easy, because add_compile_definitions(FOO)
is basically just add_compile_options(-DFOO)
- stop using list(TRANSFORM) and string(JOIN), of which I had one
case each, and they were easily replaced with simple foreach loops
- stop putting OBJECT libraries in the target_link_libraries command
for executable targets, in favour of adding $<TARGET_OBJECTS:foo>
to the main sources list for the same target. That matches what I
do with library targets, so it's probably more sensible anyway.
I tried going back by another Debian release and getting this cmake
setup to work on jessie, but that runs CMake 3.0.1, and in _that_
version of cmake the target_sources command is missing, and I didn't
find any alternative way to add extra sources to a target after having
first declared it. Reorganising to cope with _that_ omission would be
too much upheaval without a very good reason.
When the user tries to add a string to the CONF_ssh_manual_hostkeys
list box, we call a validation function which is supposed to look
along the string for either a valid-looking SSH key fingerprint, or a
base64 public key blob, and after it finds it, move that key alone to
the start of the input string and delete all the surrounding cruft.
SHA-256 key fingerprints were being detected all right, but not moved
to the start of the string sensibly - we just returned true without
rewriting anything. (Probably inadequate testing when I added SHA-256
fairly recently.)
And the code that moved a full public-key blob to the front of the
string triggered an ASan error on the grounds that it used strcpy with
the source and destination overlapping. I actually hadn't known that
was supposed to be a bad thing these days! But it's easily fixed by
making it a memmove instead.
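The fix is the standard one for overlapping copies (a sketch; 'buffer'
and 'key_start' are invented names for the input string and the start
of the matched key within it):

    /* Move the matched key, including its terminating NUL, to the front
     * of the same buffer. The regions overlap, so memmove is required;
     * strcpy on overlapping regions is undefined behaviour. */
    size_t keylen = strlen(key_start);
    memmove(buffer, key_start, keylen + 1);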
Previously, checking the host key against the persistent cache managed
by the storage.h API was done as part of the seat_verify_ssh_host_key
method, i.e. separately by each Seat.
Now that check is done by verify_ssh_host_key(), which is a new
function in ssh/common.c that centralises all the parts of host key
checking that don't need an interactive prompt. It subsumes the
previous verify_ssh_manual_host_key() that checked against the Conf,
and it does the check against the storage API that each Seat was
previously doing separately. If it can't confirm or definitively
reject the host key by itself, _then_ it calls out to the Seat, once
an interactive prompt is definitely needed.
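The order of checks, in outline (a sketch with invented helper names
and simplified types; the real function in ssh/common.c is shaped a bit
differently):

    typedef enum { HK_OK, HK_REJECTED, HK_NEED_PROMPT } HostKeyVerdict;

    /* Sketch: everything that can be decided without user interaction. */
    static HostKeyVerdict host_key_offline_check(
        Conf *conf, const char *host, int port,
        const char *keytype, const char *keystr)
    {
        if (manual_host_keys_configured(conf))      /* old 'manual' check */
            return manual_host_key_matches(conf, keytype, keystr)
                   ? HK_OK : HK_REJECTED;
        if (stored_host_key_matches(host, port, keytype, keystr))
            return HK_OK;             /* persistent cache via storage.h */
        return HK_NEED_PROMPT;   /* only now does the Seat get involved */
    }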
The main point of doing this is so that when SshProxy forwards a Seat
call from the proxy SSH connection to the primary Seat, it won't print
an announcement of which connection is involved unless it's actually
going to do something interactive. (Not that we're printing those
announcements _yet_ anyway, but this is a piece of groundwork that
works towards doing so.)
But while I'm at it, I've also taken the opportunity to clean things
up a bit by renaming functions sensibly. Previously we had three very
similarly named functions verify_ssh_manual_host_key(), SeatVtable's
'verify_ssh_host_key' method, and verify_host_key() in storage.h. Now
the Seat method is called 'confirm' rather than 'verify' (since its
job is now always to print an interactive prompt, so it looks more
like the other confirm_foo methods), and the storage.h function is
called check_stored_host_key(), which goes better with store_host_key
and avoids having too many functions with similar names. And the
'manual' function is subsumed into the new centralised code, so
there's now just *one* host key function with 'verify' in the name.
Several functions are reindented in this commit. Best viewed with
whitespace changes ignored.
These will typically be implemented by objects that are both a Backend
*and* a Plug, and the two methods will deliver the same results to any
caller, regardless of which facet of the object is known to that
caller.
Their purpose is to deliver a user-oriented natural-language
description of what network connection the object is handling, so that
it can appear in diagnostic messages.
The messages I specifically have in mind are going to appear in cases
where proxies require interactive authentication: when PuTTY prompts
interactively for a password, it will need to explain which *thing*
it's asking for the password for, and these descriptions are what it
will use to describe the thing in question.
Each backend is allowed to compose these messages however it thinks
best. In all cases at present, the description string is constructed
by the new centralised default_description() function, which takes a
host name and port number and combines them with the backend's display
name. But the SSH backend does things a bit differently, because it
uses the _logical_ host name (the one that goes with the SSH host key)
rather than the physical destination of the network connection. That
seems more appropriate when the question it's really helping the user
to answer is "What host am I supposed to be entering the password for?"
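For illustration, the shared helper is on the order of the following
(a sketch; the exact message format and field name are mine, not
necessarily the real ones):

    /* Sketch: combine the backend's display name with host and port
     * into a phrase that can sit in the middle of a sentence. */
    char *default_description(const BackendVtable *vt,
                              const char *host, int port)
    {
        return dupprintf("%s connection to %s port %d",
                         vt->displayname_lc, host, port);
    }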
In this commit, no clients of the new methods are introduced. I have a
draft implementation of actually using it for the purpose I describe
above, but it needs polishing.
The current 'displayname' field is designed for presenting in the
config UI, so it starts with a capital letter even when it's not a
proper noun. If I want to name the backend in the middle of a
sentence, I'll need a version that starts with lowercase where
appropriate.
The old field is renamed displayname_tc, to avoid ambiguity.
It always confused me that each comment was _after_ the function
prototype it described, instead of before, which is my usual idiom.
Reordered everything, and added a blank line between each
(comment,function) pair to make it clear what goes with what.
While I was at it, I rewrote some of the comments for clarity, and
turned them into whole sentences.
It was totally unused. No implementation of the 'closing' method in a
Plug vtable was checking it for any reason at all, except for
ProxySocket which captured it from its client in order to pass on to
its server (which, perhaps after further iterations of ProxySocket,
would have ended up ignoring it similarly). And every caller of
plug_closing set it to 0 (aka false), except for the one in sshproxy.c
which passed true (but it would have made no difference to anyone).
The comment in network.h refers to a FIXME comment which was in
try_send() when that code was written (see winnet.c in commit
7b0e082700). That FIXME is long gone, replaced by a use of a
toplevel callback. So I think the aim must have been to avoid
re-entrancy when sk_write called try_send which encountered a socket
error and called back to plug_closing - but that's long since fixed by
other means now.
This is the same as the previous FUNKY_XTERM mode if you don't press
any modifier keys, but now Shift or Ctrl or Alt with function keys
adds an extra bitmap parameter. The bitmaps are the same as the ones
used by the new SHARROW_BITMAP arrow key mode.
As well as affecting the bitmap field in the escape sequence, the Alt
modifier was _also_ having its otherwise standard effect of prefixing
Esc to the whole sequence. It shouldn't do both.
For some reason, in my comment explaining which Visual Studio compile
warnings I'd suppressed and why, one of the warning numbers in the
comment totally failed to match the one in the suppression option! I
probably pasted it from some other warning in that compile, which I
fixed rather than suppressing.
This commit introduces a new config option for how to handle shifted
arrow keys.
In the default mode (SHARROW_APPLICATION), we do what we've always
done: Ctrl flips the arrow keys between sending their most usual
escape sequences (ESC [ A ... ESC [ D) and sending the 'application
cursor keys' sequences (ESC O A ... ESC O D). Whichever of those modes
is currently configured, Ctrl+arrow sends the other one.
In the new mode (SHARROW_BITMAP), application cursor key mode is
unaffected by any shift keys, but the default sequences acquire two
numeric arguments. The first argument is 1 (reflecting the fact that a
shifted arrow key still notionally moves just 1 character cell); the
second is the bitmap (1 for Shift) + (2 for Alt) + (4 for Ctrl),
offset by 1. (Except that if _none_ of those modifiers is pressed,
both numeric arguments are simply omitted.)
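So, schematically, the generated sequence for an arrow key (final
letter 'A'..'D') in the new mode is composed like this (a sketch;
variable names are invented, and application cursor key mode still
sends its usual ESC O sequences when no modifier is pressed):

    int bitmap = (shift ? 1 : 0) | (alt ? 2 : 0) | (ctrl ? 4 : 0);
    if (bitmap)
        p += sprintf(p, "\033[1;%d%c", bitmap + 1, letter);
                                  /* e.g. ESC [ 1 ; 5 C for Ctrl+Right */
    else
        p += sprintf(p, "\033[%c", letter);  /* unmodified: plain ESC [ C */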
The new bitmap mode is what current xterm generates, and also what
Windows ConPTY seems to expect. If you start an ordinary Command
Prompt and launch into WSL, those are the sequences it will generate
for shifted arrow keys; conversely, if you run a Command Prompt within
a ConPTY, then these sequences for Ctrl+arrow will have the effect you
expect in cmd.exe command-line editing (going backward or forward a
word). For that reason, I enable this mode unconditionally when
launching Windows pterm.
While fixing the previous commit I noticed that window titles don't
actually _work_ properly if you change the terminal character set,
because the text accumulated in the OSC string buffer is sent to the
TermWin as raw bytes, with no indication of what character set it
should interpret them as. You might get lucky if you happened to
choose the right charset (in particular, UTF-8 is a common default),
but if you change the charset half way through a run, then there's
certainly no way the frontend will know to interpret two window titles
sent before and after the change in two different charsets.
So, now win_set_title() and win_set_icon_title() both include a
codepage parameter along with the byte string, and it's up to them to
translate the provided window title from that encoding to whatever the
local window system expects to receive.
On Windows, that's wide-string Unicode, so we can just use the
existing dup_mb_to_wc utility function. But in GTK, it's UTF-8, so I
had to write an extra utility function to encode a wide string as
UTF-8.
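On the Windows side, for instance, the job is essentially this (a
generic sketch using raw Win32 calls rather than PuTTY's own helper;
error handling omitted):

    /* Sketch: convert the title from the terminal's code page to the
     * wide string the window system wants. Needs <windows.h>, <stdlib.h>. */
    static void set_title_from_bytes(HWND hwnd, int codepage, const char *title)
    {
        int wlen = MultiByteToWideChar(codepage, 0, title, -1, NULL, 0);
        wchar_t *wtitle = malloc(wlen * sizeof(wchar_t));
        MultiByteToWideChar(codepage, 0, title, -1, wtitle, wlen);
        SetWindowTextW(hwnd, wtitle);
        free(wtitle);
    }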
When the terminal is in UTF-8 mode, we accumulate UTF-8 text normally
in the OSC string buffer - but the byte 0x9C is interpreted as the C1
control character String Terminator, which terminates the OSC
sequence. That's not really what you want in UTF-8 mode, because 0x9C
is also a perfectly normal UTF-8 continuation character. For example,
you'd expect this to set the window title to "FÜNF":
echo -ne '\033]0;FÜNF\007'
but in fact, by the sheer chance that Ü is encoded with an 0x9C byte,
you get a window title consisting of "F" followed by an illegal-
encoding marker, and the OSC sequence is terminated abruptly so that
the trailing 'NF' is printed normally to the terminal and then the BEL
generates a beep.
Now, in UTF-8 mode, we only support the C1 control for ST if it
appears in the form of the proper UTF-8 encoding of U+009C. So that
example now 'works', at least in the sense that the terminal considers
the OSC sequence to terminate where the sender expected it to
terminate.
Another case where we interpret 0x9C inappropriately as ST is if the
terminal is in a single-byte character set in which that character is
a printing one. In CP437, for example, you can't set a window title
containing a pound sign, because its encoding is 0x9C.
This commit by itself doesn't make those window titles _work_, in the
sense of coming out looking right. It just means that the OSC
sequence is no longer terminated at the wrong place. The actual title
rendering will be fixed in the next commit.
I accidentally deleted the original author's name in my rewrite, which
was unnecessarily unfriendly given that some of their code is still
here. Also I made a thinko in my explanation of the U+00AD problem.
This standalone CLI program runs the UCD bidi tests in the form
provided in Unicode 14.0.0. You can run it by just saying
bidi_test --class BidiTest.txt --char BidiCharacterTest.txt
assuming those two UCD files are in the current directory.
A user reported that PuTTY's existing bidi algorithm will generate
misordered text in cases like this (assuming UTF-8):
echo -e '12 A \xD7\x90\xD7\x91 B'
The hex codes in the middle are the Hebrew letters aleph and beth.
Appearing in the middle of a line whose primary direction is
left-to-right, those two letters should appear in the opposite order,
but not cause the rest of the line to move around. That is, you expect
the displayed text in this situation to be
12 A <beth><aleph> B
But in fact, the digits '12' were erroneously reversed, so you would
actually see '21 A <beth><aleph> B'.
I tried to debug the existing bidi algorithm, but it was very hard,
because the Unicode bidi spec has been extensively changed since
Arabeyes contributed that code, and I couldn't even reliably work out
which version of the spec the code was intended to implement. I found
some problems, notably that the resolution phase was running once on
the whole line instead of separately on runs of characters at the same
level, and also that the 'sor' and 'eor' values were being wrongly
computed. But I had no way to test any fix to ensure it hadn't
introduced another bug somewhere else.
Unicode provides a set of conformance tests in the UCD. That was just
what I wanted - but they're too up-to-date to run against the old
algorithm and expect to pass!
So, paradoxically, it seemed to me that the _easiest_ way to fix this
bidi bug would be to bring absolutely everything up to date. But the
revised bidi algorithm is significantly more complicated, so I also
didn't think it would be sensible to try to gradually evolve the
existing code into it. Instead, I've done a complete rewrite of my
own.
The new code implements the full UAX#9 rev 44 algorithm, including in
particular support for the new 'directional isolate' control
characters, and also special handling for matched pairs of brackets in
the text (see rule N0 in the spec). I've managed to get it to pass the
entire UCD conformance test suite, so I'm reasonably confident it's
right, or at the very least a lot closer to right than the old
algorithm was.
So the upshot is: the test case shown at the top of this message now
passes, but also, other detailed bidi handling might have changed,
certainly some cases involving brackets, but perhaps also other things
that were either bugs in the old algorithm or updates to the standard.
The input length field is now a size_t rather than an int, on general
principles. The return value is now void (we weren't using the
previous return value at all). And we now require the client to have
previously allocated a BidiContext, which will allow allocated storage
to be reused between runs, saving a lot of churn on malloc.
(However, the current BidiContext doesn't contain anything
interesting. I could have moved the existing mallocs into it, but
there's no point, since I'm about to rewrite the whole thing anyway.)
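So the call now looks roughly like this (a sketch; the context
constructor and destructor names are my guesses at the new API):

    /* Sketch: the caller owns a long-lived BidiContext, passes a size_t
     * length, and gets no return value; the line is reordered in place. */
    BidiContext *ctx = bidi_new_context();
    do_bidi(ctx, line, count);
    /* ... reuse ctx for subsequent lines ... */
    bidi_free_context(ctx);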
I wasn't able to find the 'uniset' program mentioned in the comment
that generated one of the tables, or at least I wasn't confident that
I'd found the right thing of that name. So I rewrote the semantics of
that command line in my own Perl and have included that in the revised
version of the comment.
This makes it easier to create the matching array of type names in
bidi_gettype.c, and eliminates the need for an assertion to check that
the array matches the enum. And I'm about to need to add more types, so
let's start by making that trivially easy.
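The standard idiom for this sort of thing - and I'm only guessing at
the details here - is a single macro list expanded twice, once to make
the enum and once to make the matching array of names (a sketch; the
real list is much longer):

    #define BIDI_CHAR_TYPE_LIST(X) X(L) X(R) X(AL) X(EN) X(ES) /* ... */

    #define ENUM_ENTRY(x) x,
    typedef enum { BIDI_CHAR_TYPE_LIST(ENUM_ENTRY) N_BIDI_TYPES } BidiType;
    #undef ENUM_ENTRY

    #define NAME_ENTRY(x) #x,
    static const char *const bidi_type_names[] = {
        BIDI_CHAR_TYPE_LIST(NAME_ENTRY)
    };
    #undef NAME_ENTRY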
That's what I've usually been doing with any main()s I find under
ifdef; there's no reason this should be an exception. If we're keeping
it in the code at all, we should ensure it carries on compiling.
I've also created a new header file bidi.h, containing pieces of the
bidi definitions shared between bidi.c and the new source file.
This contains terminal.c, bidi.c (formerly minibidi.c), and
terminal.h. I'm about to make a couple more bidi-related source files,
so it seems worth starting by making a place to put them that won't be
cluttering up the top level.
Whenever an object of a class implementing PageantAsyncOp is created,
the intention was to link its pao.cr list node on to the list in the
appropriate PageantClientInfo, so that if that PageantClientInfo was
destroyed prematurely (e.g. an agent connection was abruptly closed),
we could destroy all the pending requests containing a pointer to it.
I did this linking by setting the linked-list fields in pao.cr to
point to the appropriate places in the existing list - but I didn't
modify the pointer fields _in_ the existing list to point to the new
operation at all. So nothing ever actually got linked on to any of
those queues!
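That is, the insertion only did half the pointer surgery. Schematically
(a sketch with invented field and variable names), the broken version
was:

    op->pao.cr.prev = client->head.prev;
    op->pao.cr.next = &client->head;
    /* ...and that was all: the neighbouring nodes still pointed past
     * the new one, so the list never actually contained it. */

and the fix also updates the existing nodes:

    op->pao.cr.prev->next = &op->pao.cr;
    op->pao.cr.next->prev = &op->pao.cr;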
In the Windows Pageant message loop, we were alternating between
MsgWaitForMultipleObjects with timeout=INFINITE, and
run_toplevel_callbacks. But run_toplevel_callbacks doesn't loop until
the callback queue is empty: it just runs at least one of the
currently queued callbacks, so that some progress was made. So if two
or more callbacks were queued, we'd leave the rest in the queue and go
back into MsgWaitForMultipleObjects, which could hang indefinitely.
(A very silly workaround was available: move the mouse over the
Pageant systray icon! Every mouse event would terminate the wait and
let you get one more iteration of run_toplevel_callbacks.)
Now we do the same thing as in other main loops: if any further
callbacks are pending, then we still run MsgWaitForMultipleObjects,
but we do it with timeout=0 instead of timeout=INFINITE, so that we
won't go back to a _blocking_ sleep until all callbacks have been
serviced.
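In outline, the loop now does something like this (a simplified sketch;
the handle array and exit condition are stand-ins):

    /* Sketch: never do a blocking wait while callbacks are pending. */
    while (!finished) {
        DWORD timeout = toplevel_callback_pending() ? 0 : INFINITE;
        MsgWaitForMultipleObjects(nhandles, handles, FALSE,
                                  timeout, QS_ALLINPUT);
        /* ... dispatch window messages and handle events as before ... */
        run_toplevel_callbacks();
    }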
When a writable HANDLE is managed by the handle-io.c system, you ask
to send EOF on the handle by calling handle_write_eof. That waits
until all buffered data has been written, and then sends an EOF event
by simply closing the handle.
That is, of course, the only way to send an EOF signal on a handle at
all. And yet, it's a bug, because the handle_output system does not
take ownership of the handle you give it: the client of handle_output
retains ownership, keeps its own copy of the handle, and will expect
to close it itself.
In most cases, the extra close will harmlessly fail, and return
ERROR_INVALID_HANDLE (which the caller didn't notice anyway). But if
you're unlucky, in conditions of frantic handle opening and closing
(e.g. with a lot of separate named-pipe-style agent forwarding
connections being constantly set up and torn down), the handle value
might have been reused between the two closes, so that the second
CloseHandle closes an unrelated handle belonging to some other part of
the program.
We can't fix this by giving handle_output permanent ownership of the
handle, because it really _is_ necessary for copies of it to survive
elsewhere: in particular, for a bidirectional file such as a serial
port or named pipe, the reading side also needs a copy of the same
handle! And yet, we can't replace the handle_write_eof call in the
client with a direct CloseHandle, because that won't wait until
buffered output has been drained.
The solution is that the client still calls handle_write_eof to
register that it _wants_ an EOF sent; the handle_output system will
wait until it's ready, but then, instead of calling CloseHandle, it
will ask its _client_ to close the handle, by calling the provided
'sentdata' callback with the new 'close' flag set to true. And then
the client can not only close the handle, but do whatever else it
needs to do to record that that has been done.
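So the 'sentdata' callback type gains a flag, roughly like this (a
sketch; the real prototype may differ in detail):

    /* Sketch: when 'close' is true, the handle-io system has finished
     * draining buffered output and is asking its client to close the
     * handle itself (and record having done so), instead of the system
     * closing it behind the client's back. */
    typedef void (*handle_outputfn_t)(
        struct handle *h, size_t new_backlog, int err, bool close);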
Trying to debug a problem involving threads just now, it turned out
that the version of the diagnostics going to my debug.log was getting
data in a different order from the version going to the debug console.
Now I open and write to debug_fp by going directly to the Win32 API
instead of via a buffering userland stdio, and that seems to have
solved the problem.
Now you can turn various authentication methods on and off, so that
the server won't even offer (say) k-i or publickey at all.
This subsumes the previous -allow-none-auth option; there's now a
general -{allow,deny}-auth=foo option schema, so -allow-auth=none is
the new spelling of -allow-none-auth. The former spelling is kept for
backwards compatibility, just in case.
Similarly to cmdgen's passphrase options, this replaces the password
on the command line with a filename to read the password out of, which
means it can't show up in 'ps' or the Windows task manager.
I've moved all the results of the command-line config options into a
small struct instead of having them be local variables of main(). We
maintain an array of those structs; most command-line options modify
the last element in the array; and we respond to the new special
option '--and' by appending a fresh struct to the end of the array and
initialising it to default values.
So now, if I want two or three SSH servers running on different ports
with separately configured host keys, banners, etc, I can do that with
a single command line along the lines of:
./uppity --listen 2222 --hostkey this.ppk --bannertext "this" \
--and --listen 2223 --hostkey that.ppk --bannertext "that"
There's a single number space of connections used in log messages, and
each new connection reports which of the servers it connects to.
This is only a marginally useful feature: there's not much it does
that couldn't have been done just as well by running multiple Uppitys
each in their own process. But when I do want several servers at once
(which I've been using recently to test the jump-host system), it's
quite nice to have them all producing a single combined stream of log
data and all conveniently killable with a single ^C.
It doesn't actually do anything specific to the userauth layer; it's
just a helper function that deals with the mechanics of printing an
unspoofable message on various kinds of front end, and the only
parameters it needs are a Seat and a message.
Currently, it's used for 'here is the start/end of the server banner'
only. But it's also got all the right functionality to be used for the
(still missing) messages about which proxy SSH server the next set of
login prompts are going to refer to.
The next backend that tries to use the connection we're now proxying
is going to expect the seat's trust status to be in trusted mode,
because that's how things normally start up when a backend is
initialised. So we should set it back to that state before handing on
to that backend.
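In other words, just before handing the Seat back, we do the
equivalent of this (a one-line sketch):

    /* Restore the default trust state the next backend expects. */
    seat_set_trust_status(seat, true);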