Now that I've removed side-channel leakage from both prime candidate
generation (via mp_unsafe_mod_integer) and Miller-Rabin, the
probabilistic prime generation system in this code base can get
through testsc without it detecting any source of cache or timing
side channels. So you should be able to generate an RSA key (in which
the primes themselves must be secret) in a more hostile environment
than you could previously be confident of.
This is a bit counterintuitive, because _obviously_ random prime
generation takes a variable amount of time, because it has to keep
retrying until an attempt succeeds! But that's OK as long as the
attempts are completely independent, because then any timing or cache
information leaked by a _failed_ attempt will only tell an attacker
about the numbers used in the failed attempt, and those numbers have
been thrown away, so it doesn't matter who knows them. It's only
important that the _successful_ attempt, from generating the random
candidate through to completing its verification as (probably) prime,
should be side-channel clean, because that's the attempt whose data is
actually going to be turned into a private key that needs to be kept
secret.
(In particular, this means you have to avoid the old-fashioned
strategy of generating successive prime candidates by incrementing a
starting value until you find something not divisible by any small
prime, because the number of iterations of that method would be a
timing leak. Happily, we stopped doing that last year, in commit
08a3547bc5: now every candidate integer is generated
independently, and if one fails the initial checks, we throw it away
and start completely from scratch with a fresh random value.)
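The overall shape is a rejection-sampling loop along these lines (a
schematic sketch, with made-up helper names rather than the real
API):

    mp_int *generate_prime(unsigned bits)
    {
        while (1) {
            mp_int *p = fresh_random_candidate(bits);
            if (passes_small_prime_filter(p) && passes_miller_rabin(p))
                return p;   /* the successful attempt: everything on
                             * this path must be side-channel clean */
            mp_free(p);     /* failed attempt: thrown away completely,
                             * so anything it leaks is worthless */
        }
    }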
So the test harness works by repeatedly running the prime generator in
one-shot mode until an attempt succeeds, and then resetting the
random-number stream to where it was just before the successful
attempt. Then we generate the same prime number again, this time with
the sclog mechanism turned on - and then we compare it against the
version we previously generated with the same random numbers, to make
sure they're the same. This checks that the attempts really _are_
independent, in the sense that the prime generator is a pure function
of its random input stream, and doesn't depend on state left over from
previous attempts.
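Schematically, the harness does something like this (illustrative
pseudocode, not the actual testsc source):

    do {
        rng_save_position(&rewind_point);  /* just before the attempt */
        p = prime_generation_attempt();    /* one-shot mode */
    } while (!p);                          /* retry until success */

    rng_restore_position(&rewind_point);   /* rewind the random stream */
    sclog_start();
    p2 = prime_generation_attempt();       /* replay with the same
                                            * random numbers */
    sclog_stop();
    assert(mp_cmp_eq(p, p2));              /* pure function of the RNG */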
This is used to notify the Seat that some data has been cleared from
the backend's outgoing data buffer. In other words, it notifies the
Seat that it might be worth calling backend_sendbuffer() again.
We've never needed this before, because until now, Seats have always
been the 'main program' part of the application, meaning they were
also in control of the event loop. So they've been able to call
backend_sendbuffer() proactively, every time they go round the event
loop, instead of having to wait for a callback.
But now, the SSH proxy is the first example of a Seat without
privileged access to the event loop, so it has no way to find out that
the backend's sendbuffer has got smaller. And without that, it can't
pass that notification on to plug_sent, to unblock in turn whatever
the proxied connection might have been waiting to send.
In fact, before this commit, sshproxy.c never called plug_sent at all.
As a result, large data uploads over an SSH jump host would hang
forever as soon as the outgoing buffer filled up for the first time:
the main backend (to which sshproxy.c was acting as a Socket) would
carefully stop filling up the buffer, and then never receive the call
to plug_sent that would cause it to start again.
The new callback is ignored everywhere except in sshproxy.c. It might
be a good idea to remove backend_sendbuffer() entirely and convert all
previous uses of it into non-empty implementations of this callback,
so that we've only got one system; but for the moment, I haven't done
that.
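The sshproxy.c implementation, schematically, just relays the
notification to the Plug of the proxied connection (a sketch: the
exact names and signature are assumed, not quoted from the real
source):

    static void sshproxy_sent(Seat *seat, size_t new_sendbuffer)
    {
        SshProxy *sp = container_of(seat, SshProxy, seat);
        /* unblock whatever the proxied connection was waiting
         * to send */
        plug_sent(sp->plug, new_sendbuffer);
    }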
This allows the 'no trivial auth' option introduced by the previous
commit to be tested. Uppity has grown three new options to make it
accept "none" authentication, keyboard-interactive involving no
prompts, and the perverse sending of USERAUTH_SUCCESS after a
signatureless public-key offer.
The first of those options also enables the analogue in SSH-1; the
other two have no SSH-1 analogues in the first place. (SSH-1 public
key authentication has a challenge-response structure that doesn't
contain any way to terminate the exchange early with success. And the
TIS and CryptoCard methods, which are its closest analogues of k-i,
have a fixed number of prompts, which is not 0.)
Suggested by Manfred Kaiser, who also wrote most of this patch
(although outlying parts, like documentation and SSH-1 support, are by
me).
This is a second line of defence against the kind of spoofing attacks
in which a malicious or compromised SSH server rushes the client
through the userauth phase of SSH without actually requiring any auth
inputs (passwords or signatures or whatever), and then at the start of
the connection phase it presents something like a spoof prompt,
intended to be taken for part of userauth by the user but in fact with
some more sinister purpose.
Our existing line of defence against this is the trust sigil system,
and as far as I know, that's still working. This option allows a bit of
extra defence in depth: if you don't expect your SSH server to
trivially accept authentication in the first place, then enabling this
option will cause PuTTY to disconnect if it unexpectedly does so,
without the user having to spot the presence or absence of a fiddly
little sigil anywhere.
Several types of authentication count as 'trivial'. The obvious one is
the SSH-2 "none" method, which clients always try first so that the
failure message will tell them what else they can try, and which a
server can instead accept in order to authenticate you unconditionally.
But there are two other ways to do it that we know of: one is to run
keyboard-interactive authentication and send an empty INFO_REQUEST
packet containing no actual prompts for the user, and another even
weirder one is to send USERAUTH_SUCCESS in response to the user's
preliminary *offer* of a public key (instead of sending the usual PK_OK
to request an actual signature from the key).
This new option detects all of those, by clearing the 'is_trivial_auth'
flag only when we send some kind of substantive authentication response
(be it a password, a k-i prompt response, a signature, or a GSSAPI
token). So even if there's a further path through the userauth maze we
haven't spotted, that somehow avoids sending anything substantive, this
strategy should still pick it up.
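Schematically, with flag, option and function names assumed for
illustration:

    s->is_trivial_auth = true;    /* at the start of userauth */

    /* ...cleared at exactly the points that put something
     * substantive on the wire (a password, a k-i prompt response,
     * a signature, a GSSAPI token): */
    s->is_trivial_auth = false;

    /* ...so that when USERAUTH_SUCCESS arrives: */
    if (s->is_trivial_auth &&
        conf_get_bool(s->conf, CONF_ssh_no_trivial_userauth))
        ssh_sw_abort(s->ppl.ssh,
                     "Authentication was trivial; disconnecting");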
This introduces a new entry to the radio-button list of proxy types,
in which the 'Proxy host' box is taken to be the name of an SSH server
or saved session. We make an entire subsidiary SSH connection to that
host, open a direct-tcpip channel through it, and use that as the
connection over which to run the primary network connection.
The result is basically the same as if you used a local proxy
subprocess, with a command along the lines of 'plink -batch %proxyhost
-nc %host:%port'. But it's all done in-process, by having an SshProxy
object implement the Socket trait to talk to the main connection, and
implement Seat and LogPolicy to talk to its subsidiary SSH backend.
All the refactoring in recent years has got us to the point where we
can do that without both SSH instances fighting over some global
variable or unique piece of infrastructure.
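Schematically, the SshProxy object wears all of those hats at once,
something like this (field names assumed, not quoted from the real
source):

    typedef struct SshProxy {
        Plug *plug;           /* the main connection's Plug, served
                               * in our role as its Socket */
        Socket sock;          /* our Socket identity */
        Seat seat;            /* our Seat identity, handed to the
                               * subsidiary SSH backend */
        LogPolicy logpolicy;  /* our LogPolicy identity, likewise */
        Backend *backend;     /* the subsidiary SSH connection */
    } SshProxy;

with container_of() used to recover the whole SshProxy from whichever
identity a given callback arrives through.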
From an end user perspective, doing SSH proxying in-process like this
is a little bit easier to set up: it doesn't require you to bake the
full pathname of Plink into your saved session (or to have it on the
system PATH), and the SshProxy setup function automatically turns off
SSH features that would be inappropriate in this context, such as
additional port forwardings, or acting as a connection-sharing
upstream. And it has minor advantages like getting the Event Log for
the subsidiary connection interleaved in the main Event Log, as if it
were stderr output from a proxy subcommand, without having to
deliberately configure the subsidiary Plink into verbose mode.
However, this is an initial implementation only, and it doesn't yet
support the _big_ payoff for doing this in-process, which (I hope)
will be the ability to handle interactive prompts from the subsidiary
SSH connection via the same user interface as the primary one. For
example, you might need to answer two password prompts in succession,
or (the first time you use a session configured this way) confirm the
host keys for both proxy and destination SSH servers. Comments in the
new source file discuss some design thoughts on filling in this gap.
For the moment, if the proxy SSH connection encounters any situation
where an interactive prompt is needed, it will make the safe
assumption, the same way 'plink -batch' would do. So it's at least no
_worse_ than the existing technique of putting the proxy connection in
a subprocess.
This notifies the Seat that the entire backend session has finished
and closed its network connection - or rather, that it _might_ have
done, and that the frontend should check backend_connected() if it
wasn't planning to do so already.
The existing Seat implementations haven't needed this: the GUI ones
don't actually need to do anything specific when the network
connection goes away, and the CLI ones deal with it by being in charge
of their own event loop so that they can easily check
backend_connected() at every possible opportunity in any case. But I'm
about to introduce a new Seat implementation that does need to know
this, and doesn't have any other way to get notified of it.
Less than 12 hours after 0.75 went out of the door, a user pointed out
that enabling the 'Use system colours' config option causes an
immediate NULL-dereference crash. The reason is that a chain of
calls from term_init() ends up calling back to the Windows
implementation of the palette_get_overrides() method, which responds
by trying to call functions on the static variable 'term' in window.c,
which won't be initialised until term_init() has returned.
Simple fix: palette_get_overrides() is now given a pointer to the
Terminal that it should be updating, because it can't find it out any
other way.
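In other words, the method now has a shape along these lines (a
sketch; the exact signature is assumed):

    /* in the Seat vtable: the Terminal arrives as a parameter,
     * instead of being fetched from the static variable that
     * term_init() hasn't finished setting up yet */
    void (*palette_get_overrides)(Seat *seat, Terminal *term);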
Now that the main source file of Plink in each platform directory has
the same name, we can centralise the main definition of the
program in the main CMakeLists.txt, and in the platform directory,
just add the few extra modules needed to clear up platform-specific
details.
The same goes for psocks. And PSCP and PSFTP could have been moved to
the top level already - I just hadn't done it in the initial setup.
This gets rid of all those 'win', 'ux' and 'gtk' prefixes
which made filenames annoying to type and to tab-complete. Also, as
with my other recent renaming sprees, I've taken the opportunity to
expand and clarify some of the names so that they're not such cryptic
abbreviations.
It's another file that should have been subdivided into lots of tiny
separate things in the utils library - especially since for some
reason I made a completely separate 'guimisc' cmake-level library for
it when there was no need.
This clears up another large pile of clutter at the top level, and in
the process, allows me to rename source files to things that don't all
have that annoying 'ssh' prefix at the top.
This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those
source files previously contained multiple implementations of the
algorithm, enabled or disabled by ifdefs detecting whether they would
work on a given compiler. And in order to get advanced machine
instructions like AES-NI or NEON crypto into the output file when the
compile flags hadn't enabled them, we had to do nasty stuff with
compiler-specific pragmas or attributes.
Now we can do the detection at cmake time, and enable advanced
instructions in the more sensible way, by compile-time flags. So I've
broken up each of these modules into lots of sub-pieces: a file called
(e.g.) 'foo-common.c' containing common definitions across all
implementations (such as round constants), one called 'foo-select.c'
containing the top-level vtable(s), and a separate file for each
implementation exporting just the vtable(s) for that implementation.
One advantage of this is that it depends a lot less on compiler-
specific bodgery. My particular least favourite part of the previous
setup was the part where I had to _manually_ define some Arm ACLE
feature macros before including <arm_neon.h>, so that it would define
the intrinsics I wanted. Now that I'm enabling interesting architecture
features in the normal way, on the compiler command line, there's no
need for that kind of trick: the right feature macros are already
defined and <arm_neon.h> does the right thing.
Another change in this reorganisation is that I've stopped assuming
there's just one hardware implementation per platform. Previously, the
accelerated vtables were called things like sha256_hw, and varied
between FOO-NI and NEON depending on platform; and the selection code
would simply ask 'is hw available? if so, use hw, else sw'. Now, each
HW acceleration strategy names its vtable its own way, and the
selection vtable has a whole list of possibilities to iterate over
looking for a supported one. So if someone feels like writing a second
accelerated implementation of something for a given platform - for
example, I've heard you can use plain NEON to speed up AES somewhat
even without the crypto extension - then it will now have somewhere to
drop in alongside the existing ones.
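So the selection logic now looks schematically like this
(identifiers illustrative, not the real ones):

    static const ssh_hashalg *const sha256_impls[] = {
        &sha256_ni,    /* x86 SHA-NI instructions */
        &sha256_neon,  /* Arm NEON crypto extension */
        &sha256_sw,    /* pure software: always available */
    };

    static const ssh_hashalg *sha256_select(void)
    {
        for (size_t i = 0; i < lenof(sha256_impls); i++)
            if (impl_available(sha256_impls[i]))
                return sha256_impls[i];
        return NULL;   /* can't happen: sw is always supported */
    }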
The definition of HAVE_CMAKE_H is now at the very top of the main
CMakeLists.txt, so that it applies to all objects. And the consequent
include of cmake.h is at the very top of defs.h, so that it should be
included first by everything. This way, I don't have to worry any more
that the HAVE_FOO definitions in cmake.h might accidentally have
failed to reach some part of the code.
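Concretely, the arrangement at the top of defs.h is approximately
this (a sketch, not a verbatim quote):

    #ifdef HAVE_CMAKE_H
    #include "cmake.h"
    #endif

so that any translation unit including defs.h picks up the full set
of HAVE_FOO definitions before any other header gets a chance to
test them.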
add_platform_sources_to_library() is now called
add_sources_from_current_dir(), so that it will make sense when I use
it in subdirectories that aren't for a particular platform.
It was there because of a limitation of mkfiles.pl, which had a single
list of include directories that it used on all platforms. CMake does
not. So now there's an easier and more sensible way to have a
different header file included on Windows and Unix: call it the same
name in the two subdirectories, and rely on CMake having put the right
one of those subdirs on the include path.
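For example (header name hypothetical): if both unix/foo.h and
windows/foo.h exist, then every source file can simply say

    #include "foo.h"

and the per-platform include path chooses which of the two is found.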
This new implementation uses the same optimisation-barrier technique
that I used in various places in testsc: have a no-op function, and a
volatile function pointer pointing at it, and then call through the
function pointer, so that nothing actually happens (apart from the
physical call and return) but the compiler has to assume that
_anything_ might have happened.
Doing this just after a memset enforces that the compiler can't have
thrown away the memset, because the called function might (for
example) check that all the memory really is zero and abort if not.
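A minimal sketch of the pattern (identifier names mine, not
necessarily the real ones):

    #include <stddef.h>
    #include <string.h>

    static void no_op(void *ptr, size_t size) {}

    /* volatile function pointer: the compiler must assume it could
     * point anywhere by the time the call happens */
    static void (*volatile clear_barrier)(void *, size_t) = no_op;

    void smemclr(void *b, size_t n)
    {
        if (b && n > 0) {
            memset(b, 0, n);
            /* nothing happens here, but the compiler can't prove
             * that, so it can't discard the memset above */
            clear_barrier(b, n);
        }
    }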
I've been turning this over in my mind ever since coming up with the
technique for testsc. I think it's far more robust than the previous
smemclr technique: so much so that I'm switching to using it
_everywhere_, and no longer using platform alternatives like Windows's
SecureZeroMemory().
This is the start of the payoff for all that reorganisation (and
perhaps also from having moved to a library-based build structure in
the first place): a collection of pointless stub functions in outlying
programs, which were only there to prevent link failures, now no
longer need to be there even for that purpose.
This is a module that I'd noticed in the past was too monolithic.
There's a big pile of stub functions in uxpgnt.c that only have to be
there because the implementation of true X11 _forwarding_ (i.e.
actually managing a channel within an SSH connection), which Pageant
doesn't need, was in the same module as more general X11-related
utility functions which Pageant does need.
So I've broken up this awkward monolith. Now x11fwd.c contains only
the code that really does all go together for dealing with SSH X
forwarding: the management of an X forwarding channel (including the
vtables to make it behave as Channel at the SSH end and a Plug at the
end that connects to the local X server), and the management of
authorisation for those channels, including maintaining a tree234 of
possible auth values and verifying the one we received.
Most of the functions removed from this file have moved into the utils
subdir, and also into the utils library (i.e. further down the link
order), because they were basically just string and data processing.
One exception is x11_setup_display, which parses a display string and
returns a struct telling you everything about how to connect to it.
That talks to the networking code (it does name lookups and makes a
SockAddr), so it has to live in the network library rather than utils,
and therefore it's not in the utils subdirectory either.
The other exception is x11_get_screen_number, which it turned out
nothing called at all! Apparently the job it used to do is now done as
part of x11_setup_display. So I've just removed it completely.
Now that the new CMake build system is encouraging us to lay out the
code like a set of libraries, it seems like a good idea to make them
look more _like_ libraries, by putting things into separate modules as
far as possible.
This fixes several previous annoyances in which you had to link
against some object in order to get a function you needed, but that
object also contained other functions you didn't need which included
link-time symbol references you didn't want to have to deal with. The
usual offender was subsidiary supporting programs including misc.c for
some innocuous function and then finding they had to deal with the
requirements of buildinfo().
This big reorganisation introduces three new subdirectories called
'utils', one at the top level and one in each platform subdir. In each
case, the directory contains basically the same files that were
previously placed in the 'utils' build-time library, except that the
ones that were extremely miscellaneous (misc.c, utils.c, uxmisc.c,
winmisc.c, winmiscs.c, winutils.c) have been split up into much
smaller pieces.
A couple of actual checks were missing (elf_aux_info, sysctlbyname).
Several more were accidentally left out of cmake.h.in, meaning they
wouldn't be propagated from cmake's variable space into the actual
compilation. And a handful of checks in the C source were still using
the autotools-style 'if defined' in place of the cmake-style "it's
always 0 or 1" plain #if.
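To spell out that last point, with HAVE_FOO standing in for any of
the affected macros: cmake.h now defines each HAVE_FOO
unconditionally, to either 0 or 1, so the autotools-style test

    #if defined HAVE_FOO

is always true whether or not the feature was found, and the correct
form is the plain

    #if HAVE_FOO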
This brings various concrete advantages over the previous system:
- consistent support for out-of-tree builds on all platforms
- more thorough support for Visual Studio IDE project files
- support for Ninja-based builds, which is particularly useful on
Windows where the alternative nmake has no parallel option
- a really simple set of build instructions that work the same way on
all the major platforms (look how much shorter README is!)
- better decoupling of the project configuration from the toolchain
configuration, so that my Windows cross-building doesn't need
(much) special treatment in CMakeLists.txt
- configure-time tests on Windows as well as Linux, so that a lot of
ad-hoc #ifdefs second-guessing a particular feature's presence from
the compiler version can now be replaced by tests of the feature
itself
Also some longer-term software-engineering advantages:
- other people have actually heard of CMake, so they'll be able to
produce patches to the new build setup more easily
- unlike the old mkfiles.pl, CMake is not my personal problem to
maintain
- most importantly, mkfiles.pl was just a horrible pile of
unmaintainable cruft, which even I found painful to make changes
to or to use, and desperately needed throwing in the bin. I've
already thrown away all the variants of it I had in other projects
of mine, and was only delaying this one so we could make the 0.75
release branch first.
This change comes with a noticeable build-level restructuring. The
previous Recipe worked by compiling every object file exactly once,
and then making each executable by linking a precisely specified
subset of the same object files. But in CMake, that's not the natural
way to work - if you write the obvious command that puts the same
source file into two executable targets, CMake generates a makefile
that compiles it once per target. That can be an advantage, because it
gives you the freedom to compile it differently in each case (e.g.
with a #define telling it which program it's part of). But in a
project that has many executable targets and had carefully contrived
to _never_ need to build any module more than once, all it does is
bloat the build time pointlessly!
To avoid slowing down the build by a large factor, I've put most of
the modules of the code base into a collection of static libraries
organised vaguely thematically (SSH, other backends, crypto, network,
...). That means all those modules can still be compiled just once
each, because once each library is built it's reused unchanged for all
the executable targets.
One upside of this library-based structure is that now I don't have to
manually specify exactly which objects go into which programs any more
- it's enough to specify which libraries are needed, and the linker
will figure out the fine detail automatically. So there's less
maintenance to do in CMakeLists.txt when the source code changes.
But that reorganisation also adds fragility, because of the trad Unix
linker semantics of walking along the library list only once, so that
cyclic references between your libraries will provoke link errors. The
current setup builds successfully, but I suspect it only just manages
it.
(In particular, I've found that MinGW is the most finicky on this
score of the Windows compilers I've tried building with. So I've
included a MinGW test build in the new-look Buildscr, because
otherwise I think there'd be a significant risk of introducing
MinGW-only build failures due to library search order, which wasn't a
risk in the previous library-free build organisation.)
In the longer term I hope to be able to reduce the risk of that, via
gradual reorganisation (in particular, breaking up too-monolithic
modules, to reduce the risk of knock-on references when you included a
module for function A and it also contains function B with an
unsatisfied dependency you didn't really need). Ideally I want to
reach a state in which the libraries all have sensibly described
purposes, a clearly documented (partial) order in which they're
permitted to depend on each other, and a specification of what stubs
you have to put where if you're leaving one of them out (e.g.
nocrypto) and what callbacks you have to define in your non-library
objects to satisfy dependencies from things low in the stack (e.g.
out_of_memory()).
One thing that's gone completely missing in this migration,
unfortunately, is the unfinished MacOS port linked against Quartz GTK.
That's because it turned out that I can't currently build it myself,
on my own Mac: my previous installation of GTK had bit-rotted as a
side effect of an Xcode upgrade, and I haven't yet been able to
persuade jhbuild to make me a new one. So I can't even build the MacOS
port with the _old_ makefiles, and hence, I have no way of checking
that the new ones also work. I hope to bring that port back to life at
some point, but I don't want it to block the rest of this change.
This will let us put two controls side by side (e.g. in disjoint
columns of a multi-col layout) and indicate that instead of the
default behaviour of aligning their top edges, their centrelines (or,
even better if available, their font baselines) should be aligned.
NFC: nothing uses this yet.
We never expect to be passed a NULL GtkFrontend pointer, and even if
we were, we'd have crashed several lines above this test.
It was benign, of course, but Coverity (which pointed it out) dislikes
this kind of thing on the basis that it's confusing - you ought to
either test it for NULL properly, or not at all - and I see its point.
Coverity objected to several similar cases in this code in which I'd
checked a pointer for NULL after already having done things to it. I
think all the cases are benign, in that (as the comments tersely
mention) those checks could only fail if the unifontsel system had got
_really_ confused, in which case probably some other bug would have
been on the point of manifesting anyway. But Coverity has a point
anyway: if I'm _going_ to check those values for NULL, let's check
them consistently.
Commit d851df486f deleted a #if / #else / #endif on the grounds
that the condition would now always be true, without also deleting the
code inside the #else. Happily, the then-branch ended with a return,
so it was a benign mistake - the erroneously left-in else-clause code
was unreachable. But now that Coverity has pointed it out, let's
remove it.
I've decided that it was a mistake to use -E as the option for adding
keys encrypted, because it's better to use it as a fingerprint type
selector for the Pageant client side. That way it works the same as
command-line PuTTYgen, and also OpenSSH ssh-add (and ssh-keygen).
What spelling(s) to use instead for the option to add keys encrypted?
Obviously, the same ones I've just decided on for Windows Pageant;
there's no sensible reason to make them different.
If the prompt got big enough to reach to the edges of the dialog box,
it looked ugly without any margins. Previously I hadn't noticed,
because the prompt text was never that big.
Now Windows Pageant has two clearly distinct dialog boxes for
requesting a key passphrase: one to use synchronously when the user
has just used the 'Add Key' GUI action, and one to use asynchronously
in response to an agent client's attempt to use a key that was loaded
encrypted.
Also fixed the wording in the asynchronous box: there were two copies
of the 'enter passphrase' instruction, one from the dialog definition
in pageant.rc file and one from the cross-platform pageant.c. Now
pageant.c doesn't format a whole user-facing message any more: it
leaves that to the platform front end, which can do it the way it wants.
I've also added a call to SetForegroundWindow, to try to get the
passphrase prompt into the foreground. In my experience this doesn't
actually get it the keyboard focus, which I think is deliberate on
Windows's part and there's nothing I can do about it. But at least the
user should _see_ that the prompt is there, so they can focus it
themselves.
Now you can press 'i' at the host key prompt, and it will print all
the key fingerprints we know about, plus the full public key. So if
you wanted to check against a fingerprint type that wasn't the one
shown in the default prompt, you can see all the ones we've got.
Now we pass the whole set of fingerprints, and also a displayable
format for the full host public key.
NFC: this commit doesn't modify any of the host key prompts to _use_
any of the new information. That's coming next.
The callback-function API in pageant.h for key enumeration is modified
so that we pass an array of all the available fingerprints for each
key.
In Unix Pageant, that's used by the -l option to print whichever
fingerprint the user asked for. (Unfortunately, the option name -E is
already taken, so for the moment I've called it --fptype. I may
revisit that later.)
Also, when matching a key by fingerprint, we're prepared to match
against any fingerprint type we know, with disambiguating prefixes if
necessary (e.g. you can match "md5:ab:12" or "sha256:Ab12"). That has
to be done a bit carefully, because we match MD5 hex fingerprints
case-insensitively, but SHA256 fingerprints are case-sensitive.
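Schematically, the matching rule behaves like this (helper name
hypothetical; the real code is more elaborate):

    #include <stdbool.h>
    #include <string.h>
    #include <strings.h>   /* strncasecmp (POSIX) */

    static bool fp_prefix_match(const char *fp, const char *prefix)
    {
        size_t len = strlen(prefix);
        if (strncasecmp(prefix, "md5", 3) == 0)
            return strncasecmp(fp, prefix, len) == 0;  /* hex digits */
        else
            return strncmp(fp, prefix, len) == 0;      /* base64 */
    }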
These would have left the terminal in the wrong termios state, if a
batch-mode Plink was run from a terminal and had to abort the
connection due to a weak crypto primitive.
The assorted host-key and warning prompt messages have no reason to
differ between the two platforms, so let's centralise them. Also,
while I'm here, I've done the same with some basic support functions
that are identical in both modules.