mirror of https://git.tartarus.org/simon/putty.git synced 2025-01-09 01:18:00 +00:00
Commit Graph

409 Commits

Author SHA1 Message Date
Simon Tatham
98200d1bfe Arm: turn on PSTATE.DIT if available and needed.
DIT, for 'Data-Independent Timing', is a bit you can set in the
processor state on sufficiently new Arm CPUs, which promises that a
long list of instructions will deliberately avoid varying their timing
based on the input register values. Just what you want for keeping
your constant-time crypto primitives constant-time.

As far as I'm aware, no CPU has _yet_ implemented any data-dependent
optimisations, so DIT is a safety precaution against them doing so in
future. It would be embarrassing to be caught without it if a future
CPU does do that, so we now turn on DIT in the PuTTY process state.

I've put a call to the new enable_dit() function at the start of every
main() and WinMain() belonging to a program that might do
cryptography (even testcrypt, in case someone uses it for something!),
and in case I missed one there, also added a second call at the first
moment that any cryptography-using part of the code looks as if it
might become active: when an instance of the SSH protocol object is
configured, when the system PRNG is initialised, and when selecting
any cryptographic authentication protocol in an HTTP or SOCKS proxy
connection. With any luck those precautions between them should ensure
it's on whenever we need it.

Arm's own recommendation is that you should carefully choose the
granularity at which you enable and disable DIT: there's a potential
time cost to turning it on and off (I'm not sure what, but plausibly
something of the order of a pipeline flush), so it's a performance hit
to do it _inside_ each individual crypto function, but if CPUs start
supporting significant data-dependent optimisation in future, then it
will also become a noticeable performance hit to just leave it on
across the whole process. So you'd like to do it somewhere in the
middle: for example, you might turn on DIT once around the whole
process of verifying and decrypting an SSH packet, instead of once for
decryption and once for MAC.

With all respect to that recommendation as a strategy for maximum
performance, I'm not following it here. I turn on DIT at the start of
the PuTTY process, and then leave it on. Rationale:

 1. PuTTY is not otherwise a performance-critical application: it's
    not likely to max out your CPU for any purpose _other_ than
    cryptography. The most CPU-intensive non-cryptographic thing I can
    imagine a PuTTY process doing is the complicated computation of
    font rendering in the terminal, and that will normally be cached
    (you don't recompute each glyph from its outline and hints for
    every time you display it).

 2. I think a bigger risk lies in accidental side channels from having
    DIT turned off when it should have been on. I can imagine lots of
    causes for that. Missing a crypto operation in some unswept corner
    of the code; confusing control flow (like my coroutine macros)
    jumping with DIT clear into the middle of a region of code that
    expected DIT to have been set at the beginning; having a reference
    counter of DIT requests and getting it out of sync.

In a more sophisticated programming language, it might be possible to
avoid the risk in #2 by cleverness with the type system. For example,
in Rust, you could have a zero-sized type that acts as a proof token
for DIT being enabled (it would be constructed by a function that also
sets DIT, have a Drop implementation that clears DIT, and be !Send so
you couldn't use it in a thread other than the one where DIT was set),
and then you could require all the actual crypto functions to take a
DitToken as an extra parameter, at zero runtime cost. Then "oops I
forgot to set DIT around this piece of crypto" would become a compile
error. Even so, you'd have to take some care with coroutine-structured
code (what happens if a Rust async function yields while holding a DIT
token?) and with nesting (if you have two DIT tokens, you don't want
dropping the inner one to clear DIT while the outer one is still there
to wrongly convince callees that it's set). Maybe in Rust you could
get this all to work reliably. But not in C!

DIT is an optional feature of the Arm architecture, so we must first
test to see if it's supported. This is done the same way as we already
do for the various Arm crypto accelerators: on ELF-based systems,
check the appropriate bit in the 'hwcap' words in the ELF aux vector;
on Mac, look for an appropriate sysctl flag.
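
For illustration, here's a minimal sketch of what the ELF/Linux side of
that detection, plus the actual enable, might look like on AArch64.
This is an assumption-laden sketch rather than the code added in this
commit: it assumes glibc's getauxval() and the Linux HWCAP_DIT bit, and
the function names are mine.

    #include <stdbool.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_DIT
    #define HWCAP_DIT (1UL << 24)  /* AArch64 DIT bit in the Linux uapi hwcaps */
    #endif

    static bool platform_dit_available(void)
    {
        return (getauxval(AT_HWCAP) & HWCAP_DIT) != 0;
    }

    void enable_dit(void)
    {
        if (!platform_dit_available())
            return;
        /* MSR (immediate) to the PSTATE.DIT field; the assembler may
         * need to be told about the extension, e.g. -march=armv8.4-a. */
        __asm__ volatile("msr dit, #1");
    }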

On Windows I don't know of a way to query the DIT feature, _or_ of a
way to write the necessary enabling instruction in an MSVC-compatible
way. I've _heard_ that it might not be necessary, because Windows
might just turn on DIT unconditionally and leave it on, in an even
more extreme version of my own strategy. I don't have a source for
that - I heard it by word of mouth - but I _hope_ it's true, because
that would suit me very well! Certainly I can't write code to enable
DIT without knowing (a) how to do it, and (b) how to tell whether it's safe.
Nonetheless, I've put the enable_dit() call in all the right places in
the Windows main programs as well as the Unix and cross-platform code,
so that if I later find out that I _can_ put in an explicit enable of
DIT in some way, I'll only have to arrange to set HAVE_ARM_DIT and
compile the enable_dit() function appropriately.
2024-12-19 08:52:47 +00:00
Simon Tatham
a3f22a2cf9 Use the new 'HYBRID' names for the hybrid KEX packets.
draft-kampanakis-curdle-ssh-pq-ke defines the packet names
SSH_MSG_KEX_HYBRID_INIT and SSH_MSG_KEX_HYBRID_REPLY. They have the
same numbers as ECDH_INIT and ECDH_REPLY, and don't change anything
else, so this is just a naming change. But I think it's a good one,
because the post-quantum KEMs are less symmetric than ECDH (they're
much more like Ben's RSA kex in concept, though very different in
detail), and shouldn't try to pretend they're the same kind of thing.
Also this enables logparse.pl to give a warning about the fact that
one string in each packet contains two separate keys glomphed together.

For the latter reason (and also because it's easier in my code
structure) I've also switched to using the HYBRID naming for the
existing NTRU + Curve25519 hybrid method, even though the
Internet-Draft for that one still uses the ECDH names. Sorry, but I
think it's clearer!
2024-12-08 10:42:34 +00:00
Simon Tatham
e98615f0ba New post-quantum kex: ML-KEM, and three hybrids of it.
As standardised by NIST in FIPS 203, this is a lattice-based
post-quantum KEM.

Very vaguely, the idea of it is that your public key is a matrix A and
vector t, and the private key is the knowledge of how to decompose t
into two vectors with all their coefficients small, one transformed by
A relative to the other. Encryption of a binary secret starts by
turning each bit into one of two maximally separated residues mod a
prime q, and then adding 'noise' based on the public key in the form
of small increments and decrements mod q, again with some of the noise
transformed by A relative to the rest. Decryption uses the knowledge
of t's decomposition to align the two sets of noise so that the
_large_ changes (which masked the secret from an eavesdropper) cancel
out, leaving only a collection of small changes to the original secret
vector. Then the vector of input bits can be recovered by assuming
that those accumulated small pieces of noise haven't concentrated in
any particular residue enough to push it more than half way to the
other of its possible starting values.
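
As a very rough sketch of that structure (heavily simplified: the real
ML-KEM works with vectors of polynomials in an NTT domain, and
compresses and encodes all of these values):

    key generation:  t = A*s + e                    (s, e small; A, t public)
    encryption:      u = A^T*r + e1
                     v = t^T*r + e2 + round(q/2)*m  (r, e1, e2 small)
    decryption:      w = v - s^T*u
                       = e^T*r + e2 - s^T*e1 + round(q/2)*m

Every term of w except the last is small, so each bit of m is recovered
by asking whether the corresponding coefficient is nearer to 0 or to
q/2.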

A weird feature of it is that decryption is not a true mathematical
inverse of encryption. The assumption that the noise doesn't get large
enough to flip any bit of the secret is only probabilistically valid,
not a hard guarantee. In other words, key agreement can fail, simply
by getting particularly unlucky with the distribution of your random
noise! However, the probability of a failure is very low - less than
2^-138 even for ML-KEM-512, and gets even smaller with the larger
variants.

An awkward feature for our purposes is that the matrix A, containing a
large number of residues mod the prime q=3329, is required to be
constructed by a process of rejection sampling, i.e. generating random
12-bit values and throwing away the out-of-range ones. That would be a
real pain for our side-channel testing system, which generally handles
rejection sampling badly (since it necessarily involves data-dependent
control flow and timing variation). Fortunately, the matrix and the
random seed it was made from are both public: the matrix seed is
transmitted as part of the public key, so it's not necessary to try to
hide it. Accordingly, I was able to get the implementation to pass
testsc by means of not varying the matrix seed between runs, which is
justified by the principle of testsc that you vary the _secrets_ to
ensure timing is independent of them - and the matrix seed isn't a
secret, so you're allowed to keep it the same.
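
For concreteness, the rejection-sampling loop FIPS 203 specifies for
expanding the public matrix seed looks roughly like this (an
illustrative sketch, not the code in this commit): each three bytes of
SHAKE128 output yield two candidate 12-bit values, and candidates >= q
are simply discarded.

    #include <stddef.h>
    #include <stdint.h>

    #define MLKEM_Q 3329

    /* Consume XOF output, keeping only candidates < q.  Returns how many
     * coefficients were produced; the caller keeps squeezing more XOF
     * output and calling again until it has all 256. */
    static size_t sample_ntt_chunk(uint16_t *coeffs, size_t want,
                                   const uint8_t *xof, size_t xoflen)
    {
        size_t got = 0;
        for (size_t i = 0; i + 3 <= xoflen && got < want; i += 3) {
            uint16_t d1 = xof[i] | ((uint16_t)(xof[i+1] & 0x0F) << 8);
            uint16_t d2 = (xof[i+1] >> 4) | ((uint16_t)xof[i+2] << 4);
            if (d1 < MLKEM_Q)
                coeffs[got++] = d1;
            if (d2 < MLKEM_Q && got < want)
                coeffs[got++] = d2;
        }
        return got;
    }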

The three hybrid algorithms, defined by the current Internet-Draft
draft-kampanakis-curdle-ssh-pq-ke, include one hybrid of ML-KEM-768
with Curve25519 in exactly the same way we were already hybridising
NTRU Prime with Curve25519, and two more hybrids of ML-KEM with ECDH
over a NIST curve. The former hybrid interoperates with the
implementation in OpenSSH 9.9; all three interoperate with the fork
'openssh-oqs' at github.com/open-quantum-safe/openssh, and also with
the Python library AsyncSSH.
2024-12-08 10:41:08 +00:00
Simon Tatham
16629d3bbc Add more variants of SHAKE.
This adds a ssh_hashalg defining SHAKE256 with a 32-byte output, in
addition to the 114-byte output we already have.

Also, it defines a new API for using SHAKE128 and SHAKE256 in the more
general form of an extendable output function, which is to say that
you still have to put in all the input before reading any output, but
once you start reading output you can just keep going until you have
enough.

Both of these will be needed in an upcoming commit implementing ML-KEM.
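
The shape of such an XOF interface is roughly as follows; these
particular type and function names are illustrative, not necessarily
the ones added by this commit.

    #include <stddef.h>

    typedef struct ShakeXOF ShakeXOF;   /* opaque incremental state */

    ShakeXOF *shake256_xof_new(void);
    void shake_xof_write(ShakeXOF *sx, const void *data, size_t len);
    /* After the first read, no further input is accepted; output can be
     * read repeatedly until you have as much as you need. */
    void shake_xof_read(ShakeXOF *sx, void *output, size_t len);
    void shake_xof_free(ShakeXOF *sx);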
2024-12-08 09:50:08 +00:00
Simon Tatham
f08da2b638 Separate NTRU Prime from the hybridisation layer.
Now ntru.c contains just the NTRU business, and kex-hybrid.c contains
the system for running a post-quantum and a classical KEX and hashing
together the results. In between them is a new small vtable API for
the key encapsulation mechanisms that the post-quantum standardisation
effort seems to be settling on.
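
Something like the following gives the flavour of a minimal KEM vtable;
the types and member names here are illustrative, not the actual
interface introduced by this commit.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct kem_key kem_key;     /* opaque per-key state */

    typedef struct kem_alg {
        const char *name;
        /* Initiator: make an ephemeral key pair, emit the encapsulation key. */
        kem_key *(*keygen)(uint8_t *ek_out, size_t *ek_len);
        /* Responder: from the peer's key, emit ciphertext and shared secret. */
        bool (*encaps)(const uint8_t *ek, size_t ek_len,
                       uint8_t *ct_out, size_t *ct_len,
                       uint8_t *secret_out, size_t *secret_len);
        /* Initiator: recover the shared secret from the ciphertext. */
        bool (*decaps)(kem_key *key, const uint8_t *ct, size_t ct_len,
                       uint8_t *secret_out, size_t *secret_len);
        void (*free_key)(kem_key *key);
    } kem_alg;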
2024-12-08 09:50:08 +00:00
Simon Tatham
f454c84a23 Rename SocketPeerInfo to SocketEndpointInfo.
I'm preparing to be able to ask about the other end of the connection
too, so the first step is to give this data structure a neutral name
that can refer to either. No functional change yet.
2024-06-29 11:49:32 +01:00
Simon Tatham
f0f058ccb4 Merge 0.81 branch. 2024-04-15 19:42:50 +01:00
Simon Tatham
c193fe9848 Switch to RFC 6979 for DSA nonce generation.
This fixes a vulnerability that compromises NIST P521 ECDSA keys when
they are used with PuTTY's existing DSA nonce generation code. The
vulnerability has been assigned the identifier CVE-2024-31497.

PuTTY has been doing its DSA signing deterministically for literally
as long as it's been doing it at all, because I didn't trust Windows's
entropy generation. Deterministic nonce generation was introduced in
commit d345ebc2a5, as part of the initial version of our DSA
signing routine. At the time, there was no standard for how to do it,
so we had to think up the details of our system ourselves, with some
help from the Cambridge University computer security group.

More than ten years later, RFC 6979 was published, recommending a
similar system for general use, naturally with all the details
different. We didn't switch over to doing it that way, because we had
a scheme in place already, and as far as I could see, the differences
were not security-critical - just the normal sort of variation you
expect when any two people design a protocol component of this kind
independently.

As far as I know, the _structure_ of our scheme is still perfectly
fine, in terms of what data gets hashed, how many times, and how the
hash output is converted into a nonce. But the weak spot is the choice
of hash function: inside our dsa_gen_k() function, we generate 512
bits of random data using SHA-512, and then reduce that to the output
range by modular reduction, regardless of what signature algorithm
we're generating a nonce for.

In the original use case, this introduced a theoretical bias (the
output size is an odd prime, which doesn't evenly divide the space of
2^512 possible inputs to the reduction), but the theory was that since
integer DSA uses a modulus prime only 160 bits long (being based on
SHA-1, at least in the form that SSH uses it), the bias would be too
small to be detectable, let alone exploitable.

Then we reused the same function for NIST-style ECDSA, when it
arrived. This is fine for the P256 curve, and even P384. But in P521,
the order of the base point is _greater_ than 2^512, so when we
generate a 512-bit number and reduce it, the reduction never makes any
difference, and our output nonces are all in the first 2^512 elements
of the range of about 2^521. So this _does_ introduce a significant
bias in the nonces, compared to the ideal of uniformly random
distribution over the whole range. And it's been recently discovered
that a bias of this kind is sufficient to expose private keys, given a
manageably small number of signatures to work from.

(Incidentally, none of this affects Ed25519. The spec for that system
includes its own idea of how you should do deterministic nonce
generation - completely different again, naturally - and we did it
that way rather than our way, so that we could use the existing test
vectors.)

The simplest fix would be to patch our existing nonce generator to use
a longer hash, or concatenate a couple of SHA-512 hashes, or something
similar. But I think a more robust approach is to switch it out
completely for what is now the standard system. The main reason why I
prefer that is that the standard system comes with test vectors, which
adds a lot of confidence that I haven't made some other mistake in
following my own design.

So here's a commit that adds an implementation of RFC 6979, and
removes the old dsa_gen_k() function. Tests are added based on the
RFC's appendix of test vectors (as many as are compatible with the
more limited API of PuTTY's crypto code, e.g. we lack support for the
NIST P192 curve, or for doing integer DSA with many different hash
functions). One existing test changes its expected outputs, namely the
one that has a sample key pair and signature for every key algorithm
we support.
2024-04-06 09:30:57 +01:00
Simon Tatham
dea3ddca05 Add an extra HMAC constructor function.
This takes a plain ssh_hashalg, and constructs the most natural kind
of HMAC wrapper around it, taking its key length and output length
to be the hash's output length. In other words, it converts SHA-foo
into exactly the thing usually called HMAC-SHA-foo.

It does it by constructing a new ssh2_macalg vtable, and including it
in the same memory allocation as the actual hash object. That's the
first time in PuTTY I've done it this way.

Nothing yet uses this, but a new piece of code is about to.
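
To illustrate the 'vtable in the same allocation' trick in isolation
(using generic stand-in types, not the real ssh2_macalg machinery):

    #include <stdlib.h>

    struct macalg {                /* stand-in for the MAC vtable */
        const char *name;
        size_t keylen, maclen;
    };

    struct auto_hmac {
        struct macalg vt;          /* the constructed vtable lives in the
                                    * same allocation as the object that
                                    * refers to it */
        /* ... hash state would follow here ... */
    };

    /* HMAC-<hash> with key length and output length both equal to the
     * hash's output length, as described above. */
    struct macalg *auto_hmac_new(const char *hmac_name, size_t hash_outlen)
    {
        struct auto_hmac *ctx = malloc(sizeof(*ctx));
        if (!ctx)
            return NULL;
        ctx->vt.name = hmac_name;
        ctx->vt.keylen = hash_outlen;
        ctx->vt.maclen = hash_outlen;
        return &ctx->vt;           /* first member, so same address as ctx */
    }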
2024-04-01 08:45:21 +01:00
Simon Tatham
968ac6dbf0 Merge tag '0.80'.
This involved a trivial merge conflict fix in terminal.c because of
the way the cherry-pick 73b41feba5 differed from its original
bdbd5f429c.

But a more significant rework was needed in windows/console.c, because
the updates to confirm_weak_* conflicted with the changes on main to
abstract out the ConsoleIO system.
2023-12-18 14:47:48 +00:00
Simon Tatham
b80a41d386 Terrapin warning: say if reconfiguration can help.
The Terrapin vulnerability affects the modified binary packet protocol
used with ChaCha20+Poly1305, and also CBC-mode ciphers in ETM mode.
It's best prevented by the new strict-kex mode, but if the server
can't handle that protocol alteration, another approach is to change
PuTTY's configuration so that it will negotiate a different algorithm.

That may not be possible either (an obvious case being if the server
has been manually configured to _only_ support vulnerable modes). But
if it is possible, then it would be nice for us to detect that and
show how to do it.

That could be a hard problem in general, but the most likely cause of
it is configuring ChaCha20 to the top of the cipher list, so that it's
selected ahead of things that aren't vulnerable. And it's reasonably
easy to do just one fantasy-renegotiation, having moved ChaCha20 down
to below the warn line, and see if that sorts it out. If it does, we
can pass on that advice to the user.
2023-12-13 18:49:17 +00:00
Simon Tatham
0b00e4ce26 Warn about Terrapin vulnerability for unpatched servers.
If the KEXINIT exchange results in a vulnerable cipher mode, we now
give a warning, similar to the 'we selected a crypto primitive below
the warning threshold' one. But there's nothing we can do about it at
that point other than let the user abort the connection.
2023-12-13 18:47:08 +00:00
Simon Tatham
9fcbb86f71 Refactor confirm_weak to use SeatDialogText.
This centralises the messages for weak crypto algorithms (general, and
host keys in particular, the latter including a list of all the other
available host key types) into ssh/common.c, in much the same way as
we previously did for ordinary host key warnings.

The reason is the same too: I'm about to want to vary the text in one
of those dialog boxes, so it's convenient to start by putting it
somewhere that I can modify just once.
2023-11-29 07:29:29 +00:00
Simon Tatham
85680c77c0 Make x11_get_auth_from_authfile take a Filename.
I think the only reason it currently takes a plain string is because
its interesting caller (in unix/x11.c) has just constructed a string
out of an environment variable, and it seemed like the path of least
effort not to bother wrapping it into a proper Filename. But when
Filename on Windows becomes more interesting, we'll need it to take
the full version.
2023-05-29 15:41:50 +01:00
Simon Tatham
d663356634 Work around key algorithm naming change in OpenSSH <= 7.7.
When you send a "publickey" USERAUTH_REQUEST containing a certified
RSA key, and you want to use a SHA-2 based RSA algorithm, modern
OpenSSH expects you to send the algorithm string as
rsa-sha2-NNN-cert-v01@openssh.com. But 7.7 and earlier didn't
recognise those names, and expected the algorithm string in the
userauth request packet to be ssh-rsa-cert-v01@... and would then
follow it with an rsa-sha2-NNN signature.

OpenSSH itself has a bug workaround for its own older versions. Follow
suit.
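
The shape of the workaround is roughly this; a sketch only, since the
real code hooks into PuTTY's usual bug-compatibility flags rather than
taking a bare boolean.

    #include <stdbool.h>

    /* Algorithm name to put in the "publickey" userauth request for a
     * certified RSA key.  The signature that follows is rsa-sha2-NNN
     * either way. */
    static const char *cert_rsa_userauth_id(bool want_sha512,
                                            bool server_has_old_bug)
    {
        if (server_has_old_bug)            /* OpenSSH 7.7 and earlier */
            return "ssh-rsa-cert-v01@openssh.com";
        return want_sha512 ? "rsa-sha2-512-cert-v01@openssh.com"
                           : "rsa-sha2-256-cert-v01@openssh.com";
    }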
2023-05-05 00:05:28 +01:00
Simon Tatham
f6f9848465 Add support for HMAC-SHA512.
I saw a post on comp.security.ssh just now where someone had
encountered an SSH server that would _only_ speak that, which makes it
worth bothering to implement.

The totally obvious implementation works, and passes the test cases
from RFC 6234.

(cherry picked from commit b77e985513)
2023-04-23 13:24:19 +01:00
Simon Tatham
ed94aa5058 Remove spurious 'const' on return types. 2022-09-03 11:59:12 +01:00
Simon Tatham
15f097f399 New feature: k-i authentication helper plugins.
In recent months I've had two requests from different people to build
support into PuTTY for automatically handling complicated third-party
auth protocols layered on top of keyboard-interactive - the kind of
thing where you're asked to enter some auth response, and you have to
refer to some external source like a web server to find out what the
right response _is_, which is a pain to do by hand, so you'd prefer it
to be automated in the SSH client.

That seems like a reasonable thing for an end user to want, but I
didn't think it was a good idea to build support for specific
protocols of that kind directly into PuTTY, where there would no doubt
be an ever-lengthening list, and maintenance needed on all of them.

So instead, in collaboration with one of my correspondents, I've
designed and implemented a protocol to be spoken between PuTTY and a
plugin running as a subprocess. The plugin can opt to handle the
keyboard-interactive authentication loop on behalf of the user, in
which case PuTTY passes on all the INFO_REQUEST packets to it, and
lets it make up responses. It can also ask questions of the user if
necessary.

The protocol spec is provided in a documentation appendix. The entire
configuration for the end user consists of providing a full command
line to use as the subprocess.

In the contrib directory I've provided an example plugin written in
Python. It gives a set of fixed responses suitable for getting through
Uppity's made-up k-i system, because that was a reasonable thing I
already had lying around to test against. But it also provides example
code that someone else could pick up and insert their own live
response-provider into the middle of, assuming they were happy with it
being in Python.
2022-09-01 20:43:23 +01:00
Simon Tatham
5e2acd9af7 New bug workaround: KEXINIT filtering.
We've occasionally had reports of SSH servers disconnecting as soon as
they receive PuTTY's KEXINIT. I think all such reports have involved
the kind of simple ROM-based SSH server software you find in small
embedded devices.

I've never been able to prove it, but I've always suspected that one
possible cause of this is simply that PuTTY's KEXINIT is _too long_,
either in number of algorithms listed or in total length (especially
given all the ones that end in @very.long.domain.name suffixes).

If I'm right about either of those being the cause, then it's just
become even more likely to happen, because of all the extra
Diffie-Hellman groups and GSSAPI algorithms we just threw into our
already-long list in the previous few commits.

A workaround I've had in mind for ages is to wait for the server's
KEXINIT, and then filter our own down to just the algorithms the
server also mentioned. Then our KEXINIT is no longer than that of the
server, and hence, presumably fits in whatever buffer it has. So I've
implemented that workaround, in anticipation of it being needed in the
near future.
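
In outline, the filtering step amounts to something like this - a
self-contained sketch rather than the real code, which works on PuTTY's
own string and list types:

    #include <stdbool.h>
    #include <string.h>

    /* Does 'name' appear in a comma-separated algorithm list? */
    static bool in_commasep_list(const char *list, const char *name)
    {
        size_t n = strlen(name);
        while (*list) {
            const char *comma = strchr(list, ',');
            size_t len = comma ? (size_t)(comma - list) : strlen(list);
            if (len == n && memcmp(list, name, n) == 0)
                return true;
            if (!comma)
                break;
            list = comma + 1;
        }
        return false;
    }

    /* Keep only the algorithms the server also listed, except for names
     * we must always advertise, such as "ext-info-c". */
    static void write_filtered_list(char *out, size_t outsize,
                                    const char *const *ours, size_t nours,
                                    const char *server_list)
    {
        out[0] = '\0';
        for (size_t i = 0; i < nours; i++) {
            if (!in_commasep_list(server_list, ours[i]) &&
                strcmp(ours[i], "ext-info-c") != 0)
                continue;
            if (out[0])
                strncat(out, ",", outsize - strlen(out) - 1);
            strncat(out, ours[i], outsize - strlen(out) - 1);
        }
    }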

(Well ... it's not _quite_ true that our KEXINIT is at most the same
length as the server's. In fact I had to leave in one KEXINIT item that
won't match anything in the server's list, namely "ext-info-c" which
gates access to SHA-2 based RSA. So if we turn out to support
absolutely everything on all the server's lists, then our KEXINIT
would be a few bytes longer than the server's, even with this
workaround. But that would only cause trouble if the server's outgoing
KEXINIT was skating very close to whatever buffer size it has for the
incoming one, and I'm guessing that's not very likely.)

((Another possible cause of this kind of disconnection would be a
server that simply objects to seeing any KEXINIT string it doesn't
know how to speak. But _surely_ no such server would have survived
initial testing against any full-featured client at all!))
2022-08-30 18:51:33 +01:00
Simon Tatham
cec8c87626 Support elliptic-curve Diffie-Hellman GSS KEX.
This is surprisingly simple, because it wasn't necessary to touch the
GSS parts at all. Nothing changes about the message formats between
integer DH and ECDH in GSS KEX, except that the mpints sent back and
forth as part of integer DH are replaced by the opaque strings used in
ECDH. So I've invented a new KEXTYPE and made it control a bunch of
small conditionals in the middle of the GSS KEX code, leaving the rest
unchanged.
2022-08-30 18:09:39 +01:00
Simon Tatham
031d86ed5b Add RFC8268 / RFC3526 Diffie-Hellman group{15,16,17,18}.
These are a new set of larger integer Diffie-Hellman fixed groups,
using SHA-512 as the hash.
2022-08-30 18:09:39 +01:00
Simon Tatham
c1a2114b28 Implement AES-GCM using the @openssh.com protocol IDs.
I only recently found out that OpenSSH defined their own protocol IDs
for AES-GCM, defined to work the same as the standard ones except that
they fixed the semantics for how you select the linked cipher+MAC pair
during key exchange.

(RFC 5647 defines protocol ids for AES-GCM in both the cipher and MAC
namespaces, and requires that you MUST select both or neither - but
this contradicts the selection policy set out in the base SSH RFCs,
and there's no discussion of how you resolve a conflict between them!
OpenSSH's answer is to do it the same way ChaCha20-Poly1305 works,
because that will ensure the two suites don't fight.)
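
In other words, the selection policy amounts to something like this (an
illustrative sketch using the relevant protocol ids):

    #include <string.h>

    /* With the @openssh.com GCM ids (as with ChaCha20-Poly1305), the
     * negotiated cipher dictates the MAC, and the separately negotiated
     * MAC name is ignored. */
    static const char *effective_mac(const char *cipher,
                                     const char *negotiated_mac)
    {
        if (!strcmp(cipher, "aes128-gcm@openssh.com") ||
            !strcmp(cipher, "aes256-gcm@openssh.com") ||
            !strcmp(cipher, "chacha20-poly1305@openssh.com"))
            return cipher;         /* the cipher carries its own MAC */
        return negotiated_mac;
    }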

People do occasionally ask us for this linked cipher/MAC pair, and now
I know it's actually feasible, I've implemented it, including a pair
of vector implementations for x86 and Arm using their respective
architecture extensions for multiplying polynomials over GF(2).

Unlike ChaCha20-Poly1305, I've kept the cipher and MAC implementations
in separate objects, with an arm's-length link between them that the
MAC uses when it needs to encrypt single cipher blocks to use as the
inputs to the MAC algorithm. That enables the cipher and the MAC to be
independently selected from their hardware-accelerated versions, just
in case someone runs on a system that has polynomial multiplication
instructions but not AES acceleration, or vice versa.

There's a fourth implementation of the GCM MAC, which is a pure
software implementation of the same algorithm used in the vectorised
versions. It's too slow to use live, but I've kept it in the code for
future testing needs, and because it's a convenient place to dump my
design comments.

The vectorised implementations are fairly crude as far as optimisation
goes. I'm sure serious x86 _or_ Arm optimisation engineers would look
at them and laugh. But GCM is a fast MAC compared to HMAC-SHA-256
(indeed compared to HMAC-anything-at-all), so it should at least be
good enough to use. And we've got a working version with some tests
now, so if someone else wants to improve them, they can.
2022-08-16 20:33:58 +01:00
Simon Tatham
840043f06e Add 'next_message' methods to cipher and MAC vtables.
This provides a convenient hook to be called between SSH messages, for
the crypto components to do any per-message processing like
incrementing a sequence number.
2022-08-16 18:27:06 +01:00
Simon Tatham
423ce20ffb Pageant core: separate public and private key storage.
Previously, we had a single data structure 'keytree' containing
records each involving a public and private key (the latter maybe in
clear, or as an encrypted key file, or both). Now, we have separate
'pubkeytree' and 'privkeytree', the former storing public keys indexed
by their full public blob (including certificate, if any), and the
latter storing private keys, indexed by the _base_ public blob
only (i.e. with no certificate included).

The effect of this is that deferred decryption interacts more sensibly
with certificates. Now, if you load certified and uncertified versions
of the same key into Pageant, or two or more differently certified
versions, then the separate public key records will all share the same
private key record, and hence, a single state of decryption. So the
first time you enter a passphrase that unlocks that private key, it
will unlock it for all public keys that share the same private half.
Conversely, re-encrypting any one of them will cause all of them to
become re-encrypted, eliminating the risk that you deliberately
re-encrypt a key you really care about and forget that another equally
valuable copy of it is still in clear.

The most subtle part of this turned out to be the question of what key
comment you present in a deferred decryption prompt. It's very
tempting to imagine that it should be the comment that goes with
whichever _public_ key was involved in the signing request that
triggered the prompt. But in fact, it _must_ be the comment that goes
with whichever version of the encrypted key file is stored in Pageant
- because what if the user chose different passphrases for their
uncertified and certified PPKs? Then the decryption prompt will have
to indicate which passphrase they should be typing, so it's vital to
present the comment that goes with the _file we're decrypting_.

(Of course, if the user has selected different passphrases for those
two PPKs but the _same_ comment, they're still going to end up
confused. But at least once they realise they've done that, they have
a workaround.)
2022-08-06 11:34:36 +01:00
Simon Tatham
cd7f6c4407 Certificate-aware handling of key fingerprints.
OpenSSH, when called on to give the fingerprint of a certified public
key, will in many circumstances generate the hash of the public blob
of the _underlying_ key, rather than the hash of the full certificate.

I think the hash of the certificate is also potentially useful (if
nothing else, it provides a way to tell apart multiple certificates on
the same key). But I can also see that it's useful to be able to
recognise a key as the same one 'really' (since all certificates on
the same key share a private key, so they're unavoidably related).

So I've dealt with this by introducing an extra pair of fingerprint
types, giving the cross product of {MD5, SHA-256} x {base key only,
full certificate}. You can manually select which one you want to see
in some circumstances (notably PuTTYgen), and in others (such as
diagnostics) both fingerprints will be emitted side by side via the
new functions ssh2_double_fingerprint[_blob].
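
So the set of fingerprint types now forms a 2x2 grid, along these lines
(the enumerator names are my guess at the spelling, so treat them as
illustrative):

    typedef enum FingerprintType {
        SSH_FPTYPE_MD5,           /* base key only */
        SSH_FPTYPE_SHA256,        /* base key only */
        SSH_FPTYPE_MD5_CERT,      /* full certificate */
        SSH_FPTYPE_SHA256_CERT,   /* full certificate */
    } FingerprintType;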

The default, following OpenSSH, is to just fingerprint the base key.
2022-08-05 18:08:59 +01:00
Simon Tatham
4fa3480444 Formatting: realign run-on parenthesised stuff.
My bulk indentation check also turned up a lot of cases where a run-on
function call or if statement didn't have its later lines aligned
correctly relative to the open paren.

I think this is quite easy to do by getting things out of
sync (editing the first line of the function call and forgetting to
update the rest, perhaps even because you never _saw_ the rest during
a search-replace). But a few didn't quite fit into that pattern, in
particular an outright misleading case in unix/askpass.c where the
second line of a call was aligned neatly below the _wrong_ one of the
open parens on the opening line.

Restored as many alignments as I could easily find.
2022-08-03 20:48:46 +01:00
Simon Tatham
ff2ffa539c Windows Pageant: display RSA/DSA cert bit counts.
The test in the Pageant list box code for whether we should display
the bit count of a key was done by checking specifically for ssh_rsa
or ssh_dsa, which of course meant that it didn't catch the certified
versions of those keys.

Now there's yet another footling ssh_keyalg method that asks the
question 'is it worth displaying the bit count?', to which RSA and DSA
answer yes, and the opensshcert family delegates to its base key type,
so that RSA and DSA certified keys also answer yes.

(This isn't the same as ssh_key_public_bits(alg, blob) >= 0. All
supported public key algorithms _can_ display a bit count if called
on. But only in RSA and DSA is it configurable, and therefore worth
bothering to print in the list box.)

Also in this commit, I've fixed a bug in the certificate
implementation of public_bits, which was passing a wrongly formatted
public blob to the underlying key. (Done by factoring out the code
from opensshcert_new_shared which constructed the _correct_ public
blob, and reusing it in public_bits to do the same job.)
2022-08-02 18:39:31 +01:00
Simon Tatham
fea08bb244 Windows Pageant: use nicer key-type strings.
If you load a certified key into Windows Pageant, the official SSH id
for the key type is so long that it overflows its space in the list
box and overlaps the key fingerprint hash.

This commit introduces yet another footling little ssh_keyalg method
which returns a shorter human-readable description of the key type,
and uses that in the Windows Pageant list box only.

(Not in the Unix Pageant list, though, because being output to stdout,
that seems like something people are more likely to want to
machine-read, which firstly means we shouldn't change it lightly, and
secondly, if we did change it we'd want to avoid having a variable
number of spaces in the replacement key type text.)
2022-08-02 18:03:45 +01:00
Simon Tatham
6737a19072 cmdgen: human-readable certificate info dump.
The recently added SeatDialogText type was just what I needed to add a
method to the ssh_key vtable for dumping certificate information in a
human-readable format. It will be good for displaying in a Windows
dialog box as well as in cmdgen's text format.

This commit introduces and implements the new method, and adds a
--cert-info mode to command-line Unix PuTTYgen that uses it. The
Windows side will follow shortly.
2022-07-30 17:16:55 +01:00
Simon Tatham
42740a5455 Allow manually confirming and caching certified keys.
In the case where a server presents a host key signed by a different
certificate from the one you've configured, it need not _always_ be
evidence of wrongdoing. I can imagine situations in which two CAs
cover overlapping sets of things, and you don't want to blanket-trust
one of them, but you do want to connect to a specific host signed by
that one.

Accordingly, PuTTY's previous policy of unconditionally aborting the
connection if certificate validation fails (which was always intended
as a stopgap until I thought through what I wanted to replace it with)
is now replaced by fallback handling: we present the host key
fingerprint to the user and give them the option to accept and/or
cache it based on the public key itself.

This means that the certified key types have to have a representation
in the host key cache. So I've assigned each one a type id, and
generate the cache string itself by simply falling back to the base
key.

(Rationale for the latter: re-signing a public key with a different
certificate doesn't change the _private_ key, or the set of valid
signatures generated with it. So if you've been convinced for reasons
other than the certificate that a particular private key is in the
possession of $host, then proof of ownership of that private key
should be enough to convince you you're talking to $host no matter
what CA has signed the public half this week.)

We now offer to receive a given certified host key type if _either_ we
have at least one CA configured to trust that host, _or_ we have a
certified key of that type cached. (So once you've decided manually
that you trust a particular key, we can still receive that key and
authenticate the host with it, even if you later delete the CA record
that it didn't match anyway.)

One change from normal (uncertified) host key handling is that for
certified key types _all_ the host key prompts use the stronger
language, with "WARNING - POTENTIAL SECURITY BREACH!" rather than the
mild 'hmm, we haven't seen this host before'. Rationale: if you
expected this CA key and got that one, it _could_ be a bold-as-brass
MITM attempt in which someone hoped you'd accept their entire CA key.
The mild wording is only for the case where we had no previous
expectations _at all_ for the host to violate: not a CA _or_ a cached
key.
2022-07-17 14:11:38 +01:00
Simon Tatham
dc7ba12253 Permit configuring RSA signature types in certificates.
As distinct from the type of signature generated by the SSH server
itself from the host key, this lets you exclude (and by default does
exclude) the old "ssh-rsa" SHA-1 signature type from the signature of
the CA on the certificate.
2022-05-02 11:17:58 +01:00
Simon Tatham
9f583c4fa8 Certificate-specific ssh_key method suite.
Certificate keys don't work the same as normal keys, so the rest of
the code is going to have to pay attention to whether a key is a
certificate, and if so, treat it differently and do cert-specific
stuff to it. So here's a collection of methods for that purpose.

With one exception, these methods of ssh_key are not expected to be
implemented at all in non-certificate key types: they should only ever
be called once you already know you're dealing with a certificate. So
most of the new method pointers can be left out of the ssh_keyalg
initialisers.

The exception is the base_key method, which retrieves the base key of
a certificate - the underlying one with the certificate stripped off.
It's convenient for non-certificate keys to implement this too, and
just return a pointer to themselves. So I've added an implementation
in nullkey.c doing that. (The returned pointer doesn't transfer
ownership; you have to use the new ssh_key_clone() if you want to keep
the base key after freeing the certificate key.)

The methods _only_ implemented in certificates:

Query methods to return the public key of the CA (for looking up in a
list of trusted ones), and to return the key id string (which exists
to be written into log files).

Obviously, we need a check_cert() method which will verify the CA's
actual signature, not to mention checking all the other details like
the principal and the validity period.

And there's another fiddly method for dealing with the RSA upgrade
system, called 'related_alg'. This is quite like alternate_ssh_id, in
that its job is to upgrade one key algorithm to a related one with
more modern RSA signing flags (or any other similar thing that might
later reuse the same mechanism). But where alternate_ssh_id took the
actual signing flags as an argument, this takes a pointer to the
upgraded base algorithm. So it answers the question "What is to this
key algorithm as you are to its base?" - if you call it on
opensshcert_ssh_rsa and give it ssh_rsa_sha512, it'll give you back
opensshcert_ssh_rsa_sha512.

(It's awkward to have to have another of these fiddly methods, and in
the longer term I'd like to try to clean up their proliferation a bit.
But I even more dislike the alternative of just going through
all_keyalgs looking for a cert algorithm with, say, ssh_rsa_sha512 as
the base: that approach would work fine now but it would be a lurking
time bomb for when all the -cert-v02@ methods appear one day. This
way, each certificate type can upgrade itself to the appropriately
related version. And at least related_alg is only needed if you _are_
a certificate key type - it's not adding yet another piece of
null-method boilerplate to the rest.)
2022-04-25 15:09:31 +01:00
Simon Tatham
34d01e1b65 Family of key types for OpenSSH certificates.
This commit is groundwork for full certificate support, but doesn't
complete the job by itself. It introduces the new key types, and adds
a test in cryptsuite ensuring they work as expected, but nothing else.

If you manually construct a PPK file for one of the new key types, so
that it has a certificate in the public key field, then this commit
enables PuTTY to present that key to a server for user authentication,
either directly or via Pageant storing and using it. But I haven't yet
provided any mechanism for making such a PPK, so by itself, this isn't
much use.

Also, these new key types are not yet included in the KEXINIT host
keys list, because if they were, they'd just be treated as normal host
keys, in that you'd be asked to manually confirm the SSH fingerprint
of the certificate. I'll enable them for host keys once I add the
missing pieces.
2022-04-25 15:09:31 +01:00
Simon Tatham
043c24844a Improve the base64 utility functions.
The low-level functions to handle a single atom of base64 at a time
have been in 'utils' / misc.h for ages, but the higher-level family of
base64_encode functions that handle a whole data block were hidden
away in sshpubk.c, and there was no higher-level decode function at
all.

Now moved both into 'utils' modules and declared them in misc.h rather
than ssh.h. Also, improved the APIs: they all take ptrlen in place of
separate data and length arguments, their naming is more consistent
and more explicit (the previous base64_encode which didn't name its
destination is now base64_encode_fp), and the encode functions now
accept cpl == 0 as a special case meaning that the output base64 data
is wanted in the form of an unbroken single-line string with no
trailing \n.
2022-04-25 14:10:16 +01:00
Simon Tatham
c2f1a563a5 Utility function ssh_key_clone().
This makes a second independent copy of an existing ssh_key, for
situations where one piece of code is going to want to keep it after
its current owner frees it.

In order to have it work on an arbitrary ssh_key, whether public-only
or a full public+private key pair, I've had to add an ssh_key query
method to ask whether a private key is known. I'm surprised I haven't
found a need for that before! But I suppose in most situations in an
SSH client you statically know which kind of key you're dealing with.
2022-04-24 08:39:04 +01:00
Simon Tatham
180d1b78de Extra helper functions for adding key_components.
In this commit, I provide further functions which generate the
existing set of data types:

 - key_components_add_text_pl() adds a text component, but takes a
   ptrlen rather than a const char *, in case that was what you
   happened to have already.

 - key_components_add_uint() ends up adding an mp_int to the
   structure, but takes it as input in the form of an ordinary C
   integer, for the convenience of call sites which will want to do
   that a lot and don't enjoy repeating the mp_int construction
   boilerplate.

 - key_components_add_copy() takes a pointer to one of the
   key_component sub-structs in an existing key_components, and copies
   it into the output key_components under a new name, handling
   whatever type it turns out to have.
2022-04-24 08:39:04 +01:00
Simon Tatham
62bc6c5448 New key component type KCT_BINARY.
This stores its data in the same format as the existing KCT_TEXT, but
it displays differently in puttygen --dump, expecting that the data
will be full of horrible control characters, invalid UTF-8, etc.

The displayed data is of the form b64("..."), so you get a hint about
what the encoding is, and can still paste into Python by defining the
identifier 'b64' to be base64.b64decode or equivalent.
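
So a dumped binary component comes out looking something like this
(hypothetical component name and data, shown purely to illustrate the
format):

    example-binary-field=b64("3q2+7w==")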
2022-04-24 08:39:04 +01:00
Simon Tatham
68514ac8a1 Refactor the key-components mechanism a bit.
Having recently pulled it out into its own file, I think it could also
do with a bit of tidying. In this rework:

 - the substructure for a single component now has a globally visible
   struct tag, so you can make a variable pointing at it, saving
   verbiage in every piece of code looping over a key_components

 - the 'is_mp_int' flag has been replaced with a type enum, so that
   more types can be added without further upheaval

 - the printing loop in cmdgen.c for puttygen --dump has factored out
   the initial 'name=' prefix on each line so that it isn't repeated
   per component type

 - the storage format for text components is now a strbuf rather than
   a plain char *, which I think is generally more useful.
2022-04-24 08:39:04 +01:00
Simon Tatham
cf36b9215f ssh_keyalg: new method 'alternate_ssh_id'.
Previously, the fact that "ssh-rsa" sometimes comes with two subtypes
"rsa-sha2-256" and "rsa-sha2-512" was known to three different parts
of the code - two in userauth and one in transport. Now the knowledge
of what those ids are, which one goes with which signing flags, and
which key types have subtypes at all, is centralised into a method of
the key algorithm, and all those locations just query it.

This will enable the introduction of further key algorithms that have
a parallel upgrade system.
2022-04-24 08:39:04 +01:00
Simon Tatham
f9775a7b67 Make ssh_keyalg's supported_flags a method.
It's a class method rather than an object method, so it doesn't allow
keys with the same algorithm to make different choices about what
flags they support. But that's not what I wanted it for: the real
purpose is to allow one key algorithm to delegate supported_flags to
another, by having its method implementation call the one from the
delegate class.

(If only C's compile/link model permitted me to initialise a field of
one global const struct variable to be a copy of that of another, I
wouldn't need the runtime overhead of this method! But object file
formats don't let you even specify that.)

Most key algorithms support no flags at all, so they all want to use
the same implementation of this method. So I've started a file of
stubs utils/nullkey.c to contain the common stub version.
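
That common stub is essentially trivial - something like the following
sketch (the exact prototype depends on the real ssh_keyalg method
signature):

    typedef struct ssh_keyalg ssh_keyalg;   /* PuTTY's key-algorithm vtable */

    /* Most key algorithms support no signing flags at all. */
    unsigned nullkey_supported_flags(const ssh_keyalg *self)
    {
        return 0;
    }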
2022-04-24 08:39:04 +01:00
Simon Tatham
a5c0205b87 Utility functions to get the algorithm from a public key.
Every time I've had to do this before, I've always done the three-line
dance of initialising a BinarySource and calling get_string on it.
It's long past time I wrapped that up into a convenient subroutine.
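
For reference, the 'three-line dance' being wrapped up is roughly the
following (quoted from memory, so treat the exact macro spellings as
illustrative):

    BinarySource src[1];
    BinarySource_BARE_INIT_PL(src, public_blob);
    ptrlen algname = get_string(src);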
2022-04-24 08:38:27 +01:00
Simon Tatham
3a54f28a4e Extra utility function add_to_commasep_pl.
Just like add_to_commasep, but takes a ptrlen.
2022-04-21 08:13:38 +01:00
Simon Tatham
faf1601a55 Implement OpenSSH 9.x's NTRU Prime / Curve25519 kex.
This consists of DJB's 'Streamlined NTRU Prime' quantum-resistant
cryptosystem, currently in round 3 of the NIST post-quantum key
exchange competition; it's run in parallel with ordinary Curve25519,
and generates a shared secret combining the output of both systems.

(Hence, even if you don't trust this newfangled NTRU Prime thing at
all, it's at least no _less_ secure than the kex you were using
already.)

As the OpenSSH developers point out, key exchange is the most urgent
thing to make quantum-resistant, even before working quantum computers
big enough to break crypto become available, because a break of the
kex algorithm can be applied retroactively to recordings of your past
sessions. By contrast, authentication is a real-time protocol, and can
only be broken by a quantum computer if there's one available to
attack you _already_.

I've implemented both sides of the mechanism, so that PuTTY and Uppity
both support it. In my initial testing, the two sides can both
interoperate with the appropriate half of OpenSSH, and also (of
course, but it would be embarrassing to mess it up) with each other.
2022-04-15 17:46:06 +01:00
Simon Tatham
e59ee96554 Refactor ecdh_kex into an organised vtable.
This is already slightly nice because it lets me separate the
Weierstrass and Montgomery code more completely, without having to
have a vtable tucked into dh->extra. But more to the point, it will
allow completely different kex methods to fit into the same framework
later.

To that end, I've moved more of the descriptive message generation
into the vtable, and also provided the constructor with a flag that
will let it do different things in client and server.

Also, following on from a previous commit, I've arranged that the new
API returns arbitrary binary data for the exchange hash, rather than
an mp_int. An upcoming implementation of this interface will want to
return an encoded string instead of an encoded mp_int.
2022-04-15 17:46:06 +01:00
Simon Tatham
5935c68288 Update source file names in comments and docs.
Correcting a source file name in the docs just now reminded me that
I've seen a lot of outdated source file names elsewhere in the code,
due to all the reorganisation since we moved to cmake. Here's a giant
pass of trying to make them all accurate again.
2022-01-22 15:51:31 +00:00
Simon Tatham
a2ff884512 Richer data type for interactive prompt results.
All the seat functions that request an interactive prompt of some kind
to the user - both the main seat_get_userpass_input and the various
confirmation dialogs for things like host keys - were using a simple
int return value, with the general semantics of 0 = "fail", 1 =
"proceed" (and in the case of seat_get_userpass_input, answers to the
prompts were provided), and -1 = "request in progress, wait for a
callback".

In this commit I change all those functions' return types to a new
struct called SeatPromptResult, whose primary field is an enum
replacing those simple integer values.

The main purpose is that the enum has not three but _four_ values: the
"fail" result has been split into 'user abort' and 'software abort'.
The distinction is that a user abort occurs as a result of an
interactive UI action, such as the user clicking 'cancel' in a dialog
box or hitting ^D or ^C at a terminal password prompt - and therefore,
there's no need to display an error message telling the user that the
interactive operation has failed, because the user already knows,
because they _did_ it. 'Software abort' is from any other cause, where
PuTTY is the first to know there was a problem, and has to tell the
user.

We already had this 'user abort' vs 'software abort' distinction in
other parts of the code - the SSH backend has separate termination
functions which protocol layers can call. But we assumed that any
failure from an interactive prompt request fell into the 'user abort'
category, which is not true. A couple of examples: if you configure a
host key fingerprint in your saved session via the SSH > Host keys
pane, and the server presents a host key that doesn't match it, then
verify_ssh_host_key would report that the user had aborted the
connection, and feel no need to tell the user what had gone wrong!
Similarly, if a password provided on the command line was not
accepted, then (after I fixed the semantics of that in the previous
commit) the same wrong handling would occur.

So now, those Seat prompt functions too can communicate whether the
user or the software originated a connection abort. And in the latter
case, we also provide an error message to present to the user. Result:
in those two example cases (and others), error messages should no
longer go missing.

Implementation note: to avoid the hassle of having the error message
in a SeatPromptResult being a dynamically allocated string (and hence,
every recipient of one must always check whether it's non-NULL and
free it on every exit path, plus being careful about copying the
struct around), I've instead arranged that the structure contains a
function pointer and a couple of parameters, so that the string form
of the message can be constructed on demand. That way, the only users
who need to free it are the ones who actually _asked_ for it in the
first place, which is a much smaller set.
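
Put together, the shape of the new type is roughly as follows; the
field and constant names here are my paraphrase of the description
above, not necessarily the real declarations.

    #include <stddef.h>

    typedef enum SeatPromptResultKind {
        SPRK_INCOMPLETE,    /* request in progress; wait for a callback */
        SPRK_USER_ABORT,    /* user cancelled; they already know why */
        SPRK_SW_ABORT,      /* software failure; an error must be shown */
        SPRK_OK,            /* prompt answered successfully */
    } SeatPromptResultKind;

    typedef struct SeatPromptResult SeatPromptResult;
    struct SeatPromptResult {
        SeatPromptResultKind kind;
        /* Only meaningful for SPRK_SW_ABORT: the error message is built
         * on demand from these fields, so most recipients never have to
         * allocate or free a string at all. */
        void (*errfn)(SeatPromptResult *spr, char *buf, size_t buflen);
        const char *errdata_lit;
        int errdata_err;
    };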

(This is one of the rare occasions that I regret not having C++'s
extra features available in this code base - a unique_ptr or
shared_ptr to a string would have been just the thing here, and the
compiler would have done all the hard work for me of remembering where
to insert the frees!)
2021-12-28 18:08:31 +00:00
Simon Tatham
831accb2a9 Expose openssh_bcrypt() to testcrypt, and test it.
I happened to notice in passing that this function doesn't have any
tests (although it will have been at least somewhat tested by the
cmdgen interop test system).

This involved writing a wrapper that passes the passphrase and salt as
ptrlens, and I decided it made more sense to make the same change to
the original function too and adjust the call sites appropriately.

I derived a test case by getting OpenSSH itself to make an encrypted
key file, and then using the inputs and output from the password hash
operation that decrypted it again.
2021-12-24 10:13:28 +00:00
Simon Tatham
cd60a602f5 Stop using short exponents for Diffie-Hellman.
I recently encountered a paper [1] which catalogues all kinds of
things that can go wrong when one party in a discrete-log system
invents a prime and the other party chooses an exponent. In
particular, some choices of prime make it reasonable to use a short
exponent to save time, but others make that strategy very bad.

That paper is about the ElGamal encryption scheme used in OpenPGP,
which is basically integer Diffie-Hellman with one side's key being
persistent: a shared-secret integer is derived exactly as in DH, and
then it's used to communicate a message integer by simply multiplying
the shared secret by the message, mod p.

I don't _know_ that any problem of this kind arises in the SSH usage
of Diffie-Hellman: the standard integer DH groups in SSH are safe
primes, and as far as I know, the usual generation of prime moduli for
DH group exchange also picks safe primes. So the short exponents PuTTY
has been using _should_ be OK.

However, the range of imaginative other possibilities shown in that
paper make me nervous, even so! So I think I'm going to retire the
short exponent strategy, on general principles of overcaution.

This slows down 4096-bit integer DH by about a factor of 3-4 (which
would be worse if it weren't for the modpow speedup in the previous
commit). I think that's OK, because, firstly, computers are a lot
faster these days than when I originally chose to use short exponents,
and secondly, more and more implementations are now switching to
elliptic-curve DH, which is unaffected by this change (and with which
we've always been using maximum-length exponents).

[1] On the (in)security of ElGamal in OpenPGP. Luca De Feo, Bertram
Poettering, Alessandro Sorniotti. https://eprint.iacr.org/2021/923
2021-11-28 12:19:34 +00:00
Simon Tatham
a434b13050 Pass diffiehellman ssh_kex objects to testcrypt.
This slightly simplifies the lookup function get_dh_group(), but
mostly, the point is to make it more similar to the other lookup
functions, because I'm planning to have those autogenerated.
2021-11-22 18:32:17 +00:00
Simon Tatham
f00c72cc2a Framework for announcing which Interactor is talking.
All this Interactor business has been gradually working towards being
able to inform the user _which_ network connection is currently
presenting them with a password prompt (or whatever), in situations
where more than one of them might be, such as an SSH connection being
used as a proxy for another SSH connection when neither one has
one-touch login configured.

At some point, we have to arrange that any attempt to do a user
interaction during connection setup - be it a password prompt, a host
key confirmation dialog, or just displaying an SSH login banner -
makes it clear which host it's come from. That's going to mean calling
some kind of announcement function before doing any of those things.

But there are several of those functions in the Seat API, and calls to
them are scattered far and wide across the SSH backend. (And not even
just there - the Rlogin backend also uses seat_get_userpass_input).
How can we possibly make sure we don't forget a vital call site on
some obscure little-tested code path, and leave the user confused in
just that one case which nobody might notice for years?

Today I thought of a trick to solve that problem. We can use the C
type system to enforce it for us!

The plan is: we invent a new struct type which contains nothing but a
'Seat *'. Then, for every Seat method which does a thing that ought to
be clearly identified as relating to a particular Interactor, we
adjust the API for that function to take the new struct type where it
previously took a plain 'Seat *'. Or rather - doing less violence to
the existing code - we only need to adjust the API of the dispatch
functions inline in putty.h.

How does that help? Because the way you _get_ one of these
struct-wrapped Seat pointers is by calling interactor_announce() on
your Interactor, which will in turn call interactor_get_seat(), and
wrap the returned pointer into one of these structs.

The effect is that whenever the SSH (or Rlogin) code wants to call one
of those particular Seat methods, it _has_ to call
interactor_announce() just beforehand, which (once I finish all of
this) will make sure the user is aware of who is presenting the prompt
or banner or whatever. And you can't forget to call it, because if you
don't call it, then you just don't have a struct of the right type to
give to the Seat method you wanted to call!
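
Concretely, the trick looks something like this - a sketch based on the
description above, so the struct and function names may not match the
real headers:

    typedef struct Seat Seat;
    typedef struct Interactor Interactor;

    /* The wrapper type: nothing but a Seat pointer. */
    typedef struct InteractionReadySeat {
        Seat *seat;
    } InteractionReadySeat;

    /* The only intended way to get one: announce which Interactor is
     * about to talk to the user. */
    InteractionReadySeat interactor_announce(Interactor *itr);

    /* Prompt-presenting dispatch functions take the wrapper rather than
     * a bare Seat *, so forgetting to announce is a compile error.
     * (Illustrative stand-in prototype, not a real Seat method.) */
    int seat_present_interactive_prompt(InteractionReadySeat iseat,
                                        const char *prompt_text);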

(Of course, there's nothing stopping code from _deliberately_ taking a
Seat * it already has and wrapping it into the new struct. In fact
SshProxy has to do that, in order to forward these requests up the
chain of Seats. But the point is that you can't do it _by accident_,
just by forgetting to make a vital function call - when you do that,
you _know_ you're doing it on purpose.)

No functional change: the new interactor_announce() function exists,
and the type-system trick ensures it's called in all the right places,
but it doesn't actually _do_ anything yet.
2021-10-30 18:20:33 +01:00