DIT, for 'Data-Independent Timing', is a bit you can set in the
processor state on sufficiently new Arm CPUs, which promises that a
long list of instructions will deliberately avoid varying their timing
based on the input register values. Just what you want for keeping
your constant-time crypto primitives constant-time.
As far as I'm aware, no CPU has _yet_ implemented any data-dependent
optimisations, so DIT is a safety precaution against them doing so in
future. It would be embarrassing to be caught without it if a future
CPU does do that, so we now turn on DIT in the PuTTY process state.
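In outline, the new function is tiny. (A sketch rather than the exact
code: it assumes GCC-style inline assembly, and platform_dit_available()
is a stand-in name for the feature detection described below.)

    #include <stdbool.h>

    bool platform_dit_available(void);   /* detection: see below */

    void enable_dit(void)
    {
    #ifdef HAVE_ARM_DIT
        if (platform_dit_available()) {
            /* Set the PSTATE.DIT bit. The MSR-immediate form may
             * need -march=armv8.4-a (or a raw .inst encoding) to
             * assemble on older toolchains. */
            asm volatile("msr dit, #1");
        }
    #endif
    }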
I've put a call to the new enable_dit() function at the start of every
main() and WinMain() belonging to a program that might do
cryptography (even testcrypt, in case someone uses it for something!),
and in case I missed one, I've also added a second call at the first
moment that any cryptography-using part of the code looks as if it
might become active: when an instance of the SSH protocol object is
configured, when the system PRNG is initialised, and when selecting
any cryptographic authentication protocol in an HTTP or SOCKS proxy
connection. With any luck those precautions between them should ensure
it's on whenever we need it.
Arm's own recommendation is that you should carefully choose the
granularity at which you enable and disable DIT: there's a potential
time cost to turning it on and off (I'm not sure how large, but plausibly
something of the order of a pipeline flush), so it's a performance hit
to do it _inside_ each individual crypto function, but if CPUs start
supporting significant data-dependent optimisation in future, then it
will also become a noticeable performance hit to just leave it on
across the whole process. So you'd like to do it somewhere in the
middle: for example, you might turn on DIT once around the whole
process of verifying and decrypting an SSH packet, instead of once for
decryption and once for MAC.
With all respect to that recommendation as a strategy for maximum
performance, I'm not following it here. I turn on DIT at the start of
the PuTTY process, and then leave it on. Rationale:
1. PuTTY is not otherwise a performance-critical application: it's
not likely to max out your CPU for any purpose _other_ than
cryptography. The most CPU-intensive non-cryptographic thing I can
imagine a PuTTY process doing is the complicated computation of
font rendering in the terminal, and that will normally be cached
(you don't recompute each glyph from its outline and hints for
every time you display it).
2. I think a bigger risk lies in accidental side channels from having
DIT turned off when it should have been on. I can imagine lots of
causes for that: missing a crypto operation in some unswept corner
of the code; confusing control flow (like my coroutine macros)
jumping with DIT clear into the middle of a region of code that
expected DIT to have been set at the beginning; having a reference
counter of DIT requests and getting it out of sync.
In a more sophisticated programming language, it might be possible to
avoid the risk in #2 by cleverness with the type system. For example,
in Rust, you could have a zero-sized type that acts as a proof token
for DIT being enabled (it would be constructed by a function that also
sets DIT, have a Drop implementation that clears DIT, and be !Send so
you couldn't use it in a thread other than the one where DIT was set),
and then you could require all the actual crypto functions to take a
DitToken as an extra parameter, at zero runtime cost. Then "oops I
forgot to set DIT around this piece of crypto" would become a compile
error. Even so, you'd have to take some care with coroutine-structured
code (what happens if a Rust async function yields while holding a DIT
token?) and with nesting (if you have two DIT tokens, you don't want
dropping the inner one to clear DIT while the outer one is still there
to wrongly convince callees that it's set). Maybe in Rust you could
get this all to work reliably. But not in C!
DIT is an optional feature of the Arm architecture, so we must first
test to see if it's supported. This is done the same way as we already
do for the various Arm crypto accelerators: on ELF-based systems,
check the appropriate bit in the 'hwcap' words in the ELF aux vector;
on Mac, look for an appropriate sysctl flag.
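Sketched out, the probe might look like this (HWCAP_DIT's value is the
one in the Linux kernel headers, and the sysctl name is the one Apple
publishes for FEAT_DIT; the real code shares its plumbing with the
existing accelerator checks):

    #include <stdbool.h>
    #include <stddef.h>

    #if defined __linux__
    #include <sys/auxv.h>
    #ifndef HWCAP_DIT
    #define HWCAP_DIT (1 << 24)    /* from <asm/hwcap.h> */
    #endif
    #elif defined __APPLE__
    #include <sys/sysctl.h>
    #endif

    bool platform_dit_available(void)
    {
    #if defined __linux__ && defined __aarch64__
        return (getauxval(AT_HWCAP) & HWCAP_DIT) != 0;
    #elif defined __APPLE__
        int val = 0;
        size_t size = sizeof(val);
        if (sysctlbyname("hw.optional.arm.FEAT_DIT",
                         &val, &size, NULL, 0) == 0)
            return val != 0;
        return false;
    #else
        return false;
    #endif
    }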
On Windows I don't know of a way to query the DIT feature, _or_ of a
way to write the necessary enabling instruction in an MSVC-compatible
way. I've _heard_ that it might not be necessary, because Windows
might just turn on DIT unconditionally and leave it on, in an even
more extreme version of my own strategy. I don't have a source for
that - I heard it by word of mouth - but I _hope_ it's true, because
that would suit me very well! Certainly I can't write code to enable
DIT without knowing (a) how to do it, and (b) how to tell whether it's
safe.
Nonetheless, I've put the enable_dit() call in all the right places in
the Windows main programs as well as the Unix and cross-platform code,
so that if I later find out that I _can_ put in an explicit enable of
DIT in some way, I'll only have to arrange to set HAVE_ARM_DIT and
compile the enable_dit() function appropriately.
As standardised by NIST in FIPS 203, this is a lattice-based
post-quantum KEM.
Very vaguely, the idea of it is that your public key is a matrix A and
vector t, and the private key is the knowledge of how to decompose t
into two vectors with all their coefficients small, one transformed by
A relative to the other. Encryption of a binary secret starts by
turning each bit into one of two maximally separated residues mod a
prime q, and then adding 'noise' based on the public key in the form
of small increments and decrements mod q, again with some of the noise
transformed by A relative to the rest. Decryption uses the knowledge
of t's decomposition to align the two sets of noise so that the
_large_ changes (which masked the secret from an eavesdropper) cancel
out, leaving only a collection of small changes to the original secret
vector. Then the vector of input bits can be recovered by assuming
that those accumulated small pieces of noise haven't concentrated in
any particular residue enough to push it more than half way to the
other of its possible starting values.
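In bare-bones module-LWE notation (a schematic of the paragraph above,
not ML-KEM's exact details: s, e, r, e1, e2 are the 'small' vectors,
and m is the bit vector scaled so each bit becomes 0 or round(q/2)):

    keygen:    t = A*s + e
    encrypt:   u = A^T * r + e1
               v = <t,r> + e2 + round(q/2)*m
    decrypt:   w = v - <s,u>
                 = round(q/2)*m + (<e,r> + e2 - <s,e1>)

The round(q/2)*m term is the pair of maximally separated residues per
bit; everything in the final parenthesis is small, so each bit is
recovered by rounding the corresponding coefficient of w to the nearer
of 0 and round(q/2).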
A weird feature of it is that decryption is not a true mathematical
inverse of encryption. The assumption that the noise doesn't get large
enough to flip any bit of the secret is only probabilistically valid,
not a hard guarantee. In other words, key agreement can fail, simply
by getting particularly unlucky with the distribution of your random
noise! However, the probability of a failure is very low - less than
2^-138 even for ML-KEM-512, and gets even smaller with the larger
variants.
An awkward feature for our purposes is that the matrix A, containing a
large number of residues mod the prime q=3329, is required to be
constructed by a process of rejection sampling, i.e. generating random
12-bit values and throwing away the out-of-range ones. That would be a
real pain for our side-channel testing system, which generally handles
rejection sampling badly (since it necessarily involves data-dependent
control flow and timing variation). Fortunately, the matrix and the
random seed it was made from are both public: the matrix seed is
transmitted as part of the public key, so it's not necessary to try to
hide it. Accordingly, I was able to get the implementation to pass
testsc by means of not varying the matrix seed between runs, which is
justified by the principle of testsc that you vary the _secrets_ to
ensure timing is independent of them - and the matrix seed isn't a
secret, so you're allowed to keep it the same.
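For reference, the loop in question looks like this (a sketch
following the SampleNTT pseudocode in FIPS 203, in which every 3 bytes
of SHAKE128 output yield two candidate 12-bit values):

    #include <stdint.h>
    #include <stddef.h>

    #define Q 3329

    /* Fill out[] with uniform residues mod Q from an XOF stream.
     * Returns how many coefficients were produced; real code would
     * squeeze more of the stream if this falls short of 256. */
    static size_t sample_ntt(uint16_t out[256],
                             const uint8_t *stream, size_t len)
    {
        size_t i = 0, j = 0;
        while (j < 256 && i + 3 <= len) {
            uint16_t d1 = stream[i] |
                ((uint16_t)(stream[i+1] & 0x0F) << 8);
            uint16_t d2 = (stream[i+1] >> 4) |
                ((uint16_t)stream[i+2] << 4);
            if (d1 < Q)
                out[j++] = d1;
            if (d2 < Q && j < 256)
                out[j++] = d2;
            /* out-of-range candidates are simply discarded */
            i += 3;
        }
        return j;
    }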
The three hybrid algorithms, defined by the current Internet-Draft
draft-kampanakis-curdle-ssh-pq-ke, include one hybrid of ML-KEM-768
with Curve25519 in exactly the same way we were already hybridising
NTRU Prime with Curve25519, and two more hybrids of ML-KEM with ECDH
over a NIST curve. The former hybrid interoperates with the
implementation in OpenSSH 9.9; all three interoperate with the fork
'openssh-oqs' at github.com/open-quantum-safe/openssh, and also with
the Python library AsyncSSH.
This adds an ssh_hashalg defining SHAKE256 with a 32-byte output, in
addition to the 114-byte output we already have.
Also, it defines a new API for using SHAKE128 and SHAKE256 in the more
general form of an extendable output function, which is to say that
you still have to put in all the input before reading any output, but
once you start reading output you can just keep going until you have
enough.
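In outline, the XOF side of the API has this shape (illustrative
declarations, not necessarily the exact ones):

    #include <stddef.h>

    /* ptrlen is PuTTY's existing (pointer, length) pair */
    typedef struct { const void *ptr; size_t len; } ptrlen;
    typedef struct ShakeXOF ShakeXOF;    /* opaque state */

    /* Absorb: all the input goes in up front. */
    ShakeXOF *shake256_xof_from_input(ptrlen data);

    /* Squeeze: call as often as you like, in any sizes. */
    void shake_xof_read(ShakeXOF *xof, void *output, size_t size);

    void shake_xof_free(ShakeXOF *xof);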
Both of these will be needed in an upcoming commit implementing ML-KEM.
This can now happen if, for instance, the CLMUL implementation of
aesgcm is compiled in, but not available at runtime because we're on
an old Intel CPU.
In this situation, testcrypt would segfault when driven by
test/cryptsuite.py, and test/list-accel.py would erroneously claim the
CLMUL implementation was available when it wasn't.
I just happened to notice ARG1 and ARGN in the code that builds the
dispatch table in process_line(), which aren't used at all, because
they date from a previous version of the testcrypt-func.h macro
system. They were supposed to be replaced everywhere with the unified
ARG.
So why didn't the missing definition of ARG break anything? Because
ARG only ever appears in the variadic part of a FUNC_INNER call - and
for this particular trawl of testcrypt-func.h, the variadic part isn't
ever used in the macro expansion in the first place. So there's no
need to define ARG and VOID to anything at all, not even the empty
string.
I only recently found out that OpenSSH defined their own protocol IDs
for AES-GCM, which work the same as the standard ones except that
they fix the semantics for how you select the linked cipher+MAC pair
during key exchange.
(RFC 5647 defines protocol IDs for AES-GCM in both the cipher and MAC
namespaces, and requires that you MUST select both or neither - but
this contradicts the selection policy set out in the base SSH RFCs,
and there's no discussion of how you resolve a conflict between them!
OpenSSH's answer is to do it the same way ChaCha20-Poly1305 works,
because that will ensure the two suites don't fight.)
People do occasionally ask us for this linked cipher/MAC pair, and now
I know it's actually feasible, I've implemented it, including a pair
of vector implementations for x86 and Arm using their respective
architecture extensions for multiplying polynomials over GF(2).
Unlike ChaCha20-Poly1305, I've kept the cipher and MAC implementations
in separate objects, with an arm's-length link between them that the
MAC uses when it needs to encrypt single cipher blocks to use as the
inputs to the MAC algorithm. That enables the cipher and the MAC to be
independently selected from their hardware-accelerated versions, just
in case someone runs on a system that has polynomial multiplication
instructions but not AES acceleration, or vice versa.
There's a fourth implementation of the GCM MAC, which is a pure
software implementation of the same algorithm used in the vectorised
versions. It's too slow to use live, but I've kept it in the code for
future testing needs, and because it's a convenient place to dump my
design comments.
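To give the flavour of what the vector instructions are doing, here's
a sketch of one carry-less multiply in plain C. (The real
instructions, PCLMULQDQ on x86 and PMULL on Arm, return the full
128-bit product of two 64-bit polynomials over GF(2); this sketch
keeps only the low half, and uses a mask instead of a data-dependent
branch, in the constant-time spirit of the rest of this code.)

    #include <stdint.h>

    static uint64_t clmul64_low(uint64_t x, uint64_t y)
    {
        uint64_t acc = 0;
        for (unsigned i = 0; i < 64; i++) {
            /* XOR in (x << i) iff bit i of y is set, branchlessly */
            acc ^= (x << i) & -((y >> i) & 1);
        }
        return acc;
    }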
The vectorised implementations are fairly crude as far as optimisation
goes. I'm sure serious x86 _or_ Arm optimisation engineers would look
at them and laugh. But GCM is a fast MAC compared to HMAC-SHA-256
(indeed compared to HMAC-anything-at-all), so it should at least be
good enough to use. And we've got a working version with some tests
now, so if someone else wants to improve them, they can.
The length test was pasted from the ordinary decrypt function, when it
should have been pasted from encrypt_length (which got this right).
I've never tried to test those functions before, so I never noticed.
This patch fixes a few other whitespace and formatting issues which
were pointed out by the bulk-reindent or which I spotted in passing,
some involving manual editing to break lines more nicely.
I think the weirdest hunk in here is the one in windows/window.c
TranslateKey() where _half_ of an assignment statement inside an 'if'
was on the same line as the trailing paren of the if condition. No
idea at all how that one managed to happen!
My bulk indentation check also turned up a lot of cases where a run-on
function call or if statement didn't have its later lines aligned
correctly relative to the open paren.
I think this is quite easy to do by getting things out of
sync (editing the first line of the function call and forgetting to
update the rest, perhaps even because you never _saw_ the rest during
a search-replace). But a few didn't quite fit into that pattern, in
particular an outright misleading case in unix/askpass.c where the
second line of a call was aligned neatly below the _wrong_ one of the
open parens on the opening line.
Restored as many alignments as I could easily find.
As distinct from the type of signature generated by the SSH server
itself from the host key, this lets you exclude (and by default does
exclude) the old "ssh-rsa" SHA-1 signature type from the signature of
the CA on the certificate.
Certificate keys don't work the same as normal keys, so the rest of
the code is going to have to pay attention to whether a key is a
certificate, and if so, treat it differently and do cert-specific
stuff to it. So here's a collection of methods for that purpose.
With one exception, these methods of ssh_key are not expected to be
implemented at all in non-certificate key types: they should only ever
be called once you already know you're dealing with a certificate. So
most of the new method pointers can be left out of the ssh_keyalg
initialisers.
The exception is the base_key method, which retrieves the base key of
a certificate - the underlying one with the certificate stripped off.
It's convenient for non-certificate keys to implement this too, and
just return a pointer to themselves. So I've added an implementation
in nullkey.c doing that. (The returned pointer doesn't transfer
ownership; you have to use the new ssh_key_clone() if you want to keep
the base key after freeing the certificate key.)
The methods _only_ implemented in certificates:
Query methods to return the public key of the CA (for looking up in a
list of trusted ones), and to return the key id string (which exists
to be written into log files).
Obviously, we need a check_cert() method which will verify the CA's
actual signature, not to mention checking all the other details like
the principal and the validity period.
And there's another fiddly method for dealing with the RSA upgrade
system, called 'related_alg'. This is quite like alternate_ssh_id, in
that its job is to upgrade one key algorithm to a related one with
more modern RSA signing flags (or any other similar thing that might
later reuse the same mechanism). But where alternate_ssh_id took the
actual signing flags as an argument, this takes a pointer to the
upgraded base algorithm. So it answers the question "What is to this
key algorithm as you are to its base?" - if you call it on
opensshcert_ssh_rsa and give it ssh_rsa_sha512, it'll give you back
opensshcert_ssh_rsa_sha512.
(It's awkward to have to have another of these fiddly methods, and in
the longer term I'd like to try to clean up their proliferation a bit.
But I even more dislike the alternative of just going through
all_keyalgs looking for a cert algorithm with, say, ssh_rsa_sha512 as
the base: that approach would work fine now but it would be a lurking
time bomb for when all the -cert-v02@ methods appear one day. This
way, each certificate type can upgrade itself to the appropriately
related version. And at least related_alg is only needed if you _are_
a certificate key type - it's not adding yet another piece of
null-method boilerplate to the rest.)
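Put together, the new methods look roughly like this (an illustrative,
simplified subset of the vtable, not the exact PuTTY declarations):

    #include <stdbool.h>

    typedef struct ssh_key ssh_key;
    typedef struct ssh_keyalg ssh_keyalg;

    struct ssh_keyalg {
        /* ... existing methods ... */

        /* Implemented by everyone; non-certificate keys return
         * themselves (the nullkey.c implementation). No ownership
         * transfer: ssh_key_clone() the result if you need it to
         * outlive the certificate key. */
        ssh_key *(*base_key)(ssh_key *key);

        /* Certificate-only; left NULL in ordinary key types. */
        ssh_key *(*ca_public_key)(ssh_key *key);
        char *(*cert_id_string)(ssh_key *key);
        bool (*check_cert)(ssh_key *key
                           /*, CA list, principal, time, ... */);

        /* "What is to this algorithm as 'base' is to my base?",
         * e.g. related_alg(opensshcert_ssh_rsa, ssh_rsa_sha512)
         *      == opensshcert_ssh_rsa_sha512 */
        const ssh_keyalg *(*related_alg)(const ssh_keyalg *self,
                                         const ssh_keyalg *base);
    };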
This stores its data in the same format as the existing KCT_TEXT, but
it displays differently in puttygen --dump, expecting that the data
will be full of horrible control characters, invalid UTF-8, etc.
The displayed data is of the form b64("..."), so you get a hint about
what the encoding is, and can still paste into Python by defining the
identifier 'b64' to be base64.b64decode or equivalent.
Having recently pulled it out into its own file, I think it could also
do with a bit of tidying. In this rework:
- the substructure for a single component now has a globally visible
struct tag, so you can make a variable pointing at it, saving
verbiage in every piece of code that loops over a key_components
structure
- the 'is_mp_int' flag has been replaced with a type enum, so that
more types can be added without further upheaval
- the printing loop in cmdgen.c for puttygen --dump has factored out
the initial 'name=' prefix on each line so that it isn't repeated
per component type
- the storage format for text components is now a strbuf rather than
a plain char *, which I think is generally more useful.
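Concretely, the reworked component structure might look something like
this (a sketch: apart from KCT_TEXT and the types mentioned above, the
names are illustrative):

    typedef struct strbuf strbuf;   /* PuTTY's growable string */
    typedef struct mp_int mp_int;   /* PuTTY's bignum type */

    typedef enum {
        KCT_TEXT,      /* printable text, stored as a strbuf */
        KCT_BINARY,    /* arbitrary bytes, dumped as b64("...") */
        KCT_MPINT      /* a bignum */
    } KeyComponentType;

    typedef struct key_component {
        char *name;
        KeyComponentType type;    /* replaces the is_mp_int flag */
        union {
            strbuf *str;          /* KCT_TEXT, KCT_BINARY */
            mp_int *mp;           /* KCT_MPINT */
        };
    } key_component;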
If you already have a string (of potentially-binary data) in the form
of a ptrlen reference to somewhere else, and you want to keep a copy
somewhere, it's useful to copy it into a strbuf. But it takes a couple
of lines of faff to do that, and it's nicer to wrap that up into a
tiny helper function.
This commit adds that helper function strbuf_dup, and its non-movable
sibling strbuf_dup_nm for secret data. I've also gone through the
existing code and found a bunch of cases where this makes things less
verbose.
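Assuming PuTTY's existing strbuf_new, strbuf_new_nm and put_datapl
primitives, the helpers are essentially just:

    strbuf *strbuf_dup(ptrlen string)
    {
        strbuf *buf = strbuf_new();
        put_datapl(buf, string);
        return buf;
    }

    strbuf *strbuf_dup_nm(ptrlen string)
    {
        strbuf *buf = strbuf_new_nm();  /* non-movable, for secrets */
        put_datapl(buf, string);
        return buf;
    }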
This consists of DJB's 'Streamlined NTRU Prime' quantum-resistant
cryptosystem, currently in round 3 of the NIST post-quantum key
exchange competition; it's run in parallel with ordinary Curve25519,
and generates a shared secret combining the output of both systems.
(Hence, even if you don't trust this newfangled NTRU Prime thing at
all, it's at least no _less_ secure than the kex you were using
already.)
As the OpenSSH developers point out, key exchange is the most urgent
thing to make quantum-resistant, even before working quantum computers
big enough to break crypto become available, because a break of the
kex algorithm can be applied retroactively to recordings of your past
sessions. By contrast, authentication is a real-time protocol, and can
only be broken by a quantum computer if there's one available to
attack you _already_.
I've implemented both sides of the mechanism, so that PuTTY and Uppity
both support it. In my initial testing, the two sides can both
interoperate with the appropriate half of OpenSSH, and also (of
course, but it would be embarrassing to mess it up) with each other.
This is already slightly nice because it lets me separate the
Weierstrass and Montgomery code more completely, without having to
have a vtable tucked into dh->extra. But more to the point, it will
allow completely different kex methods to fit into the same framework
later.
To that end, I've moved more of the descriptive message generation
into the vtable, and also provided the constructor with a flag that
will let it do different things in client and server.
Also, following on from a previous commit, I've arranged that the new
API returns arbitrary binary data for the exchange hash, rather than
an mp_int. An upcoming implementation of this interface will want to
return an encoded string instead of an encoded mp_int.
I happened to notice in passing that this function doesn't have any
tests (although it will have been at least somewhat tested by the
cmdgen interop test system).
This involved writing a wrapper that passes the passphrase and salt as
ptrlens, and I decided it made more sense to make the same change to
the original function too and adjust the call sites appropriately.
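So the signature ends up looking something like this (a sketch from
memory, not guaranteed to be the exact prototype):

    /* Hash a passphrase and salt down to 'outbytes' bytes of key
     * material, as used by OpenSSH's encrypted key file format. */
    void openssh_bcrypt(ptrlen passphrase, ptrlen salt,
                        int rounds, unsigned char *out, int outbytes);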
I derived a test case by getting OpenSSH itself to make an encrypted
key file, and then using the inputs and output from the password hash
operation that decrypted it again.
They were there to work around that annoying feature of VS's
preprocessor when it expands __VA_ARGS__ into the argument list of
another macro. But I've just thought of a workaround that I can apply
in testcrypt.c itself, so that those parens don't have to appear in
every function definition in the header file.
The trick is, instead of writing
destination_macro(__VA_ARGS__)
you instead write
JUXTAPOSE(destination_macro, (__VA_ARGS__))
where JUXTAPOSE is defined to be a macro that simply expands its two
arguments next to each other:
#define JUXTAPOSE(first, second) first second
This works because the arguments to JUXTAPOSE get macro-expanded
_before_ passing them to JUXTAPOSE itself - the same reason that the
standard tricks with STR_INNER and CAT_INNER work (as seen in defs.h
here). So this defuses the magic behaviour of commas expanded from
__VA_ARGS__, and causes the destination macro to get all its arguments
in the expected places again.
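As a tiny demonstration of the difference (hypothetical macros, not
the real testcrypt ones):

    #define JUXTAPOSE(first, second) first second
    #define ADD(a, b) ((a) + (b))

    #define BROKEN(...) ADD(__VA_ARGS__)
    #define WORKING(...) JUXTAPOSE(ADD, (__VA_ARGS__))

    /* Under VS's traditional preprocessor, BROKEN(1, 2) passes
     * "1, 2" to ADD as a single argument; WORKING(1, 2) expands the
     * arguments first, so ADD sees two and yields ((1) + (2)). */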
Now that testcrypt has _two_ header files, that's more files than I
want at the top level, so I decided to move them.
It has a good claim to live in either 'test' or 'crypto', but in the
end I decided it wasn't quite specific enough to crypto (it already
also tests things in keygen and proxy), and also, the Python half of
the mechanism already lives in 'test', so it can live alongside that.
Having done that, it seemed silly to leave testsc and testzlib at the
top level: those have 'test' in the names as well, so they can go in
the test subdir as well.
While I was renaming things anyway, I also renamed testcrypt.h to
testcrypt-func.h to distinguish it from the new testcrypt-enum.h.