DIT, for 'Data-Independent Timing', is a bit you can set in the
processor state on sufficiently new Arm CPUs, which promises that a
long list of instructions will deliberately avoid varying their timing
based on the input register values. Just what you want for keeping
your constant-time crypto primitives constant-time.
As far as I'm aware, no CPU has _yet_ implemented any data-dependent
optimisations, so DIT is a safety precaution against them doing so in
future. It would be embarrassing to be caught without it if a future
CPU does do that, so we now turn on DIT in the PuTTY process state.
I've put a call to the new enable_dit() function at the start of every
main() and WinMain() belonging to a program that might do
cryptography (even testcrypt, in case someone uses it for something!),
and in case I missed one there, also added a second call at the first
moment that any cryptography-using part of the code looks as if it
might become active: when an instance of the SSH protocol object is
configured, when the system PRNG is initialised, and when selecting
any cryptographic authentication protocol in an HTTP or SOCKS proxy
connection. With any luck those precautions between them should ensure
it's on whenever we need it.
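For concreteness, the actual 'turn it on' operation is tiny. This is
only an illustrative sketch, not the real enable_dit() (and it assumes
an AArch64 target with GCC/Clang inline assembly); the DIT register's
generic numeric encoding is s3_3_c4_c2_5, and the flag itself lives in
bit 24:

    /* Illustrative sketch only: set PSTATE.DIT via the register's
     * generic system-register name, so no special assembler support
     * for the 'dit' mnemonic is needed. */
    static inline void set_dit_flag(void)
    {
        unsigned long long dit_bit = 1ULL << 24;   /* DIT is bit 24 */
        __asm__ volatile("msr s3_3_c4_c2_5, %0" : : "r"(dit_bit));
    }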
Arm's own recommendation is that you should carefully choose the
granularity at which you enable and disable DIT: there's a potential
time cost to turning it on and off (I'm not sure what, but plausibly
something of the order of a pipeline flush), so it's a performance hit
to do it _inside_ each individual crypto function, but if CPUs start
supporting significant data-dependent optimisation in future, then it
will also become a noticeable performance hit to just leave it on
across the whole process. So you'd like to do it somewhere in the
middle: for example, you might turn on DIT once around the whole
process of verifying and decrypting an SSH packet, instead of once for
decryption and once for MAC.
With all respect to that recommendation as a strategy for maximum
performance, I'm not following it here. I turn on DIT at the start of
the PuTTY process, and then leave it on. Rationale:
1. PuTTY is not otherwise a performance-critical application: it's
not likely to max out your CPU for any purpose _other_ than
cryptography. The most CPU-intensive non-cryptographic thing I can
imagine a PuTTY process doing is the complicated computation of
font rendering in the terminal, and that will normally be cached
(you don't recompute each glyph from its outline and hints for
every time you display it).
2. I think a bigger risk lies in accidental side channels from having
DIT turned off when it should have been on. I can imagine lots of
causes for that. Missing a crypto operation in some unswept corner
of the code; confusing control flow (like my coroutine macros)
jumping with DIT clear into the middle of a region of code that
expected DIT to have been set at the beginning; having a reference
counter of DIT requests and getting it out of sync.
In a more sophisticated programming language, it might be possible to
avoid the risk in #2 by cleverness with the type system. For example,
in Rust, you could have a zero-sized type that acts as a proof token
for DIT being enabled (it would be constructed by a function that also
sets DIT, have a Drop implementation that clears DIT, and be !Send so
you couldn't use it in a thread other than the one where DIT was set),
and then you could require all the actual crypto functions to take a
DitToken as an extra parameter, at zero runtime cost. Then "oops I
forgot to set DIT around this piece of crypto" would become a compile
error. Even so, you'd have to take some care with coroutine-structured
code (what happens if a Rust async function yields while holding a DIT
token?) and with nesting (if you have two DIT tokens, you don't want
dropping the inner one to clear DIT while the outer one is still there
to wrongly convince callees that it's set). Maybe in Rust you could
get this all to work reliably. But not in C!
DIT is an optional feature of the Arm architecture, so we must first
test to see if it's supported. This is done the same way as we already
do for the various Arm crypto accelerators: on ELF-based systems,
check the appropriate bit in the 'hwcap' words in the ELF aux vector;
on Mac, look for an appropriate sysctl flag.
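In outline, the detection amounts to something like this (a sketch
rather than the exact code; HWCAP_DIT and the Apple sysctl name are
the platforms' real identifiers, but you need kernel headers and an OS
new enough to define them):

    #include <stdbool.h>
    #if defined __linux__
    #include <sys/auxv.h>
    #include <asm/hwcap.h>
    static bool dit_available(void)
    {
        return (getauxval(AT_HWCAP) & HWCAP_DIT) != 0;
    }
    #elif defined __APPLE__
    #include <stddef.h>
    #include <sys/sysctl.h>
    static bool dit_available(void)
    {
        int val = 0;
        size_t len = sizeof(val);
        if (sysctlbyname("hw.optional.arm.FEAT_DIT", &val, &len, NULL, 0) != 0)
            return false;
        return val != 0;
    }
    #endif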
On Windows I don't know of a way to query the DIT feature, _or_ of a
way to write the necessary enabling instruction in an MSVC-compatible
way. I've _heard_ that it might not be necessary, because Windows
might just turn on DIT unconditionally and leave it on, in an even
more extreme version of my own strategy. I don't have a source for
that - I heard it by word of mouth - but I _hope_ it's true, because
that would suit me very well! Certainly I can't write code to enable
DIT without knowing (a) how to do it, (b) how to know if it's safe.
Nonetheless, I've put the enable_dit() call in all the right places in
the Windows main programs as well as the Unix and cross-platform code,
so that if I later find out that I _can_ put in an explicit enable of
DIT in some way, I'll only have to arrange to set HAVE_ARM_DIT and
compile the enable_dit() function appropriately.
draft-kampanakis-curdle-ssh-pq-ke defines the packet names
SSH_MSG_KEX_HYBRID_INIT and SSH_MSG_KEX_HYBRID_REPLY. They have the
same numbers as ECDH_INIT and ECDH_REPLY, and don't change anything
else, so this is just a naming change. But I think it's a good one,
because the post-quantum KEMs are less symmetric than ECDH (they're
much more like Ben's RSA kex in concept, though very different in
detail), and shouldn't try to pretend they're the same kind of thing.
Also this enables logparse.pl to give a warning about the fact that
one string in each packet contains two separate keys glomphed together.
For the latter reason (and also because it's easier in my code
structure) I've also switched to using the HYBRID naming for the
existing NTRU + Curve25519 hybrid method, even though the
Internet-Draft for that one still uses the ECDH names. Sorry, but I
think it's clearer!
As standardised by NIST in FIPS 203, this is a lattice-based
post-quantum KEM.
Very vaguely, the idea of it is that your public key is a matrix A and
vector t, and the private key is the knowledge of how to decompose t
into two vectors with all their coefficients small, one transformed by
A relative to the other. Encryption of a binary secret starts by
turning each bit into one of two maximally separated residues mod a
prime q, and then adding 'noise' based on the public key in the form
of small increments and decrements mod q, again with some of the noise
transformed by A relative to the rest. Decryption uses the knowledge
of t's decomposition to align the two sets of noise so that the
_large_ changes (which masked the secret from an eavesdropper) cancel
out, leaving only a collection of small changes to the original secret
vector. Then the vector of input bits can be recovered by assuming
that those accumulated small pieces of noise haven't concentrated in
any particular residue enough to push it more than half way to the
other of its possible starting values.
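In the usual module-LWE notation (my own shorthand here, not FIPS
203's exact presentation), the shape of that is roughly:

    t = A*s + e                        (keygen: s, e have small coefficients)
    u = A^T*r + e1                     (encrypt: r, e1, e2 small)
    v = t^T*r + e2 + round(q/2)*m      (m = the vector of secret bits)
    v - s^T*u = round(q/2)*m + (e^T*r + e2 - s^T*e1)     (decrypt)

where the final bracket is the accumulated small noise, and each bit
of m is read off by asking whether its coefficient ended up nearer to
0 or to round(q/2).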
A weird feature of it is that decryption is not a true mathematical
inverse of encryption. The assumption that the noise doesn't get large
enough to flip any bit of the secret is only probabilistically valid,
not a hard guarantee. In other words, key agreement can fail, simply
by getting particularly unlucky with the distribution of your random
noise! However, the probability of a failure is very low - less than
2^-138 even for ML-KEM-512, and gets even smaller with the larger
variants.
An awkward feature for our purposes is that the matrix A, containing a
large number of residues mod the prime q=3329, is required to be
constructed by a process of rejection sampling, i.e. generating random
12-bit values and throwing away the out-of-range ones. That would be a
real pain for our side-channel testing system, which generally handles
rejection sampling badly (since it necessarily involves data-dependent
control flow and timing variation). Fortunately, the matrix and the
random seed it was made from are both public: the matrix seed is
transmitted as part of the public key, so it's not necessary to try to
hide it. Accordingly, I was able to get the implementation to pass
testsc by means of not varying the matrix seed between runs, which is
justified by the principle of testsc that you vary the _secrets_ to
ensure timing is independent of them - and the matrix seed isn't a
secret, so you're allowed to keep it the same.
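The sampling step itself is the standard ML-KEM 'parse' operation:
read bytes from an XOF seeded with the (public) matrix seed, split
them into 12-bit candidates, and keep the ones below q. A sketch, with
a hypothetical xof_read() standing in for the SHAKE128 output stream:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct XOF XOF;                    /* hypothetical XOF handle */
    void xof_read(XOF *xof, unsigned char *out, size_t len);

    /* Fill coeffs[0..n-1] with uniform residues mod q = 3329: each
     * group of 3 bytes yields two 12-bit candidates, and out-of-range
     * candidates are simply discarded. */
    static void sample_uniform_poly(uint16_t *coeffs, size_t n, XOF *xof)
    {
        size_t i = 0;
        while (i < n) {
            unsigned char b[3];
            xof_read(xof, b, 3);
            unsigned d1 = b[0] | ((b[1] & 0x0F) << 8);
            unsigned d2 = (b[1] >> 4) | (b[2] << 4);
            if (d1 < 3329)
                coeffs[i++] = d1;
            if (d2 < 3329 && i < n)
                coeffs[i++] = d2;
        }
    }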
The three hybrid algorithms, defined by the current Internet-Draft
draft-kampanakis-curdle-ssh-pq-ke, include one hybrid of ML-KEM-768
with Curve25519 in exactly the same way we were already hybridising
NTRU Prime with Curve25519, and two more hybrids of ML-KEM with ECDH
over a NIST curve. The former hybrid interoperates with the
implementation in OpenSSH 9.9; all three interoperate with the fork
'openssh-oqs' at github.com/open-quantum-safe/openssh, and also with
the Python library AsyncSSH.
This adds an ssh_hashalg defining SHAKE256 with a 32-byte output, in
addition to the 114-byte output we already have.
Also, it defines a new API for using SHAKE128 and SHAKE256 in the more
general form of an extendable output function, which is to say that
you still have to put in all the input before reading any output, but
once you start reading output you can just keep going until you have
enough.
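The intended usage pattern looks something like this (the identifier
names here are made up for illustration, not necessarily the ones in
the real header):

    #include <stddef.h>

    typedef struct ShakeXOF ShakeXOF;               /* hypothetical names */
    ShakeXOF *shake256_xof_new(void);
    void shake_xof_input(ShakeXOF *xof, const void *data, size_t len);
    void shake_xof_read(ShakeXOF *xof, void *out, size_t len);
    void shake_xof_free(ShakeXOF *xof);

    void example(const unsigned char *seed, size_t seedlen)
    {
        ShakeXOF *xof = shake256_xof_new();
        shake_xof_input(xof, seed, seedlen);         /* all input first... */
        unsigned char block[64];
        shake_xof_read(xof, block, sizeof(block));   /* ...then output */
        shake_xof_read(xof, block, sizeof(block));   /* as much as needed */
        shake_xof_free(xof);
    }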
Both of these will be needed in an upcoming commit implementing ML-KEM.
Now ntru.c contains just the NTRU business, and kex-hybrid.c contains
the system for running a post-quantum and a classical KEX and hashing
together the results. In between them is a new small vtable API for
the key encapsulation mechanisms that the post-quantum standardisation
effort seems to be settling on.
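The interface is roughly of this shape (a paraphrase with invented
names and plain byte buffers, not the real vtable declaration):

    #include <stddef.h>

    typedef struct kem_alg kem_alg;
    struct kem_alg {
        const char *name;
        size_t pub_len, ciphertext_len, secret_len;

        /* Make a key pair; return an opaque private key and write the
         * public value into pub. */
        void *(*keygen)(const kem_alg *alg, unsigned char *pub);

        /* Encapsulate: from the peer's public value, produce a
         * ciphertext plus the shared secret it encapsulates. */
        void (*encaps)(const kem_alg *alg, const unsigned char *pub,
                       unsigned char *ciphertext, unsigned char *secret);

        /* Decapsulate: from our private key and the ciphertext,
         * recover the same shared secret. */
        void (*decaps)(const kem_alg *alg, void *privkey,
                       const unsigned char *ciphertext, unsigned char *secret);

        void (*free_key)(void *privkey);
    };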
draft-ietf-sshm-ntruprime-ssh-00 asserts that it's identical to the
@openssh.com version we already implement:
'[sntrup761x25519-sha512@openssh.com] became the default key exchange
algorithm in OpenSSH during 2022. That is identical to the
"sntrup761x25519-sha512" mechanism described in this document.'
The "rsa-sha2-256" and "rsa-sha2-512" algorithms, as defined by RFC
8332, differ in one detail from "ssh-rsa" in addition to the change of
hash function. They also specify that the signature integer should be
encoded using the same number of bytes as the key modulus, even if
that means giving it a leading zero byte (or even more than one).
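Concretely, where "ssh-rsa" writes the signature integer with no
leading zeroes, the rsa-sha2-* encoding pads it out to the modulus
length. A sketch of the framing (illustrative code, not the actual
implementation):

    #include <string.h>

    /* Emit 'sig' (big-endian, siglen bytes, no leading zeroes) as an
     * SSH string of exactly modbytes bytes, where modbytes is the byte
     * length of the key modulus. 'out' must have room for modbytes+4
     * bytes. */
    static void write_padded_signature(unsigned char *out,
                                       const unsigned char *sig,
                                       size_t siglen, size_t modbytes)
    {
        out[0] = (modbytes >> 24) & 0xFF;       /* uint32 length prefix */
        out[1] = (modbytes >> 16) & 0xFF;
        out[2] = (modbytes >>  8) & 0xFF;
        out[3] = (modbytes      ) & 0xFF;
        memset(out + 4, 0, modbytes - siglen);  /* leading zero padding */
        memcpy(out + 4 + (modbytes - siglen), sig, siglen);
    }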
I hadn't noticed this, and had assumed that unrelated details wouldn't
have changed. But they had. Thanks to Ilia Mirkin for pointing this
out.
Nobody has previously reported a problem, so very likely most servers
are forgiving of people making this mistake! But now it's been pointed
out, we should comply with the spec. (Especially since the new spec is
more sensible, and only historical inertia justified sticking to the
old one.)
This fixes a vulnerability that compromises NIST P521 ECDSA keys when
they are used with PuTTY's existing DSA nonce generation code. The
vulnerability has been assigned the identifier CVE-2024-31497.
PuTTY has been doing its DSA signing deterministically for literally
as long as it's been doing it at all, because I didn't trust Windows's
entropy generation. Deterministic nonce generation was introduced in
commit d345ebc2a5, as part of the initial version of our DSA
signing routine. At the time, there was no standard for how to do it,
so we had to think up the details of our system ourselves, with some
help from the Cambridge University computer security group.
More than ten years later, RFC 6979 was published, recommending a
similar system for general use, naturally with all the details
different. We didn't switch over to doing it that way, because we had
a scheme in place already, and as far as I could see, the differences
were not security-critical - just the normal sort of variation you
expect when any two people design a protocol component of this kind
independently.
As far as I know, the _structure_ of our scheme is still perfectly
fine, in terms of what data gets hashed, how many times, and how the
hash output is converted into a nonce. But the weak spot is the choice
of hash function: inside our dsa_gen_k() function, we generate 512
bits of random data using SHA-512, and then reduce that to the output
range by modular reduction, regardless of what signature algorithm
we're generating a nonce for.
In the original use case, this introduced a theoretical bias (the
output size is an odd prime, which doesn't evenly divide the space of
2^512 possible inputs to the reduction), but the theory was that since
integer DSA uses a modulus prime only 160 bits long (being based on
SHA-1, at least in the form that SSH uses it), the bias would be too
small to be detectable, let alone exploitable.
Then we reused the same function for NIST-style ECDSA, when it
arrived. This is fine for the P256 curve, and even P384. But in P521,
the order of the base point is _greater_ than 2^512, so when we
generate a 512-bit number and reduce it, the reduction never makes any
difference, and our output nonces are all in the first 2^512 elements
of the range of about 2^521. So this _does_ introduce a significant
bias in the nonces, compared to the ideal of uniformly random
distribution over the whole range. And it's been recently discovered
that a bias of this kind is sufficient to expose private keys, given a
manageably small number of signatures to work from.
(Incidentally, none of this affects Ed25519. The spec for that system
includes its own idea of how you should do deterministic nonce
generation - completely different again, naturally - and we did it
that way rather than our way, so that we could use the existing test
vectors.)
The simplest fix would be to patch our existing nonce generator to use
a longer hash, or concatenate a couple of SHA-512 hashes, or something
similar. But I think a more robust approach is to switch it out
completely for what is now the standard system. The main reason why I
prefer that is that the standard system comes with test vectors, which
adds a lot of confidence that I haven't made some other mistake in
following my own design.
So here's a commit that adds an implementation of RFC 6979, and
removes the old dsa_gen_k() function. Tests are added based on the
RFC's appendix of test vectors (as many as are compatible with the
more limited API of PuTTY's crypto code, e.g. we lack support for the
NIST P192 curve, or for doing integer DSA with many different hash
functions). One existing test changes its expected outputs, namely the
one that has a sample key pair and signature for every key algorithm
we support.
This takes a plain ssh_hashalg, and constructs the most natural kind
of HMAC wrapper around it, taking its key length and output length
to be the hash's output length. In other words, it converts SHA-foo
into exactly the thing usually called HMAC-SHA-foo.
It does it by constructing a new ssh2_macalg vtable, and including it
in the same memory allocation as the actual hash object. That's the
first time in PuTTY I've done it this way.
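The trick in miniature (a generic illustration with invented types,
not the actual PuTTY structures):

    #include <stdlib.h>
    #include <stddef.h>

    typedef struct MacVtable {
        size_t keylen, maclen;
        const char *name;
    } MacVtable;

    typedef struct HmacWrapper {
        MacVtable vt;           /* per-instance vtable, same allocation */
        const void *hashalg;    /* the underlying hash algorithm */
    } HmacWrapper;

    MacVtable *hmac_from_hash(const void *hashalg, size_t hash_output_len)
    {
        HmacWrapper *w = malloc(sizeof(*w));
        if (!w)
            return NULL;
        w->vt.keylen = hash_output_len;  /* key length = hash output length */
        w->vt.maclen = hash_output_len;  /* MAC length = hash output length */
        w->vt.name = "HMAC-<hash>";
        w->hashalg = hashalg;
        return &w->vt;   /* callers only see the vtable pointer; freeing
                          * the wrapper frees the vtable along with it */
    }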
Nothing yet uses this, but a new piece of code is about to.
This involved a trivial merge conflict fix in terminal.c because of
the way the cherry-pick 73b41feba5 differed from its original
bdbd5f429c.
But a more significant rework was needed in windows/console.c, because
the updates to confirm_weak_* conflicted with the changes on main to
abstract out the ConsoleIO system.
In the key generation step where we invert 3f in the field
Z_q/<x^p-x-1>, I was carefully checking for failure, on the grounds
that even a field does have _one_ non-invertible element, namely zero.
But I forgot that we'd generated f in such a way that it can't
possibly be zero. So that failure check is pointless.
(However, I've retained it in the form of an assertion.)
I saw a post on comp.security.ssh just now where someone had
encountered an SSH server that would _only_ speak that, which makes it
worth bothering to implement.
The totally obvious implementation works, and passes the test cases
from RFC 6234.
(cherry picked from commit b77e985513)
I saw a post on comp.security.ssh just now where someone had
encountered an SSH server that would _only_ speak that, which makes it
worth bothering to implement.
The totally obvious implementation works, and passes the test cases
from RFC 6234.
When a host certificate was used outside its valid date range, we were
displaying the current time where we meant to show the relevant bound of
the validity range.
(cherry picked from commit 68db3d195d)
When a host certificate was used outside its valid date range, we were
displaying the current time where we meant to show the relevant bound of
the validity range.
In some compilers (I'm told clang 10, in particular), the NEON
intrinsic vaddq_p128 is missing, even though its input type poly128_t
is provided.
vaddq_p128 is just an XOR of two vector registers, so that's easy to
work around by casting to a more mundane type and back. Added a
configure-time test for that intrinsic, and a workaround to be used in
its absence.
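The workaround is morally just this (a sketch, assuming a target where
poly128_t itself exists, which is the situation described above):

    #include <arm_neon.h>

    /* Fallback vaddq_p128: route the XOR through uint64x2_t, which
     * every NEON compiler understands. Addition in GF(2) is XOR. */
    static inline poly128_t vaddq_p128_fallback(poly128_t a, poly128_t b)
    {
        union { poly128_t p; uint64x2_t v; } ua, ub, ur;
        ua.p = a;
        ub.p = b;
        ur.v = veorq_u64(ua.v, ub.v);
        return ur.p;
    }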
I mentioned recently (in the commit message of 9e7d4c53d8) that I'm no
longer fond of the variable name 'ret', because it's used in two quite
different contexts: it's the return value from a subroutine you just
called (e.g. 'int ret = read(fd, buf, len);' and then check for error
or EOF), or it's the value you're preparing to return from the
_containing_ routine (maybe by assigning it a default value and then
conditionally modifying it, or by starting at NULL and reallocating,
or setting it just before using the 'goto out' cleanup idiom). In the
past I've occasionally made mistakes by forgetting which meaning the
variable had, or accidentally conflating both uses.
If all else fails, I now prefer 'retd' (short for 'returned') in the
former situation, and 'toret' (obviously, the value 'to return') in
the latter case. But even better is to pick a name that actually says
something more specific about what the thing actually is.
One particular bad habit throughout this codebase is to have a set of
functions that deal with some object type (say 'Foo'), all *but one*
of which take a 'Foo *foo' parameter, but the foo_new() function
starts with 'Foo *ret = snew(Foo)'. If all the rest of them think the
canonical name for the ambient Foo is 'foo', so should foo_new()!
So here's a no-brainer start on cutting down on the uses of 'ret': I
looked for all the cases where it was being assigned the result of an
allocation, and renamed the variable to be a description of the thing
being allocated. In the case of a new() function belonging to a
family, I picked the same name as the rest of the functions in its own
family, for consistency. In other cases I picked something sensible.
One case where it _does_ make sense not to give a variable your usual
name for its type is when you're cloning an existing object. In that case,
_neither_ of the Foo objects involved should be called 'foo', because
it's ambiguous! They should be named so you can see which is which. In
the two cases I found here, I've called them 'orig' and 'copy'.
As in the previous refactoring, many thanks to clang-rename for the
help.
In the initial commit 031d86ed5b that introduced them, I
accidentally put them below the 'warn about insecurity' line, which I
didn't mean to. Moved them up to just above the existing group14.
Also, I've arranged them in a slightly weird order, so that the most
preferred group of this collection is the medium-sized group16,
followed by the larger ones (17 and 18) and then the smaller 15.
Rationale: larger is better _until_ it starts costing way too much CPU
time, and group18 can grind quite painfully on a slow machine. (And of
course users are free to reconfigure if they have different
preferences.)
This isn't really ideal, of course. The idea that you might not want
to use group18 *because it's slow* contradicts the basic concept of
PuTTY's current crypto-preferences UI, which assumes that you rank
things by security, which is why there's a dividing line below which
things are assumed insecure. I hope that in a future release we'll
rework the UI so that you can express more subtle ideas of what crypto
you do and don't like. But this will do for the moment.
The GSS versions of the same DH methods are reordered similarly.
Jacob points out that the output of 'puttygen --dump', where the
key_components are used, is much more likely to need to be machine-
than human-readable, and so it makes more sense to use a date/time
format that's invariant under external changes such as locale.
(He also points out that Windows's time zone description strings are
overly verbose!)
Simon tells me he was pondering whether chacha20-poly1305 could be
reworked to use the new facilities, but on reflection there's no way to
use it to improve matters.
I decided that the 'namemaker' system introduced recently in commit
fbb979aa98 was just too marginal to be sensible, and it's easier
to simply quote the full SSH id for each protocol.
Also, included an empty argument at the end of each macro invocation,
so that the variadic "..." is never completely missing.
Apparently a nasty trick I did in one of the selector vtable macros
was not acceptable to VS, which thinks that "string" ? NULL : NULL is
not a constant expression - it can't tell that the string literal has
a non-null value _or_ that it doesn't matter whether the value is null
or not.
Redone the vtable name construction in a way that depends only on the
actual preprocessor, not on the followup C expression semantics.
This is surprisingly simple, because it wasn't necessary to touch the
GSS parts at all. Nothing changes about the message formats between
integer DH and ECDH in GSS KEX, except that the mpints sent back and
forth as part of integer DH are replaced by the opaque strings used in
ECDH. So I've invented a new KEXTYPE and made it control a bunch of
small conditionals in the middle of the GSS KEX code, leaving the rest
unchanged.
These were introduced in 34d01e1b65 to pretty-print certificate validity
ranges. But Microsoft's C runtime took a while to catch up with C99 --
stackoverflow claims that VS2013 and earlier don't support these
specifiers -- so it's possible to end up with PuTTY executables that
misdisplay these dates. Also, the mingw-w64 toolchain's -Wformat
complains about these specifiers, at least on Debian buster, presumably
for the same reason.
Since the specifiers in question have exact pre-C99 replacements, it
seems easiest just to use those.
I only recently found out that OpenSSH defined their own protocol IDs
for AES-GCM, which work the same as the standard ones except that they
fix the semantics for how you select the linked cipher+MAC pair during
key exchange.
(RFC 5647 defines protocol ids for AES-GCM in both the cipher and MAC
namespaces, and requires that you MUST select both or neither - but
this contradicts the selection policy set out in the base SSH RFCs,
and there's no discussion of how you resolve a conflict between them!
OpenSSH's answer is to do it the same way ChaCha20-Poly1305 works,
because that will ensure the two suites don't fight.)
People do occasionally ask us for this linked cipher/MAC pair, and now
I know it's actually feasible, I've implemented it, including a pair
of vector implementations for x86 and Arm using their respective
architecture extensions for multiplying polynomials over GF(2).
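(For reference, the MAC core here is GHASH as defined in the GCM spec:
with the hash key H = E_K(0^128), the authenticator is the polynomial
evaluation

    Y_0 = 0
    Y_i = (Y_{i-1} XOR X_i) * H        (multiplication in GF(2^128))

over the padded AAD and ciphertext blocks X_i, which is why carry-less
polynomial multiplication is the operation worth accelerating.)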
Unlike ChaCha20-Poly1305, I've kept the cipher and MAC implementations
in separate objects, with an arm's-length link between them that the
MAC uses when it needs to encrypt single cipher blocks to use as the
inputs to the MAC algorithm. That enables the cipher and the MAC to be
independently selected from their hardware-accelerated versions, just
in case someone runs on a system that has polynomial multiplication
instructions but not AES acceleration, or vice versa.
There's a fourth implementation of the GCM MAC, which is a pure
software implementation of the same algorithm used in the vectorised
versions. It's too slow to use live, but I've kept it in the code for
future testing needs, and because it's a convenient place to dump my
design comments.
The vectorised implementations are fairly crude as far as optimisation
goes. I'm sure serious x86 _or_ Arm optimisation engineers would look
at them and laugh. But GCM is a fast MAC compared to HMAC-SHA-256
(indeed compared to HMAC-anything-at-all), so it should at least be
good enough to use. And we've got a working version with some tests
now, so if someone else wants to improve them, they can.
This provides a convenient hook to be called between SSH messages, for
the crypto components to do any per-message processing like
incrementing a sequence number.
In the situation where a MAC and cipher implementation are tied
together by being facets of the same underlying object (used by the
inseparable ChaCha20 + Poly1305 pair), previously we freed them by
having the cipher_free function actually do the freeing, having the
mac_free function do nothing, and taking great care to call those in
the right order. (Otherwise, the mac_free function dereferences a
no-longer-valid vtable pointer and doesn't get as far as _finding out_
that it doesn't have to do anything.)
That's a time bomb in general, and especially awkward in situations
like testcrypt where we don't get precise control over freeing order
in any case. So I've replaced that system with one in which there are
two flags in the ChaCha20-Poly1305 structure, saying whether each of
the cipher and MAC facets is currently considered to be allocated.
When the last of those flags is cleared, the object is actually freed.
So now they can be freed in either order.
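The pattern in miniature (invented names, just to illustrate the shape
of it):

    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct ccp_context {
        bool cipher_live, mac_live;
        /* ... shared ChaCha20 + Poly1305 state ... */
    } ccp_context;

    static void ccp_check_free(ccp_context *ctx)
    {
        if (!ctx->cipher_live && !ctx->mac_live)
            free(ctx);        /* last facet freed releases the storage */
    }

    void ccp_cipher_free(ccp_context *ctx)
    {
        ctx->cipher_live = false;
        ccp_check_free(ctx);
    }

    void ccp_mac_free(ccp_context *ctx)
    {
        ctx->mac_live = false;
        ccp_check_free(ctx);
    }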
Previously, we had a single data structure 'keytree' containing
records each involving a public and private key (the latter maybe in
clear, or as an encrypted key file, or both). Now, we have separate
'pubkeytree' and 'privkeytree', the former storing public keys indexed
by their full public blob (including certificate, if any), and the
latter storing private keys, indexed by the _base_ public blob
only (i.e. with no certificate included).
The effect of this is that deferred decryption interacts more sensibly
with certificates. Now, if you load certified and uncertified versions
of the same key into Pageant, or two or more differently certified
versions, then the separate public key records will all share the same
private key record, and hence, a single state of decryption. So the
first time you enter a passphrase that unlocks that private key, it
will unlock it for all public keys that share the same private half.
Conversely, re-encrypting any one of them will cause all of them to
become re-encrypted, eliminating the risk that you deliberately
re-encrypt a key you really care about and forget that another equally
valuable copy of it is still in clear.
The most subtle part of this turned out to be the question of what key
comment you present in a deferred decryption prompt. It's very
tempting to imagine that it should be the comment that goes with
whichever _public_ key was involved in the signing request that
triggered the prompt. But in fact, it _must_ be the comment that goes
with whichever version of the encrypted key file is stored in Pageant
- because what if the user chose different passphrases for their
uncertified and certified PPKs? Then the decryption prompt will have
to indicate which passphrase they should be typing, so it's vital to
present the comment that goes with the _file we're decrypting_.
(Of course, if the user has selected different passphrases for those
two PPKs but the _same_ comment, they're still going to end up
confused. But at least once they realise they've done that, they have
a workaround.)
OpenSSH, when called on to give the fingerprint of a certified public
key, will in many circumstances generate the hash of the public blob
of the _underlying_ key, rather than the hash of the full certificate.
I think the hash of the certificate is also potentially useful (if
nothing else, it provides a way to tell apart multiple certificates on
the same key). But I can also see that it's useful to be able to
recognise a key as the same one 'really' (since all certificates on
the same key share a private key, so they're unavoidably related).
So I've dealt with this by introducing an extra pair of fingerprint
types, giving the cross product of {MD5, SHA-256} x {base key only,
full certificate}. You can manually select which one you want to see
in some circumstances (notably PuTTYgen), and in others (such as
diagnostics) both fingerprints will be emitted side by side via the
new functions ssh2_double_fingerprint[_blob].
The default, following OpenSSH, is to just fingerprint the base key.
I think a lot of these were inserted by a prior run through GNU indent
many years ago. I noticed in a more recent experiment that that tool
doesn't always correctly distinguish which instances of 'id * id' are
pointer variable declarations and which are multiplications, so it
spaces some of the former as if they were the latter.
My bulk indentation check also turned up a lot of cases where a run-on
function call or if statement didn't have its later lines aligned
correctly relative to the open paren.
I think this is quite easy to do by getting things out of
sync (editing the first line of the function call and forgetting to
update the rest, perhaps even because you never _saw_ the rest during
a search-replace). But a few didn't quite fit into that pattern, in
particular an outright misleading case in unix/askpass.c where the
second line of a call was aligned neatly below the _wrong_ one of the
open parens on the opening line.
Restored as many alignments as I could easily find.
In several pieces of development recently I've run across the
occasional code block in the middle of a function which suddenly
switched to 2-space indent from this code base's usual 4. I decided I
was tired of it, so I ran the whole code base through a re-indenter,
which made a huge mess, and then manually sifted out the changes that
actually made sense from that pass.
Indeed, this caught quite a few large sections with 2-space indent
level, a couple with 8, and a handful of even weirder things like 3
spaces or 12. This commit fixes them all.
The test in the Pageant list box code for whether we should display
the bit count of a key was done by checking specifically for ssh_rsa
or ssh_dsa, which of course meant that it didn't catch the certified
versions of those keys.
Now there's yet another footling ssh_keyalg method that asks the
question 'is it worth displaying the bit count?', to which RSA and DSA
answer yes, and the opensshcert family delegates to its base key type,
so that RSA and DSA certified keys also answer yes.
(This isn't the same as ssh_key_public_bits(alg, blob) >= 0. All
supported public key algorithms _can_ display a bit count if called
on. But only in RSA and DSA is it configurable, and therefore worth
bothering to print in the list box.)
Also in this commit, I've fixed a bug in the certificate
implementation of public_bits, which was passing a wrongly formatted
public blob to the underlying key. (Done by factoring out the code
from opensshcert_new_shared which constructed the _correct_ public
blob, and reusing it in public_bits to do the same job.)
If you load a certified key into Windows Pageant, the official SSH id
for the key type is so long that it overflows its space in the list
box and overlaps the key fingerprint hash.
This commit introduces yet another footling little ssh_keyalg method
which returns a shorter human-readable description of the key type,
and uses that in the Windows Pageant list box only.
(Not in the Unix Pageant list, though, because being output to stdout,
that seems like something people are more likely to want to
machine-read, which firstly means we shouldn't change it lightly, and
secondly, if we did change it we'd want to avoid having a variable
number of spaces in the replacement key type text.)
The recently added SeatDialogText type was just what I needed to add a
method to the ssh_key vtable for dumping certificate information in a
human-readable format. It will be good for displaying in a Windows
dialog box as well as in cmdgen's text format.
This commit introduces and implements the new method, and adds a
--cert-info mode to command-line Unix PuTTYgen that uses it. The
Windows side will follow shortly.
In the case where a server presents a host key signed by a different
certificate from the one you've configured, it need not _always_ be
evidence of wrongdoing. I can imagine situations in which two CAs
cover overlapping sets of things, and you don't want to blanket-trust
one of them, but you do want to connect to a specific host signed by
that one.
Accordingly, PuTTY's previous policy of unconditionally aborting the
connection if certificate validation fails (which was always intended
as a stopgap until I thought through what I wanted to replace it with)
is now replaced by fallback handling: we present the host key
fingerprint to the user and give them the option to accept and/or
cache it based on the public key itself.
This means that the certified key types have to have a representation
in the host key cache. So I've assigned each one a type id, and
generate the cache string itself by simply falling back to the base
key.
(Rationale for the latter: re-signing a public key with a different
certificate doesn't change the _private_ key, or the set of valid
signatures generated with it. So if you've been convinced for reasons
other than the certificate that a particular private key is in the
possession of $host, then proof of ownership of that private key
should be enough to convince you you're talking to $host no matter
what CA has signed the public half this week.)
We now offer to receive a given certified host key type if _either_ we
have at least one CA configured to trust that host, _or_ we have a
certified key of that type cached. (So once you've decided manually
that you trust a particular key, we can still receive that key and
authenticate the host with it, even if you later delete the CA record
that it didn't match anyway.)
One change from normal (uncertified) host key handling is that for
certified key types _all_ the host key prompts use the stronger
language, with "WARNING - POTENTIAL SECURITY BREACH!" rather than the
mild 'hmm, we haven't seen this host before'. Rationale: if you
expected this CA key and got that one, it _could_ be a bold-as-brass
MITM attempt in which someone hoped you'd accept their entire CA key.
The mild wording is only for the case where we had no previous
expectations _at all_ for the host to violate: not a CA _or_ a cached
key.
The polynomial Stein's algorithm in that code was adapted from the
binary Stein in mpint.c. One of the comments which originally said
'dividing by 2' should have been updated to say 'dividing by x' in the
polynomial case, and didn't.
This function has to make an array containing a specific number of
random values that are +1 or -1, and all the rest zero. The simplest
way I could think of to do it at first was to make the array with all
the zeroes at the end and then shuffle the array.
But I couldn't think of a time-safe algorithm to shuffle an array in
such a way that all orders come out equiprobable, that was better than
quadratic time. In fact I still can't think of one. (Making a random
Benes network is the best idea I've come up with: it arranges that
every output order is _possible_, and runs in O(N log N) time, but it
skews the probabilities, which makes it unacceptable.)
However, there's no need to shuffle an array in this application
anyway: we're not actually trying to generate a random _permutation_,
only a random element of (n choose w). So we can just walk linearly
along the array remembering how many nonzero elements we have yet to
output, and using an appropriately chosen random number at each step
to decide whether this will be one of them.
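A sketch of that linear walk (leaving out the constant-time caution
the real code needs, and using a hypothetical random_below() returning
a uniform value in [0, bound)):

    #include <stddef.h>
    #include <stdint.h>

    size_t random_below(size_t bound);     /* hypothetical uniform RNG */

    /* Leave exactly w entries of out[0..n-1] nonzero, each +1 or -1,
     * with every set of positions equally likely. */
    static void fixed_weight_vector(int8_t *out, size_t n, size_t w)
    {
        size_t w_left = w;
        for (size_t i = 0; i < n; i++) {
            /* This position is nonzero with probability w_left/(n-i). */
            if (random_below(n - i) < w_left) {
                out[i] = random_below(2) ? +1 : -1;
                w_left--;
            } else {
                out[i] = 0;
            }
        }
    }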
This isn't a significant improvement in the performance of NTRU
overall, but it satisfies my sense of rightness a little, and at least
means I don't have to have a comment in the code apologising for the
terrible algorithm any more.
As distinct from the type of signature generated by the SSH server
itself from the host key, this lets you exclude (and by default does
exclude) the old "ssh-rsa" SHA-1 signature type from the signature of
the CA on the certificate.