Another bug turned up by writing tests. The code that spots that the
character won't fit, and wraps it to the next line setting
LATTR_WRAPPED2, was not checking that wrap mode was _enabled_ before
doing that. So if you printed a DW character in the rightmost column
while the terminal was in non-auto-wrap mode, you'd get an unwanted
wrap.
Other terminals disagree on what to do here. xterm leaves the cursor
in the same place and doesn't print any character at all.
gnome-terminal, on the other hand, backspaces by a character so that
it _can_ print the requested DW character, in the rightmost _two_
columns.
I think I don't much like either of those, so instead I'm using the
same fallback we use for displaying a DW character when the whole
terminal is only one column wide: if there is physically no room to
print the requested character, turn it into U+FFFD REPLACEMENT
CHARACTER.
This is the first bug found as a direct result of writing that
terminal test program - I added some tests for things I expected to
work already, and some of them didn't, proving immediately that it was
a good idea!
If the terminal is one column wide, and you've printed a
character (hence, set the wrapnext flag), what should backspace do?
Surely it should behave like any other backspace with wrapnext set,
i.e. clear the wrapnext flag, returning the cursor's _logical_
position to the location of the most recently printed character. But
in fact it was anti-wrapping to the previous line, because I'd got the
cases in the wrong order in the if-else chain that forms the backspace
handler. So the handler for 'we're in column 0, wrapping time' was
coming before 'wrapnext is set, just clear it'.
Now wrapnext is checked _first_, before anything else. Any time we
can just clear that flag, we should.
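In outline, the corrected ordering looks something like this. This is
a minimal standalone model of the logic, not PuTTY's actual backspace
handler (the real field names and surrounding cases differ):

    #include <stdbool.h>

    struct term_model { int x, y; bool wrapnext; };

    /* Backspace: the wrapnext test must come before the column-0
     * anti-wrap test, or a one-column terminal can never take the
     * first branch. */
    static void backspace(struct term_model *t, int width)
    {
        if (t->wrapnext) {
            t->wrapnext = false;   /* cursor logically returns to the
                                    * last printed character */
        } else if (t->x == 0) {
            if (t->y > 0) {        /* anti-wrap to the previous line */
                t->y--;
                t->x = width - 1;
            }
        } else {
            t->x--;                /* ordinary backspace */
        }
    }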
Suppose an application tries to print a double-width character
starting in the rightmost column of the screen, so that we apply our
emergency fix of wrapping to the next line immediately and printing
the character in the first two columns. Suppose they then backspace
twice, taking the cursor to the RHS and then the LHS of that
character. What should happen if they backspace a third time?
Our previous behaviour was to completely ignore the unusual situation,
and do the same thing we'd do in any other backspace from column 0:
anti-wrap the cursor to the last column of the previous line, leaving
it in the empty character cell that was skipped when the DW char
couldn't be printed in it.
But I think this isn't the best response, because it breaks the
invariant that printing N columns' worth of graphic characters and
then backspacing N times should leave the cursor on the first of those
characters. If I print "a가" (for example) and then backspace three
times, I want the cursor on the a, _even_ if weird line wrapping
behaviour happened somewhere in that sequence.
(Rationale: this helps naïve terminal applications which don't even
know what the terminal width is, and aren't tracking their absolute x
position. In particular, the simplistic line-based input systems that
appear in OS kernels and our own lineedit.c will want to emit a fixed
number of backspace-space-backspace sequences to delete characters
previously entered on to the line by the user. They still need to
check the wcwidth of the characters they're emitting, so that they can
BSB twice for a DW character or 0 times for a combining one, but it
would be *hugely* more awkward for them to ask the terminal where the
cursor is so that they can take account of difficult line wraps!)
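To make the BSB strategy concrete, here's a tiny standalone POSIX C
illustration (my own sketch, nothing to do with PuTTY's or any
kernel's source) of deleting the most recently printed character by
emitting one backspace-space-backspace per column that wcwidth
reports:

    #define _XOPEN_SOURCE 700
    #include <locale.h>
    #include <stdio.h>
    #include <wchar.h>

    /* Erase the last printed character: one "\b \b" per column. */
    static void bsb_delete(wchar_t c)
    {
        int width = wcwidth(c);   /* 1 normally, 2 for DW, 0 combining */
        for (int i = 0; i < width; i++)
            fputws(L"\b \b", stdout);
    }

    int main(void)
    {
        setlocale(LC_CTYPE, "");  /* so wcwidth knows the charset */
        wprintf(L"a\uAC00");      /* "a가": 1 + 2 columns */
        bsb_delete(L'\uAC00');    /* two BSBs for the DW character */
        bsb_delete(L'a');         /* one BSB for the 'a' */
        return 0;
    }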
We already have the ability to _recognise_ this situation: on a line
that was wrapped in this unusual way, we set the LATTR_WRAPPED2 line
attribute flag, to prevent the empty rightmost column from injecting
an unwanted space into copy-pastes from the terminal. Now we also use
the same flag to cause the backspace control character to do something
interesting.
This was the fix that inspired me to start writing test_terminal,
because I knew it was touching a delicate area. However, in the course
of writing this fix and its tests, I encountered two (!) further bugs,
which I'll fix in followup commits!
This has all the basic necessities to become a test of the terminal's
behaviour, in terms of how its data structures evolve as output is
sent to it, and perhaps also (by filling in the stub TermWin more
usefully) testing what it draws during updates and what it sends in
response to query sequences.
For the moment, all I've done is to set up the framework, and add one
demo test of printing some ordinary text and observing that it appears
in the data structures and the cursor has moved.
I expect that writing a full test of terminal.c will be a very big
job. But perhaps I or someone else will find time to prod it gradually
in the background of other work. In particular, when I'm _modifying_
any part of the terminal code, it would be good to add some tests for
the part I'm changing, before making the change, and check they still
work afterwards.
This takes over from both the implementation in ldisc.c and the one in
term_get_userpass_input, which were imperfectly duplicating each
other's functionality. The new version should be more consistent
between the two already, and also, it means further improvements can
now be made in just one place.
In the course of this, I've restructured the inside of ldisc.c by
moving the input_queue bufchain to the other side of the translation
code in ldisc_send. Previously, ldisc_send received a string, an
optional 'dedicated key' indication (bodgily signalled by a negative
length) and an 'interactive' flag, translated that somehow into a
combination of raw backend output and specials, and saved the latter
in input_queue. Now it saves the original (string, dedicated flag,
interactive flag) data in input_queue, and doesn't do the translation
until the data is pulled back _out_ of the queue. That's because the
new line editing system expects to receive something much closer to
the original data format.
The term_get_userpass_input system is also substantially restructured.
Instead of ldisc.c handing each individual keystroke to terminal.c so
that it can do line editing on it, terminal.c now just gives the Ldisc
a pointer to its instance of the new TermLineEditor object - and then
ldisc.c can put keystrokes straight into that, in the same way it
would put them into its own TermLineEditor, without having to go via
terminal.c at all. So the term_get_userpass_input edifice is only
called back when the line editor actually delivers the answer to a
username or password prompt.
(I considered not _even_ having a separate TermLineEditor for password
prompts, and just letting ldisc.c use its own. But the problem is that
some of the behaviour differences between the two line editors are
deliberate, for example the use of ^D to signal 'abort this prompt',
and the use of Escape as an alternative line-clearing command. So
TermLineEditor has a flags word that allows ldisc and terminal to set
it up differently. Also this lets me give the two TermLineEditors a
different vtable of callback functions, which is a convenient way for
terminal.c to get notified when a prompt has been answered.)
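A rough sketch of that arrangement, with invented names and
signatures (the real TermLineEditor declarations will differ):

    #include <stddef.h>

    typedef struct TermLineEditor TermLineEditor;

    /* Per-client behaviour flags, set differently by ldisc.c and
     * terminal.c (names invented for illustration). */
    #define LE_CTRL_D_ABORTS   0x1  /* ^D means 'abort this prompt' */
    #define LE_ESC_CLEARS_LINE 0x2  /* Escape clears the whole line */

    /* Callback vtable, which conveniently lets terminal.c hear about
     * an answered prompt without ldisc.c going via terminal.c. */
    typedef struct TermLineEditorCallbacks {
        void (*line_done)(TermLineEditor *le,
                          const char *line, size_t len);
        void (*prompt_aborted)(TermLineEditor *le);
    } TermLineEditorCallbacks;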
The new line editor still passes all the tests I wrote for the old
one. But it already has a couple of important improvements, both in
the area of UTF-8 handling:
Firstly, when we display a UTF-8 character on the terminal, we check
with the terminal how many character cells it occupied, and then if
the user deletes it again from the editing buffer, we can emit the
right number of backspace-space-backspace sequences. (The old ldisc
line editor incorrectly assumed all Unicode characters had terminal
width 1, partly because its buffer was byte- rather than character-
oriented and so it was more than enough work just finding where the
character _start_ was.)
Secondly, terminal.c's userpass line editor would never emit a byte in
the 80-BF range to the terminal at all, which meant that nontrivial
UTF-8 characters always came out as U+FFFD blobs!
I'm about to rewrite it completely, so the first thing I need to do is
to write tests for as much of the functionality as possible, so that I
can check the new implementation behaves in the same ways.
Similarly to the one I just added for FontSpec: in a cross-platform
main source file, you don't really want to mess about with
per-platform ifdefs just to initialise a 'struct unicode_data' from a
Conf. But until now, you had to, because init_ucs had a different
prototype on Windows and Unix.
I plan to use this in future test programs. But an immediate positive
effect is that it removes the only platform-dependent call from
fuzzterm.c. So now that could be built on Windows too, given only an
appropriate cmake stanza. (Not that I have much idea if it's useful to
fuzz the terminal separately on multiple platforms, but it's nice to
know that it's possible if anyone does need to.)
Constructing a FontSpec in platform-independent code is awkward,
because you can't call fontspec_new() outside the platform subdirs
(since its prototype varies per platform). But sometimes you just need
_some_ valid FontSpec, e.g. to put in a Conf that will be used in some
place where you don't actually care about font settings, such as a
purely CLI program.
Both Unix and Windows _have_ an idiom for this, but they're different,
because their FontSpec constructors have different prototypes. The
existing CLI tools have always had per-platform main source files, so
they just use the locally appropriate method of constructing a boring
don't-care FontSpec.
But if you want a _platform-independent_ main source file, such as you
might find in a test program, then that's rather awkward. Better to
have a platform-independent API for making a default FontSpec.
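In other words, something of this shape (the exact function name here
is my guess at it, declared once centrally and defined per platform):

    typedef struct FontSpec FontSpec;

    /* One cross-platform prototype; each platform subdirectory
     * supplies its own definition returning a boring don't-care
     * FontSpec. */
    FontSpec *fontspec_new_default(void);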
I just happened to notice ARG1 and ARGN in the code that builds the
dispatch table in process_line(), which aren't used at all, because
they date from a previous version of the testcrypt-func.h macro
system. They were supposed to be replaced everywhere with the unified
ARG.
So why didn't the missing definition of ARG break anything? Because
ARG only ever appears in the variadic part of a FUNC_INNER call - and
for this particular trawl of testcrypt-func.h, the variadic part isn't
ever used in the macro expansion in the first place. So there's no
need to define ARG and VOID to anything at all, not even the empty
string.
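The effect is easy to demonstrate in isolation (this is a toy
example, not the real testcrypt macros): a macro's variadic arguments
are only macro-expanded if __VA_ARGS__ actually appears in the
replacement text, so until then they're just uninterpreted tokens.

    /* __VA_ARGS__ is never used, so ARG needs no definition at all */
    #define FUNC_INNER(name, ...) int name;

    FUNC_INNER(foo, ARG(int, x) ARG(int, y))  /* expands to: int foo; */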
I only recently found out that OpenSSH defined their own protocol IDs
for AES-GCM, which work the same as the standard ones except that
they fix the semantics of how you select the linked cipher+MAC pair
during key exchange.
(RFC 5647 defines protocol IDs for AES-GCM in both the cipher and MAC
namespaces, and requires that you MUST select both or neither - but
this contradicts the selection policy set out in the base SSH RFCs,
and there's no discussion of how you resolve a conflict between them!
OpenSSH's answer is to do it the same way ChaCha20-Poly1305 works,
because that will ensure the two suites don't fight.)
People do occasionally ask us for this linked cipher/MAC pair, and
now that I know it's actually feasible, I've implemented it,
including a pair
of vector implementations for x86 and Arm using their respective
architecture extensions for multiplying polynomials over GF(2).
Unlike ChaCha20-Poly1305, I've kept the cipher and MAC implementations
in separate objects, with an arm's-length link between them that the
MAC uses when it needs to encrypt single cipher blocks to use as the
inputs to the MAC algorithm. That enables the cipher and the MAC to be
independently selected from their hardware-accelerated versions, just
in case someone runs on a system that has polynomial multiplication
instructions but not AES acceleration, or vice versa.
There's a fourth implementation of the GCM MAC, which is a pure
software implementation of the same algorithm used in the vectorised
versions. It's too slow to use live, but I've kept it in the code for
future testing needs, and because it's a convenient place to dump my
design comments.
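For reference, the operation all four versions implement is
multiplication in GF(2^128) with GCM's bit-reflected encoding. Here's
a minimal bitwise rendition of my own (a sketch after NIST SP 800-38D
algorithm 1, not PuTTY's actual code):

    #include <stdint.h>

    typedef struct { uint64_t hi, lo; } gf128;

    /* Multiply in GF(2^128) mod x^128 + x^7 + x^2 + x + 1. The MSB of
     * 'hi' holds the coefficient of x^0 (GCM's reflected convention).
     * Masks rather than branches keep the data flow constant-time. */
    static gf128 gf128_mul(gf128 x, gf128 y)
    {
        const uint64_t R = 0xE100000000000000ULL; /* reflected poly */
        gf128 z = {0, 0};
        for (int i = 0; i < 128; i++) {
            /* z ^= y wherever bit i of x (from the top) is set */
            uint64_t bit =
                (i < 64 ? x.hi >> (63 - i) : x.lo >> (127 - i)) & 1;
            uint64_t mask = ~(bit - 1);
            z.hi ^= y.hi & mask;
            z.lo ^= y.lo & mask;
            /* y *= x: shift right in this encoding, reducing mod R */
            uint64_t carry = y.lo & 1;
            y.lo = (y.lo >> 1) | (y.hi << 63);
            y.hi = (y.hi >> 1) ^ (R & ~(carry - 1));
        }
        return z;
    }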
The vectorised implementations are fairly crude as far as optimisation
goes. I'm sure serious x86 _or_ Arm optimisation engineers would look
at them and laugh. But GCM is a fast MAC compared to HMAC-SHA-256
(indeed compared to HMAC-anything-at-all), so it should at least be
good enough to use. And we've got a working version with some tests
now, so if someone else wants to improve them, they can.
This provides a convenient hook to be called between SSH messages, for
the crypto components to do any per-message processing like
incrementing a sequence number.
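I'd imagine the hook has roughly this shape; the name comes from the
description above, but the signature and placement are guesses:

    typedef struct ssh_cipher ssh_cipher;

    /* Excerpt of a cipher vtable with the new per-message hook. Most
     * algorithms would point it at a shared no-op; AES-GCM can use it
     * to advance the invocation-counter half of its nonce. */
    struct cipher_vtable_excerpt {
        void (*next_message)(ssh_cipher *c);
    };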
Not sure how I missed this! I tested ChaCha20, but not the MAC that
goes with it. Happily, it passes, so no harm done.
This also involved adding a general framework for testing MACs that
are tied to a specific cipher: we have to allocate, key and IV the
cipher before attempting to use the MAC, and free it all afterwards.
Instead of having separate subsidiary list macros for all the AES-NI
or NEON accelerated ciphers, the main list macro now contains each
individual thing conditionalised under an IF_FOO macro defined at the
top.
Makes relatively little difference in the current state of things, but
it will make it easier to do lots of differently conditionalised
single entries in a list, which will be coming up shortly.
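The idiom looks roughly like this (macro and entry names invented for
illustration):

    /* IF_NI(x) expands to its argument, or to nothing, depending on
     * build-time support for the relevant extension. */
    #if HAVE_AES_NI
    #define IF_NI(x) x
    #else
    #define IF_NI(x)
    #endif

    /* One main list, with each accelerated entry conditionalised
     * individually rather than in a separate sublist. */
    #define ALL_CIPHERS(X) \
        X(cipher_aes256_sw) \
        IF_NI(X(cipher_aes256_ni))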
This causes cmake to stop whinging that there isn't one. More
usefully, by specifying the LANGUAGES keyword as just C (rather than
the default of both C and CXX), the cmake configure step is sped up by
not having to faff about finding a C++ compiler.
The length test was pasted from the ordinary decrypt function, when it
should have been pasted from encrypt_length (which got this right).
I've never tried to test those functions before, so I never noticed.
OpenSSH, when called on to give the fingerprint of a certified public
key, will in many circumstances generate the hash of the public blob
of the _underlying_ key, rather than the hash of the full certificate.
I think the hash of the certificate is also potentially useful (if
nothing else, it provides a way to tell apart multiple certificates on
the same key). But I can also see that it's useful to be able to
recognise a key as the same one 'really' (since all certificates on
the same key share a private key, they're unavoidably related).
So I've dealt with this by introducing an extra pair of fingerprint
types, giving the cross product of {MD5, SHA-256} x {base key only,
full certificate}. You can manually select which one you want to see
in some circumstances (notably PuTTYgen), and in others (such as
diagnostics) both fingerprints will be emitted side by side via the
new functions ssh2_double_fingerprint[_blob].
The default, following OpenSSH, is to just fingerprint the base key.
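So the fingerprint-type enumeration now covers the full cross
product, along these lines (a sketch; PuTTY's real enum may differ in
names and ordering):

    typedef enum FingerprintType {
        SSH_FPTYPE_MD5,         /* MD5 of the base public key */
        SSH_FPTYPE_SHA256,      /* SHA-256 of the base key (default) */
        SSH_FPTYPE_MD5_CERT,    /* MD5 of the whole certificate */
        SSH_FPTYPE_SHA256_CERT, /* SHA-256 of the whole certificate */
    } FingerprintType;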
This patch fixes a few other whitespace and formatting issues which
were pointed out by the bulk-reindent or which I spotted in passing,
some involving manual editing to break lines more nicely.
I think the weirdest hunk in here is the one in windows/window.c
TranslateKey() where _half_ of an assignment statement inside an 'if'
was on the same line as the trailing paren of the if condition. No
idea at all how that one managed to happen!
My bulk indentation check also turned up a lot of cases where a run-on
function call or if statement didn't have its later lines aligned
correctly relative to the open paren.
I think this mistake is quite easy to make by letting things get out
of sync (editing the first line of the function call and forgetting
to update the rest, perhaps because you never _saw_ the rest during a
search-replace). But a few didn't quite fit that pattern, in
particular an outright misleading case in unix/askpass.c where the
second line of a call was aligned neatly below the _wrong_ one of the
open parens on the opening line.
Restored as many alignments as I could easily find.
In several pieces of development recently I've run across the
occasional code block in the middle of a function which suddenly
switched to 2-space indent from this code base's usual 4. I decided I
was tired of it, so I ran the whole code base through a re-indenter,
which made a huge mess, and then manually sifted out the changes that
actually made sense from that pass.
Indeed, this caught quite a few large sections with 2-space indent
level, a couple with 8, and a handful of even weirder things like 3
spaces or 12. This commit fixes them all.
I must have dashed off that branch of the key reading function without
ever testing it, or I'd have noticed by now that it was looking for
the wrong string to terminate the file. Ahem.
As distinct from the type of signature generated by the SSH server
itself from the host key, this lets you exclude (and by default does
exclude) the old "ssh-rsa" SHA-1 signature type from the signature of
the CA on the certificate.
I'm about to change my mind about whether its top-level nature is
struct or union, and rather than change the key word 'union' to
'struct' at every point of use, it's nicer to just get rid of the
keyword completely. So it has a shiny new name.
This uses the test-CA code to construct a series of certificates with
various properties so as to check all the error cases of certificate
validation. It also tests the various different key types, and all the
RSA signature flags on both the certified key and the certifying one.
This is mostly intended to be invoked from cryptsuite, so that I can
make test certificates with various features to check the validation
function. But it also has a command-line interface, which currently
contains just enough features that I was able to generate a
certificate and actually make sure OpenSSH accepted it (proving that I
got the format right in this script).
You _could_ expand this script into a full production CA, with a
couple more command-line options, if you didn't mind the slightly
awkward requirement that in command-line mode it insists on doing its
signing via an SSH agent. But for the moment it's only intended for
test purposes.
Certificate keys don't work the same as normal keys, so the rest of
the code is going to have to pay attention to whether a key is a
certificate, and if so, treat it differently and do cert-specific
stuff to it. So here's a collection of methods for that purpose.
With one exception, these methods of ssh_key are not expected to be
implemented at all in non-certificate key types: they should only ever
be called once you already know you're dealing with a certificate. So
most of the new method pointers can be left out of the ssh_keyalg
initialisers.
The exception is the base_key method, which retrieves the base key of
a certificate - the underlying one with the certificate stripped off.
It's convenient for non-certificate keys to implement this too, and
just return a pointer to themselves. So I've added an implementation
in nullkey.c doing that. (The returned pointer doesn't transfer
ownership; you have to use the new ssh_key_clone() if you want to keep
the base key after freeing the certificate key.)
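That shared implementation is presumably as small as this sketch
suggests (signature guessed from the description):

    typedef struct ssh_key ssh_key;

    /* nullkey.c: a non-certificate key is its own base key. The
     * returned pointer is borrowed, not owned by the caller. */
    ssh_key *nullkey_base_key(ssh_key *key)
    {
        return key;
    }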
The methods _only_ implemented in certificates:
Query methods to return the public key of the CA (for looking up in a
list of trusted ones), and to return the key id string (which exists
to be written into log files).
Obviously, we need a check_cert() method which will verify the CA's
actual signature, not to mention checking all the other details like
the principal and the validity period.
And there's another fiddly method for dealing with the RSA upgrade
system, called 'related_alg'. This is quite like alternate_ssh_id, in
that its job is to upgrade one key algorithm to a related one with
more modern RSA signing flags (or any other similar thing that might
later reuse the same mechanism). But where alternate_ssh_id took the
actual signing flags as an argument, this takes a pointer to the
upgraded base algorithm. So it answers the question "What is to this
key algorithm as you are to its base?" - if you call it on
opensshcert_ssh_rsa and give it ssh_rsa_sha512, it'll give you back
opensshcert_ssh_rsa_sha512.
(It's awkward to have to have another of these fiddly methods, and in
the longer term I'd like to try to clean up their proliferation a bit.
But I even more dislike the alternative of just going through
all_keyalgs looking for a cert algorithm with, say, ssh_rsa_sha512 as
the base: that approach would work fine now but it would be a lurking
time bomb for when all the -cert-v02@ methods appear one day. This
way, each certificate type can upgrade itself to the appropriately
related version. And at least related_alg is only needed if you _are_
a certificate key type - it's not adding yet another piece of
null-method boilerplate to the rest.)
This commit is groundwork for full certificate support, but doesn't
complete the job by itself. It introduces the new key types, and adds
a test in cryptsuite ensuring they work as expected, but nothing else.
If you manually construct a PPK file for one of the new key types, so
that it has a certificate in the public key field, then this commit
enables PuTTY to present that key to a server for user authentication,
either directly or via Pageant storing and using it. But I haven't yet
provided any mechanism for making such a PPK, so by itself, this isn't
much use.
Also, these new key types are not yet included in the KEXINIT host
keys list, because if they were, they'd just be treated as normal host
keys, in that you'd be asked to manually confirm the SSH fingerprint
of the certificate. I'll enable them for host keys once I add the
missing pieces.
This stores its data in the same format as the existing KCT_TEXT, but
it displays differently in puttygen --dump, expecting that the data
will be full of horrible control characters, invalid UTF-8, etc.
The displayed data is of the form b64("..."), so you get a hint about
what the encoding is, and can still paste into Python by defining the
identifier 'b64' to be base64.b64decode or equivalent.
Having recently pulled it out into its own file, I think it could also
do with a bit of tidying. In this rework (sketched in code after the
list):
- the substructure for a single component now has a globally visible
struct tag, so you can make a variable pointing at it, saving
verbiage in every piece of code looping over a key_components
- the 'is_mp_int' flag has been replaced with a type enum, so that
more types can be added without further upheaval
- the printing loop in cmdgen.c for puttygen --dump has factored out
the initial 'name=' prefix on each line so that it isn't repeated
per component type
- the storage format for text components is now a strbuf rather than
a plain char *, which I think is generally more useful.
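Putting those together, the reworked structure presumably looks
something like this (field and tag names are illustrative, pieced
together from the list above):

    typedef struct strbuf strbuf;  /* PuTTY's growable string buffer */
    typedef struct mp_int mp_int;  /* PuTTY's bignum type */

    typedef enum KeyComponentType {
        KCT_TEXT,
        KCT_MP_INT,
        /* further types, such as the binary one described above, can
         * be added here without upheaval */
    } KeyComponentType;

    typedef struct key_component {
        char *name;
        KeyComponentType type;
        union {
            strbuf *str;  /* for KCT_TEXT */
            mp_int *mp;   /* for KCT_MP_INT */
        };
    } key_component;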
The function will accept a public key file or a PPK, but if it fails
to parse as any of those, the error message says "not a PuTTY SSH-2
private key", which is particularly incongruous in situations where
you're specifically _not_ after the private half of the key.
Now says "not a public key or a PuTTY SSH-2 private key".
If you already have a string (of potentially-binary data) in the form
of a ptrlen reference to somewhere else, and you want to keep a copy
somewhere, it's useful to copy it into a strbuf. But it takes a couple
of lines of faff to do that, and it's nicer to wrap that up into a
tiny helper function.
This commit adds that helper function strbuf_dup, and its non-movable
sibling strbuf_dup_nm for secret data. Also, gone through the existing
code and found a bunch of cases where this makes things less verbose.
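The helper itself can hardly be more than a few lines; something like
this sketch, in terms of PuTTY's existing strbuf_new and put_datapl
routines:

    /* Copy a ptrlen's data into a freshly allocated strbuf. */
    strbuf *strbuf_dup(ptrlen string)
    {
        strbuf *sb = strbuf_new();
        put_datapl(sb, string);
        return sb;
    }

The _nm variant would presumably be identical except for calling
strbuf_new_nm, so that the copied secret doesn't leave stray copies
behind when the buffer reallocates.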
These days, the base64 module has 'b64decode', which can tolerate a
str or a bytes as input. Switched to using that, and also imported it
under a nice short name, 'b64'.
In the process, removed the obsolete equivocation between
base64.decodebytes and base64.decodestring. That was there to cope
with Python 2 - but the assert statement right next to it has been
enforcing P3 since commit 2ec2b796ed two years ago!
This consists of DJB's 'Streamlined NTRU Prime' quantum-resistant
cryptosystem, currently in round 3 of the NIST post-quantum key
exchange competition; it's run in parallel with ordinary Curve25519,
and generates a shared secret combining the output of both systems.
(Hence, even if you don't trust this newfangled NTRU Prime thing at
all, it's at least no _less_ secure than the kex you were using
already.)
As the OpenSSH developers point out, key exchange is the most urgent
thing to make quantum-resistant, even before working quantum computers
big enough to break crypto become available, because a break of the
kex algorithm can be applied retroactively to recordings of your past
sessions. By contrast, authentication is a real-time protocol, and can
only be broken by a quantum computer if there's one available to
attack you _already_.
I've implemented both sides of the mechanism, so that PuTTY and Uppity
both support it. In my initial testing, the two sides can both
interoperate with the appropriate half of OpenSSH, and also (of
course, but it would be embarrassing to mess it up) with each other.
This is already slightly nice because it lets me separate the
Weierstrass and Montgomery code more completely, without having to
have a vtable tucked into dh->extra. But more to the point, it will
allow completely different kex methods to fit into the same framework
later.
To that end, I've moved more of the descriptive message generation
into the vtable, and also provided the constructor with a flag that
will let it do different things in client and server.
Also, following on from a previous commit, I've arranged that the new
API returns arbitrary binary data for the exchange hash, rather than
an mp_int. An upcoming implementation of this interface will want to
return an encoded string instead of an encoded mp_int.
This means if I have functions like foo_subfoo_bar and foo_baz that
both operate on a foo, the Python testcrypt system can translate both
into .bar() and .baz() methods on the object, even though they don't
start with the same prefix.
In test_primegen, we loop round retrieving random data until we find
some that will permit a successful prime generation, so that we can
log only the successful attempts, and not the failures (which don't
have to be time-safe). But this itself introduces a potential mismatch
between logs, because the simplistic RNG used in testsc will have
different control flow depending on how far through a buffer of hash
data it is at the start of a given run.
random_advance_counter() gives it a fresh buffer, so calling that at
the start of a run should normalise this out. The code to do that was
already in the middle of random_read(); I've just pulled it out into a
separately callable function.
This hasn't _actually_ caused failures in test_primegen, but I'm not
sure why not. (Perhaps just luck.) But it did cause a failure in
another test of a similar nature, so before I commit _that_ test (and
the thing it's testing), I'd better fix this.
Correcting a source file name in the docs just now reminded me that
I've seen a lot of outdated source file names elsewhere in the code,
due to all the reorganisation since we moved to cmake. Here's a giant
pass of trying to make them all accurate again.
I happened to notice in passing that this function doesn't have any
tests (although it will have been at least somewhat tested by the
cmdgen interop test system).
This involved writing a wrapper that passes the passphrase and salt as
ptrlens, and I decided it made more sense to make the same change to
the original function too and adjust the call sites appropriately.
I derived a test case by getting OpenSSH itself to make an encrypted
key file, and then using the inputs and output from the password hash
operation that decrypted it again.
The return value of term_data() is used as the return value from the
GUI-terminal versions of the Seat output method, which means backends
will take it to be the amount of standard-output data currently
buffered, and exert back-pressure on the remote peer if it gets too
big (e.g. by ceasing to extend the window in that particular SSH-2
channel).
Historically, as a comment in term_data() explained, we always just
returned 0 from that function, on the basis that we were processing
all the terminal data through our terminal emulation code immediately,
and never retained any of it in the buffer at all. If the terminal
emulation code were to start running slowly, then it would slow down
the _whole_ PuTTY system, due to single-threadedness, and
back-pressure of a sort would be exerted on the remote by it simply
failing to get round to reading from the network socket. But by the
time we got back to the top level of term_data(), we'd have finished
reading all the data we had, so it was still appropriate to return 0.
That comment is still correct if you're thinking about the limiting
factor on terminal data processing being the CPU usage in term_out().
But now that's no longer the whole story, because sometimes we leave
data in term->inbuf without having processed it: during drag-selects
in the terminal window, and (just introduced) while waiting for the
response to a pending window resize request. For both those reasons,
we _don't_ always have a buffer size of zero when we return from
term_data().
So now that hole in our buffer size management is filled in:
term_data() returns the true size of the remaining unprocessed
terminal output, so that back-pressure will be exerted if the terminal
is currently not consuming it. And when processing resumes and we
start to clear our backlog, we call backend_unthrottle to let the
backend know it can relax the back-pressure if necessary.
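In outline, the function presumably now ends up shaped like this (a
sketch only; the real signatures and details in terminal.c differ):

    /* Sketch: buffer the data, process what we currently can, and
     * report the true backlog instead of a constant 0. */
    size_t term_data(Terminal *term, const void *data, size_t len)
    {
        bufchain_add(&term->inbuf, data, len);
        term_out(term);  /* consumes inbuf unless selecting/resizing */
        return bufchain_size(&term->inbuf);
    }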
I recently encountered a paper [1] which catalogues all kinds of
things that can go wrong when one party in a discrete-log system
invents a prime and the other party chooses an exponent. In
particular, some choices of prime make it reasonable to use a short
exponent to save time, but others make that strategy very bad.
That paper is about the ElGamal encryption scheme used in OpenPGP,
which is basically integer Diffie-Hellman with one side's key being
persistent: a shared-secret integer is derived exactly as in DH, and
then it's used to communicate a message integer by simply multiplying
the shared secret by the message, mod p.
I don't _know_ that any problem of this kind arises in the SSH usage
of Diffie-Hellman: the standard integer DH groups in SSH are safe
primes, and as far as I know, the usual generation of prime moduli for
DH group exchange also picks safe primes. So the short exponents PuTTY
has been using _should_ be OK.
However, the range of imaginative other possibilities shown in that
paper make me nervous, even so! So I think I'm going to retire the
short exponent strategy, on general principles of overcaution.
This slows down 4096-bit integer DH by about a factor of 3-4 (which
would be worse if it weren't for the modpow speedup in the previous
commit). I think that's OK, because, firstly, computers are a lot
faster these days than when I originally chose to use short exponents,
and secondly, more and more implementations are now switching to
elliptic-curve DH, which is unaffected by this change (and with which
we've always been using maximum-length exponents).
[1] On the (in)security of ElGamal in OpenPGP. Luca De Feo, Bertram
Poettering, Alessandro Sorniotti. https://eprint.iacr.org/2021/923
It's unquestionably a test program, and I'm generally clearing those
out of the top level. I only missed it in the last clearout because I
was looking for things with 'test' in the name.
They were there to work around that annoying feature of VS's
preprocessor when it expands __VA_ARGS__ into the argument list of
another macro. But I've just thought of a workaround that I can apply
in testcrypt.c itself, so that those parens don't have to appear in
every function definition in the header file.
The trick is, instead of writing

    destination_macro(__VA_ARGS__)

you instead write

    JUXTAPOSE(destination_macro, (__VA_ARGS__))

where JUXTAPOSE is defined to be a macro that simply expands its two
arguments next to each other:

    #define JUXTAPOSE(first, second) first second
This works because the arguments to JUXTAPOSE get macro-expanded
_before_ passing them to JUXTAPOSE itself - the same reason that the
standard tricks with STR_INNER and CAT_INNER work (as seen in defs.h
here). So this defuses the magic behaviour of commas expanded from
__VA_ARGS__, and causes the destination macro to get all its arguments
in the expected places again.
I was dubious about it to begin with, when I found that RFC 7616's
example seemed to be treating it as a 256-bit truncation of SHA-512,
and not the thing FIPS 180-4 section 6.7 specifies as "SHA-512/256"
(which also changes the initial hash state). Having failed to get a
clarifying response from the RFC authors, I had the idea this morning
of testing other HTTP clients to see what _they_ thought that hash
function meant, and then at least I could go with an existing
in-practice consensus.
There is no in-practice consensus. Firefox doesn't support that
algorithm at all (but they do support SHA-256); wget doesn't support
anything that RFC 7616 added to the original RFC 2617. But the prize
for weirdness goes to curl, which does accept the name "SHA-512-256"
and ... treats it as an alias for SHA-256!
So I think the situation among real clients is too confusing to even
try to work with, and I'm going to stop adding to it. PuTTY will
follow Firefox's policy: if a proxy server asks for SHA-256 digests
we'll happily provide them, but if they ask for SHA-512-256 we'll
refuse on the grounds that it's not clear enough what it means.
Now that testcrypt has _two_ header files, that's more files than I
want at the top level, so I decided to move it.
It has a good claim to live in either 'test' or 'crypto', but in the
end I decided it wasn't quite specific enough to crypto (it already
also tests things in keygen and proxy), and also, the Python half of
the mechanism already lives in 'test', so it can live alongside that.
Having done that, it seemed silly to leave testsc and testzlib at the
top level: those have 'test' in the names as well, so they can go in
the test subdir as well.
While I'm renaming, also renamed testcrypt.h to testcrypt-func.h to
distinguish it from the new testcrypt-enum.h.