/*
 * testcrypt: a standalone test program that provides direct access to
 * PuTTY's cryptography and mp_int code.
 */

/*
 * This program speaks a line-oriented protocol on standard input and
 * standard output. It's a half-duplex protocol: it expects to read
 * one line of command, and then produce a fixed amount of output
 * (namely a line containing a decimal integer, followed by that many
 * lines each containing one return value).
 *
 * The protocol is human-readable enough to make it debuggable, but
 * verbose enough that you probably wouldn't want to speak it by hand
 * at any great length. The Python program test/testcrypt.py wraps it
 * to give a more useful user-facing API, by invoking this binary as a
 * subprocess.
 *
 * (I decided that was a better idea than making this program an
 * actual Python module, partly because you can rewrap the same binary
 * in another scripting language if you prefer, but mostly because
 * it's easy to attach a debugger to testcrypt or to run it under
 * sanitisers or valgrind or what have you.)
 */
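
/*
 * A minimal illustrative exchange (the function name and the enum
 * word are examples of things exported via testcrypt.h, and the
 * value id is whatever this program happens to assign):
 *
 *   client:    ssh_hash_new sha256
 *   testcrypt: 1
 *   testcrypt: hash0
 *   client:    free hash0
 *   testcrypt: 0
 */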

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <stdarg.h>

#include "defs.h"
#include "ssh.h"
#include "sshkeygen.h"
#include "misc.h"
#include "mpint.h"
#include "crypto/ecc.h"
#include "crypto/ntru.h"
#include "crypto/mlkem.h"
#include "proxy/cproxy.h"

static NORETURN PRINTF_LIKE(1, 2) void fatal_error(const char *p, ...)
{
    va_list ap;
    fprintf(stderr, "testcrypt: ");
    va_start(ap, p);
    vfprintf(stderr, p, ap);
    va_end(ap);
    fputc('\n', stderr);
    exit(1);
}

void out_of_memory(void) { fatal_error("out of memory"); }

static bool old_keyfile_warning_given;
void old_keyfile_warning(void) { old_keyfile_warning_given = true; }

static bufchain random_data_queue;
static prng *test_prng;
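
/*
 * Randomness for the code under test comes from one of two sources:
 * a PRNG instance installed in test_prng, or failing that, whatever
 * bytes have been queued up in random_data_queue. Running out of
 * queued data is deliberately fatal rather than a silent fallback,
 * so a test has to supply all the randomness it expects to consume.
 */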
void random_read(void *buf, size_t size)
{
    if (test_prng) {
        prng_read(test_prng, buf, size);
    } else {
        if (!bufchain_try_fetch_consume(&random_data_queue, buf, size))
            fatal_error("No random data in queue");
    }
}
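
/*
 * Deterministic stand-in for the live reseed timer: every call
 * reports that another 200ms has elapsed, so the PRNG's reseed
 * schedule is reproducible from one test run to the next.
 */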
uint64_t prng_reseed_time_ms(void)
{
    static uint64_t previous_time = 0;
    return previous_time += 200;
}

#define VALUE_TYPES(X) \
    X(string, strbuf *, strbuf_free(v)) \
    X(mpint, mp_int *, mp_free(v)) \
    X(modsqrt, ModsqrtContext *, modsqrt_free(v)) \
    X(monty, MontyContext *, monty_free(v)) \
    X(wcurve, WeierstrassCurve *, ecc_weierstrass_curve_free(v)) \
    X(wpoint, WeierstrassPoint *, ecc_weierstrass_point_free(v)) \
    X(mcurve, MontgomeryCurve *, ecc_montgomery_curve_free(v)) \
    X(mpoint, MontgomeryPoint *, ecc_montgomery_point_free(v)) \
    X(ecurve, EdwardsCurve *, ecc_edwards_curve_free(v)) \
    X(epoint, EdwardsPoint *, ecc_edwards_point_free(v)) \
    X(hash, ssh_hash *, ssh_hash_free(v)) \
    X(key, ssh_key *, ssh_key_free(v)) \
    X(cipher, ssh_cipher *, ssh_cipher_free(v)) \
    X(mac, ssh2_mac *, ssh2_mac_free(v)) \
    X(dh, dh_ctx *, dh_cleanup(v)) \
    X(ecdh, ecdh_key *, ecdh_key_free(v)) \
    X(rsakex, RSAKey *, ssh_rsakex_freekey(v)) \
    X(rsa, RSAKey *, rsa_free(v)) \
    X(prng, prng *, prng_free(v)) \
    X(keycomponents, key_components *, key_components_free(v)) \
    X(pcs, PrimeCandidateSource *, pcs_free(v)) \
    X(pgc, PrimeGenerationContext *, primegen_free_context(v)) \
    X(pockle, Pockle *, pockle_free(v)) \
    X(millerrabin, MillerRabin *, miller_rabin_free(v)) \
    X(ntrukeypair, NTRUKeyPair *, ntru_keypair_free(v)) \
    X(ntruencodeschedule, NTRUEncodeSchedule *, ntru_encode_schedule_free(v)) \
    X(shakexof, ShakeXOF *, shake_xof_free(v)) \
    /* end of list */
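
/*
 * VALUE_TYPES is an X-macro list: each construct below passes its own
 * single-line macro as X, so that this one table stamps out an enum
 * constant, a name string, typedefs, a union member and a return
 * helper for every type. For example, VALUE_TYPES(VALTYPE_ENUM) below
 * expands to
 *
 *     VT_string, VT_mpint, VT_modsqrt, ..., VT_shakexof,
 *
 * and adding a new managed type means adding exactly one X(...) entry
 * to the list above.
 */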

typedef struct Value Value;

enum ValueType {
#define VALTYPE_ENUM(n,t,f) VT_##n,
    VALUE_TYPES(VALTYPE_ENUM)
#undef VALTYPE_ENUM
};
typedef enum ValueType ValueType;

static const char *const type_names[] = {
#define VALTYPE_NAME(n,t,f) #n,
    VALUE_TYPES(VALTYPE_NAME)
#undef VALTYPE_NAME
};

#define VALTYPE_TYPEDEF(n,t,f) \
    typedef t TD_val_##n; \
    typedef t *TD_out_val_##n;
VALUE_TYPES(VALTYPE_TYPEDEF)
#undef VALTYPE_TYPEDEF

struct Value {
    /*
     * Protocol identifier assigned to this value when it was created.
     * Lives in the same malloced block as this Value object itself.
     */
    ptrlen id;

    /*
     * Type of the value.
     */
    ValueType type;

    /*
     * Union of all the things it could hold.
     */
    union {
#define VALTYPE_UNION(n,t,f) t vu_##n;
        VALUE_TYPES(VALTYPE_UNION)
#undef VALTYPE_UNION

        char *bare_string;
    };
};

static int valuecmp(void *av, void *bv)
{
    Value *a = (Value *)av, *b = (Value *)bv;
    return ptrlen_strcmp(a->id, b->id);
}

static int valuefind(void *av, void *bv)
{
    ptrlen *a = (ptrlen *)av;
    Value *b = (Value *)bv;
    return ptrlen_strcmp(*a, b->id);
}
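
/*
 * All live Values are kept in a tree234 sorted by id: valuecmp orders
 * two Value objects for insertion, and valuefind compares a bare
 * ptrlen id received over the protocol against a stored Value for
 * lookup.
 */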
static tree234 *values;

static Value *value_new(ValueType vt)
{
    static uint64_t next_index = 0;

    char *name = dupprintf("%s%"PRIu64, type_names[vt], next_index++);
    size_t namelen = strlen(name);

    Value *val = snew_plus(Value, namelen+1);
    memcpy(snew_plus_get_aux(val), name, namelen+1);
    val->id.ptr = snew_plus_get_aux(val);
    val->id.len = namelen;
    val->type = vt;

    Value *added = add234(values, val);
    assert(added == val);

    sfree(name);

    return val;
}
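
/*
 * So the ids handed back over the protocol look like "mpint0",
 * "hash1", "string2", ...: the value's type name followed by a
 * counter that increments across values of all types.
 */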

#define VALTYPE_RETURNFN(n,t,f) \
    void return_val_##n(strbuf *out, t v) { \
        Value *val = value_new(VT_##n); \
        val->vu_##n = v; \
        put_datapl(out, val->id); \
        put_byte(out, '\n'); \
    }
VALUE_TYPES(VALTYPE_RETURNFN)
#undef VALTYPE_RETURNFN
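
/*
 * For instance, the mpint entry of VALUE_TYPES makes the macro above
 * expand to (modulo whitespace)
 *
 *     void return_val_mpint(strbuf *out, mp_int *v) {
 *         Value *val = value_new(VT_mpint);
 *         val->vu_mpint = v;
 *         put_datapl(out, val->id);
 *         put_byte(out, '\n');
 *     }
 *
 * i.e. each return helper registers the value and then writes its
 * newly assigned id as one line of protocol output.
 */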

static ptrlen get_word(BinarySource *in)
{
    ptrlen toret;
    toret.ptr = get_ptr(in);
    toret.len = 0;
    while (get_avail(in) && get_byte(in) != ' ')
        toret.len++;
    return toret;
}
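
/*
 * For example, given the command line "mp_add mpint0 mpint1" (with
 * mp_add standing in for any exported function), successive calls to
 * get_word return "mp_add", then "mpint0", then "mpint1".
 */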
|
|
|
|
|
2021-11-22 18:35:08 +00:00
|
|
|
typedef uintmax_t TD_uint;
|
|
|
|
typedef bool TD_boolean;
|
|
|
|
typedef ptrlen TD_val_string_ptrlen;
|
|
|
|
typedef char *TD_val_string_asciz;
|
|
|
|
typedef BinarySource *TD_val_string_binarysource;
|
|
|
|
typedef unsigned *TD_out_uint;
|
|
|
|
typedef BinarySink *TD_out_val_string_binarysink;
|
|
|
|
typedef const char *TD_opt_val_string_asciz;
|
|
|
|
typedef char **TD_out_val_string_asciz;
|
|
|
|
typedef char **TD_out_opt_val_string_asciz;
|
|
|
|
typedef const char **TD_out_opt_val_string_asciz_const;
|
|
|
|
typedef const ssh_hashalg *TD_hashalg;
|
|
|
|
typedef const ssh2_macalg *TD_macalg;
|
|
|
|
typedef const ssh_keyalg *TD_keyalg;
|
|
|
|
typedef const ssh_cipheralg *TD_cipheralg;
|
|
|
|
typedef const ssh_kex *TD_dh_group;
|
|
|
|
typedef const ssh_kex *TD_ecdh_alg;
|
|
|
|
typedef RsaSsh1Order TD_rsaorder;
|
|
|
|
typedef key_components *TD_keycomponents;
|
|
|
|
typedef const PrimeGenerationPolicy *TD_primegenpolicy;
|
|
|
|
typedef struct mpint_list TD_mpint_list;
|
Implement OpenSSH 9.x's NTRU Prime / Curve25519 kex.
This consists of DJB's 'Streamlined NTRU Prime' quantum-resistant
cryptosystem, currently in round 3 of the NIST post-quantum key
exchange competition; it's run in parallel with ordinary Curve25519,
and generates a shared secret combining the output of both systems.
(Hence, even if you don't trust this newfangled NTRU Prime thing at
all, it's at least no _less_ secure than the kex you were using
already.)
As the OpenSSH developers point out, key exchange is the most urgent
thing to make quantum-resistant, even before working quantum computers
big enough to break crypto become available, because a break of the
kex algorithm can be applied retroactively to recordings of your past
sessions. By contrast, authentication is a real-time protocol, and can
only be broken by a quantum computer if there's one available to
attack you _already_.
I've implemented both sides of the mechanism, so that PuTTY and Uppity
both support it. In my initial testing, the two sides can both
interoperate with the appropriate half of OpenSSH, and also (of
course, but it would be embarrassing to mess it up) with each other.
2022-04-15 16:19:47 +00:00
|
|
|
typedef struct int16_list *TD_int16_list;
|
2021-11-22 18:35:08 +00:00
|
|
|
typedef PockleStatus TD_pocklestatus;
|
|
|
|
typedef struct mr_result TD_mr_result;
|
|
|
|
typedef Argon2Flavour TD_argon2flavour;
|
|
|
|
typedef FingerprintType TD_fptype;
|
|
|
|
typedef HttpDigestHash TD_httpdigesthash;
|
/*
 * New post-quantum kex: ML-KEM, and three hybrids of it.
 *
 * As standardised by NIST in FIPS 203, this is a lattice-based
 * post-quantum KEM.
 *
 * Very vaguely, the idea of it is that your public key is a matrix A
 * and vector t, and the private key is the knowledge of how to
 * decompose t into two vectors with all their coefficients small, one
 * transformed by A relative to the other. Encryption of a binary
 * secret starts by turning each bit into one of two maximally
 * separated residues mod a prime q, and then adding 'noise' based on
 * the public key in the form of small increments and decrements mod
 * q, again with some of the noise transformed by A relative to the
 * rest. Decryption uses the knowledge of t's decomposition to align
 * the two sets of noise so that the _large_ changes (which masked the
 * secret from an eavesdropper) cancel out, leaving only a collection
 * of small changes to the original secret vector. Then the vector of
 * input bits can be recovered by assuming that those accumulated
 * small pieces of noise haven't concentrated in any particular
 * residue enough to push it more than half way to the other of its
 * possible starting values.
 *
 * A weird feature of it is that decryption is not a true mathematical
 * inverse of encryption. The assumption that the noise doesn't get
 * large enough to flip any bit of the secret is only
 * probabilistically valid, not a hard guarantee. In other words, key
 * agreement can fail, simply by getting particularly unlucky with the
 * distribution of your random noise! However, the probability of a
 * failure is very low - less than 2^-138 even for ML-KEM-512, and
 * gets even smaller with the larger variants.
 *
 * An awkward feature for our purposes is that the matrix A,
 * containing a large number of residues mod the prime q=3329, is
 * required to be constructed by a process of rejection sampling, i.e.
 * generating random 12-bit values and throwing away the out-of-range
 * ones. That would be a real pain for our side-channel testing
 * system, which generally handles rejection sampling badly (since it
 * necessarily involves data-dependent control flow and timing
 * variation). Fortunately, the matrix and the random seed it was made
 * from are both public: the matrix seed is transmitted as part of the
 * public key, so it's not necessary to try to hide it. Accordingly, I
 * was able to get the implementation to pass testsc by means of not
 * varying the matrix seed between runs, which is justified by the
 * principle of testsc that you vary the _secrets_ to ensure timing is
 * independent of them - and the matrix seed isn't a secret, so you're
 * allowed to keep it the same.
 *
 * The three hybrid algorithms, defined by the current Internet-Draft
 * draft-kampanakis-curdle-ssh-pq-ke, include one hybrid of ML-KEM-768
 * with Curve25519 in exactly the same way we were already hybridising
 * NTRU Prime with Curve25519, and two more hybrids of ML-KEM with
 * ECDH over a NIST curve. The former hybrid interoperates with the
 * implementation in OpenSSH 9.9; all three interoperate with the fork
 * 'openssh-oqs' at github.com/open-quantum-safe/openssh, and also
 * with the Python library AsyncSSH.
 *
 * (A toy numeric sketch of the bit-encoding round-off idea follows
 * the typedef below.)
 */
typedef const mlkem_params *TD_mlkem_params;
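/*
 * A toy, single-coefficient sketch of the round-off idea described
 * above. These helpers are hypothetical, for illustration only, and
 * not part of the real ML-KEM implementation: one bit is encoded as
 * 0 or round(q/2) plus signed noise, and decoding asks which of the
 * two starting points the result is nearer to, so it survives any
 * noise of magnitude below q/4 and can be flipped by anything larger.
 */
#define TOY_MLKEM_Q 3329               /* the ML-KEM prime modulus */

/* Encode one bit as 0 or round(q/2), then add signed noise mod q. */
static unsigned toy_mlkem_encode_bit(unsigned bit, int noise)
{
    int c = (bit ? (TOY_MLKEM_Q + 1) / 2 : 0) + noise;
    return ((c % TOY_MLKEM_Q) + TOY_MLKEM_Q) % TOY_MLKEM_Q;
}

/* Decode by asking whether c is nearer to round(q/2) than to 0. */
static unsigned toy_mlkem_decode_bit(unsigned c)
{
    return (c > TOY_MLKEM_Q / 4 && c < 3 * TOY_MLKEM_Q / 4) ? 1 : 0;
}

/*
 * E.g. decode(encode(1, 100)) == 1 and decode(encode(0, -3)) == 0,
 * but decode(encode(1, 900)) == 0, because 900 exceeds q/4 ~= 832.
 */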
#define BEGIN_ENUM_TYPE(name)                                           \
    static bool enum_translate_##name(ptrlen valname, TD_##name *out) { \
        static const struct {                                           \
            const char *key;                                            \
            TD_##name value;                                            \
        } mapping[] = {
#define ENUM_VALUE(name, value) {name, value},
#define END_ENUM_TYPE(name)                                             \
        };                                                              \
        for (size_t i = 0; i < lenof(mapping); i++)                     \
            if (ptrlen_eq_string(valname, mapping[i].key)) {            \
                if (out)                                                \
                    *out = mapping[i].value;                            \
                return true;                                            \
            }                                                           \
        return false;                                                   \
    }                                                                   \
                                                                        \
    static TD_##name get_##name(BinarySource *in) {                     \
        ptrlen valname = get_word(in);                                  \
        TD_##name out;                                                  \
        if (enum_translate_##name(valname, &out))                       \
            return out;                                                 \
        else                                                            \
            fatal_error("%s '%.*s': not found",                         \
                        #name, PTRLEN_PRINTF(valname));                 \
    }
#include "testcrypt-enum.h"
#undef BEGIN_ENUM_TYPE
#undef ENUM_VALUE
#undef END_ENUM_TYPE
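/*
 * For illustration, a hypothetical entry in testcrypt-enum.h such as
 *
 *   BEGIN_ENUM_TYPE(argon2flavour)
 *       ENUM_VALUE("d", Argon2_d)
 *       ENUM_VALUE("i", Argon2_i)
 *       ENUM_VALUE("id", Argon2_id)
 *   END_ENUM_TYPE(argon2flavour)
 *
 * would expand, via the macros above, into enum_translate_argon2flavour()
 * and get_argon2flavour(), so that a protocol-level word like "id" is
 * looked up in the mapping table and converted into the C-level enum
 * value. (The exact entries and enumerator spellings here are
 * assumptions for the sake of the example; the real list lives in
 * testcrypt-enum.h.)
 */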
static uintmax_t get_uint(BinarySource *in)
{
    ptrlen word = get_word(in);
    char *string = mkstr(word);
    uintmax_t toret = strtoumax(string, NULL, 0);
    sfree(string);
    return toret;
}
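/*
 * Because strtoumax() is given base 0, the usual C integer-literal
 * conventions apply on the wire: "31", "037" and "0x1F" all parse to
 * the same value.
 */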
/*
 * RSA generation: option to generate strong primes.
 *
 * A 'strong' prime, as defined by the Handbook of Applied
 * Cryptography, is a prime p such that each of p-1 and p+1 has a
 * large prime factor, and that the large factor q of p-1 is such
 * that q-1 in turn _also_ has a large prime factor.
 *
 * HoAC says that making your RSA key using primes of this form
 * defeats some factoring algorithms - but there are other faster
 * algorithms to which it makes no difference. So this is probably
 * not a useful precaution in practice. However, it has been
 * recommended in the past by some official standards, and it's easy
 * to implement given the new general facility in
 * PrimeCandidateSource that lets you ask for your prime to satisfy
 * an arbitrary modular congruence. (And HoAC also says there's no
 * particular reason _not_ to use strong primes.) So I provide it as
 * an option, just in case anyone wants to select it.
 *
 * The change to the key generation algorithm is entirely in
 * sshrsag.c, and is neatly independent of the prime-generation
 * system in use. If you're using Maurer provable prime generation,
 * then the known factor q of p-1 can be used to help certify p, and
 * the one for q-1 to help with q in turn; if you switch to
 * probabilistic prime generation then you still get an RSA key with
 * the right structure, except that every time the definition says
 * 'prime factor' you just append '(probably)'.
 *
 * (The probabilistic version of this procedure is described as
 * 'Gordon's algorithm' in HoAC section 4.4.2.)
 */

static bool get_boolean(BinarySource *in)
{
    return ptrlen_eq_string(get_word(in), "true");
}
static Value *lookup_value(ptrlen word)
{
    Value *val = find234(values, &word, valuefind);
    if (!val)
        fatal_error("id '%.*s': not found", PTRLEN_PRINTF(word));
    return val;
}

static Value *get_value(BinarySource *in)
{
    return lookup_value(get_word(in));
}

typedef void (*finaliser_fn_t)(strbuf *out, void *ctx);
struct finaliser {
    finaliser_fn_t fn;
    void *ctx;
};

static struct finaliser *finalisers;
static size_t nfinalisers, finalisersize;
/*
 * New array-growing macros: sgrowarray and sgrowarrayn.
 *
 * The idea of these is that they centralise the common idiom along
 * the lines of
 *
 *   if (logical_array_len >= physical_array_size) {
 *       physical_array_size = logical_array_len * 5 / 4 + 256;
 *       array = sresize(array, physical_array_size, ElementType);
 *   }
 *
 * which happens at a zillion call sites throughout this code base,
 * with different random choices of the geometric factor and additive
 * constant, sometimes forgetting them completely, and generally
 * doing a lot of repeated work.
 *
 * The new macro sgrowarray(array,size,n) has the semantics: here are
 * the array pointer and its physical size for you to modify, now
 * please ensure that the nth element exists, so I can write into it.
 * And sgrowarrayn(array,size,n,m) is the same except that it ensures
 * that the array has size at least n+m (so sgrowarray is just the
 * special case where m=1).
 *
 * Now that this is a single centralised implementation that will be
 * used everywhere, I've also gone to more effort in the
 * implementation, with careful overflow checks that would have been
 * painful to put at all the previous call sites.
 *
 * This commit also switches over every use of sresize(), apart from
 * a few where I really didn't think it would gain anything. A
 * consequence of that is that a lot of array-size variables have to
 * have their types changed to size_t, because the macros require
 * that (they address-take the size to pass to the underlying
 * function).
 *
 * (A sketch of the resulting calling convention appears just after
 * this function.)
 */
static void add_finaliser(finaliser_fn_t fn, void *ctx)
{
    sgrowarray(finalisers, finalisersize, nfinalisers);
    finalisers[nfinalisers].fn = fn;
    finalisers[nfinalisers].ctx = ctx;
    nfinalisers++;
}
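/*
 * Illustrative use of the sgrowarray() contract described above (a
 * sketch under assumed helpers, not code from this file): after the
 * call, writing to array[n] is guaranteed to be in bounds.
 *
 *   size_t size = 0, n = 0;
 *   int *array = NULL;
 *   while (have_more_items()) {     // have_more_items() hypothetical
 *       sgrowarray(array, size, n); // ensure array[n] exists
 *       array[n++] = next_item();   // next_item() hypothetical
 *   }
 */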
static void run_finalisers(strbuf *out)
{
    for (size_t i = 0; i < nfinalisers; i++)
        finalisers[i].fn(out, finalisers[i].ctx);
    nfinalisers = 0;
}

static void finaliser_return_value(strbuf *out, void *ctx)
{
    Value *val = (Value *)ctx;
    put_datapl(out, val->id);
    put_byte(out, '\n');
}

static void finaliser_sfree(strbuf *out, void *ctx)
{
    sfree(ctx);
}
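/*
 * Sketch of how the pieces above fit together, as a hypothetical
 * handler rather than code from this file: an 'out' parameter
 * registers a finaliser when the argument is parsed, the wrapped
 * function writes through the pointer, and run_finalisers() then
 * emits the value on the output and frees the temporary storage.
 *
 *   void handle_one_call(BinarySource *in, strbuf *out) {
 *       unsigned *outp = get_out_uint(in);  // registers a finaliser
 *                                           // (defined further down)
 *       some_fn_with_out_param(outp);       // hypothetical callee
 *       run_finalisers(out);                // prints *outp, frees it
 *   }
 */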
#define VALTYPE_GETFN(n,t,f)                                            \
    static Value *unwrap_value_##n(Value *val) {                        \
        ValueType expected = VT_##n;                                    \
        if (expected != val->type)                                      \
            fatal_error("id '%.*s': expected %s, got %s",               \
                        PTRLEN_PRINTF(val->id),                         \
                        type_names[expected], type_names[val->type]);   \
        return val;                                                     \
    }                                                                   \
    static Value *get_value_##n(BinarySource *in) {                     \
        return unwrap_value_##n(get_value(in));                         \
    }                                                                   \
    static t get_val_##n(BinarySource *in) {                            \
        return get_value_##n(in)->vu_##n;                               \
    }
VALUE_TYPES(VALTYPE_GETFN)
#undef VALTYPE_GETFN
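/*
 * For instance, assuming VALUE_TYPES contains an entry along the
 * lines of (mpint, mp_int *, ...) - the real list lives elsewhere in
 * this file - the macro above expands into
 *
 *   static Value *unwrap_value_mpint(Value *val) { ... }
 *   static Value *get_value_mpint(BinarySource *in) { ... }
 *   static mp_int *get_val_mpint(BinarySource *in) { ... }
 *
 * which is where the get_val_mpint() used by get_mpint_list() below
 * comes from: it reads an id word, checks the stored value really is
 * an mp_int, and hands back the underlying pointer.
 */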
static ptrlen get_val_string_ptrlen(BinarySource *in)
{
    return ptrlen_from_strbuf(get_val_string(in));
}

static char *get_val_string_asciz(BinarySource *in)
{
    return get_val_string(in)->s;
}

static strbuf *get_opt_val_string(BinarySource *in);

static char *get_opt_val_string_asciz(BinarySource *in)
{
    strbuf *sb = get_opt_val_string(in);
    return sb ? sb->s : NULL;
}
static mp_int **get_out_val_mpint(BinarySource *in)
{
    Value *val = value_new(VT_mpint);
    add_finaliser(finaliser_return_value, val);
    return &val->vu_mpint;
}

struct mpint_list {
    size_t n;
    mp_int **integers;
};

static struct mpint_list get_mpint_list(BinarySource *in)
{
    size_t n = get_uint(in);

    struct mpint_list mpl;
    mpl.n = n;

    mpl.integers = snewn(n, mp_int *);
    for (size_t i = 0; i < n; i++)
        mpl.integers[i] = get_val_mpint(in);

    add_finaliser(finaliser_sfree, mpl.integers);
    return mpl;
}
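/*
 * So a list-typed argument is encoded on the command line as a count
 * followed by that many further words, e.g. (with hypothetical value
 * ids) "3 val4 val5 val6" for a three-element mpint_list. The array
 * itself is freed by the finaliser after the call, but the mp_ints
 * it points at stay owned by the values tree.
 */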
/*
 * Implement OpenSSH 9.x's NTRU Prime / Curve25519 kex.
 *
 * This consists of DJB's 'Streamlined NTRU Prime' quantum-resistant
 * cryptosystem, currently in round 3 of the NIST post-quantum key
 * exchange competition; it's run in parallel with ordinary
 * Curve25519, and generates a shared secret combining the output of
 * both systems. (Hence, even if you don't trust this newfangled NTRU
 * Prime thing at all, it's at least no _less_ secure than the kex
 * you were using already.)
 *
 * As the OpenSSH developers point out, key exchange is the most
 * urgent thing to make quantum-resistant, even before working
 * quantum computers big enough to break crypto become available,
 * because a break of the kex algorithm can be applied retroactively
 * to recordings of your past sessions. By contrast, authentication
 * is a real-time protocol, and can only be broken by a quantum
 * computer if there's one available to attack you _already_.
 *
 * I've implemented both sides of the mechanism, so that PuTTY and
 * Uppity both support it. In my initial testing, the two sides can
 * both interoperate with the appropriate half of OpenSSH, and also
 * (of course, but it would be embarrassing to mess it up) with each
 * other.
 */
typedef struct int16_list {
    size_t n;
    uint16_t *integers;
} int16_list;
static void finaliser_int16_list_free(strbuf *out, void *vlist)
{
    int16_list *list = (int16_list *)vlist;
    sfree(list->integers);
    sfree(list);
}

static int16_list *make_int16_list(size_t n)
{
    int16_list *list = snew(int16_list);
    list->n = n;
    list->integers = snewn(n, uint16_t);
    add_finaliser(finaliser_int16_list_free, list);
    return list;
}

static int16_list *get_int16_list(BinarySource *in)
{
    size_t n = get_uint(in);
    int16_list *list = make_int16_list(n);
    for (size_t i = 0; i < n; i++)
        list->integers[i] = get_uint(in);
    return list;
}

static void return_int16_list(strbuf *out, int16_list *list)
{
    for (size_t i = 0; i < list->n; i++) {
        if (i > 0)
            put_byte(out, ',');
        put_fmt(out, "%d", (int)(int16_t)list->integers[i]);
    }
    put_byte(out, '\n');
}
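/*
 * Note the (int16_t) cast on output: the storage is uint16_t, but
 * the protocol prints each element as a signed 16-bit value, so e.g.
 * an input list "3 1 2 65535" comes back out as "1,2,-1". That suits
 * the NTRU Prime code this type was introduced for, whose small ring
 * coefficients are naturally signed.
 */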
static void finaliser_return_uint(strbuf *out, void *ctx)
{
    unsigned *uval = (unsigned *)ctx;
    put_fmt(out, "%u\n", *uval);
    sfree(uval);
}

static unsigned *get_out_uint(BinarySource *in)
{
    unsigned *uval = snew(unsigned);
    add_finaliser(finaliser_return_uint, uval);
    return uval;
}
static strbuf **get_out_val_string(BinarySource *in)
{
    Value *val = value_new(VT_string);
    val->vu_string = NULL;
    add_finaliser(finaliser_return_value, val);
    return &val->vu_string;
}
static BinarySink *get_out_val_string_binarysink(BinarySource *in)
{
    strbuf *sb = strbuf_new();
    *get_out_val_string(in) = sb;
    return BinarySink_UPCAST(sb);
}
static void return_val_string_asciz_const(strbuf *out, const char *s);
static void return_val_string_asciz(strbuf *out, char *s);

static void finaliser_return_opt_string_asciz(strbuf *out, void *ctx)
{
    char **valp = (char **)ctx;
    char *val = *valp;
    sfree(valp);
    if (!val)
        put_fmt(out, "NULL\n");
    else
        return_val_string_asciz(out, val);
}

static char **get_out_opt_val_string_asciz(BinarySource *in)
{
    char **valp = snew(char *);
    *valp = NULL;
    add_finaliser(finaliser_return_opt_string_asciz, valp);
    return valp;
}

static void finaliser_return_opt_string_asciz_const(strbuf *out, void *ctx)
{
    const char **valp = (const char **)ctx;
    const char *val = *valp;
    sfree(valp);
    if (!val)
        put_fmt(out, "NULL\n");
    else
        return_val_string_asciz_const(out, val);
}

static const char **get_out_opt_val_string_asciz_const(BinarySource *in)
{
    const char **valp = snew(const char *);
    *valp = NULL;
    add_finaliser(finaliser_return_opt_string_asciz_const, valp);
    return valp;
}
static BinarySource *get_val_string_binarysource(BinarySource *in)
{
    strbuf *sb = get_val_string(in);
    BinarySource *src = snew(BinarySource);
    BinarySource_BARE_INIT(src, sb->u, sb->len);
    add_finaliser(finaliser_sfree, src);
    return src;
}
#define GET_CONSUMED_FN(type)                                           \
    typedef TD_val_##type TD_consumed_val_##type;                       \
    static TD_val_##type get_consumed_val_##type(BinarySource *in)      \
    {                                                                   \
        Value *val = get_value_##type(in);                              \
        TD_val_##type toret = val->vu_##type;                           \
        del234(values, val);                                            \
        sfree(val);                                                     \
        return toret;                                                   \
    }
GET_CONSUMED_FN(hash)
GET_CONSUMED_FN(pcs)
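/*
 * 'Consumed' arguments model functions that take ownership of an
 * input: get_consumed_val_hash() and get_consumed_val_pcs() unlink
 * the value from the values tree and free the Value wrapper before
 * the call, so the id stops being valid and the caller never needs
 * to 'free' it. The natural candidates (an assumption here, based on
 * the shape of the API rather than stated in this file) are
 * operations like a finalising hash call, which destroys its hash
 * object as a side effect and so should be declared to take a
 * consumed_val_hash argument rather than a plain val_hash one.
 */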
static void return_int(strbuf *out, intmax_t u)
|
|
|
|
{
|
2021-11-19 10:23:32 +00:00
|
|
|
put_fmt(out, "%"PRIdMAX"\n", u);
|
New test system for mp_int and cryptography.
I've written a new standalone test program which incorporates all of
PuTTY's crypto code, including the mp_int and low-level elliptic curve
layers but also going all the way up to the implementations of the
MAC, hash, cipher, public key and kex abstractions.
The test program itself, 'testcrypt', speaks a simple line-oriented
protocol on standard I/O in which you write the name of a function
call followed by some inputs, and it gives you back a list of outputs
preceded by a line telling you how many there are. Dynamically
allocated objects are assigned string ids in the protocol, and there's
a 'free' function that tells testcrypt when it can dispose of one.
It's possible to speak that protocol by hand, but cumbersome. I've
also provided a Python module that wraps it, by running testcrypt as a
persistent subprocess and gatewaying all the function calls into
things that look reasonably natural to call from Python. The Python
module and testcrypt.c both read a carefully formatted header file
testcrypt.h which contains the name and signature of every exported
function, so it costs minimal effort to expose a given function
through this test API. In a few cases it's necessary to write a
wrapper in testcrypt.c that makes the function look more friendly, but
mostly you don't even need that. (Though that is one of the
motivations between a lot of API cleanups I've done recently!)
I considered doing Python integration in the more obvious way, by
linking parts of the PuTTY code directly into a native-code .so Python
module. I decided against it because this way is more flexible: I can
run the testcrypt program on its own, or compile it in a way that
Python wouldn't play nicely with (I bet compiling just that .so with
Leak Sanitiser wouldn't do what you wanted when Python loaded it!), or
attach a debugger to it. I can even recompile testcrypt for a
different CPU architecture (32- vs 64-bit, or even running it on a
different machine over ssh or under emulation) and still layer the
nice API on top of that via the local Python interpreter. All I need
is a bidirectional data channel.
2019-01-01 19:08:37 +00:00
|
|
|
}

static void return_uint(strbuf *out, uintmax_t u)
{
    put_fmt(out, "0x%"PRIXMAX"\n", u);
}

static void return_boolean(strbuf *out, bool b)
{
    put_fmt(out, "%s\n", b ? "true" : "false");
}
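
/*
 * Return a PockleStatus by its symbolic name. The default case
 * catches any out-of-range value; the POCKLE_STATUSES list macro
 * then generates one case per legal status, each printing its own
 * identifier via # stringification.
 */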
static void return_pocklestatus(strbuf *out, PockleStatus status)
{
    switch (status) {
      default:
        put_fmt(out, "POCKLE_BAD_STATUS_VALUE\n");
        break;

#define STATUS_CASE(id)                         \
      case id:                                  \
        put_fmt(out, "%s\n", #id);              \
        break;

        POCKLE_STATUSES(STATUS_CASE);

#undef STATUS_CASE

    }
}
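
/*
 * Encode a Miller-Rabin test result as one of three fixed strings,
 * distinguishing a pass that also identified a potential primitive
 * root.
 */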
static void return_mr_result(strbuf *out, struct mr_result result)
{
    if (!result.passed)
        put_fmt(out, "failed\n");
    else if (!result.potential_primitive_root)
        put_fmt(out, "passed\n");
    else
        put_fmt(out, "passed+ppr\n");
}

static void return_val_string_asciz_const(strbuf *out, const char *s)
{
    strbuf *sb = strbuf_new();
    put_data(sb, s, strlen(s));
    return_val_string(out, sb);
}

static void return_val_string_asciz(strbuf *out, char *s)
{
    return_val_string_asciz_const(out, s);
    sfree(s);
}
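
/*
 * Wrap a return_* marshalling function so that a null pointer is
 * rendered as the literal token "NULL" instead of being dereferenced
 * by the underlying function.
 */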
#define NULLABLE_RETURN_WRAPPER(type_name, c_type)                     \
    static void return_opt_##type_name(strbuf *out, c_type ptr)        \
    {                                                                  \
        if (!ptr)                                                      \
            put_fmt(out, "NULL\n");                                    \
        else                                                           \
            return_##type_name(out, ptr);                              \
    }

NULLABLE_RETURN_WRAPPER(val_string, strbuf *)
NULLABLE_RETURN_WRAPPER(val_string_asciz, char *)
NULLABLE_RETURN_WRAPPER(val_string_asciz_const, const char *)
NULLABLE_RETURN_WRAPPER(val_cipher, ssh_cipher *)
NULLABLE_RETURN_WRAPPER(val_mac, ssh2_mac *)
NULLABLE_RETURN_WRAPPER(val_hash, ssh_hash *)
NULLABLE_RETURN_WRAPPER(val_key, ssh_key *)
NULLABLE_RETURN_WRAPPER(val_mpint, mp_int *)
NULLABLE_RETURN_WRAPPER(int16_list, int16_list *)

static void handle_hello(BinarySource *in, strbuf *out)
{
    put_fmt(out, "hello, world\n");
}

static void rsa_free(RSAKey *rsa)
{
    freersakey(rsa);
    sfree(rsa);
}
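
/*
 * Dispatch on a value's type tag and run the matching destructor,
 * with one switch case per type generated from the VALUE_TYPES list
 * macro.
 */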
static void free_value(Value *val)
{
    switch (val->type) {
#define VALTYPE_FREE(n,t,f) case VT_##n: { t v = val->vu_##n; (f); break; }
        VALUE_TYPES(VALTYPE_FREE)
#undef VALTYPE_FREE
    }
    sfree(val);
}

static void handle_free(BinarySource *in, strbuf *out)
{
    Value *val = get_value(in);
    del234(values, val);
    free_value(val);
}
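
/*
 * Construct a string value from the rest of the input line. A '%'
 * introduces a two-digit hex escape, and "%%" stands for a literal
 * '%', so arbitrary binary data can travel over the line-oriented
 * protocol. (Illustrative example, not an exact transcript: an input
 * word like "hello%20world" would decode to "hello world".)
 */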
static void handle_newstring(BinarySource *in, strbuf *out)
{
    strbuf *sb = strbuf_new();
    while (get_avail(in)) {
        char c = get_byte(in);
        if (c == '%') {
            char hex[3];
            hex[0] = get_byte(in);
            if (hex[0] != '%') {
                hex[1] = get_byte(in);
                hex[2] = '\0';
                c = strtoul(hex, NULL, 16);
            }
        }
        put_byte(sb, c);
    }
    return_val_string(out, sb);
}
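
/*
 * The inverse of handle_newstring: emit a string value's contents,
 * escaping as %XX every byte that is not printable non-space ASCII,
 * plus '%' itself.
 */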
static void handle_getstring(BinarySource *in, strbuf *out)
{
    strbuf *sb = get_val_string(in);
    for (size_t i = 0; i < sb->len; i++) {
        char c = sb->s[i];
        if (c > ' ' && c < 0x7F && c != '%') {
            put_byte(out, c);
        } else {
            put_fmt(out, "%%%02X", 0xFFU & (unsigned)c);
        }
    }
    put_byte(out, '\n');
}
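
/*
 * Construct an mp_int value by parsing a string literal taken from
 * the input line.
 */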
static void handle_mp_literal(BinarySource *in, strbuf *out)
{
    ptrlen pl = get_word(in);
    char *str = mkstr(pl);
    mp_int *mp = mp__from_string_literal(str);
    sfree(str);
    return_val_mpint(out, mp);
}

static void handle_mp_dump(BinarySource *in, strbuf *out)
{
    mp_int *mp = get_val_mpint(in);
    for (size_t i = mp_max_bytes(mp); i-- > 0 ;)
        put_fmt(out, "%02X", mp_get_byte(mp, i));
    put_byte(out, '\n');
}
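
/*
 * Check whether a string names a legal value of one of the exposed
 * enum types: re-including testcrypt-enum.h under these macro
 * definitions generates one translation attempt per enum type it
 * declares.
 */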
static void handle_checkenum(BinarySource *in, strbuf *out)
{
    ptrlen type = get_word(in);
    ptrlen value = get_word(in);
    bool ok = false;

#define BEGIN_ENUM_TYPE(name)                   \
    if (ptrlen_eq_string(type, #name))          \
        ok = enum_translate_##name(value, NULL);
#define ENUM_VALUE(name, value)
#define END_ENUM_TYPE(name)
#include "testcrypt-enum.h"
#undef BEGIN_ENUM_TYPE
#undef ENUM_VALUE
#undef END_ENUM_TYPE

    put_dataz(out, ok ? "ok\n" : "bad\n");
}
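
/*
 * Deterministic source of 'random' data for tests: either a queue of
 * caller-supplied bytes, or a PRNG seeded with caller-supplied data,
 * so that tests can control exactly what randomness the crypto code
 * observes.
 */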
static void random_queue(ptrlen pl)
{
    bufchain_add(&random_data_queue, pl.ptr, pl.len);
}

static size_t random_queue_len(void)
{
    return bufchain_size(&random_data_queue);
}

static void random_clear(void)
{
    if (test_prng) {
        prng_free(test_prng);
        test_prng = NULL;
    }

    bufchain_clear(&random_data_queue);
}

static void random_make_prng(const ssh_hashalg *hashalg, ptrlen seed)
{
    random_clear();

    test_prng = prng_new(hashalg);
    prng_seed_begin(test_prng);
    put_datapl(test_prng, seed);
    prng_seed_finish(test_prng);
}
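
/*
 * monty_identity and monty_modulus return pointers that (as far as
 * this harness is concerned) belong to the MontyContext, so we hand
 * back copies that the value system can free independently of the
 * context.
 */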
mp_int *monty_identity_wrapper(MontyContext *mc)
{
    return mp_copy(monty_identity(mc));
}

mp_int *monty_modulus_wrapper(MontyContext *mc)
{
    return mp_copy(monty_modulus(mc));
}
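
/*
 * Adapt the hash API's output-buffer interface into wrappers that
 * return a freshly allocated strbuf of exactly hlen bytes.
 */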
strbuf *ssh_hash_digest_wrapper(ssh_hash *h)
{
    strbuf *sb = strbuf_new();
    void *p = strbuf_append(sb, ssh_hash_alg(h)->hlen);
    ssh_hash_digest(h, p);
    return sb;
}

strbuf *ssh_hash_final_wrapper(ssh_hash *h)
{
    strbuf *sb = strbuf_new();
    void *p = strbuf_append(sb, ssh_hash_alg(h)->hlen);
    ssh_hash_final(h, p);
    return sb;
}

strbuf *shake_xof_read_wrapper(ShakeXOF *sx, TD_uint size)
{
    strbuf *sb = strbuf_new();
    void *p = strbuf_append(sb, size);
    shake_xof_read(sx, p, size);
    return sb;
}
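
/*
 * Cipher wrappers. The test protocol allows a caller to pass a
 * string of any length, so each wrapper checks the length against
 * the cipher's declared block or key size before touching the real
 * API, instead of overrunning a buffer.
 */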
void ssh_cipher_setiv_wrapper(ssh_cipher *c, ptrlen iv)
{
    if (iv.len != ssh_cipher_alg(c)->blksize)
        fatal_error("ssh_cipher_setiv: needs exactly %d bytes",
                    ssh_cipher_alg(c)->blksize);
    ssh_cipher_setiv(c, iv.ptr);
}

void ssh_cipher_setkey_wrapper(ssh_cipher *c, ptrlen key)
{
    if (key.len != ssh_cipher_alg(c)->padded_keybytes)
        fatal_error("ssh_cipher_setkey: needs exactly %d bytes",
                    ssh_cipher_alg(c)->padded_keybytes);
    ssh_cipher_setkey(c, key.ptr);
}

strbuf *ssh_cipher_encrypt_wrapper(ssh_cipher *c, ptrlen input)
{
    if (input.len % ssh_cipher_alg(c)->blksize)
        fatal_error("ssh_cipher_encrypt: needs a multiple of %d bytes",
                    ssh_cipher_alg(c)->blksize);
    strbuf *sb = strbuf_dup(input);
    ssh_cipher_encrypt(c, sb->u, sb->len);
    return sb;
}
strbuf *ssh_cipher_decrypt_wrapper(ssh_cipher *c, ptrlen input)
{
    if (input.len % ssh_cipher_alg(c)->blksize)
        fatal_error("ssh_cipher_decrypt: needs a multiple of %d bytes",
                    ssh_cipher_alg(c)->blksize);
    strbuf *sb = strbuf_dup(input);
    ssh_cipher_decrypt(c, sb->u, sb->len);
    return sb;
}

strbuf *ssh_cipher_encrypt_length_wrapper(ssh_cipher *c, ptrlen input,
                                          unsigned long seq)
{
    if (input.len != 4)
        fatal_error("ssh_cipher_encrypt_length: needs exactly 4 bytes");
    strbuf *sb = strbuf_dup(input);
    ssh_cipher_encrypt_length(c, sb->u, sb->len, seq);
    return sb;
}

strbuf *ssh_cipher_decrypt_length_wrapper(ssh_cipher *c, ptrlen input,
                                          unsigned long seq)
{
    if (input.len != 4)
        fatal_error("ssh_cipher_decrypt_length: needs exactly 4 bytes");
    strbuf *sb = strbuf_dup(input);
    ssh_cipher_decrypt_length(c, sb->u, sb->len, seq);
    return sb;
}

strbuf *ssh2_mac_genresult_wrapper(ssh2_mac *m)
{
    strbuf *sb = strbuf_new();
    void *u = strbuf_append(sb, ssh2_mac_alg(m)->len);
    ssh2_mac_genresult(m, u);
    return sb;
}

Certificate-specific ssh_key method suite.
Certificate keys don't work the same as normal keys, so the rest of
the code is going to have to pay attention to whether a key is a
certificate, and if so, treat it differently and do cert-specific
stuff to it. So here's a collection of methods for that purpose.
With one exception, these methods of ssh_key are not expected to be
implemented at all in non-certificate key types: they should only ever
be called once you already know you're dealing with a certificate. So
most of the new method pointers can be left out of the ssh_keyalg
initialisers.
The exception is the base_key method, which retrieves the base key of
a certificate - the underlying one with the certificate stripped off.
It's convenient for non-certificate keys to implement this too, and
just return a pointer to themselves. So I've added an implementation
in nullkey.c doing that. (The returned pointer doesn't transfer
ownership; you have to use the new ssh_key_clone() if you want to keep
the base key after freeing the certificate key.)
The methods _only_ implemented in certificates:
Query methods to return the public key of the CA (for looking up in a
list of trusted ones), and to return the key id string (which exists
to be written into log files).
Obviously, we need a check_cert() method which will verify the CA's
actual signature, not to mention checking all the other details like
the principal and the validity period.
And there's another fiddly method for dealing with the RSA upgrade
system, called 'related_alg'. This is quite like alternate_ssh_id, in
that its job is to upgrade one key algorithm to a related one with
more modern RSA signing flags (or any other similar thing that might
later reuse the same mechanism). But where alternate_ssh_id took the
actual signing flags as an argument, this takes a pointer to the
upgraded base algorithm. So it answers the question "What is to this
key algorithm as you are to its base?" - if you call it on
opensshcert_ssh_rsa and give it ssh_rsa_sha512, it'll give you back
opensshcert_ssh_rsa_sha512.
(It's awkward to have to have another of these fiddly methods, and in
the longer term I'd like to try to clean up their proliferation a bit.
But I even more dislike the alternative of just going through
all_keyalgs looking for a cert algorithm with, say, ssh_rsa_sha512 as
the base: that approach would work fine now but it would be a lurking
time bomb for when all the -cert-v02@ methods appear one day. This
way, each certificate type can upgrade itself to the appropriately
related version. And at least related_alg is only needed if you _are_
a certificate key type - it's not adding yet another piece of
null-method boilerplate to the rest.)
2022-04-20 12:06:08 +00:00
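As a concrete illustration of the related_alg question above (hedged:
the ssh_keyalg_related_alg dispatch name is assumed to follow the
usual method-macro convention, and is not taken from this file):

static const ssh_keyalg *upgrade_cert_alg_example(void)
{
    /* "What is to opensshcert_ssh_rsa as ssh_rsa_sha512 is to its
     * base?" - the expected answer is opensshcert_ssh_rsa_sha512 */
    return ssh_keyalg_related_alg(&opensshcert_ssh_rsa, &ssh_rsa_sha512);
}
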
ssh_key *ssh_key_base_key_wrapper(ssh_key *key)
{
    /* To avoid having to explain the borrowed reference to Python,
     * just clone the key unconditionally */
    return ssh_key_clone(ssh_key_base_key(key));
}

void ssh_key_ca_public_blob_wrapper(ssh_key *key, BinarySink *out)
{
    /* Wrap to avoid null-pointer dereference */
    if (!key->vt->is_certificate)
        fatal_error("ssh_key_ca_public_blob: needs a certificate");
    ssh_key_ca_public_blob(key, out);
}

void ssh_key_cert_id_string_wrapper(ssh_key *key, BinarySink *out)
{
    /* Wrap to avoid null-pointer dereference */
    if (!key->vt->is_certificate)
        fatal_error("ssh_key_cert_id_string: needs a certificate");
    ssh_key_cert_id_string(key, out);
}

static bool ssh_key_check_cert_wrapper(
    ssh_key *key, bool host, ptrlen principal, uint64_t time, ptrlen optstr,
    BinarySink *error)
{
    /* Wrap to avoid null-pointer dereference */
    if (!key->vt->is_certificate)
        fatal_error("ssh_key_check_cert: needs a certificate");

    ca_options opts;
    opts.permit_rsa_sha1 = true;
    opts.permit_rsa_sha256 = true;
    opts.permit_rsa_sha512 = true;

    while (optstr.len) {
        ptrlen word = ptrlen_get_word(&optstr, ",");
        ptrlen key = word, value = PTRLEN_LITERAL("");
        const char *eq = memchr(word.ptr, '=', word.len);
        if (eq) {
            key.len = eq - (const char *)word.ptr;
            value.ptr = eq + 1;
            value.len = word.len - key.len - 1;
        }

        if (ptrlen_eq_string(key, "permit_rsa_sha1"))
            opts.permit_rsa_sha1 = ptrlen_eq_string(value, "true");
        if (ptrlen_eq_string(key, "permit_rsa_sha256"))
            opts.permit_rsa_sha256 = ptrlen_eq_string(value, "true");
        if (ptrlen_eq_string(key, "permit_rsa_sha512"))
            opts.permit_rsa_sha512 = ptrlen_eq_string(value, "true");
    }

    return ssh_key_check_cert(key, host, principal, time, &opts, error);
}

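For reference, the option string parsed by that wrapper is a
comma-separated list of key=value settings, and any flag not mentioned
keeps its default of true. A hypothetical call, with illustrative
principal, timestamp and options:

static bool check_cert_example(ssh_key *certkey)
{
    strbuf *error = strbuf_new();
    bool ok = ssh_key_check_cert_wrapper(
        certkey, false, PTRLEN_LITERAL("user@example.com"), 1700000000,
        PTRLEN_LITERAL("permit_rsa_sha1=false,permit_rsa_sha512=true"),
        BinarySink_UPCAST(error));
    strbuf_free(error);
    return ok;
}
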
bool dh_validate_f_wrapper(dh_ctx *dh, mp_int *f)
{
    /* dh_validate_f returns an error message string on failure, or
     * NULL on success; fold that into a boolean for the test API */
    return dh_validate_f(dh, f) == NULL;
}

void ssh_hash_update(ssh_hash *h, ptrlen pl)
{
    put_datapl(h, pl);
}

void ssh2_mac_update(ssh2_mac *m, ptrlen pl)
{
    put_datapl(m, pl);
}

static RSAKey *rsa_new(void)
{
    RSAKey *rsa = snew(RSAKey);
    memset(rsa, 0, sizeof(RSAKey));
    return rsa;
}

strbuf *ecdh_key_getkey_wrapper(ecdh_key *ek, ptrlen remoteKey)
{
    /* Fold the boolean return value in C into the string return value
     * for this purpose, by returning NULL on failure */
    strbuf *sb = strbuf_new();
    if (!ecdh_key_getkey(ek, remoteKey, BinarySink_UPCAST(sb))) {
        strbuf_free(sb);
        return NULL;
    }
    return sb;
}

Implement OpenSSH 9.x's NTRU Prime / Curve25519 kex.
This consists of DJB's 'Streamlined NTRU Prime' quantum-resistant
cryptosystem, currently in round 3 of the NIST post-quantum key
exchange competition; it's run in parallel with ordinary Curve25519,
and generates a shared secret combining the output of both systems.
(Hence, even if you don't trust this newfangled NTRU Prime thing at
all, it's at least no _less_ secure than the kex you were using
already.)
As the OpenSSH developers point out, key exchange is the most urgent
thing to make quantum-resistant, even before working quantum computers
big enough to break crypto become available, because a break of the
kex algorithm can be applied retroactively to recordings of your past
sessions. By contrast, authentication is a real-time protocol, and can
only be broken by a quantum computer if there's one available to
attack you _already_.
I've implemented both sides of the mechanism, so that PuTTY and Uppity
both support it. In my initial testing, the two sides can both
interoperate with the appropriate half of OpenSSH, and also (of
course, but it would be embarrassing to mess it up) with each other.
2022-04-15 16:19:47 +00:00
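The 'shared secret combining the output of both systems' step can be
sketched like this (an assumed shape for the combination, not the real
kex code: the point is that both outputs feed one hash, so an attacker
must break both systems to recover the result):

static void hybrid_secret_sketch(ptrlen ntru_output, ptrlen ecdh_output,
                                 unsigned char hashed_secret[64])
{
    ssh_hash *h = ssh_hash_new(&ssh_sha512);
    put_datapl(h, ntru_output);       /* NTRU Prime KEM shared secret */
    put_datapl(h, ecdh_output);       /* Curve25519 shared secret */
    ssh_hash_final(h, hashed_secret); /* emits the digest and frees h */
}
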
static void int16_list_resize(int16_list *list, unsigned p)
{
    list->integers = sresize(list->integers, p, uint16_t);
    for (size_t i = list->n; i < p; i++)
        list->integers[i] = 0;
}

#if 0
/* Disabled draft of a converter from NTRU's uint16_t arrays to lists
 * of mp_ints; kept for reference but never compiled */
static struct mpint_list ntru_ring_to_list_and_free(uint16_t *out,
                                                    unsigned p)
{
    struct mpint_list mpl;
    mpl.n = p;
    mpl.integers = snewn(p, mp_int *);
    for (unsigned i = 0; i < p; i++)
        mpl.integers[i] = mp_from_integer((int16_t)out[i]);
    sfree(out);
    add_finaliser(finaliser_sfree, mpl.integers);
    return mpl;
}
#endif

int16_list *ntru_ring_multiply_wrapper(
    int16_list *a, int16_list *b, unsigned p, unsigned q)
{
    int16_list_resize(a, p);
    int16_list_resize(b, p);
    int16_list *out = make_int16_list(p);
    ntru_ring_multiply(out->integers, a->integers, b->integers, p, q);
    return out;
}

int16_list *ntru_ring_invert_wrapper(int16_list *in, unsigned p, unsigned q)
{
    int16_list_resize(in, p);
    int16_list *out = make_int16_list(p);
    unsigned success = ntru_ring_invert(out->integers, in->integers, p, q);
    if (!success)
        return NULL;
    return out;
}

int16_list *ntru_mod3_wrapper(int16_list *in, unsigned p, unsigned q)
{
    int16_list_resize(in, p);
    int16_list *out = make_int16_list(p);
    ntru_mod3(out->integers, in->integers, p, q);
    return out;
}

int16_list *ntru_round3_wrapper(int16_list *in, unsigned p, unsigned q)
{
    int16_list_resize(in, p);
    int16_list *out = make_int16_list(p);
    ntru_round3(out->integers, in->integers, p, q);
    return out;
}

int16_list *ntru_bias_wrapper(int16_list *in, unsigned bias,
                              unsigned p, unsigned q)
{
    int16_list_resize(in, p);
    int16_list *out = make_int16_list(p);
    ntru_bias(out->integers, in->integers, bias, p, q);
    return out;
}

int16_list *ntru_scale_wrapper(int16_list *in, unsigned scale,
                               unsigned p, unsigned q)
{
    int16_list_resize(in, p);
    int16_list *out = make_int16_list(p);
    ntru_scale(out->integers, in->integers, scale, p, q);
    return out;
}

NTRUEncodeSchedule *ntru_encode_schedule_wrapper(int16_list *in)
{
    return ntru_encode_schedule(in->integers, in->n);
}

void ntru_encode_wrapper(NTRUEncodeSchedule *sched, int16_list *rs,
                         BinarySink *bs)
{
    ntru_encode(sched, rs->integers, bs);
}

int16_list *ntru_decode_wrapper(NTRUEncodeSchedule *sched, ptrlen data)
{
    int16_list *out = make_int16_list(ntru_encode_schedule_nvals(sched));
    ntru_decode(sched, out->integers, data);
    return out;
}

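A hypothetical round-trip check linking the encode and decode wrappers
above (assuming values and moduli have the same length, and each value
is already reduced mod its corresponding modulus; cleanup of sched and
decoded is omitted for brevity):

static bool ntru_codec_roundtrip_sketch(int16_list *moduli,
                                        int16_list *values)
{
    NTRUEncodeSchedule *sched = ntru_encode_schedule_wrapper(moduli);
    strbuf *data = strbuf_new();
    ntru_encode_wrapper(sched, values, BinarySink_UPCAST(data));
    int16_list *decoded = ntru_decode_wrapper(sched,
                                              ptrlen_from_strbuf(data));
    bool ok = true;
    for (size_t i = 0; i < values->n; i++)
        if (decoded->integers[i] != values->integers[i])
            ok = false;
    strbuf_free(data);
    return ok;
}
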
int16_list *ntru_gen_short_wrapper(unsigned p, unsigned w)
{
    int16_list *out = make_int16_list(p);
    ntru_gen_short(out->integers, p, w);
    return out;
}

int16_list *ntru_pubkey_wrapper(NTRUKeyPair *keypair)
{
    unsigned p = ntru_keypair_p(keypair);
    int16_list *out = make_int16_list(p);
    memcpy(out->integers, ntru_pubkey(keypair), p*sizeof(uint16_t));
    return out;
}

int16_list *ntru_encrypt_wrapper(int16_list *plaintext, int16_list *pubkey,
                                 unsigned p, unsigned q)
{
    int16_list *out = make_int16_list(p);
    ntru_encrypt(out->integers, plaintext->integers, pubkey->integers, p, q);
    return out;
}

int16_list *ntru_decrypt_wrapper(int16_list *ciphertext, NTRUKeyPair *keypair)
{
    unsigned p = ntru_keypair_p(keypair);
    int16_list *out = make_int16_list(p);
    ntru_decrypt(out->integers, ciphertext->integers, keypair);
    return out;
}

New post-quantum kex: ML-KEM, and three hybrids of it.
As standardised by NIST in FIPS 203, this is a lattice-based
post-quantum KEM.
Very vaguely, the idea of it is that your public key is a matrix A and
vector t, and the private key is the knowledge of how to decompose t
into two vectors with all their coefficients small, one transformed by
A relative to the other. Encryption of a binary secret starts by
turning each bit into one of two maximally separated residues mod a
prime q, and then adding 'noise' based on the public key in the form
of small increments and decrements mod q, again with some of the noise
transformed by A relative to the rest. Decryption uses the knowledge
of t's decomposition to align the two sets of noise so that the
_large_ changes (which masked the secret from an eavesdropper) cancel
out, leaving only a collection of small changes to the original secret
vector. Then the vector of input bits can be recovered by assuming
that those accumulated small pieces of noise haven't concentrated in
any particular residue enough to push it more than half way to the
other of its possible starting values.
A weird feature of it is that decryption is not a true mathematical
inverse of encryption. The assumption that the noise doesn't get large
enough to flip any bit of the secret is only probabilistically valid,
not a hard guarantee. In other words, key agreement can fail, simply
by getting particularly unlucky with the distribution of your random
noise! However, the probability of a failure is very low - less than
2^-138 even for ML-KEM-512, and gets even smaller with the larger
variants.
An awkward feature for our purposes is that the matrix A, containing a
large number of residues mod the prime q=3329, is required to be
constructed by a process of rejection sampling, i.e. generating random
12-bit values and throwing away the out-of-range ones. That would be a
real pain for our side-channel testing system, which generally handles
rejection sampling badly (since it necessarily involves data-dependent
control flow and timing variation). Fortunately, the matrix and the
random seed it was made from are both public: the matrix seed is
transmitted as part of the public key, so it's not necessary to try to
hide it. Accordingly, I was able to get the implementation to pass
testsc by means of not varying the matrix seed between runs, which is
justified by the principle of testsc that you vary the _secrets_ to
ensure timing is independent of them - and the matrix seed isn't a
secret, so you're allowed to keep it the same.
The three hybrid algorithms, defined by the current Internet-Draft
draft-kampanakis-curdle-ssh-pq-ke, include one hybrid of ML-KEM-768
with Curve25519 in exactly the same way we were already hybridising
NTRU Prime with Curve25519, and two more hybrids of ML-KEM with ECDH
over a NIST curve. The former hybrid interoperates with the
implementation in OpenSSH 9.9; all three interoperate with the fork
'openssh-oqs' at github.com/open-quantum-safe/openssh, and also with
the Python library AsyncSSH.
2024-12-07 19:33:39 +00:00
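A hypothetical driver for the wrappers below, showing the shape of one
keygen-plus-encapsulation round trip. The params pointer is assumed to
be one of the standard ML-KEM parameter sets, and the fixed all-zero
seeds are for illustration only; real callers must supply fresh random
d, z and m.

static bool mlkem_roundtrip_sketch(const mlkem_params *params)
{
    unsigned char seed[32] = {0};    /* never use a fixed seed for real */
    ptrlen d = make_ptrlen(seed, 32);
    ptrlen z = make_ptrlen(seed, 32);
    ptrlen m = make_ptrlen(seed, 32);

    strbuf *ek = strbuf_new(), *dk = strbuf_new();
    mlkem_keygen_internal_wrapper(BinarySink_UPCAST(ek),
                                  BinarySink_UPCAST(dk), params, d, z);

    strbuf *c = strbuf_new(), *k = strbuf_new();
    bool ok = mlkem_encaps_internal_wrapper(
        BinarySink_UPCAST(c), BinarySink_UPCAST(k), params,
        ptrlen_from_strbuf(ek), m);

    strbuf_free(ek); strbuf_free(dk);
    strbuf_free(c); strbuf_free(k);
    return ok;
}
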
void mlkem_keygen_internal_wrapper(
    BinarySink *ek, BinarySink *dk, const mlkem_params *params,
    ptrlen d, ptrlen z)
{
    assert(d.len == 32 && "Invalid d length");
    assert(z.len == 32 && "Invalid z length");
    mlkem_keygen_internal(ek, dk, params, d.ptr, z.ptr);
}

void mlkem_keygen_rho_sigma_wrapper(
    BinarySink *ek, BinarySink *dk, const mlkem_params *params,
    ptrlen rho, ptrlen sigma, ptrlen z)
{
    assert(rho.len == 32 && "Invalid rho length");
    assert(sigma.len == 32 && "Invalid sigma length");
    assert(z.len == 32 && "Invalid z length");
    mlkem_keygen_rho_sigma(ek, dk, params, rho.ptr, sigma.ptr, z.ptr);
}

bool mlkem_encaps_internal_wrapper(BinarySink *ciphertext, BinarySink *kout,
                                   const mlkem_params *params, ptrlen ek,
                                   ptrlen m)
{
    assert(m.len == 32 && "Invalid m length");
    return mlkem_encaps_internal(ciphertext, kout, params, ek, m.ptr);
}

strbuf *rsa_ssh1_encrypt_wrapper(ptrlen input, RSAKey *key)
|
|
|
|
{
|
|
|
|
/* Fold the boolean return value in C into the string return value
|
2020-01-09 19:16:58 +00:00
|
|
|
* for this purpose, by returning NULL on failure */
|
    strbuf *sb = strbuf_new();
    put_datapl(sb, input);
    put_padding(sb, key->bytes - input.len, 0);
    if (!rsa_ssh1_encrypt(sb->u, input.len, key)) {
        strbuf_free(sb);
        return NULL;
    }
    return sb;
}

strbuf *rsa_ssh1_decrypt_pkcs1_wrapper(mp_int *input, RSAKey *key)
{
    /* Again, return "" on failure */
    strbuf *sb = strbuf_new();
    if (!rsa_ssh1_decrypt_pkcs1(input, key, sb))
        strbuf_clear(sb);
    return sb;
}

strbuf *des_encrypt_xdmauth_wrapper(ptrlen key, ptrlen data)
{
    if (key.len != 7)
        fatal_error("des_encrypt_xdmauth: key must be 7 bytes long");
    if (data.len % 8 != 0)
        fatal_error("des_encrypt_xdmauth: data must be a multiple of 8 bytes");
    strbuf *sb = strbuf_dup(data);
    des_encrypt_xdmauth(key.ptr, sb->u, sb->len);
    return sb;
}
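
A minimal usage sketch (the key and message bytes are made up purely
for illustration): the wrapper duplicates its input into a strbuf and
encrypts that copy in place.

    static void des_xdmauth_sketch(void)
    {
        /* hypothetical 7-byte key and 16-byte (two-block) message */
        strbuf *ct = des_encrypt_xdmauth_wrapper(
            PTRLEN_LITERAL("\x01\x23\x45\x67\x89\xab\xcd"),
            PTRLEN_LITERAL("sixteen byte msg"));
        strbuf_free(ct);
    }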

strbuf *des_decrypt_xdmauth_wrapper(ptrlen key, ptrlen data)
{
    if (key.len != 7)
        fatal_error("des_decrypt_xdmauth: key must be 7 bytes long");
    if (data.len % 8 != 0)
        fatal_error("des_decrypt_xdmauth: data must be a multiple of 8 bytes");
    strbuf *sb = strbuf_dup(data);
    des_decrypt_xdmauth(key.ptr, sb->u, sb->len);
    return sb;
}

strbuf *des3_encrypt_pubkey_wrapper(ptrlen key, ptrlen data)
{
    if (key.len != 16)
        fatal_error("des3_encrypt_pubkey: key must be 16 bytes long");
    if (data.len % 8 != 0)
        fatal_error("des3_encrypt_pubkey: data must be a multiple of 8 bytes");
    strbuf *sb = strbuf_dup(data);
    des3_encrypt_pubkey(key.ptr, sb->u, sb->len);
    return sb;
}

strbuf *des3_decrypt_pubkey_wrapper(ptrlen key, ptrlen data)
{
    if (key.len != 16)
        fatal_error("des3_decrypt_pubkey: key must be 16 bytes long");
    if (data.len % 8 != 0)
        fatal_error("des3_decrypt_pubkey: data must be a multiple of 8 bytes");
    strbuf *sb = strbuf_dup(data);
    des3_decrypt_pubkey(key.ptr, sb->u, sb->len);
    return sb;
}

strbuf *des3_encrypt_pubkey_ossh_wrapper(ptrlen key, ptrlen iv, ptrlen data)
{
    if (key.len != 24)
        fatal_error("des3_encrypt_pubkey_ossh: key must be 24 bytes long");
    if (iv.len != 8)
        fatal_error("des3_encrypt_pubkey_ossh: iv must be 8 bytes long");
    if (data.len % 8 != 0)
        fatal_error("des3_encrypt_pubkey_ossh: data must be a multiple of 8 bytes");
    strbuf *sb = strbuf_dup(data);
    des3_encrypt_pubkey_ossh(key.ptr, iv.ptr, sb->u, sb->len);
    return sb;
}

strbuf *des3_decrypt_pubkey_ossh_wrapper(ptrlen key, ptrlen iv, ptrlen data)
{
    if (key.len != 24)
        fatal_error("des3_decrypt_pubkey_ossh: key must be 24 bytes long");
    if (iv.len != 8)
        fatal_error("des3_decrypt_pubkey_ossh: iv must be 8 bytes long");
    if (data.len % 8 != 0)
        fatal_error("des3_decrypt_pubkey_ossh: data must be a multiple of 8 bytes");
    strbuf *sb = strbuf_dup(data);
    des3_decrypt_pubkey_ossh(key.ptr, iv.ptr, sb->u, sb->len);
    return sb;
}

strbuf *aes256_encrypt_pubkey_wrapper(ptrlen key, ptrlen iv, ptrlen data)
{
    if (key.len != 32)
        fatal_error("aes256_encrypt_pubkey: key must be 32 bytes long");
    if (iv.len != 16)
        fatal_error("aes256_encrypt_pubkey: iv must be 16 bytes long");
    if (data.len % 16 != 0)
        fatal_error("aes256_encrypt_pubkey: data must be a multiple of 16 bytes");
    strbuf *sb = strbuf_dup(data);
    aes256_encrypt_pubkey(key.ptr, iv.ptr, sb->u, sb->len);
    return sb;
}

strbuf *aes256_decrypt_pubkey_wrapper(ptrlen key, ptrlen iv, ptrlen data)
{
    if (key.len != 32)
        fatal_error("aes256_decrypt_pubkey: key must be 32 bytes long");
    if (iv.len != 16)
        fatal_error("aes256_decrypt_pubkey: iv must be 16 bytes long");
    if (data.len % 16 != 0)
        fatal_error("aes256_decrypt_pubkey: data must be a multiple of 16 bytes");
    strbuf *sb = strbuf_dup(data);
    aes256_decrypt_pubkey(key.ptr, iv.ptr, sb->u, sb->len);
    return sb;
}

Replace PuTTY's PRNG with a Fortuna-like system.
This tears out the entire previous random-pool system in sshrand.c. In
its place is a system pretty close to Ferguson and Schneier's
'Fortuna' generator, with the main difference being that I use SHA-256
instead of AES for the generation side of the system (rationale given
in comment).
The PRNG implementation lives in sshprng.c, and defines a self-
contained data type with no state stored outside the object, so you
can instantiate however many of them you like. The old sshrand.c still
exists, but in place of the previous random pool system, it's just
become a client of sshprng.c, whose job is to hold a single global
instance of the PRNG type, and manage its reference count, save file,
noise-collection timers and similar administrative business.
Advantages of this change include:
- Fortuna is designed with a more varied threat model in mind than my
old home-grown random pool. For example, after any request for
random numbers, it automatically re-seeds itself, so that if the
state of the PRNG should be leaked, it won't give enough
information to find out what past outputs _were_.
- The PRNG type can be instantiated with any hash function; the
instance used by the main tools is based on SHA-256, an improvement
on the old pool's use of SHA-1.
- The new PRNG only uses the completely standard interface to the
hash function API, instead of having to have privileged access to
the internal SHA-1 block transform function. This will make it
easier to revamp the hash code in general, and also it means that
hardware-accelerated versions of SHA-256 will automatically be used
for the PRNG as well as for everything else.
- The new PRNG can be _tested_! Because it has an actual (if not
quite explicit) specification for exactly what the output numbers
_ought_ to be derived from the hashes of, I can (and have) put
tests in cryptsuite that ensure the output really is being derived
in the way I think it is. The old pool could have been returning
any old nonsense and it would have been very hard to tell for sure.
2019-01-22 22:42:41 +00:00

strbuf *prng_read_wrapper(prng *pr, size_t size)
{
    strbuf *sb = strbuf_new();
    prng_read(pr, strbuf_append(sb, size), size);
    return sb;
}

void prng_seed_update(prng *pr, ptrlen data)
{
    put_datapl(pr, data);
}
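
A minimal sketch of driving the prng type directly, assuming the usual
PuTTY declarations (prng_new and friends, and the ssh_sha256 hash
vtable); the seed material here is illustrative only:

    static void prng_usage_sketch(void)
    {
        prng *pr = prng_new(&ssh_sha256);   /* instantiate with SHA-256 */
        prng_seed_begin(pr);
        put_datapl(pr, PTRLEN_LITERAL("example seed material"));
        prng_seed_finish(pr);
        unsigned char buf[64];
        prng_read(pr, buf, sizeof(buf));    /* generator re-seeds after this */
        prng_free(pr);
    }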

bool crcda_detect(ptrlen packet, ptrlen iv)
{
    if (iv.len != 0 && iv.len != 8)
        fatal_error("crcda_detect: iv must be empty or 8 bytes long");
    if (packet.len % 8 != 0)
        fatal_error("crcda_detect: packet must be a multiple of 8 bytes");
    struct crcda_ctx *ctx = crcda_make_context();
    bool toret = detect_attack(ctx, packet.ptr, packet.len,
                               iv.len ? iv.ptr : NULL);
    crcda_free_context(ctx);
    return toret;
}

ssh_key *ppk_load_s_wrapper(BinarySource *src, char **comment,
                            const char *passphrase, const char **errorstr)
{
    ssh2_userkey *uk = ppk_load_s(src, passphrase, errorstr);
    if (uk == SSH2_WRONG_PASSPHRASE) {
        /* Fudge this special return value */
        *errorstr = "SSH2_WRONG_PASSPHRASE";
        return NULL;
    }
    if (uk == NULL)
        return NULL;
    ssh_key *toret = uk->key;
    *comment = uk->comment;
    sfree(uk);
    return toret;
}

int rsa1_load_s_wrapper(BinarySource *src, RSAKey *rsa, char **comment,
                        const char *passphrase, const char **errorstr)
{
    int toret = rsa1_load_s(src, rsa, passphrase, errorstr);
    *comment = rsa->comment;
    rsa->comment = NULL;
    return toret;
}

Introduce PPK file format version 3.
This removes both uses of SHA-1 in the file format: it was used as the
MAC protecting the key file against tampering, and also used in the
key derivation step that converted the user's passphrase to cipher and
MAC keys.
The MAC is simply upgraded from HMAC-SHA-1 to HMAC-SHA-256; it is
otherwise unchanged in how it's applied (in particular, to what data).
The key derivation is totally reworked, to be based on Argon2, which
I've just added to the code base. This should make stolen encrypted
key files more resistant to brute-force attack.
Argon2 has assorted configurable parameters for memory and CPU usage;
the new key format includes all those parameters. So there's no reason
we can't have them under user control, if a user wants to be
particularly vigorous or particularly lightweight with their own key
files. They could even switch to one of the other flavours of Argon2,
if they thought side channels were an especially large or small risk
in their particular environment. In this commit I haven't added any UI
for controlling that kind of thing, but the PPK loading function is
all set up to cope, so that can all be added in a future commit
without having to change the file format.
While I'm at it, I've also switched the CBC encryption to using a
random IV (or rather, one derived from the passphrase along with the
cipher and MAC keys). That's more like normal SSH-2 practice.
2021-02-20 10:17:45 +00:00

strbuf *ppk_save_sb_wrapper(
    ssh_key *key, const char *comment, const char *passphrase,
    unsigned fmt_version, Argon2Flavour flavour,
    uint32_t mem, uint32_t passes, uint32_t parallel)
{
    /*
     * For repeatable testing purposes, we never want a timing-dependent
     * choice of password hashing parameters, so this is easy.
     */
    ppk_save_parameters save_params;
    memset(&save_params, 0, sizeof(save_params));
    save_params.fmt_version = fmt_version;
    save_params.argon2_flavour = flavour;
    save_params.argon2_mem = mem;
    save_params.argon2_passes_auto = false;
    save_params.argon2_passes = passes;
    save_params.argon2_parallelism = parallel;

    ssh2_userkey uk;
    uk.key = key;
    uk.comment = dupstr(comment);
    strbuf *toret = ppk_save_sb(&uk, passphrase, &save_params);
    sfree(uk.comment);
    return toret;
}
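
A call sketch (the parameter values are made up; given some ssh_key
*key, this asks for format version 3 with explicit Argon2id settings,
matching the argument order declared above):

    strbuf *ppk = ppk_save_sb_wrapper(key, "test comment", "passphrase",
                                      3, Argon2id, 8192, 13, 1);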

strbuf *rsa1_save_sb_wrapper(RSAKey *key, const char *comment,
                             const char *passphrase)
{
    key->comment = dupstr(comment);
    strbuf *toret = rsa1_save_sb(key, passphrase);
    sfree(key->comment);
    key->comment = NULL;
    return toret;
}

#define return_void(out, expression) (expression)

static ProgressReceiver null_progress = { .vt = &null_progress_vt };

mp_int *primegen_generate_wrapper(
    PrimeGenerationContext *ctx, PrimeCandidateSource *pcs)
{
    return primegen_generate(ctx, pcs, &null_progress);
}

RSA generation: option to generate strong primes.
A 'strong' prime, as defined by the Handbook of Applied Cryptography,
is a prime p such that each of p-1 and p+1 has a large prime factor,
and that the large factor q of p-1 is such that q-1 in turn _also_ has
a large prime factor.
HoAC says that making your RSA key using primes of this form defeats
some factoring algorithms - but there are other faster algorithms to
which it makes no difference. So this is probably not a useful
precaution in practice. However, it has been recommended in the past
by some official standards, and it's easy to implement given the new
general facility in PrimeCandidateSource that lets you ask for your
prime to satisfy an arbitrary modular congruence. (And HoAC also says
there's no particular reason _not_ to use strong primes.) So I provide
it as an option, just in case anyone wants to select it.
The change to the key generation algorithm is entirely in sshrsag.c,
and is neatly independent of the prime-generation system in use. If
you're using Maurer provable prime generation, then the known factor q
of p-1 can be used to help certify p, and the one for q-1 to help with
q in turn; if you switch to probabilistic prime generation then you
still get an RSA key with the right structure, except that every time
the definition says 'prime factor' you just append '(probably)'.
(The probabilistic version of this procedure is described as 'Gordon's
algorithm' in HoAC section 4.4.2.)
2020-03-02 06:52:09 +00:00

RSAKey *rsa1_generate(int bits, bool strong, PrimeGenerationContext *pgc)
{
    RSAKey *rsakey = snew(RSAKey);
    rsa_generate(rsakey, bits, strong, pgc, &null_progress);
    rsakey->comment = NULL;
    return rsakey;
}
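
A sketch of exercising this from C, assuming primegen_new_context and
the primegen_probabilistic policy from PuTTY's keygen headers (the bit
count is illustrative):

    PrimeGenerationContext *pgc =
        primegen_new_context(&primegen_probabilistic);
    RSAKey *rk = rsa1_generate(2048, true, pgc);   /* strong primes */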

ssh_key *rsa_generate_wrapper(int bits, bool strong,
                              PrimeGenerationContext *pgc)
{
    return &rsa1_generate(bits, strong, pgc)->sshk;
}

ssh_key *dsa_generate_wrapper(int bits, PrimeGenerationContext *pgc)
{
    struct dsa_key *dsakey = snew(struct dsa_key);
    dsa_generate(dsakey, bits, pgc, &null_progress);
    return &dsakey->sshk;
}

ssh_key *ecdsa_generate_wrapper(int bits)
{
    struct ecdsa_key *ek = snew(struct ecdsa_key);
    if (!ecdsa_generate(ek, bits)) {
        sfree(ek);
        return NULL;
    }
    return &ek->sshk;
}

ssh_key *eddsa_generate_wrapper(int bits)
{
    struct eddsa_key *ek = snew(struct eddsa_key);
    if (!eddsa_generate(ek, bits)) {
        sfree(ek);
        return NULL;
    }
    return &ek->sshk;
}

cmdgen: add a --dump option.
Also spelled '-O text', this takes a public or private key as input,
and produces on standard output a dump of all the actual numbers
involved in the key: the exponent and modulus for RSA, the p,q,g,y
parameters for DSA, the affine x and y coordinates of the public
elliptic curve point for ECC keys, and all the extra bits and pieces
in the private keys too.
Partly I expect this to be useful to me for debugging: I've had to
paste key files a few too many times through base64 decoders and hex
dump tools, then manually decode SSH marshalling and paste the result
into the Python REPL to get an integer object. Now I should be able to
get _straight_ to text I can paste into Python.
But also, it's a way that other applications can use the key
generator: if you need to generate, say, an RSA key in some format I
don't support (I've recently heard of an XML-based one, for example),
then you can run 'puttygen -t rsa --dump' and have it print the
elements of a freshly generated keypair on standard output, and then
all you have to do is understand the output format.
2020-02-17 19:53:19 +00:00

size_t key_components_count(key_components *kc) { return kc->ncomponents; }

const char *key_components_nth_name(key_components *kc, size_t n)
{
    return (n >= kc->ncomponents ? NULL :
            kc->components[n].name);
}

strbuf *key_components_nth_str(key_components *kc, size_t n)
{
    if (n >= kc->ncomponents)
        return NULL;
    if (kc->components[n].type != KCT_TEXT &&
        kc->components[n].type != KCT_BINARY)
        return NULL;
    return strbuf_dup(ptrlen_from_strbuf(kc->components[n].str));
}

mp_int *key_components_nth_mp(key_components *kc, size_t n)
{
    return (n >= kc->ncomponents ? NULL :
            kc->components[n].type != KCT_MPINT ? NULL :
            mp_copy(kc->components[n].mp));
}
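
Taken together, these accessors let a client walk a key's component
list. A sketch, assuming a key_components *kc already obtained from a
key's components method:

    for (size_t i = 0; i < key_components_count(kc); i++) {
        const char *name = key_components_nth_name(kc, i);
        mp_int *mp = key_components_nth_mp(kc, i);
        if (mp) {
            /* numeric component, e.g. an RSA modulus; caller frees */
            mp_free(mp);
        }
    }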

PockleStatus pockle_add_prime_wrapper(Pockle *pockle, mp_int *p,
                                      struct mpint_list mpl, mp_int *witness)
{
    return pockle_add_prime(pockle, p, mpl.integers, mpl.n, witness);
}
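
A toy certification, sketched under the assumption of the usual Pockle
API (pockle_new, pockle_add_small_prime, pockle_free) and
mp_from_integer: certify 7 by exhibiting the factor 3 of 7-1 = 6 and
witness 3, having first certified 3 itself.

    Pockle *pockle = pockle_new();
    mp_int *three = mp_from_integer(3), *seven = mp_from_integer(7);
    pockle_add_small_prime(pockle, three);
    mp_int *factors[] = { three };
    PockleStatus st = pockle_add_prime(pockle, seven, factors, 1, three);
    /* expect st == POCKLE_OK */
    mp_free(three); mp_free(seven);
    pockle_free(pockle);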

strbuf *argon2_wrapper(Argon2Flavour flavour, uint32_t mem, uint32_t passes,
                       uint32_t parallel, uint32_t taglen,
                       ptrlen P, ptrlen S, ptrlen K, ptrlen X)
{
    strbuf *out = strbuf_new();
    argon2(flavour, mem, passes, parallel, taglen, P, S, K, X, out);
    return out;
}
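
A call sketch with made-up inputs (P is the password, S the salt, K and
X the optional secret and associated data from the Argon2 spec):

    strbuf *tag = argon2_wrapper(Argon2id, 8192, 4, 1, 32,
                                 PTRLEN_LITERAL("password"),
                                 PTRLEN_LITERAL("saltsaltsaltsalt"),
                                 PTRLEN_LITERAL(""), PTRLEN_LITERAL(""));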

strbuf *openssh_bcrypt_wrapper(ptrlen passphrase, ptrlen salt,
                               unsigned rounds, unsigned outbytes)
{
    strbuf *out = strbuf_new();
    openssh_bcrypt(passphrase, salt, rounds,
                   strbuf_append(out, outbytes), outbytes);
    return out;
}

Break up crypto modules containing HW acceleration.
This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those
source files previously contained multiple implementations of the
algorithm, enabled or disabled by ifdefs detecting whether they would
work on a given compiler. And in order to get advanced machine
instructions like AES-NI or NEON crypto into the output file when the
compile flags hadn't enabled them, we had to do nasty stuff with
compiler-specific pragmas or attributes.
Now we can do the detection at cmake time, and enable advanced
instructions in the more sensible way, by compile-time flags. So I've
broken up each of these modules into lots of sub-pieces: a file called
(e.g.) 'foo-common.c' containing common definitions across all
implementations (such as round constants), one called 'foo-select.c'
containing the top-level vtable(s), and a separate file for each
implementation exporting just the vtable(s) for that implementation.
One advantage of this is that it depends a lot less on compiler-
specific bodgery. My particular least favourite part of the previous
setup was the part where I had to _manually_ define some Arm ACLE
feature macros before including <arm_neon.h>, so that it would define
the intrinsics I wanted. Now I'm enabling interesting architecture
features in the normal way, on the compiler command line, there's no
need for that kind of trick: the right feature macros are already
defined and <arm_neon.h> does the right thing.
Another change in this reorganisation is that I've stopped assuming
there's just one hardware implementation per platform. Previously, the
accelerated vtables were called things like sha256_hw, and varied
between FOO-NI and NEON depending on platform; and the selection code
would simply ask 'is hw available? if so, use hw, else sw'. Now, each
HW acceleration strategy names its vtable its own way, and the
selection vtable has a whole list of possibilities to iterate over
looking for a supported one. So if someone feels like writing a second
accelerated implementation of something for a given platform - for
example, I've heard you can use plain NEON to speed up AES somewhat
even without the crypto extension - then it will now have somewhere to
drop in alongside the existing ones.
2021-04-19 05:42:12 +00:00
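
A sketch of the selection idiom described above (the list contents and
the availability check are illustrative; impl_available is a stand-in
for whatever per-implementation check the real -select.c files use):

    static ssh_hash *sha256_select_sketch(void)
    {
        /* accelerated entries, e.g. &ssh_sha256_ni or &ssh_sha256_neon,
         * would precede the software one under HAVE_* conditionals */
        static const ssh_hashalg *const real_algs[] = {
            &ssh_sha256_sw,    /* pure software; always usable */
            NULL,
        };
        for (size_t i = 0; real_algs[i]; i++) {
            if (impl_available(real_algs[i]))   /* hypothetical check */
                return ssh_hash_new(real_algs[i]);
        }
        return NULL;
    }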

Implement AES-GCM using the @openssh.com protocol IDs.
I only recently found out that OpenSSH defined their own protocol IDs
for AES-GCM, defined to work the same as the standard ones except that
they fixed the semantics for how you select the linked cipher+MAC pair
during key exchange.
(RFC 5647 defines protocol ids for AES-GCM in both the cipher and MAC
namespaces, and requires that you MUST select both or neither - but
this contradicts the selection policy set out in the base SSH RFCs,
and there's no discussion of how you resolve a conflict between them!
OpenSSH's answer is to do it the same way ChaCha20-Poly1305 works,
because that will ensure the two suites don't fight.)
People do occasionally ask us for this linked cipher/MAC pair, and now
I know it's actually feasible, I've implemented it, including a pair
of vector implementations for x86 and Arm using their respective
architecture extensions for multiplying polynomials over GF(2).
Unlike ChaCha20-Poly1305, I've kept the cipher and MAC implementations
in separate objects, with an arm's-length link between them that the
MAC uses when it needs to encrypt single cipher blocks to use as the
inputs to the MAC algorithm. That enables the cipher and the MAC to be
independently selected from their hardware-accelerated versions, just
in case someone runs on a system that has polynomial multiplication
instructions but not AES acceleration, or vice versa.
There's a fourth implementation of the GCM MAC, which is a pure
software implementation of the same algorithm used in the vectorised
versions. It's too slow to use live, but I've kept it in the code for
future testing needs, and because it's a convenient place to dump my
design comments.
The vectorised implementations are fairly crude as far as optimisation
goes. I'm sure serious x86 _or_ Arm optimisation engineers would look
at them and laugh. But GCM is a fast MAC compared to HMAC-SHA-256
(indeed compared to HMAC-anything-at-all), so it should at least be
good enough to use. And we've got a working version with some tests
now, so if someone else wants to improve them, they can.
2022-08-16 17:36:58 +00:00

strbuf *get_implementations_commasep(ptrlen alg)
{
    strbuf *out = strbuf_new();
    put_datapl(out, alg);

|
|
|
if (ptrlen_startswith(alg, PTRLEN_LITERAL("aesgcm"), NULL)) {
|
|
|
|
put_fmt(out, ",%.*s_sw", PTRLEN_PRINTF(alg));
|
|
|
|
put_fmt(out, ",%.*s_ref_poly", PTRLEN_PRINTF(alg));
|
|
|
|
#if HAVE_CLMUL
|
|
|
|
put_fmt(out, ",%.*s_clmul", PTRLEN_PRINTF(alg));
|
|
|
|
#endif
|
|
|
|
#if HAVE_NEON_PMULL
|
|
|
|
put_fmt(out, ",%.*s_neon", PTRLEN_PRINTF(alg));
|
|
|
|
#endif
|
|
|
|
} else if (ptrlen_startswith(alg, PTRLEN_LITERAL("aes"), NULL)) {
|
2021-11-19 10:23:32 +00:00
|
|
|
put_fmt(out, ",%.*s_sw", PTRLEN_PRINTF(alg));
|
Break up crypto modules containing HW acceleration.
This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those
source files previously contained multiple implementations of the
algorithm, enabled or disabled by ifdefs detecting whether they would
work on a given compiler. And in order to get advanced machine
instructions like AES-NI or NEON crypto into the output file when the
compile flags hadn't enabled them, we had to do nasty stuff with
compiler-specific pragmas or attributes.
Now we can do the detection at cmake time, and enable advanced
instructions in the more sensible way, by compile-time flags. So I've
broken up each of these modules into lots of sub-pieces: a file called
(e.g.) 'foo-common.c' containing common definitions across all
implementations (such as round constants), one called 'foo-select.c'
containing the top-level vtable(s), and a separate file for each
implementation exporting just the vtable(s) for that implementation.
One advantage of this is that it depends a lot less on compiler-
specific bodgery. My particular least favourite part of the previous
setup was the part where I had to _manually_ define some Arm ACLE
feature macros before including <arm_neon.h>, so that it would define
the intrinsics I wanted. Now I'm enabling interesting architecture
features in the normal way, on the compiler command line, there's no
need for that kind of trick: the right feature macros are already
defined and <arm_neon.h> does the right thing.
Another change in this reorganisation is that I've stopped assuming
there's just one hardware implementation per platform. Previously, the
accelerated vtables were called things like sha256_hw, and varied
between FOO-NI and NEON depending on platform; and the selection code
would simply ask 'is hw available? if so, use hw, else sw'. Now, each
HW acceleration strategy names its vtable its own way, and the
selection vtable has a whole list of possibilities to iterate over
looking for a supported one. So if someone feels like writing a second
accelerated implementation of something for a given platform - for
example, I've heard you can use plain NEON to speed up AES somewhat
even without the crypto extension - then it will now have somewhere to
drop in alongside the existing ones.
2021-04-19 05:42:12 +00:00
|
|
|
#if HAVE_AES_NI
|
2021-11-19 10:23:32 +00:00
|
|
|
put_fmt(out, ",%.*s_ni", PTRLEN_PRINTF(alg));
|
Break up crypto modules containing HW acceleration.
This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those
source files previously contained multiple implementations of the
algorithm, enabled or disabled by ifdefs detecting whether they would
work on a given compiler. And in order to get advanced machine
instructions like AES-NI or NEON crypto into the output file when the
compile flags hadn't enabled them, we had to do nasty stuff with
compiler-specific pragmas or attributes.
Now we can do the detection at cmake time, and enable advanced
instructions in the more sensible way, by compile-time flags. So I've
broken up each of these modules into lots of sub-pieces: a file called
(e.g.) 'foo-common.c' containing common definitions across all
implementations (such as round constants), one called 'foo-select.c'
containing the top-level vtable(s), and a separate file for each
implementation exporting just the vtable(s) for that implementation.
One advantage of this is that it depends a lot less on compiler-
specific bodgery. My particular least favourite part of the previous
setup was the part where I had to _manually_ define some Arm ACLE
feature macros before including <arm_neon.h>, so that it would define
the intrinsics I wanted. Now I'm enabling interesting architecture
features in the normal way, on the compiler command line, there's no
need for that kind of trick: the right feature macros are already
defined and <arm_neon.h> does the right thing.
Another change in this reorganisation is that I've stopped assuming
there's just one hardware implementation per platform. Previously, the
accelerated vtables were called things like sha256_hw, and varied
between FOO-NI and NEON depending on platform; and the selection code
would simply ask 'is hw available? if so, use hw, else sw'. Now, each
HW acceleration strategy names its vtable its own way, and the
selection vtable has a whole list of possibilities to iterate over
looking for a supported one. So if someone feels like writing a second
accelerated implementation of something for a given platform - for
example, I've heard you can use plain NEON to speed up AES somewhat
even without the crypto extension - then it will now have somewhere to
drop in alongside the existing ones.
2021-04-19 05:42:12 +00:00
|
|
|
#endif
|
|
|
|
#if HAVE_NEON_CRYPTO
|
2021-11-19 10:23:32 +00:00
|
|
|
put_fmt(out, ",%.*s_neon", PTRLEN_PRINTF(alg));
|
Break up crypto modules containing HW acceleration.
This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those
source files previously contained multiple implementations of the
algorithm, enabled or disabled by ifdefs detecting whether they would
work on a given compiler. And in order to get advanced machine
instructions like AES-NI or NEON crypto into the output file when the
compile flags hadn't enabled them, we had to do nasty stuff with
compiler-specific pragmas or attributes.
Now we can do the detection at cmake time, and enable advanced
instructions in the more sensible way, by compile-time flags. So I've
broken up each of these modules into lots of sub-pieces: a file called
(e.g.) 'foo-common.c' containing common definitions across all
implementations (such as round constants), one called 'foo-select.c'
containing the top-level vtable(s), and a separate file for each
implementation exporting just the vtable(s) for that implementation.
One advantage of this is that it depends a lot less on compiler-
specific bodgery. My particular least favourite part of the previous
setup was the part where I had to _manually_ define some Arm ACLE
feature macros before including <arm_neon.h>, so that it would define
the intrinsics I wanted. Now I'm enabling interesting architecture
features in the normal way, on the compiler command line, there's no
need for that kind of trick: the right feature macros are already
defined and <arm_neon.h> does the right thing.
Another change in this reorganisation is that I've stopped assuming
there's just one hardware implementation per platform. Previously, the
accelerated vtables were called things like sha256_hw, and varied
between FOO-NI and NEON depending on platform; and the selection code
would simply ask 'is hw available? if so, use hw, else sw'. Now, each
HW acceleration strategy names its vtable its own way, and the
selection vtable has a whole list of possibilities to iterate over
looking for a supported one. So if someone feels like writing a second
accelerated implementation of something for a given platform - for
example, I've heard you can use plain NEON to speed up AES somewhat
even without the crypto extension - then it will now have somewhere to
drop in alongside the existing ones.
2021-04-19 05:42:12 +00:00
|
|
|
#endif
|
|
|
|
} else if (ptrlen_startswith(alg, PTRLEN_LITERAL("sha256"), NULL) ||
|
|
|
|
ptrlen_startswith(alg, PTRLEN_LITERAL("sha1"), NULL)) {
|
2021-11-19 10:23:32 +00:00
|
|
|
put_fmt(out, ",%.*s_sw", PTRLEN_PRINTF(alg));
|
Break up crypto modules containing HW acceleration.
This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those
source files previously contained multiple implementations of the
algorithm, enabled or disabled by ifdefs detecting whether they would
work on a given compiler. And in order to get advanced machine
instructions like AES-NI or NEON crypto into the output file when the
compile flags hadn't enabled them, we had to do nasty stuff with
compiler-specific pragmas or attributes.
Now we can do the detection at cmake time, and enable advanced
instructions in the more sensible way, by compile-time flags. So I've
broken up each of these modules into lots of sub-pieces: a file called
(e.g.) 'foo-common.c' containing common definitions across all
implementations (such as round constants), one called 'foo-select.c'
containing the top-level vtable(s), and a separate file for each
implementation exporting just the vtable(s) for that implementation.
One advantage of this is that it depends a lot less on compiler-
specific bodgery. My particular least favourite part of the previous
setup was the part where I had to _manually_ define some Arm ACLE
feature macros before including <arm_neon.h>, so that it would define
the intrinsics I wanted. Now I'm enabling interesting architecture
features in the normal way, on the compiler command line, there's no
need for that kind of trick: the right feature macros are already
defined and <arm_neon.h> does the right thing.
Another change in this reorganisation is that I've stopped assuming
there's just one hardware implementation per platform. Previously, the
accelerated vtables were called things like sha256_hw, and varied
between FOO-NI and NEON depending on platform; and the selection code
would simply ask 'is hw available? if so, use hw, else sw'. Now, each
HW acceleration strategy names its vtable its own way, and the
selection vtable has a whole list of possibilities to iterate over
looking for a supported one. So if someone feels like writing a second
accelerated implementation of something for a given platform - for
example, I've heard you can use plain NEON to speed up AES somewhat
even without the crypto extension - then it will now have somewhere to
drop in alongside the existing ones.
2021-04-19 05:42:12 +00:00
|
|
|
#if HAVE_SHA_NI
|
2021-11-19 10:23:32 +00:00
|
|
|
put_fmt(out, ",%.*s_ni", PTRLEN_PRINTF(alg));
|
Break up crypto modules containing HW acceleration.
This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those
source files previously contained multiple implementations of the
algorithm, enabled or disabled by ifdefs detecting whether they would
work on a given compiler. And in order to get advanced machine
instructions like AES-NI or NEON crypto into the output file when the
compile flags hadn't enabled them, we had to do nasty stuff with
compiler-specific pragmas or attributes.
Now we can do the detection at cmake time, and enable advanced
instructions in the more sensible way, by compile-time flags. So I've
broken up each of these modules into lots of sub-pieces: a file called
(e.g.) 'foo-common.c' containing common definitions across all
implementations (such as round constants), one called 'foo-select.c'
containing the top-level vtable(s), and a separate file for each
implementation exporting just the vtable(s) for that implementation.
One advantage of this is that it depends a lot less on compiler-
specific bodgery. My particular least favourite part of the previous
setup was the part where I had to _manually_ define some Arm ACLE
feature macros before including <arm_neon.h>, so that it would define
the intrinsics I wanted. Now I'm enabling interesting architecture
features in the normal way, on the compiler command line, there's no
need for that kind of trick: the right feature macros are already
defined and <arm_neon.h> does the right thing.
Another change in this reorganisation is that I've stopped assuming
there's just one hardware implementation per platform. Previously, the
accelerated vtables were called things like sha256_hw, and varied
between FOO-NI and NEON depending on platform; and the selection code
would simply ask 'is hw available? if so, use hw, else sw'. Now, each
HW acceleration strategy names its vtable its own way, and the
selection vtable has a whole list of possibilities to iterate over
looking for a supported one. So if someone feels like writing a second
accelerated implementation of something for a given platform - for
example, I've heard you can use plain NEON to speed up AES somewhat
even without the crypto extension - then it will now have somewhere to
drop in alongside the existing ones.
2021-04-19 05:42:12 +00:00
#endif

#if HAVE_NEON_CRYPTO
        put_fmt(out, ",%.*s_neon", PTRLEN_PRINTF(alg));
#endif
    } else if (ptrlen_startswith(alg, PTRLEN_LITERAL("sha512"), NULL)) {
        put_fmt(out, ",%.*s_sw", PTRLEN_PRINTF(alg));
#if HAVE_NEON_SHA512
        put_fmt(out, ",%.*s_neon", PTRLEN_PRINTF(alg));
#endif
    }

    return out;
}
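
/*
 * Wrapper macro for optional pointer-valued arguments: generates a
 * get_opt_val_<type>() that behaves like get_val_<type>(), except
 * that the literal protocol word "NULL" is translated into a null
 * pointer instead of being looked up as a value id.
 */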
#define OPTIONAL_PTR_FUNC(type) \
    typedef TD_val_##type TD_opt_val_##type; \
    static TD_opt_val_##type get_opt_val_##type(BinarySource *in) { \
        ptrlen word = get_word(in); \
        if (ptrlen_eq_string(word, "NULL")) \
            return NULL; \
        return unwrap_value_##type(lookup_value(word))->vu_##type; \
}
Merge the ssh1_cipher type into ssh2_cipher.
The aim of this reorganisation is to make it easier to test all the
ciphers in PuTTY in a uniform way. It was inconvenient that there were
two separate vtable systems for the ciphers used in SSH-1 and SSH-2
with different functionality.
Now there's only one type, called ssh_cipher. But really it's the old
ssh2_cipher, just renamed: I haven't made any changes to the API on
the SSH-2 side. Instead, I've removed ssh1_cipher completely, and
adapted the SSH-1 BPP to use the SSH-2 style API.
(The relevant differences are that ssh1_cipher encapsulated both the
sending and receiving directions in one object - so now ssh1bpp has to
make a separate cipher instance per direction - and that ssh1_cipher
automatically initialised the IV to all zeroes, which ssh1bpp now has
to do by hand.)
The previous ssh1_cipher vtable for single-DES has been removed
completely, because when converted into the new API it became
identical to the SSH-2 single-DES vtable; so now there's just one
vtable for DES-CBC which works in both protocols. The other two SSH-1
ciphers each had to stay separate, because 3DES is completely
different between SSH-1 and SSH-2 (three layers of CBC structure
versus one), and Blowfish varies in endianness and key length between
the two.
(Actually, while I'm here, I've only just noticed that the SSH-1
Blowfish cipher mis-describes itself in log messages as Blowfish-128.
In fact it passes the whole of the input key buffer, which has length
SSH1_SESSION_KEY_LENGTH == 32 bytes == 256 bits. So it's actually
Blowfish-256, and has been all along!)
2019-01-17 18:06:08 +00:00
OPTIONAL_PTR_FUNC(cipher)
OPTIONAL_PTR_FUNC(mpint)
OPTIONAL_PTR_FUNC(string)
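
/*
 * For instance, OPTIONAL_PTR_FUNC(cipher) expands mechanically to
 *
 *    typedef TD_val_cipher TD_opt_val_cipher;
 *    static TD_opt_val_cipher get_opt_val_cipher(BinarySource *in) {
 *        ptrlen word = get_word(in);
 *        if (ptrlen_eq_string(word, "NULL"))
 *            return NULL;
 *        return unwrap_value_cipher(lookup_value(word))->vu_cipher;
 *    }
 *
 * so a protocol client can write the word NULL wherever an optional
 * cipher argument is expected.
 */
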
Rewrite the testcrypt.c macro system.
Yesterday's commit 52ee636b092c199, which further extended the huge
pile of arity-specific annoying wrapper macros, pushed me over the
edge and inspired me to think harder about finding a way to handle
all arities at once. And this time I found one!
The new technique changes the syntax of the function specifications in
testcrypt.h. In particular, they now have to specify a _name_ for each
parameter as well as a type, because the macros generating the C
marshalling wrappers will need a structure field for each parameter
and cpp isn't flexible enough to generate names for those fields
automatically. Rather than tediously name them arg1, arg2 etc, I've
reused the names of the parameters from the prototypes or definitions
of the underlying real functions (via a one-off auto-extraction
process starting from the output of 'clang -Xclang -dump-ast' plus
some manual polishing), which means testcrypt.h is now a bit more
self-documenting.
The testcrypt.py end of the mechanism is rewritten to eat the new
format. Since it's got more complicated syntax and nested parens and
things, I've written something a bit like a separated lexer/parser
system in place of the previous crude regex matcher, which should
enforce that the whole header file really does conform to the
restricted syntax it has to fit into.
The new system uses a lot less code in testcrypt.c, but I've made up
for that by also writing a long comment explaining how it works, which
was another thing the previous system lacked! Similarly, the new
testcrypt.h has some long-overdue instructions at the top.
2021-11-21 10:27:30 +00:00
/*
 * HERE BE DRAGONS: the horrible C preprocessor business that reads
 * testcrypt-func.h and generates a marshalling wrapper for each
 * exported function.
 *
 * In an ideal world, we would start from a specification like this in
 * testcrypt-func.h
 *
 *    FUNC(val_foo, example, ARG(val_bar, bar), ARG(uint, n))
 *
 * and generate a wrapper function looking like this:
 *
 *    static void handle_example(BinarySource *in, strbuf *out) {
 *        TD_val_bar bar = get_val_bar(in);
 *        TD_uint n = get_uint(in);
 *        return_val_foo(out, example(bar, n));
 *    }
 *
 * which would read the marshalled form of each function argument in
 * turn from the input BinarySource via the get_<type>() function
 * family defined in this file; assign each argument to a local
 * variable; call the underlying C function with all those arguments;
 * and then call a function of the return_<type>() family to marshal
 * the output value into the output strbuf to be sent to standard
 * output.
 *
 * With a more general macro processor such as m4, or custom code in
 * Perl or Python, or a helper program like llvm-tblgen, we could just
 * do that directly, reading function specifications from
 * testcrypt-func.h and writing out exactly the above. But we don't
 * have a fully general macro processor (since everything in that
 * category introduces an extra build dependency that's awkward on
 * plain Windows, or requires compiling and running a helper program
 * which is awkward in a cross-compile). We only have cpp. And in cpp,
 * a macro can't expand one of its arguments differently in two parts
 * of its own expansion. So we have to be more clever.
 *
 * In place of the above code, I instead generate three successive
 * declarations for each function. In simplified form they would look
 * like this:
 *
 *    typedef struct ARGS_example {
 *        TD_val_bar bar;
 *        TD_uint n;
 *    } ARGS_example;
 *
 *    static inline ARGS_example get_args_example(BinarySource *in) {
 *        ARGS_example args;
 *        args.bar = get_val_bar(in);
 *        args.n = get_uint(in);
 *        return args;
 *    }
 *
 *    static void handle_example(BinarySource *in, strbuf *out) {
 *        ARGS_example args = get_args_example(in);
 *        return_val_foo(out, example(args.bar, args.n));
 *    }
 *
 * Each of these mentions the arguments and their types just _once_,
 * so each one can be generated by a single expansion of the FUNC(...)
 * specification in testcrypt-func.h, with FUNC and ARG and VOID
 * defined to appropriate values.
 *
 * Or ... *nearly*. In fact, I left out several details there, but
 * it's a good starting point to understand the full version.
 *
 * To begin with, several of the variable names shown above are
 * actually named with an ugly leading underscore, to minimise the
 * chance of them colliding with real parameter names. (You could
 * easily imagine 'out' being the name of a parameter to one of the
 * wrapped functions.) Also, we memset the whole structure to zero at
 * the start of get_args_example() to avoid compiler warnings about
 * uninitialised stuff, and insert a precautionary '(void)args;' in
 * handle_example to avoid a similar warning about _unused_ stuff.
 *
 * The big problem is the commas that have to appear between arguments
 * in the final call to the actual C function. Those can't be
 * generated by expanding the ARG macro itself, or you'd get one too
 * many - either a leading comma or a trailing comma. Trailing commas
 * are legal in a Python function call, but unfortunately C is not yet
 * so enlightened. (C permits a trailing comma in a struct or array
 * initialiser, and is coming round to it in enums, but hasn't yet
 * seen the light about function calls or function prototypes.)
 *
 * So the commas must appear _between_ ARG(...) specifiers. And that
 * means they unavoidably appear in _every_ expansion of FUNC() (or
 * rather, every expansion that uses the variadic argument list at
 * all). Therefore, we need to ensure they're harmless in the other
 * two functions as well.
 *
 * In the get_args_example() function above, there's no real problem.
 * The list of assignments can perfectly well be separated by commas
 * instead of semicolons, so that it becomes a single expression-
 * statement instead of a sequence of them; the comma operator still
 * defines a sequence point, so it's fine.
 *
 * But what about the structure definition of ARGS_example?
 *
 * To get round that, we fill the structure with pointless extra
 * cruft, in the form of an extra 'int' field before and after each
 * actually useful argument field. So the real structure definition
 * ends up looking more like this:
 *
 *    typedef struct ARGS_example {
 *        int _predummy_bar;
 *        TD_val_bar bar;
 *        int _postdummy_bar, _predummy_n;
 *        TD_uint n;
 *        int _postdummy_n;
 *    } ARGS_example;
 *
 * Those extra 'int' fields are ignored completely at run time. They
 * might cause a runtime space cost if the struct doesn't get
 * completely optimised away when get_args_example is inlined into
 * handle_example, but even if so, that's OK, this is a test program
 * whose memory usage isn't critical. The real point is that, in
 * between each pair of real arguments, there's a declaration
 * containing *two* int variables, and in between them is the vital
 * comma that we need!
 *
 * So in that pass through testcrypt-func.h, the ARG(type, name) macro
 * has to expand to the weird piece of text
 *
 *    _predummy_name;     // terminating the previous int declaration
 *    TD_type name;       // declaring the thing we actually wanted
 *    int _postdummy_name // new declaration ready to see a comma
 *
 * so that a comma-separated list of pieces of expansion like that
 * will fall into just the right form to be the core of the above
 * expanded structure definition. Then we just need to put in the
 * 'int' after the open brace, and the ';' before the closing brace,
 * and we've got everything we need to make it all syntactically legal.
 *
 * Finally, what if a wrapped function has _no_ arguments? Two out of
 * three uses of the argument list here need some kind of special case
 * for that. That's why you have to write 'VOID' explicitly in an
 * empty argument list in testcrypt-func.h: we make VOID expand to
 * whatever is needed to avoid a syntax error in that special case.
 */
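
/*
 * As a self-contained toy sketch of the same three-pass trick
 * (hypothetical names throughout; a plain array of words stands in
 * for BinarySource, and the memset / (void) precautions and the
 * Visual Studio workaround described below are omitted), the
 * following compiles on its own and makes handle_add2() read two
 * words from the fake input stream and return their sum:
 *
 *    typedef unsigned TD_uint;
 *    static unsigned get_uint(const unsigned **in) { return *(*in)++; }
 *    static unsigned add2(unsigned x, unsigned y) { return x + y; }
 *
 *    #define ARG(type, name) \
 *        _predummy_##name; TD_##type name; int _postdummy_##name
 *    #define FUNC(fname, ...) \
 *        typedef struct ARGS_##fname { int __VA_ARGS__; } ARGS_##fname;
 *    FUNC(add2, ARG(uint, x), ARG(uint, y))
 *    #undef FUNC
 *    #undef ARG
 *
 *    #define ARG(type, name) _args.name = get_##type(_in)
 *    #define FUNC(fname, ...) \
 *        static ARGS_##fname get_args_##fname(const unsigned **_in) { \
 *            ARGS_##fname _args; __VA_ARGS__; return _args; }
 *    FUNC(add2, ARG(uint, x), ARG(uint, y))
 *    #undef FUNC
 *    #undef ARG
 *
 *    #define ARG(type, name) _args.name
 *    #define FUNC(fname, ...) \
 *        static unsigned handle_##fname(const unsigned **_in) { \
 *            ARGS_##fname _args = get_args_##fname(_in); \
 *            return fname(__VA_ARGS__); }
 *    FUNC(add2, ARG(uint, x), ARG(uint, y))
 *    #undef FUNC
 *    #undef ARG
 */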
/*
 * Workarounds for an awkwardness in Visual Studio's preprocessor,
 * which disagrees with everyone else about what happens if you expand
 * __VA_ARGS__ into the argument list of another macro. gcc and clang
 * will treat the commas expanding from __VA_ARGS__ as argument
 * separators, whereas VS will make them all part of a single argument
 * to the secondary macro. We want the former behaviour, so we use
 * the following workaround to enforce it.
 *
 * Each of these JUXTAPOSE macros simply places its arguments side by
 * side. But the arguments are macro-expanded before JUXTAPOSE is
 * called at all, so we can do this:
 *
 *      JUXTAPOSE(macroname, (__VA_ARGS__))
 *   -> JUXTAPOSE(macroname, (foo, bar, baz))
 *   -> macroname (foo, bar, baz)
 *
 * and this preliminary expansion causes the commas to be treated
 * normally by the time VS gets round to expanding the inner macro.
 *
 * We need two differently named JUXTAPOSE macros, because we have to
 * do this trick twice: once to turn FUNC and FUNC_WRAPPED in
 * testcrypt-func.h into the underlying common FUNC_INNER, and again
 * to expand the final function call. And you can't expand a macro
 * inside text expanded from the _same_ macro, so we have to do the
 * outer and inner instances of this trick using macros of different
 * names.
 */
#define JUXTAPOSE1(first, second) first second
#define JUXTAPOSE2(first, second) first second
#define FUNC(outtype, fname, ...) \
    JUXTAPOSE1(FUNC_INNER, (outtype, fname, fname, __VA_ARGS__))
#define FUNC_WRAPPED(outtype, fname, ...) \
    JUXTAPOSE1(FUNC_INNER, (outtype, fname, fname##_wrapper, __VA_ARGS__))
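
/*
 * So, for example, FUNC(val_foo, example, ARG(val_bar, bar)) becomes
 * FUNC_INNER(val_foo, example, example, ARG(val_bar, bar)), while the
 * FUNC_WRAPPED form passes example_wrapper as the realname, so that
 * the generated handler calls the wrapper rather than the function
 * itself.
 */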
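
/*
 * First expansion pass over testcrypt-func.h: generate the
 * ARGS_<fname> structure definition for each exported function, with
 * the pre/post dummy ints described above absorbing the commas.
 */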
#define ARG(type, arg) _predummy_##arg; TD_##type arg; int _postdummy_##arg
#define VOID _voiddummy
#define FUNC_INNER(outtype, fname, realname, ...) \
    typedef struct ARGS_##fname { \
        int __VA_ARGS__; \
    } ARGS_##fname;
#include "testcrypt-func.h"
#undef FUNC_INNER
#undef ARG
#undef VOID
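
/*
 * Second expansion pass: generate the get_args_<fname>() functions
 * that unmarshal each argument from the input BinarySource. Here the
 * commas between ARG(...) expansions become comma operators.
 */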
#define ARG(type, arg) _args.arg = get_##type(_in)
#define VOID ((void)0)
#define FUNC_INNER(outtype, fname, realname, ...) \
    static inline ARGS_##fname get_args_##fname(BinarySource *_in) { \
        ARGS_##fname _args; \
        memset(&_args, 0, sizeof(_args)); \
        __VA_ARGS__; \
        return _args; \
    }
#include "testcrypt-func.h"
#undef FUNC_INNER
#undef ARG
#undef VOID
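
/*
 * Third expansion pass: generate the handle_<fname>() wrappers that
 * call the real function (or its _wrapper) and marshal the result.
 * This is the one place the commas are genuinely wanted, separating
 * the arguments of the final call.
 */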
#define ARG(type, arg) _args.arg
#define VOID
#define FUNC_INNER(outtype, fname, realname, ...) \
    static void handle_##fname(BinarySource *_in, strbuf *_out) { \
        ARGS_##fname _args = get_args_##fname(_in); \
        (void)_args; /* suppress warning if no actual arguments */ \
        return_##outtype(_out, JUXTAPOSE2(realname, (__VA_ARGS__))); \
    }
#include "testcrypt-func.h"
#undef FUNC_INNER
#undef ARG
static void process_line(BinarySource *in, strbuf *out)
{
    ptrlen id = get_word(in);

#define DISPATCH_INTERNAL(cmdname, handler) do { \
        if (ptrlen_eq_string(id, cmdname)) { \
            handler(in, out); \
            return; \
        } \
    } while (0)

#define DISPATCH_COMMAND(cmd) DISPATCH_INTERNAL(#cmd, handle_##cmd)
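
/*
 * For reference, DISPATCH_COMMAND(hello) expands to
 * DISPATCH_INTERNAL("hello", handle_hello): a guarded call that runs
 * handle_hello(in, out) and returns, if the command word in 'id' is
 * "hello".
 */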
DISPATCH_COMMAND(hello);
|
|
|
|
DISPATCH_COMMAND(free);
|
|
|
|
DISPATCH_COMMAND(newstring);
|
|
|
|
DISPATCH_COMMAND(getstring);
|
|
|
|
DISPATCH_COMMAND(mp_literal);
|
|
|
|
DISPATCH_COMMAND(mp_dump);
|
2021-11-22 18:39:45 +00:00
|
|
|
DISPATCH_COMMAND(checkenum);
|
2020-01-29 06:13:41 +00:00
|
|
|
#undef DISPATCH_COMMAND
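
/*
 * Autogenerated dispatch for every function exported through
 * testcrypt-func.h. Illustrative expansion only (the exact argument
 * shape of the entries in that header is assumed here, not quoted
 * from it): an entry whose fname field is some_function expands, via
 * the FUNC_INNER definition below, into
 *     DISPATCH_INTERNAL("some_function", handle_some_function);
 * so each listed function becomes a protocol command of the same name.
 */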
#define FUNC_INNER(outtype, fname, realname, ...) \
    DISPATCH_INTERNAL(#fname, handle_##fname);
#include "testcrypt-func.h"
#undef FUNC_INNER

#undef DISPATCH_INTERNAL

    fatal_error("command '%.*s': unrecognised", PTRLEN_PRINTF(id));
}
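/*
 * Registered with atexit() in main() below, so that everything still
 * held in the 'values' tree is freed on any normal exit. (A plausible
 * motivation, stated as an assumption rather than quoted from the
 * author: it keeps leak-checking tools from flagging protocol objects
 * that a client never explicitly freed.)
 */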
static void free_all_values(void)
{
    for (Value *val; (val = delpos234(values, 0)) != NULL ;)
        free_value(val);
    freetree234(values);
}
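/*
 * Debug-output hook: PuTTY's debug machinery expects a dputs()
 * function to be provided (an assumption about its role, based on the
 * symbol being defined here). Sending the text to stderr keeps stdout
 * reserved for protocol responses.
 */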
void dputs(const char *buf)
{
    fputs(buf, stderr);
}
int main(int argc, char **argv)
{
    const char *infile = NULL, *outfile = NULL;
    bool doing_opts = true;

    /*
     * Arm: turn on PSTATE.DIT if available and needed.
     *
     * DIT, for 'Data-Independent Timing', is a bit you can set in the
     * processor state on sufficiently new Arm CPUs, which promises
     * that a long list of instructions will deliberately avoid
     * varying their timing based on the input register values. Just
     * what you want for keeping your constant-time crypto primitives
     * constant-time.
     *
     * As far as I'm aware, no CPU has _yet_ implemented any
     * data-dependent optimisations, so DIT is a safety precaution
     * against them doing so in future. It would be embarrassing to be
     * caught without it if a future CPU does do that, so we now turn
     * on DIT in the PuTTY process state.
     *
     * I've put a call to the new enable_dit() function at the start
     * of every main() and WinMain() belonging to a program that might
     * do cryptography (even testcrypt, in case someone uses it for
     * something!), and in case I missed one there, also added a
     * second call at the first moment that any cryptography-using
     * part of the code looks as if it might become active: when an
     * instance of the SSH protocol object is configured, when the
     * system PRNG is initialised, and when selecting any
     * cryptographic authentication protocol in an HTTP or SOCKS proxy
     * connection. With any luck those precautions between them should
     * ensure it's on whenever we need it.
     *
     * Arm's own recommendation is that you should carefully choose
     * the granularity at which you enable and disable DIT: there's a
     * potential time cost to turning it on and off (I'm not sure
     * what, but plausibly something of the order of a pipeline
     * flush), so it's a performance hit to do it _inside_ each
     * individual crypto function, but if CPUs start supporting
     * significant data-dependent optimisation in future, then it will
     * also become a noticeable performance hit to just leave it on
     * across the whole process. So you'd like to do it somewhere in
     * the middle: for example, you might turn on DIT once around the
     * whole process of verifying and decrypting an SSH packet,
     * instead of once for decryption and once for MAC.
     *
     * With all respect to that recommendation as a strategy for
     * maximum performance, I'm not following it here. I turn on DIT
     * at the start of the PuTTY process, and then leave it on.
     * Rationale:
     *
     * 1. PuTTY is not otherwise a performance-critical application:
     *    it's not likely to max out your CPU for any purpose _other_
     *    than cryptography. The most CPU-intensive non-cryptographic
     *    thing I can imagine a PuTTY process doing is the complicated
     *    computation of font rendering in the terminal, and that will
     *    normally be cached (you don't recompute each glyph from its
     *    outline and hints for every time you display it).
     *
     * 2. I think a bigger risk lies in accidental side channels from
     *    having DIT turned off when it should have been on. I can
     *    imagine lots of causes for that: missing a crypto operation
     *    in some unswept corner of the code; confusing control flow
     *    (like my coroutine macros) jumping with DIT clear into the
     *    middle of a region of code that expected DIT to have been
     *    set at the beginning; having a reference counter of DIT
     *    requests and getting it out of sync.
     *
     * In a more sophisticated programming language, it might be
     * possible to avoid the risk in #2 by cleverness with the type
     * system. For example, in Rust, you could have a zero-sized type
     * that acts as a proof token for DIT being enabled (it would be
     * constructed by a function that also sets DIT, have a Drop
     * implementation that clears DIT, and be !Send so you couldn't
     * use it in a thread other than the one where DIT was set), and
     * then you could require all the actual crypto functions to take
     * a DitToken as an extra parameter, at zero runtime cost. Then
     * "oops I forgot to set DIT around this piece of crypto" would
     * become a compile error. Even so, you'd have to take some care
     * with coroutine-structured code (what happens if a Rust async
     * function yields while holding a DIT token?) and with nesting
     * (if you have two DIT tokens, you don't want dropping the inner
     * one to clear DIT while the outer one is still there to wrongly
     * convince callees that it's set). Maybe in Rust you could get
     * this all to work reliably. But not in C!
     *
     * DIT is an optional feature of the Arm architecture, so we must
     * first test to see if it's supported. This is done the same way
     * as we already do for the various Arm crypto accelerators: on
     * ELF-based systems, check the appropriate bit in the 'hwcap'
     * words in the ELF aux vector; on Mac, look for an appropriate
     * sysctl flag.
     *
     * On Windows I don't know of a way to query the DIT feature, _or_
     * of a way to write the necessary enabling instruction in an
     * MSVC-compatible way. I've _heard_ that it might not be
     * necessary, because Windows might just turn on DIT
     * unconditionally and leave it on, in an even more extreme
     * version of my own strategy. I don't have a source for that - I
     * heard it by word of mouth - but I _hope_ it's true, because
     * that would suit me very well! Certainly I can't write code to
     * enable DIT without knowing (a) how to do it, (b) how to know if
     * it's safe. Nonetheless, I've put the enable_dit() call in all
     * the right places in the Windows main programs as well as the
     * Unix and cross-platform code, so that if I later find out that
     * I _can_ put in an explicit enable of DIT in some way, I'll only
     * have to arrange to set HAVE_ARM_DIT and compile the
     * enable_dit() function appropriately.
     */
    enable_dit(); /* in case this is used as a crypto helper (Hyrum's Law) */

    while (--argc > 0) {
        char *p = *++argv;

        if (p[0] == '-' && doing_opts) {
            if (!strcmp(p, "-o")) {
                if (--argc <= 0) {
                    fprintf(stderr, "'-o' expects a filename\n");
                    return 1;
                }
                outfile = *++argv;
            } else if (!strcmp(p, "--")) {
                doing_opts = false;
            } else if (!strcmp(p, "--help")) {
                printf("usage: testcrypt [INFILE] [-o OUTFILE]\n");
                printf(" also: testcrypt --help display this text\n");
                return 0;
            } else {
                fprintf(stderr, "unknown command line option '%s'\n", p);
                return 1;
            }
        } else if (!infile) {
            infile = p;
        } else {
            fprintf(stderr, "can only handle one input file name\n");
            return 1;
        }
    }
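
    /*
     * Illustrative invocations matching the option parsing above (the
     * file names are made up for the example):
     *
     *   testcrypt                        speak the protocol on stdin/stdout
     *   testcrypt cmds.txt -o out.txt    read commands from, and write
     *                                    replies to, the named files
     *   testcrypt -- -strange-name       '--' ends option processing
     */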
    FILE *infp = stdin;
    if (infile) {
        infp = fopen(infile, "r");
        if (!infp) {
            fprintf(stderr, "%s: open: %s\n", infile, strerror(errno));
            return 1;
        }
    }
    FILE *outfp = stdout;
    if (outfile) {
        outfp = fopen(outfile, "w");
        if (!outfp) {
            fprintf(stderr, "%s: open: %s\n", outfile, strerror(errno));
            return 1;
        }
    }
    values = newtree234(valuecmp);

    atexit(free_all_values);
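    /*
     * Main half-duplex loop: read one command line, let process_line()
     * accumulate the reply into 'sb', then frame the reply by counting
     * the newlines in 'sb' and emitting that count on its own line
     * before the payload.
     */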
    for (char *line; (line = chomp(fgetline(infp))) != NULL ;) {
        BinarySource src[1];
        BinarySource_BARE_INIT(src, line, strlen(line));
        strbuf *sb = strbuf_new();
        process_line(src, sb);
        run_finalisers(sb);
        size_t lines = 0;
        for (size_t i = 0; i < sb->len; i++)
            if (sb->s[i] == '\n')
                lines++;
        fprintf(outfp, "%"SIZEu"\n%s", lines, sb->s);
        fflush(outfp);
        strbuf_free(sb);
        sfree(line);
    }
    if (infp != stdin)
        fclose(infp);
    if (outfp != stdout)
        fclose(outfp);
    return 0;
}