add_sources_from_current_dir(crypto
Break up crypto modules containing HW acceleration.
This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those
source files previously contained multiple implementations of the
algorithm, enabled or disabled by ifdefs detecting whether they would
work on a given compiler. And in order to get advanced machine
instructions like AES-NI or NEON crypto into the output file when the
compile flags hadn't enabled them, we had to do nasty stuff with
compiler-specific pragmas or attributes.
Now we can do the detection at cmake time, and enable advanced
instructions in the more sensible way, by compile-time flags. So I've
broken up each of these modules into lots of sub-pieces: a file called
(e.g.) 'foo-common.c' containing common definitions across all
implementations (such as round constants), one called 'foo-select.c'
containing the top-level vtable(s), and a separate file for each
implementation exporting just the vtable(s) for that implementation.
One advantage of this is that it depends a lot less on compiler-
specific bodgery. My particular least favourite part of the previous
setup was the part where I had to _manually_ define some Arm ACLE
feature macros before including <arm_neon.h>, so that it would define
the intrinsics I wanted. Now that I'm enabling interesting
architecture features in the normal way, on the compiler command
line, there's no need for that kind of trick: the right feature
macros are already defined and <arm_neon.h> does the right thing.
Another change in this reorganisation is that I've stopped assuming
there's just one hardware implementation per platform. Previously, the
accelerated vtables were called things like sha256_hw, and varied
between FOO-NI and NEON depending on platform; and the selection code
would simply ask 'is hw available? if so, use hw, else sw'. Now, each
HW acceleration strategy names its vtable its own way, and the
selection vtable has a whole list of possibilities to iterate over
looking for a supported one. So if someone feels like writing a second
accelerated implementation of something for a given platform - for
example, I've heard you can use plain NEON to speed up AES somewhat
even without the crypto extension - then it will now have somewhere to
drop in alongside the existing ones.
2021-04-19 05:42:12 +00:00

  aes-common.c
  aes-select.c
  aes-sw.c

Implement AES-GCM using the @openssh.com protocol IDs.
I only recently found out that OpenSSH defined their own protocol IDs
for AES-GCM, which work the same as the standard ones except that
they fix the semantics for how you select the linked cipher+MAC pair
during key exchange.
(RFC 5647 defines protocol IDs for AES-GCM in both the cipher and MAC
namespaces, and requires that you MUST select both or neither - but
this contradicts the selection policy set out in the base SSH RFCs,
and there's no discussion of how you resolve a conflict between them!
OpenSSH's answer is to do it the same way ChaCha20-Poly1305 works,
because that will ensure the two suites don't fight.)
People do occasionally ask us for this linked cipher/MAC pair, and
now that I know it's actually feasible, I've implemented it,
including a pair of vector implementations for x86 and Arm using
their respective architecture extensions for multiplying polynomials
over GF(2).
Unlike ChaCha20-Poly1305, I've kept the cipher and MAC implementations
in separate objects, with an arm's-length link between them that the
MAC uses when it needs to encrypt single cipher blocks to use as the
inputs to the MAC algorithm. That enables the cipher and the MAC to be
independently selected from their hardware-accelerated versions, just
in case someone runs on a system that has polynomial multiplication
instructions but not AES acceleration, or vice versa.
There's a fourth implementation of the GCM MAC, which is a pure
software implementation of the same algorithm used in the vectorised
versions. It's too slow to use live, but I've kept it in the code for
future testing needs, and because it's a convenient place to dump my
design comments.
The vectorised implementations are fairly crude as far as optimisation
goes. I'm sure serious x86 _or_ Arm optimisation engineers would look
at them and laugh. But GCM is a fast MAC compared to HMAC-SHA-256
(indeed compared to HMAC-anything-at-all), so it should at least be
good enough to use. And we've got a working version with some tests
now, so if someone else wants to improve them, they can.
2022-08-16 17:36:58 +00:00
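
As background (standard GCM, not specific to this commit): the MAC in
question is GHASH, computed over GF(2^128), which is why carry-less
polynomial multiplication is the useful primitive. With hash key
H = E_K(0^128) - one of the single cipher blocks encrypted through the
arm's-length link - the digest of 128-bit blocks X_1, ..., X_n is

  \[ Y_0 = 0, \qquad Y_i = (Y_{i-1} \oplus X_i) \cdot H
     \quad\text{in } \mathrm{GF}(2)[x]/(x^{128}+x^7+x^2+x+1), \]

and the final Y_n is masked with one more encrypted block to form the
tag.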
  aesgcm-common.c
  aesgcm-select.c
  aesgcm-sw.c
  aesgcm-ref-poly.c
  arcfour.c
  argon2.c
  bcrypt.c
  blake2.c
  blowfish.c
  chacha20-poly1305.c
  crc32.c
  des.c
  diffie-hellman.c
  dsa.c
  ecc-arithmetic.c
  ecc-ssh.c
  hash_simple.c
  hmac.c
  kex-hybrid.c
  mac.c
  mac_simple.c
  md5.c

New post-quantum kex: ML-KEM, and three hybrids of it.
As standardised by NIST in FIPS 203, this is a lattice-based
post-quantum KEM.
Very vaguely, the idea of it is that your public key is a matrix A and
vector t, and the private key is the knowledge of how to decompose t
into two vectors with all their coefficients small, one transformed by
A relative to the other. Encryption of a binary secret starts by
turning each bit into one of two maximally separated residues mod a
prime q, and then adding 'noise' based on the public key in the form
of small increments and decrements mod q, again with some of the noise
transformed by A relative to the rest. Decryption uses the knowledge
of t's decomposition to align the two sets of noise so that the
_large_ changes (which masked the secret from an eavesdropper) cancel
out, leaving only a collection of small changes to the original secret
vector. Then the vector of input bits can be recovered by assuming
that those accumulated small pieces of noise haven't concentrated in
any particular residue enough to push it more than half way to the
other of its possible starting values.
A weird feature of it is that decryption is not a true mathematical
inverse of encryption. The assumption that the noise doesn't get large
enough to flip any bit of the secret is only probabilistically valid,
not a hard guarantee. In other words, key agreement can fail, simply
by getting particularly unlucky with the distribution of your random
noise! However, the probability of failure is very low - less than
2^-138 even for ML-KEM-512 - and it gets even smaller with the
larger variants.
An awkward feature for our purposes is that the matrix A, containing a
large number of residues mod the prime q=3329, is required to be
constructed by a process of rejection sampling, i.e. generating random
12-bit values and throwing away the out-of-range ones. That would be a
real pain for our side-channel testing system, which generally handles
rejection sampling badly (since it necessarily involves data-dependent
control flow and timing variation). Fortunately, the matrix and the
random seed it was made from are both public: the matrix seed is
transmitted as part of the public key, so it's not necessary to try to
hide it. Accordingly, I was able to get the implementation to pass
testsc by means of not varying the matrix seed between runs, which is
justified by the principle of testsc that you vary the _secrets_ to
ensure timing is independent of them - and the matrix seed isn't a
secret, so you're allowed to keep it the same.
The three hybrid algorithms, defined by the current Internet-Draft
draft-kampanakis-curdle-ssh-pq-ke, include one hybrid of ML-KEM-768
with Curve25519 in exactly the same way we were already hybridising
NTRU Prime with Curve25519, and two more hybrids of ML-KEM with ECDH
over a NIST curve. The former hybrid interoperates with the
implementation in OpenSSH 9.9; all three interoperate with the fork
'openssh-oqs' at github.com/open-quantum-safe/openssh, and also with
the Python library AsyncSSH.
2024-12-07 19:33:39 +00:00
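
A stripped-down sketch of the LWE-style core described above, leaving
out ML-KEM's module structure, NTT, ciphertext compression and the
CCA wrapper from FIPS 203 (so this is illustrative, not the
standardised scheme). With q = 3329 and s, e, r, e_1, e_2 all having
small coefficients:

  \[ t = A s + e \quad\text{(public key } (A, t),\ \text{private key } s) \]
  \[ u = A^{\mathsf{T}} r + e_1, \qquad
     v = t^{\mathsf{T}} r + e_2 + \lceil q/2 \rceil m
     \quad\text{(encrypting bits } m) \]
  \[ v - s^{\mathsf{T}} u
     = e^{\mathsf{T}} r + e_2 - s^{\mathsf{T}} e_1 + \lceil q/2 \rceil m
     \approx \lceil q/2 \rceil m \quad\text{(decryption)} \]

Each bit of m is recovered by rounding its residue to whichever of 0
and \lceil q/2 \rceil is nearer; key agreement fails exactly when the
leftover noise pushes some residue more than q/4 away, which is the
small failure probability quoted above.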
  mlkem.c
  mpint.c

Implement OpenSSH 9.x's NTRU Prime / Curve25519 kex.
This consists of DJB's 'Streamlined NTRU Prime' quantum-resistant
cryptosystem, currently in round 3 of the NIST post-quantum key
exchange competition; it's run in parallel with ordinary Curve25519,
and generates a shared secret combining the output of both systems.
(Hence, even if you don't trust this newfangled NTRU Prime thing at
all, it's at least no _less_ secure than the kex you were using
already.)
As the OpenSSH developers point out, key exchange is the most urgent
thing to make quantum-resistant, even before working quantum computers
big enough to break crypto become available, because a break of the
kex algorithm can be applied retroactively to recordings of your past
sessions. By contrast, authentication is a real-time protocol, and can
only be broken by a quantum computer if there's one available to
attack you _already_.
I've implemented both sides of the mechanism, so that PuTTY and Uppity
both support it. In my initial testing, the two sides can both
interoperate with the appropriate half of OpenSSH, and also (of
course, but it would be embarrassing to mess it up) with each other.
2022-04-15 16:19:47 +00:00
  ntru.c

Family of key types for OpenSSH certificates.
This commit is groundwork for full certificate support, but doesn't
complete the job by itself. It introduces the new key types, and adds
a test in cryptsuite ensuring they work as expected, but nothing else.
If you manually construct a PPK file for one of the new key types, so
that it has a certificate in the public key field, then this commit
enables PuTTY to present that key to a server for user authentication,
either directly or via Pageant storing and using it. But I haven't yet
provided any mechanism for making such a PPK, so by itself, this isn't
much use.
Also, these new key types are not yet included in the KEXINIT host
keys list, because if they were, they'd just be treated as normal host
keys, in that you'd be asked to manually confirm the SSH fingerprint
of the certificate. I'll enable them for host keys once I add the
missing pieces.
2022-04-19 13:48:31 +00:00
  openssh-certs.c
  prng.c
  pubkey-pem.c
  pubkey-ppk.c
  pubkey-ssh1.c

Switch to RFC 6979 for DSA nonce generation.
This fixes a vulnerability that compromises NIST P521 ECDSA keys when
they are used with PuTTY's existing DSA nonce generation code. The
vulnerability has been assigned the identifier CVE-2024-31497.
PuTTY has been doing its DSA signing deterministically for literally
as long as it's been doing it at all, because I didn't trust Windows's
entropy generation. Deterministic nonce generation was introduced in
commit d345ebc2a5a0b59, as part of the initial version of our DSA
signing routine. At the time, there was no standard for how to do it,
so we had to think up the details of our system ourselves, with some
help from the Cambridge University computer security group.
More than ten years later, RFC 6979 was published, recommending a
similar system for general use, naturally with all the details
different. We didn't switch over to doing it that way, because we had
a scheme in place already, and as far as I could see, the differences
were not security-critical - just the normal sort of variation you
expect when any two people design a protocol component of this kind
independently.
As far as I know, the _structure_ of our scheme is still perfectly
fine, in terms of what data gets hashed, how many times, and how the
hash output is converted into a nonce. But the weak spot is the choice
of hash function: inside our dsa_gen_k() function, we generate 512
bits of random data using SHA-512, and then reduce that to the output
range by modular reduction, regardless of what signature algorithm
we're generating a nonce for.
In the original use case, this introduced a theoretical bias (the
output size is an odd prime, which doesn't evenly divide the space of
2^512 possible inputs to the reduction), but the theory was that since
integer DSA uses a modulus prime only 160 bits long (being based on
SHA-1, at least in the form that SSH uses it), the bias would be too
small to be detectable, let alone exploitable.
Then we reused the same function for NIST-style ECDSA, when it
arrived. This is fine for the P256 curve, and even P384. But in P521,
the order of the base point is _greater_ than 2^512, so when we
generate a 512-bit number and reduce it, the reduction never makes any
difference, and our output nonces are all in the first 2^512 elements
of the range of about 2^521. So this _does_ introduce a significant
bias in the nonces, compared to the ideal of uniformly random
distribution over the whole range. And it's been recently discovered
that a bias of this kind is sufficient to expose private keys, given a
manageably small number of signatures to work from.
(Incidentally, none of this affects Ed25519. The spec for that system
includes its own idea of how you should do deterministic nonce
generation - completely different again, naturally - and we did it
that way rather than our way, so that we could use the existing test
vectors.)
The simplest fix would be to patch our existing nonce generator to use
a longer hash, or concatenate a couple of SHA-512 hashes, or something
similar. But I think a more robust approach is to switch it out
completely for what is now the standard system. The main reason why I
prefer that is that the standard system comes with test vectors, which
adds a lot of confidence that I haven't made some other mistake in
following my own design.
So here's a commit that adds an implementation of RFC 6979, and
removes the old dsa_gen_k() function. Tests are added based on the
RFC's appendix of test vectors (as many as are compatible with the
more limited API of PuTTY's crypto code, e.g. we lack support for the
NIST P192 curve, or for doing integer DSA with many different hash
functions). One existing test changes its expected outputs, namely the
one that has a sample key pair and signature for every key algorithm
we support.
2024-04-01 08:18:34 +00:00
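
To quantify the P521 problem (my arithmetic, consistent with the
description above): the order n of the P521 base point satisfies
2^520 < n < 2^521, while the old dsa_gen_k() always returned a nonce
k < 2^512, so

  \[ \frac{2^{512}}{n} \approx 2^{-9}, \]

i.e. only about 1/512 of the legal nonce range was ever used -
equivalently, the top nine bits of every nonce were always zero. A
fixed run of known nonce bits across many signatures is exactly the
kind of bias that lattice-based key-recovery techniques consume.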
  rfc6979.c
  rsa.c
  sha256-common.c
  sha256-select.c
  sha256-sw.c
  sha512-common.c
  sha512-select.c
  sha512-sw.c
  sha3.c
  sha1-common.c
  sha1-select.c
  sha1-sw.c
  xdmauth.c)
include(CheckCSourceCompiles)

function(test_compile_with_flags outvar)
  cmake_parse_arguments(OPT "" ""
    "GNU_FLAGS;MSVC_FLAGS;ADD_SOURCES_IF_SUCCESSFUL;TEST_SOURCE" "${ARGN}")

  # Figure out what flags are applicable to this compiler.
  set(flags)
  if(CMAKE_C_COMPILER_ID MATCHES "GNU" OR
      CMAKE_C_COMPILER_ID MATCHES "Clang")
    set(flags ${OPT_GNU_FLAGS})
  endif()
  if(CMAKE_C_COMPILER_ID MATCHES "MSVC")
    set(flags ${OPT_MSVC_FLAGS})
  endif()

  # See if we can compile the provided test program.

Compatibility with older versions of cmake.
After this change, the cmake setup now works even on Debian stretch
(oldoldstable), which runs cmake 3.7.
In order to support a version that early I had to:
- write a fallback implementation of 'add_compile_definitions' for
older cmakes, which is easy, because add_compile_definitions(FOO)
is basically just add_compile_options(-DFOO)
- stop using list(TRANSFORM) and string(JOIN), of which I had one
case each, and they were easily replaced with simple foreach loops
- stop putting OBJECT libraries in the target_link_libraries command
for executable targets, in favour of adding $<TARGET_OBJECTS:foo>
to the main sources list for the same target. That matches what I
do with library targets, so it's probably more sensible anyway.
I tried going back by another Debian release and getting this cmake
setup to work on jessie, but that runs CMake 3.0.1, and in _that_
version of cmake the target_sources command is missing, and I didn't
find any alternative way to add extra sources to a target after having
first declared it. Reorganising to cope with _that_ omission would be
too much upheaval without a very good reason.
2021-10-29 17:08:18 +00:00

  foreach(i ${flags})
    set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} ${i}")
  endforeach()

  check_c_source_compiles("${OPT_TEST_SOURCE}" "${outvar}")

  if(${outvar} AND OPT_ADD_SOURCES_IF_SUCCESSFUL)
    # Make an object library that compiles the implementation with the
    # necessary flags, and add the resulting objects to the crypto
    # library.
    set(libname object_lib_${outvar})
    add_library(${libname} OBJECT ${OPT_ADD_SOURCES_IF_SUCCESSFUL})
    target_compile_options(${libname} PRIVATE ${flags})
    target_sources(crypto PRIVATE $<TARGET_OBJECTS:${libname}>)
  endif()

  # Export the output to the caller's scope, so that further tests can
  # be based on it.
  set(${outvar} ${${outvar}} PARENT_SCOPE)
endfunction()
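
The 'Compatibility with older versions of cmake' commit above mentions
a fallback implementation of add_compile_definitions for CMake
versions predating that command. A minimal sketch of such a shim (my
reconstruction, not necessarily the exact code from that commit):

  if(NOT COMMAND add_compile_definitions)
    # add_compile_definitions(FOO) is basically just
    # add_compile_options(-DFOO), so emulate it that way.
    function(add_compile_definitions)
      foreach(i ${ARGN})
        add_compile_options(-D${i})
      endforeach()
    endfunction()
  endif()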

# ----------------------------------------------------------------------
# Try to enable x86 intrinsics-based crypto implementations.

test_compile_with_flags(HAVE_WMMINTRIN_H
  GNU_FLAGS -msse4.1
  TEST_SOURCE "
    #include <wmmintrin.h>
    #include <smmintrin.h>
    volatile __m128i r, a, b;
    int main(void) { r = _mm_xor_si128(a, b); }")

if(HAVE_WMMINTRIN_H)
  test_compile_with_flags(HAVE_AES_NI
    GNU_FLAGS -msse4.1 -maes
    TEST_SOURCE "
      #include <wmmintrin.h>
      #include <smmintrin.h>
      volatile __m128i r, a, b;
      int main(void) { r = _mm_aesenc_si128(a, b); }"
    ADD_SOURCES_IF_SUCCESSFUL aes-ni.c)

  # shaintrin.h doesn't exist on all compilers; sometimes it's folded
  # into the other headers
  test_compile_with_flags(HAVE_SHAINTRIN_H
    GNU_FLAGS -msse4.1 -msha
    TEST_SOURCE "
      #include <wmmintrin.h>
      #include <smmintrin.h>
      #include <immintrin.h>
      #include <shaintrin.h>
      volatile __m128i r, a, b;
      int main(void) { r = _mm_xor_si128(a, b); }")

  if(HAVE_SHAINTRIN_H)
    set(include_shaintrin "#include <shaintrin.h>")
  else()
    set(include_shaintrin "")
  endif()

  test_compile_with_flags(HAVE_SHA_NI
    GNU_FLAGS -msse4.1 -msha
    TEST_SOURCE "
      #include <wmmintrin.h>
      #include <smmintrin.h>
      #include <immintrin.h>
      ${include_shaintrin}
      volatile __m128i r, a, b, c;
      int main(void) { r = _mm_sha256rnds2_epu32(a, b, c); }"
    ADD_SOURCES_IF_SUCCESSFUL sha256-ni.c sha1-ni.c)

  test_compile_with_flags(HAVE_CLMUL
    GNU_FLAGS -msse4.1 -mpclmul
    TEST_SOURCE "
      #include <wmmintrin.h>
      #include <tmmintrin.h>
      volatile __m128i r, a, b;
      int main(void) { r = _mm_clmulepi64_si128(a, b, 5);
                       r = _mm_shuffle_epi8(r, a); }"
    ADD_SOURCES_IF_SUCCESSFUL aesgcm-clmul.c)
endif()

# ----------------------------------------------------------------------
# Try to enable Arm Neon intrinsics-based crypto implementations.

# Start by checking which header file we need. ACLE specifies that it
# ought to be <arm_neon.h>, on both 32- and 64-bit Arm, but Visual
# Studio for some reason renamed the header to <arm64_neon.h> in
# 64-bit, and gives an error if you use the standard name. (However,
# clang-cl does let you use the standard name.)
test_compile_with_flags(HAVE_ARM_NEON_H
  MSVC_FLAGS -D_ARM_USE_NEW_NEON_INTRINSICS
  TEST_SOURCE "
    #include <arm_neon.h>
    volatile uint8x16_t r, a, b;
    int main(void) { r = veorq_u8(a, b); }")
if(HAVE_ARM_NEON_H)
  set(neon ON)
  set(neon_header "arm_neon.h")
else()
  test_compile_with_flags(HAVE_ARM64_NEON_H TEST_SOURCE "
    #include <arm64_neon.h>
    volatile uint8x16_t r, a, b;
    int main(void) { r = veorq_u8(a, b); }")
  if(HAVE_ARM64_NEON_H)
    set(neon ON)
    set(neon_header "arm64_neon.h")
    set(USE_ARM64_NEON_H ON)
  endif()
endif()

if(neon)
  # If we have _some_ NEON header, look for the individual things we
  # can enable with it.

  # The 'crypto' architecture extension includes support for AES,
  # SHA-1, and SHA-256.
  test_compile_with_flags(HAVE_NEON_CRYPTO
    GNU_FLAGS -march=armv8-a+crypto
    MSVC_FLAGS -D_ARM_USE_NEW_NEON_INTRINSICS
    TEST_SOURCE "
      #include <${neon_header}>
      volatile uint8x16_t r, a, b;
      volatile uint32x4_t s, x, y, z;
      int main(void) { r = vaeseq_u8(a, b); s = vsha256hq_u32(x, y, z); }"
    ADD_SOURCES_IF_SUCCESSFUL aes-neon.c sha256-neon.c sha1-neon.c)
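
The 'Break up crypto modules' commit above notes that a second
accelerated implementation of something now has somewhere to drop in
alongside the existing ones. Concretely, that would be one more stanza
of the same shape as its neighbours here - for instance, for the
plain-NEON AES speedup mentioned there (the variable name, flags and
source file below are invented for illustration):

  test_compile_with_flags(HAVE_PLAIN_NEON_AES    # hypothetical result
    GNU_FLAGS -march=armv8-a                     # plain NEON, no +crypto
    TEST_SOURCE "
      #include <${neon_header}>
      volatile uint8x16_t r, a, b;
      int main(void) { r = veorq_u8(a, b); }"
    ADD_SOURCES_IF_SUCCESSFUL aes-neon-sw.c)     # hypothetical source file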

  test_compile_with_flags(HAVE_NEON_PMULL
    GNU_FLAGS -march=armv8-a+crypto
    MSVC_FLAGS -D_ARM_USE_NEW_NEON_INTRINSICS
    TEST_SOURCE "
      #include <${neon_header}>
      volatile poly128_t r;
      volatile poly64_t a, b;
      volatile poly64x2_t u, v;
      int main(void) { r = vmull_p64(a, b); r = vmull_high_p64(u, v); }"
    ADD_SOURCES_IF_SUCCESSFUL aesgcm-neon.c)

  test_compile_with_flags(HAVE_NEON_VADDQ_P128
    GNU_FLAGS -march=armv8-a+crypto
    MSVC_FLAGS -D_ARM_USE_NEW_NEON_INTRINSICS
    TEST_SOURCE "
      #include <${neon_header}>
      volatile poly128_t r;
      int main(void) { r = vaddq_p128(r, r); }")

  # The 'sha3' architecture extension, despite the name, includes
  # support for SHA-512 (from the SHA-2 standard) as well as SHA-3
  # proper.
  #
  # Versions of clang up to and including clang 12 support this
  # extension in assembly language, but not the ACLE intrinsics for
  # it. So we check both.
  test_compile_with_flags(HAVE_NEON_SHA512_INTRINSICS
    GNU_FLAGS -march=armv8.2-a+crypto+sha3
    TEST_SOURCE "
      #include <${neon_header}>
      volatile uint64x2_t r, a, b;
      int main(void) { r = vsha512su0q_u64(a, b); }"
    ADD_SOURCES_IF_SUCCESSFUL sha512-neon.c)
  if(HAVE_NEON_SHA512_INTRINSICS)
    set(HAVE_NEON_SHA512 ON)
  else()
    test_compile_with_flags(HAVE_NEON_SHA512_ASM
      GNU_FLAGS -march=armv8.2-a+crypto+sha3
      TEST_SOURCE "
        #include <${neon_header}>
        volatile uint64x2_t r, a;
        int main(void) { __asm__(\"sha512su0 %0.2D,%1.2D\" : \"+w\" (r) : \"w\" (a)); }"
      ADD_SOURCES_IF_SUCCESSFUL sha512-neon.c)
    if(HAVE_NEON_SHA512_ASM)
      set(HAVE_NEON_SHA512 ON)
    endif()
  endif()
endif()
Arm: turn on PSTATE.DIT if available and needed.
DIT, for 'Data-Independent Timing', is a bit you can set in the
processor state on sufficiently new Arm CPUs, which promises that a
long list of instructions will deliberately avoid varying their timing
based on the input register values. Just what you want for keeping
your constant-time crypto primitives constant-time.
As far as I'm aware, no CPU has _yet_ implemented any data-dependent
optimisations, so DIT is a safety precaution against them doing so in
future. It would be embarrassing to be caught without it if a future
CPU does do that, so we now turn on DIT in the PuTTY process state.
I've put a call to the new enable_dit() function at the start of every
main() and WinMain() belonging to a program that might do
cryptography (even testcrypt, in case someone uses it for something!),
and in case I missed one there, also added a second call at the first
moment that any cryptography-using part of the code looks as if it
might become active: when an instance of the SSH protocol object is
configured, when the system PRNG is initialised, and when selecting
any cryptographic authentication protocol in an HTTP or SOCKS proxy
connection. With any luck those precautions between them should ensure
it's on whenever we need it.
Arm's own recommendation is that you should carefully choose the
granularity at which you enable and disable DIT: there's a potential
time cost to turning it on and off (I'm not sure what, but plausibly
something of the order of a pipeline flush), so it's a performance hit
to do it _inside_ each individual crypto function, but if CPUs start
supporting significant data-dependent optimisation in future, then it
will also become a noticeable performance hit to just leave it on
across the whole process. So you'd like to do it somewhere in the
middle: for example, you might turn on DIT once around the whole
process of verifying and decrypting an SSH packet, instead of once for
decryption and once for MAC.
With all respect to that recommendation as a strategy for maximum
performance, I'm not following it here. I turn on DIT at the start of
the PuTTY process, and then leave it on. Rationale:
1. PuTTY is not otherwise a performance-critical application: it's
not likely to max out your CPU for any purpose _other_ than
cryptography. The most CPU-intensive non-cryptographic thing I can
imagine a PuTTY process doing is the complicated computation of
font rendering in the terminal, and that will normally be cached
(you don't recompute each glyph from its outline and hints for
every time you display it).
2. I think a bigger risk lies in accidental side channels from having
DIT turned off when it should have been on. I can imagine lots of
causes for that. Missing a crypto operation in some unswept corner
of the code; confusing control flow (like my coroutine macros)
jumping with DIT clear into the middle of a region of code that
expected DIT to have been set at the beginning; having a reference
counter of DIT requests and getting it out of sync.
In a more sophisticated programming language, it might be possible to
avoid the risk in #2 by cleverness with the type system. For example,
in Rust, you could have a zero-sized type that acts as a proof token
for DIT being enabled (it would be constructed by a function that also
sets DIT, have a Drop implementation that clears DIT, and be !Send so
you couldn't use it in a thread other than the one where DIT was set),
and then you could require all the actual crypto functions to take a
DitToken as an extra parameter, at zero runtime cost. Then "oops I
forgot to set DIT around this piece of crypto" would become a compile
error. Even so, you'd have to take some care with coroutine-structured
code (what happens if a Rust async function yields while holding a DIT
token?) and with nesting (if you have two DIT tokens, you don't want
dropping the inner one to clear DIT while the outer one is still there
to wrongly convince callees that it's set). Maybe in Rust you could
get this all to work reliably. But not in C!
DIT is an optional feature of the Arm architecture, so we must first
test to see if it's supported. This is done the same way as we already
do for the various Arm crypto accelerators: on ELF-based systems,
check the appropriate bit in the 'hwcap' words in the ELF aux vector;
on Mac, look for an appropriate sysctl flag.
On Windows I don't know of a way to query the DIT feature, _or_ of a
way to write the necessary enabling instruction in an MSVC-compatible
way. I've _heard_ that it might not be necessary, because Windows
might just turn on DIT unconditionally and leave it on, in an even
more extreme version of my own strategy. I don't have a source for
that - I heard it by word of mouth - but I _hope_ it's true, because
that would suit me very well! Certainly I can't write code to enable
DIT without knowing (a) how to do it, (b) how to know if it's safe.
Nonetheless, I've put the enable_dit() call in all the right places in
the Windows main programs as well as the Unix and cross-platform code,
so that if I later find out that I _can_ put in an explicit enable of
DIT in some way, I'll only have to arrange to set HAVE_ARM_DIT and
compile the enable_dit() function appropriately.
2024-12-19 08:47:08 +00:00

test_compile_with_flags(HAVE_ARM_DIT
  GNU_FLAGS -march=armv8.4-a
  TEST_SOURCE "
    int main(void) { asm volatile(\"msr dit, %0\" :: \"r\"(1)); }"
  ADD_SOURCES_IF_SUCCESSFUL enable_dit.c)

set(HAVE_AES_NI ${HAVE_AES_NI} PARENT_SCOPE)
set(HAVE_SHA_NI ${HAVE_SHA_NI} PARENT_SCOPE)
set(HAVE_SHAINTRIN_H ${HAVE_SHAINTRIN_H} PARENT_SCOPE)
set(HAVE_NEON_CRYPTO ${HAVE_NEON_CRYPTO} PARENT_SCOPE)
set(HAVE_NEON_SHA512 ${HAVE_NEON_SHA512} PARENT_SCOPE)
set(HAVE_NEON_SHA512_INTRINSICS ${HAVE_NEON_SHA512_INTRINSICS} PARENT_SCOPE)
set(USE_ARM64_NEON_H ${USE_ARM64_NEON_H} PARENT_SCOPE)
set(HAVE_ARM_DIT ${HAVE_ARM_DIT} PARENT_SCOPE)
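
These exported HAVE_* variables typically become C macros via a
configured header in the parent build. A sketch of the usual CMake
idiom (the file names here are placeholders, not necessarily what this
project uses):

  # In the parent CMakeLists.txt: record the test results in a
  # generated header.
  configure_file(cmake.h.in ${CMAKE_BINARY_DIR}/cmake.h)

  # cmake.h.in then contains lines like the following, which
  # configure_file turns into '#define HAVE_ARM_DIT' when the variable
  # is set, and a commented-out placeholder when it isn't:
  #
  #   #cmakedefine HAVE_ARM_DIT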