mirror of https://git.tartarus.org/simon/putty.git synced 2025-01-10 01:48:00 +00:00
putty-source/crypto/CMakeLists.txt

add_sources_from_current_dir(crypto
Break up crypto modules containing HW acceleration.

This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those source files previously contained multiple implementations of the algorithm, enabled or disabled by ifdefs detecting whether they would work on a given compiler. And in order to get advanced machine instructions like AES-NI or NEON crypto into the output file when the compile flags hadn't enabled them, we had to do nasty stuff with compiler-specific pragmas or attributes.

Now we can do the detection at cmake time, and enable advanced instructions in the more sensible way, by compile-time flags. So I've broken up each of these modules into lots of sub-pieces: a file called (e.g.) 'foo-common.c' containing common definitions across all implementations (such as round constants), one called 'foo-select.c' containing the top-level vtable(s), and a separate file for each implementation exporting just the vtable(s) for that implementation.

One advantage of this is that it depends a lot less on compiler-specific bodgery. My particular least favourite part of the previous setup was the part where I had to _manually_ define some Arm ACLE feature macros before including <arm_neon.h>, so that it would define the intrinsics I wanted. Now that I'm enabling interesting architecture features in the normal way, on the compiler command line, there's no need for that kind of trick: the right feature macros are already defined and <arm_neon.h> does the right thing.

Another change in this reorganisation is that I've stopped assuming there's just one hardware implementation per platform. Previously, the accelerated vtables were called things like sha256_hw, and varied between FOO-NI and NEON depending on platform; and the selection code would simply ask 'is hw available? if so, use hw, else sw'. Now, each HW acceleration strategy names its vtable its own way, and the selection vtable has a whole list of possibilities to iterate over looking for a supported one (sketched in C after the source list below). So if someone feels like writing a second accelerated implementation of something for a given platform - for example, I've heard you can use plain NEON to speed up AES somewhat even without the crypto extension - then it will now have somewhere to drop in alongside the existing ones.
2021-04-19 05:42:12 +00:00
aes-common.c
aes-select.c
aes-sw.c
Implement AES-GCM using the @openssh.com protocol IDs.

I only recently found out that OpenSSH defined their own protocol IDs for AES-GCM, defined to work the same as the standard ones except that they fixed the semantics for how you select the linked cipher+MAC pair during key exchange. (RFC 5647 defines protocol ids for AES-GCM in both the cipher and MAC namespaces, and requires that you MUST select both or neither - but this contradicts the selection policy set out in the base SSH RFCs, and there's no discussion of how you resolve a conflict between them! OpenSSH's answer is to do it the same way ChaCha20-Poly1305 works, because that will ensure the two suites don't fight.)

People do occasionally ask us for this linked cipher/MAC pair, and now that I know it's actually feasible, I've implemented it, including a pair of vector implementations for x86 and Arm using their respective architecture extensions for multiplying polynomials over GF(2) (that operation is sketched after the source list below).

Unlike ChaCha20-Poly1305, I've kept the cipher and MAC implementations in separate objects, with an arm's-length link between them that the MAC uses when it needs to encrypt single cipher blocks to use as the inputs to the MAC algorithm. That enables the cipher and the MAC to be independently selected from their hardware-accelerated versions, just in case someone runs on a system that has polynomial multiplication instructions but not AES acceleration, or vice versa.

There's a fourth implementation of the GCM MAC, which is a pure software implementation of the same algorithm used in the vectorised versions. It's too slow to use live, but I've kept it in the code for future testing needs, and because it's a convenient place to dump my design comments.

The vectorised implementations are fairly crude as far as optimisation goes. I'm sure serious x86 _or_ Arm optimisation engineers would look at them and laugh. But GCM is a fast MAC compared to HMAC-SHA-256 (indeed compared to HMAC-anything-at-all), so it should at least be good enough to use. And we've got a working version with some tests now, so if someone else wants to improve them, they can.
2022-08-16 17:36:58 +00:00
aesgcm-common.c
aesgcm-select.c
aesgcm-sw.c
aesgcm-ref-poly.c
arcfour.c
argon2.c
bcrypt.c
blake2.c
blowfish.c
chacha20-poly1305.c
crc32.c
des.c
diffie-hellman.c
dsa.c
ecc-arithmetic.c
ecc-ssh.c
hash_simple.c
hmac.c
kex-hybrid.c
mac.c
mac_simple.c
md5.c
New post-quantum kex: ML-KEM, and three hybrids of it.

As standardised by NIST in FIPS 203, this is a lattice-based post-quantum KEM.

Very vaguely, the idea of it is that your public key is a matrix A and vector t, and the private key is the knowledge of how to decompose t into two vectors with all their coefficients small, one transformed by A relative to the other. Encryption of a binary secret starts by turning each bit into one of two maximally separated residues mod a prime q, and then adding 'noise' based on the public key in the form of small increments and decrements mod q, again with some of the noise transformed by A relative to the rest. Decryption uses the knowledge of t's decomposition to align the two sets of noise so that the _large_ changes (which masked the secret from an eavesdropper) cancel out, leaving only a collection of small changes to the original secret vector. Then the vector of input bits can be recovered by assuming that those accumulated small pieces of noise haven't concentrated in any particular residue enough to push it more than half way to the other of its possible starting values. (The encode-and-round step is sketched after the source list below.)

A weird feature of it is that decryption is not a true mathematical inverse of encryption. The assumption that the noise doesn't get large enough to flip any bit of the secret is only probabilistically valid, not a hard guarantee. In other words, key agreement can fail, simply by getting particularly unlucky with the distribution of your random noise! However, the probability of a failure is very low - less than 2^-138 even for ML-KEM-512, and gets even smaller with the larger variants.

An awkward feature for our purposes is that the matrix A, containing a large number of residues mod the prime q=3329, is required to be constructed by a process of rejection sampling, i.e. generating random 12-bit values and throwing away the out-of-range ones. That would be a real pain for our side-channel testing system, which generally handles rejection sampling badly (since it necessarily involves data-dependent control flow and timing variation). Fortunately, the matrix and the random seed it was made from are both public: the matrix seed is transmitted as part of the public key, so it's not necessary to try to hide it. Accordingly, I was able to get the implementation to pass testsc by means of not varying the matrix seed between runs, which is justified by the principle of testsc that you vary the _secrets_ to ensure timing is independent of them - and the matrix seed isn't a secret, so you're allowed to keep it the same.

The three hybrid algorithms, defined by the current Internet-Draft draft-kampanakis-curdle-ssh-pq-ke, include one hybrid of ML-KEM-768 with Curve25519 in exactly the same way we were already hybridising NTRU Prime with Curve25519, and two more hybrids of ML-KEM with ECDH over a NIST curve. The former hybrid interoperates with the implementation in OpenSSH 9.9; all three interoperate with the fork 'openssh-oqs' at github.com/open-quantum-safe/openssh, and also with the Python library AsyncSSH.
2024-12-07 19:33:39 +00:00
mlkem.c
mpint.c
ntru.c
openssh-certs.c
prng.c
pubkey-pem.c
pubkey-ppk.c
pubkey-ssh1.c
Switch to RFC 6979 for DSA nonce generation.

This fixes a vulnerability that compromises NIST P521 ECDSA keys when they are used with PuTTY's existing DSA nonce generation code. The vulnerability has been assigned the identifier CVE-2024-31497.

PuTTY has been doing its DSA signing deterministically for literally as long as it's been doing it at all, because I didn't trust Windows's entropy generation. Deterministic nonce generation was introduced in commit d345ebc2a5a0b59, as part of the initial version of our DSA signing routine. At the time, there was no standard for how to do it, so we had to think up the details of our system ourselves, with some help from the Cambridge University computer security group.

More than ten years later, RFC 6979 was published, recommending a similar system for general use, naturally with all the details different. We didn't switch over to doing it that way, because we had a scheme in place already, and as far as I could see, the differences were not security-critical - just the normal sort of variation you expect when any two people design a protocol component of this kind independently.

As far as I know, the _structure_ of our scheme is still perfectly fine, in terms of what data gets hashed, how many times, and how the hash output is converted into a nonce. But the weak spot is the choice of hash function: inside our dsa_gen_k() function, we generate 512 bits of random data using SHA-512, and then reduce that to the output range by modular reduction, regardless of what signature algorithm we're generating a nonce for.

In the original use case, this introduced a theoretical bias (the output size is an odd prime, which doesn't evenly divide the space of 2^512 possible inputs to the reduction), but the theory was that since integer DSA uses a modulus prime only 160 bits long (being based on SHA-1, at least in the form that SSH uses it), the bias would be too small to be detectable, let alone exploitable.

Then we reused the same function for NIST-style ECDSA, when it arrived. This is fine for the P256 curve, and even P384. But in P521, the order of the base point is _greater_ than 2^512, so when we generate a 512-bit number and reduce it, the reduction never makes any difference, and our output nonces are all in the first 2^512 elements of the range of about 2^521. So this _does_ introduce a significant bias in the nonces, compared to the ideal of uniformly random distribution over the whole range. And it's been recently discovered that a bias of this kind is sufficient to expose private keys, given a manageably small number of signatures to work from. (A toy illustration of this reduction bias appears after the source list below.)

(Incidentally, none of this affects Ed25519. The spec for that system includes its own idea of how you should do deterministic nonce generation - completely different again, naturally - and we did it that way rather than our way, so that we could use the existing test vectors.)

The simplest fix would be to patch our existing nonce generator to use a longer hash, or concatenate a couple of SHA-512 hashes, or something similar. But I think a more robust approach is to switch it out completely for what is now the standard system. The main reason why I prefer that is that the standard system comes with test vectors, which adds a lot of confidence that I haven't made some other mistake in following my own design. So here's a commit that adds an implementation of RFC 6979, and removes the old dsa_gen_k() function.

Tests are added based on the RFC's appendix of test vectors (as many as are compatible with the more limited API of PuTTY's crypto code, e.g. we lack support for the NIST P192 curve, or for doing integer DSA with many different hash functions). One existing test changes its expected outputs, namely the one that has a sample key pair and signature for every key algorithm we support.
2024-04-01 08:18:34 +00:00
rfc6979.c
rsa.c
sha256-common.c
sha256-select.c
sha256-sw.c
sha512-common.c
sha512-select.c
sha512-sw.c
sha3.c
sha1-common.c
sha1-select.c
sha1-sw.c
xdmauth.c)
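
A rough C sketch of the selection pattern described in the 'Break up crypto modules' message above. The struct, the names (hashalg, sha256_select and so on) and the probe functions are invented for illustration and heavily simplified; they are not PuTTY's actual vtable types.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical, much-simplified vtable: each implementation would live
 * in its own source file, compiled with whatever flags cmake detected
 * for it, and export one of these. */
typedef struct hashalg {
    const char *name;
    bool (*available)(void);    /* runtime CPU-feature probe */
} hashalg;

static bool never(void)  { return false; }  /* stand-in: HW probe failed */
static bool always(void) { return true;  }  /* software needs no HW */

static const hashalg sha256_ni   = { "sha256-ni",   never  };
static const hashalg sha256_neon = { "sha256-neon", never  };
static const hashalg sha256_sw   = { "sha256-sw",   always };

/* The 'select' layer: a list of candidates, first supported one wins. */
static const hashalg *const candidates[] = {
    &sha256_ni, &sha256_neon, &sha256_sw,
};

static const hashalg *sha256_select(void)
{
    for (size_t i = 0; i < sizeof(candidates) / sizeof(*candidates); i++)
        if (candidates[i]->available())
            return candidates[i];
    return NULL;    /* unreachable while the sw fallback is listed */
}

int main(void)
{
    printf("selected: %s\n", sha256_select()->name);
    return 0;
}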
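
The primitive behind the vectorised GCM implementations mentioned in the AES-GCM message is carry-less multiplication of polynomials over GF(2). Below is a plain-C sketch of the 64x64-bit product that _mm_clmulepi64_si128 (x86 PCLMUL) and vmull_p64 (Arm PMULL) compute in a single instruction; GCM's reduction modulo its fixed degree-128 polynomial is omitted.

#include <stdint.h>
#include <stdio.h>

/* Carry-less 64x64-bit multiplication: multiply two polynomials over
 * GF(2), represented as bit masks, giving a 128-bit product in
 * out[0] (low word) and out[1] (high word). */
static void clmul64(uint64_t a, uint64_t b, uint64_t out[2])
{
    uint64_t lo = 0, hi = 0;
    for (int i = 0; i < 64; i++) {
        if ((b >> i) & 1) {
            lo ^= a << i;                  /* low part of a * x^i   */
            if (i > 0)
                hi ^= a >> (64 - i);       /* bits shifted above 63 */
        }
    }
    out[0] = lo;
    out[1] = hi;
}

int main(void)
{
    uint64_t r[2];
    clmul64(0x2, 0x8000000000000000u, r);  /* x * x^63 = x^64 */
    printf("high=%016llx low=%016llx\n",
           (unsigned long long)r[1], (unsigned long long)r[0]);
    return 0;
}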
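
The ML-KEM message's idea of 'turning each bit into one of two maximally separated residues mod a prime q' and recovering it by rounding can be shown with a toy example using ML-KEM's modulus q = 3329. This is only that single encode-and-round step, not the scheme itself, and the helper names are invented.

#include <stdio.h>
#include <stdlib.h>

/* Toy per-bit encoding with q = 3329: a bit maps to either 0 or
 * round(q/2); a noisy coefficient is decoded back to a bit by asking
 * which of those two points it is closer to mod q. */
#define Q 3329
#define HALF_Q ((Q + 1) / 2)    /* 1665, the encoding of a 1 bit */

static int encode_bit(int bit) { return bit ? HALF_Q : 0; }

static int decode_coeff(int c)
{
    int dist_to_zero = c < Q - c ? c : Q - c;   /* distance to 0 mod q */
    int dist_to_half = abs(c - HALF_Q);         /* distance to q/2     */
    return dist_to_half < dist_to_zero;
}

int main(void)
{
    /* Noise of +/-700 is below q/4 (about 832), so both bits survive. */
    int noisy_one  = (encode_bit(1) + 700) % Q;
    int noisy_zero = (encode_bit(0) - 700 + Q) % Q;
    printf("decoded: %d %d\n",
           decode_coeff(noisy_one), decode_coeff(noisy_zero));
    return 0;
}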
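
Finally, a toy, scaled-down illustration of the reduction bias described in the RFC 6979 message: reducing a fixed-width random value modulo an order wider than that value changes nothing, so the top of the range is never reached. Here an 8-bit 'hash' stands in for the 512-bit SHA-512 output and a 9-bit 'order' for P-521's roughly 2^521 group order.

#include <stdio.h>

int main(void)
{
    const unsigned order = 400;             /* toy group order > 2^8 */
    unsigned reachable = 0;
    for (unsigned h = 0; h < 256; h++)      /* every 8-bit hash value */
        if (h % order > 255)                /* can the top ever be hit? */
            reachable++;
    printf("nonce values above 255 ever produced: %u (of %u in the range)\n",
           reachable, order);
    return 0;
}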
include(CheckCSourceCompiles)

function(test_compile_with_flags outvar)
  cmake_parse_arguments(OPT "" ""
    "GNU_FLAGS;MSVC_FLAGS;ADD_SOURCES_IF_SUCCESSFUL;TEST_SOURCE" "${ARGN}")

  # Figure out what flags are applicable to this compiler.
  set(flags)
  if(CMAKE_C_COMPILER_ID MATCHES "GNU" OR
      CMAKE_C_COMPILER_ID MATCHES "Clang")
    set(flags ${OPT_GNU_FLAGS})
  endif()
  if(CMAKE_C_COMPILER_ID MATCHES "MSVC")
    set(flags ${OPT_MSVC_FLAGS})
  endif()

  # See if we can compile the provided test program.
  foreach(i ${flags})
    set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} ${i}")
  endforeach()
  check_c_source_compiles("${OPT_TEST_SOURCE}" "${outvar}")

  if(${outvar} AND OPT_ADD_SOURCES_IF_SUCCESSFUL)
    # Make an object library that compiles the implementation with the
    # necessary flags, and add the resulting objects to the crypto
    # library.
    set(libname object_lib_${outvar})
    add_library(${libname} OBJECT ${OPT_ADD_SOURCES_IF_SUCCESSFUL})
    target_compile_options(${libname} PRIVATE ${flags})
    target_sources(crypto PRIVATE $<TARGET_OBJECTS:${libname}>)
  endif()

  # Export the output to the caller's scope, so that further tests can
  # be based on it.
  set(${outvar} ${${outvar}} PARENT_SCOPE)
endfunction()
# ----------------------------------------------------------------------
# Try to enable x86 intrinsics-based crypto implementations.
test_compile_with_flags(HAVE_WMMINTRIN_H
  GNU_FLAGS -msse4.1
  TEST_SOURCE "
    #include <wmmintrin.h>
    #include <smmintrin.h>
    volatile __m128i r, a, b;
    int main(void) { r = _mm_xor_si128(a, b); }")
if(HAVE_WMMINTRIN_H)
  test_compile_with_flags(HAVE_AES_NI
    GNU_FLAGS -msse4.1 -maes
    TEST_SOURCE "
      #include <wmmintrin.h>
      #include <smmintrin.h>
      volatile __m128i r, a, b;
      int main(void) { r = _mm_aesenc_si128(a, b); }"
    ADD_SOURCES_IF_SUCCESSFUL aes-ni.c)

  # shaintrin.h doesn't exist on all compilers; sometimes it's folded
  # into the other headers
  test_compile_with_flags(HAVE_SHAINTRIN_H
    GNU_FLAGS -msse4.1 -msha
    TEST_SOURCE "
      #include <wmmintrin.h>
      #include <smmintrin.h>
      #include <immintrin.h>
      #include <shaintrin.h>
      volatile __m128i r, a, b;
      int main(void) { r = _mm_xor_si128(a, b); }")
  if(HAVE_SHAINTRIN_H)
    set(include_shaintrin "#include <shaintrin.h>")
  else()
    set(include_shaintrin "")
  endif()

  test_compile_with_flags(HAVE_SHA_NI
    GNU_FLAGS -msse4.1 -msha
    TEST_SOURCE "
      #include <wmmintrin.h>
      #include <smmintrin.h>
      #include <immintrin.h>
      ${include_shaintrin}
      volatile __m128i r, a, b, c;
      int main(void) { r = _mm_sha256rnds2_epu32(a, b, c); }"
    ADD_SOURCES_IF_SUCCESSFUL sha256-ni.c sha1-ni.c)
  test_compile_with_flags(HAVE_CLMUL
    GNU_FLAGS -msse4.1 -mpclmul
    TEST_SOURCE "
      #include <wmmintrin.h>
      #include <tmmintrin.h>
      volatile __m128i r, a, b;
      int main(void) { r = _mm_clmulepi64_si128(a, b, 5);
                       r = _mm_shuffle_epi8(r, a); }"
    ADD_SOURCES_IF_SUCCESSFUL aesgcm-clmul.c)
endif()
# ----------------------------------------------------------------------
# Try to enable Arm Neon intrinsics-based crypto implementations.
# Start by checking which header file we need. ACLE specifies that it
# ought to be <arm_neon.h>, on both 32- and 64-bit Arm, but Visual
# Studio for some reason renamed the header to <arm64_neon.h> in
# 64-bit, and gives an error if you use the standard name. (However,
# clang-cl does let you use the standard name.)
test_compile_with_flags(HAVE_ARM_NEON_H
  MSVC_FLAGS -D_ARM_USE_NEW_NEON_INTRINSICS
  TEST_SOURCE "
    #include <arm_neon.h>
    volatile uint8x16_t r, a, b;
    int main(void) { r = veorq_u8(a, b); }")
if(HAVE_ARM_NEON_H)
  set(neon ON)
  set(neon_header "arm_neon.h")
else()
  test_compile_with_flags(HAVE_ARM64_NEON_H TEST_SOURCE "
    #include <arm64_neon.h>
    volatile uint8x16_t r, a, b;
    int main(void) { r = veorq_u8(a, b); }")
  if(HAVE_ARM64_NEON_H)
    set(neon ON)
    set(neon_header "arm64_neon.h")
    set(USE_ARM64_NEON_H ON)
  endif()
endif()
if(neon)
  # If we have _some_ NEON header, look for the individual things we
  # can enable with it.

  # The 'crypto' architecture extension includes support for AES,
  # SHA-1, and SHA-256.
  test_compile_with_flags(HAVE_NEON_CRYPTO
    GNU_FLAGS -march=armv8-a+crypto
    MSVC_FLAGS -D_ARM_USE_NEW_NEON_INTRINSICS
    TEST_SOURCE "
      #include <${neon_header}>
      volatile uint8x16_t r, a, b;
      volatile uint32x4_t s, x, y, z;
      int main(void) { r = vaeseq_u8(a, b); s = vsha256hq_u32(x, y, z); }"
    ADD_SOURCES_IF_SUCCESSFUL aes-neon.c sha256-neon.c sha1-neon.c)
  test_compile_with_flags(HAVE_NEON_PMULL
    GNU_FLAGS -march=armv8-a+crypto
    MSVC_FLAGS -D_ARM_USE_NEW_NEON_INTRINSICS
    TEST_SOURCE "
      #include <${neon_header}>
      volatile poly128_t r;
      volatile poly64_t a, b;
      volatile poly64x2_t u, v;
      int main(void) { r = vmull_p64(a, b); r = vmull_high_p64(u, v); }"
    ADD_SOURCES_IF_SUCCESSFUL aesgcm-neon.c)

  test_compile_with_flags(HAVE_NEON_VADDQ_P128
    GNU_FLAGS -march=armv8-a+crypto
    MSVC_FLAGS -D_ARM_USE_NEW_NEON_INTRINSICS
    TEST_SOURCE "
      #include <${neon_header}>
      volatile poly128_t r;
      int main(void) { r = vaddq_p128(r, r); }")

  # The 'sha3' architecture extension, despite the name, includes
  # support for SHA-512 (from the SHA-2 standard) as well as SHA-3
  # proper.
  #
  # Versions of clang up to and including clang 12 support this
  # extension in assembly language, but not the ACLE intrinsics for
  # it. So we check both.
  test_compile_with_flags(HAVE_NEON_SHA512_INTRINSICS
    GNU_FLAGS -march=armv8.2-a+crypto+sha3
    TEST_SOURCE "
      #include <${neon_header}>
      volatile uint64x2_t r, a, b;
      int main(void) { r = vsha512su0q_u64(a, b); }"
    ADD_SOURCES_IF_SUCCESSFUL sha512-neon.c)
  if(HAVE_NEON_SHA512_INTRINSICS)
    set(HAVE_NEON_SHA512 ON)
  else()
    test_compile_with_flags(HAVE_NEON_SHA512_ASM
      GNU_FLAGS -march=armv8.2-a+crypto+sha3
      TEST_SOURCE "
        #include <${neon_header}>
        volatile uint64x2_t r, a;
        int main(void) { __asm__(\"sha512su0 %0.2D,%1.2D\" : \"+w\" (r) : \"w\" (a)); }"
      ADD_SOURCES_IF_SUCCESSFUL sha512-neon.c)
    if(HAVE_NEON_SHA512_ASM)
      set(HAVE_NEON_SHA512 ON)
    endif()
  endif()
endif()
set(HAVE_AES_NI ${HAVE_AES_NI} PARENT_SCOPE)
set(HAVE_SHA_NI ${HAVE_SHA_NI} PARENT_SCOPE)
set(HAVE_SHAINTRIN_H ${HAVE_SHAINTRIN_H} PARENT_SCOPE)
set(HAVE_NEON_CRYPTO ${HAVE_NEON_CRYPTO} PARENT_SCOPE)
set(HAVE_NEON_SHA512 ${HAVE_NEON_SHA512} PARENT_SCOPE)
set(HAVE_NEON_SHA512_INTRINSICS ${HAVE_NEON_SHA512_INTRINSICS} PARENT_SCOPE)
set(USE_ARM64_NEON_H ${USE_ARM64_NEON_H} PARENT_SCOPE)