mirror of https://git.tartarus.org/simon/putty.git synced 2025-01-09 17:38:00 +00:00
Commit Graph

73 Commits

Author SHA1 Message Date
Simon Tatham
6520574e58 Side-channel-safe rewrite of the Miller-Rabin test.
Thanks to Mark Wooding for explaining the method of doing this. At
first glance it seemed _obviously_ impossible to run an algorithm that
needs an iteration per factor of 2 in p-1, without a timing leak
giving away the number of factors of 2 in p-1. But it's not, because
you can do the M-R checks interleaved with each step of your whole
modular exponentiation, and they're cheap enough that you can do them
in _every_ step, even the ones where the exponent is too small for M-R
to be interested in yet, and then do bitwise masking to exclude the
spurious results from the final output.
2021-08-27 18:04:49 +01:00
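
A minimal sketch of the bitwise-masking trick described above, using
invented helper names and plain 64-bit words rather than PuTTY's mp_int
machinery: every per-step check is computed unconditionally, and a
data-independent mask decides whether its verdict reaches the output, so
no branch or loop count depends on the secret.

    /*
     * Sketch only (not the PuTTY implementation): each check runs on
     * every step, and an all-zeroes/all-ones mask decides whether its
     * verdict is folded into the accumulated result, so control flow
     * never reveals how many factors of 2 divide p-1.
     */
    #include <stdint.h>

    /* All-ones if x == y, else zero, with no branches. */
    static uint64_t mask_if_eq(uint64_t x, uint64_t y)
    {
        uint64_t d = x ^ y;                     /* zero iff x == y */
        uint64_t nonzero = (d | (0 - d)) >> 63; /* 1 iff d != 0 */
        return nonzero - 1;                     /* 0 -> ~0, 1 -> 0 */
    }

    /* Fold one step's verdict into the accumulator only if this step is
     * one Miller-Rabin actually cares about. */
    static void accumulate(uint64_t *acc, uint64_t verdict, uint64_t live)
    {
        *acc |= verdict & live;
    }

In the real code the 'live' mask would mark the final squarings of the
exponentiation (where p-1 = d * 2^e), which is exactly the masking-out of
spurious results that the message refers to.
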
Simon Tatham
23431f8ff4 Add some tests of Miller-Rabin to cryptsuite.
I'm about to rewrite the Miller-Rabin testing code, so let's start by
introducing a test suite that the old version passes, and then I can
make sure the new one does too.
2021-08-27 17:43:40 +01:00
Simon Tatham
47c2bc38d1 New script contrib/proveprime.py.
This generates primality certificates for numbers, in the form of
Python / testcrypt code that calls Pockle methods. It factors p-1 by
calling out to the 'yafu' utility, which is a moderately sophisticated
integer factoring tool (including ECC and quadratic sieve methods)
that runs as a standalone command-line program.

Also added a Pockle test generated as output from this script, which
verifies the primality of the three NIST curves' moduli and their
generators' orders. I already had Pockle certificates for the moduli
and orders used in EdDSA, so this completes the set, and it does it
without me having had to do a lot of manual work.
2021-06-12 13:50:51 +01:00
Simon Tatham
fca13a17b1 Break up crypto modules containing HW acceleration.
This applies to all of AES, SHA-1, SHA-256 and SHA-512. All those
source files previously contained multiple implementations of the
algorithm, enabled or disabled by ifdefs detecting whether they would
work on a given compiler. And in order to get advanced machine
instructions like AES-NI or NEON crypto into the output file when the
compile flags hadn't enabled them, we had to do nasty stuff with
compiler-specific pragmas or attributes.

Now we can do the detection at cmake time, and enable advanced
instructions in the more sensible way, by compile-time flags. So I've
broken up each of these modules into lots of sub-pieces: a file called
(e.g.) 'foo-common.c' containing common definitions across all
implementations (such as round constants), one called 'foo-select.c'
containing the top-level vtable(s), and a separate file for each
implementation exporting just the vtable(s) for that implementation.

One advantage of this is that it depends a lot less on compiler-
specific bodgery. My particular least favourite part of the previous
setup was the part where I had to _manually_ define some Arm ACLE
feature macros before including <arm_neon.h>, so that it would define
the intrinsics I wanted. Now that I'm enabling interesting architecture
features in the normal way, on the compiler command line, there's no
need for that kind of trick: the right feature macros are already
defined and <arm_neon.h> does the right thing.

Another change in this reorganisation is that I've stopped assuming
there's just one hardware implementation per platform. Previously, the
accelerated vtables were called things like sha256_hw, and varied
between FOO-NI and NEON depending on platform; and the selection code
would simply ask 'is hw available? if so, use hw, else sw'. Now, each
HW acceleration strategy names its vtable its own way, and the
selection vtable has a whole list of possibilities to iterate over
looking for a supported one. So if someone feels like writing a second
accelerated implementation of something for a given platform - for
example, I've heard you can use plain NEON to speed up AES somewhat
even without the crypto extension - then it will now have somewhere to
drop in alongside the existing ones.
2021-04-21 21:55:26 +01:00
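
The selection pattern described above might look roughly like this; the
type and symbol names below are invented for illustration and are not the
actual PuTTY vtable layout:

    #include <stddef.h>

    typedef struct hash_impl {
        const char *name;
        int (*available)(void);   /* run-time CPU feature probe */
        void (*digest)(const void *data, size_t len, unsigned char *out);
    } hash_impl;

    /* Each of these would live in its own source file, e.g.
     * sha256-neon.c, sha256-ni.c, sha256-sw.c (names assumed). */
    extern const hash_impl sha256_neon_impl;
    extern const hash_impl sha256_ni_impl;
    extern const hash_impl sha256_sw_impl;   /* always available */

    static const hash_impl *const sha256_candidates[] = {
        &sha256_neon_impl,
        &sha256_ni_impl,
        &sha256_sw_impl,    /* software fallback goes last */
    };

    /* The 'foo-select.c' layer: walk the list and pick the first
     * implementation the current CPU supports. */
    const hash_impl *sha256_select(void)
    {
        for (size_t i = 0;
             i < sizeof(sha256_candidates) / sizeof(sha256_candidates[0]);
             i++) {
            if (sha256_candidates[i]->available())
                return sha256_candidates[i];
        }
        return NULL;   /* unreachable while the sw fallback is listed */
    }

Keeping the candidates in a list is what lets a second accelerated
implementation for the same platform simply be appended, as the final
paragraph anticipates.
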
Jacob Nevins
97137f5cfd PuTTYgen: explicitly use 'Kbyte' in Argon2 naming.
Instead of 'Kb', which could be misread as 'Kbit'.
2021-04-19 17:03:05 +01:00
Simon Tatham
1da353e649 Introduce OpenSSH-compatible SHA256 key fingerprinting.
There's a new enumeration of fingerprint types, and you tell
ssh2_fingerprint() or ssh2_fingerprint_blob() which of them to use.

So far, this is only implemented behind the scenes, and exposed for
testcrypt to test. All the call sites of ssh2_fingerprint pass a fixed
default fptype, which is still set to the old MD5. That will change
shortly.
2021-03-13 11:01:35 +00:00
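
The shape of that enumeration is presumably along these lines; the names
and the exact ssh2_fingerprint() signature are assumptions here, not
copied from the source:

    typedef enum FingerprintType {
        SSH_FPTYPE_MD5,
        SSH_FPTYPE_SHA256,
    } FingerprintType;

    /* so call sites choose explicitly, e.g. (signature assumed):
     *   char *fp = ssh2_fingerprint(key, SSH_FPTYPE_SHA256);
     */
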
Simon Tatham
e9aa28fe02 Restore the ability to write out PPK v2.
This commit adds the capability in principle to ppk_save_sb, by adding
a fmt_version field in the save parameters structure. As yet it's not
connected up to any user interface in PuTTYgen, but I think I'll need
to, because currently there's no way at all to convert PPK v3 back to
v2, and surely people will need to interoperate with older
installations of PuTTY, or with other PPK-consuming software.
2021-02-22 20:53:18 +00:00
Simon Tatham
08d17140a0 Introduce PPK file format version 3.
This removes both uses of SHA-1 in the file format: it was used as the
MAC protecting the key file against tampering, and also used in
the key derivation step that converted the user's passphrase to cipher
and MAC keys.

The MAC is simply upgraded from HMAC-SHA-1 to HMAC-SHA-256; it is
otherwise unchanged in how it's applied (in particular, to what data).

The key derivation is totally reworked, to be based on Argon2, which
I've just added to the code base. This should make stolen encrypted
key files more resistant to brute-force attack.

Argon2 has assorted configurable parameters for memory and CPU usage;
the new key format includes all those parameters. So there's no reason
we can't have them under user control, if a user wants to be
particularly vigorous or particularly lightweight with their own key
files. They could even switch to one of the other flavours of Argon2,
if they thought side channels were an especially large or small risk
in their particular environment. In this commit I haven't added any UI
for controlling that kind of thing, but the PPK loading function is
all set up to cope, so that can all be added in a future commit
without having to change the file format.

While I'm at it, I've also switched the CBC encryption to using a
random IV (or rather, one derived from the passphrase along with the
cipher and MAC keys). That's more like normal SSH-2 practice.
2021-02-20 16:57:47 +00:00
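
As a sketch of the key-derivation layout implied here (the byte counts
are my assumption, chosen to match AES-256-CBC plus HMAC-SHA-256; the
authoritative layout is the PPK version 3 specification in PuTTY's
documentation):

    #include <stdint.h>
    #include <string.h>

    struct ppk_keys {
        uint8_t cipher_key[32];   /* AES-256 key */
        uint8_t cbc_iv[16];       /* CBC IV, now derived rather than zero */
        uint8_t mac_key[32];      /* HMAC-SHA-256 key */
    };

    /* One Argon2 pass over the passphrase yields all three at once;
     * 'kdf_output' is its raw output, 32+16+32 bytes long. */
    static void split_ppk_key_material(const uint8_t kdf_output[80],
                                       struct ppk_keys *out)
    {
        memcpy(out->cipher_key, kdf_output,           32);
        memcpy(out->cbc_iv,     kdf_output + 32,      16);
        memcpy(out->mac_key,    kdf_output + 32 + 16, 32);
    }
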
Simon Tatham
0faeb82ccd Add implementation of the Argon2 password hash.
This is going to be used in the new version of the PPK file format. It
was the winner of the Password Hashing Competition, which I think makes
it a reasonable choice.
a reasonable choice.

Argon2 comes in three flavours: one with no data dependency in its
memory addressing, one with _deliberate_ data dependency (intended to
serialise computation, to hinder parallel brute-forcing), and a hybrid
form that starts off data-independent and then switches over to the
dependent version once the sensitive input data has been adequately
mixed around. I test all three in the test suite; the side-channel
tester can only expect Argon2i to pass; and, following the spec's
recommendation, I'll be using Argon2id for the actual key file
encryption.
2021-02-20 16:51:29 +00:00
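
For a sense of what those configurable parameters control, here is a
small example using the reference libargon2 (phc-winner-argon2) rather
than PuTTY's own implementation; the cost numbers are purely
illustrative:

    #include <argon2.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *passphrase = "correct horse battery staple";
        const unsigned char salt[16] = {0};  /* use a random salt for real */
        unsigned char key[80];               /* derived key material */

        /* t_cost = passes over memory, m_cost = memory in KiB,
         * parallelism = lanes; the -id variant is the hybrid flavour. */
        int rc = argon2id_hash_raw(3, 8192, 1,
                                   passphrase, strlen(passphrase),
                                   salt, sizeof salt,
                                   key, sizeof key);
        if (rc != ARGON2_OK) {
            fprintf(stderr, "argon2: %s\n", argon2_error_message(rc));
            return 1;
        }
        printf("derived %zu bytes of key material\n", sizeof key);
        return 0;
    }

Swapping argon2id_hash_raw for argon2i_hash_raw or argon2d_hash_raw
selects the other two flavours discussed above.
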
Simon Tatham
5c8f3bf924 Add an implementation of BLAKE2b.
I have no plans to use this directly, but it's a component of Argon2,
which I'm about to add in the next commit.
2021-02-20 16:49:52 +00:00
Simon Tatham
c61158aa34 Add an IV argument to aes_{en,de}crypt_pubkey.
No functional change: currently, the IV passed in is always zero
(except in the test suite). But this prepares to change that in a
future revision of the key file format.
2021-02-20 16:49:52 +00:00
Simon Tatham
a9763ce4ed Hardware-accelerated SHA-512 on the Arm architecture.
The NEON support for SHA-512 acceleration looks very like SHA-256,
with a pair of chained instructions to generate a 128-bit vector
register full of message schedule, and another pair to update the hash
state based on those. But since SHA-512 is twice as big in all
dimensions, those four instructions between them only account for two
rounds of it, in place of four rounds of SHA-256.

Also, it's a tighter squeeze to fit all the data needed by those
instructions into their limited number of register operands. The NEON
SHA-256 implementation was able to keep its hash state and message
schedule stored as 128-bit vectors and then pass combinations of those
vectors directly to the instructions that did the work; for SHA-512,
in several places you have to make one of the input operands to the
main instruction by combining two halves of different vectors from
your existing state. But that operation is a quick single EXT
instruction, so no trouble.

The only other problem I've found is that clang - in particular the
version on M1 macOS, but as far as I can tell, even on current trunk -
doesn't seem to implement the NEON intrinsics for the SHA-512
extension. So I had to bodge my own versions with inline assembler in
order to get my implementation to compile under clang. Hopefully at
some point in the future the gap might be filled and I can relegate
that to a backwards-compatibility hack!

This commit adds the same kind of switching mechanism for SHA-512 that
we already had for SHA-256, SHA-1 and AES, and as with all of those,
plumbs it through to testcrypt so that you can explicitly ask for the
hardware or software version of SHA-512. So the test suite can run the
standard test vectors against both implementations in turn.

On M1 macOS, I'm testing at run time for the presence of SHA-512 by
checking a sysctl setting. You can perform the same test on the
command line by running "sysctl hw.optional.armv8_2_sha512".

As far as I can tell, on Windows there is not yet any flag to test for
this CPU feature, so for the moment, the new accelerated SHA-512 is
turned off unconditionally on Windows.
2020-12-24 15:39:54 +00:00
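
The run-time check on macOS can be done in a handful of lines; this
standalone sketch uses the sysctl name quoted above and nothing
PuTTY-specific:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    static int cpu_has_sha512_ext(void)
    {
        int supported = 0;
        size_t len = sizeof supported;
        if (sysctlbyname("hw.optional.armv8_2_sha512",
                         &supported, &len, NULL, 0) != 0)
            return 0;   /* sysctl missing: assume no support */
        return supported != 0;
    }

    int main(void)
    {
        printf("SHA-512 instructions %savailable\n",
               cpu_has_sha512_ext() ? "" : "not ");
        return 0;
    }
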
Simon Tatham
7003b43963 Stop using mp_int in sshprng.c.
We keep an internal 128-bit counter that's used as part of the hash
preimages. There's no real need to import all the mp_int machinery in
order to implement that: we can do it by hand using a small fixed-size
array and a trivial use of BignumADC. This is another inter-module
dependency that's easy to remove and useful to spinoff programs.

This changes the hash preimage calculation in the PRNG, because we're
now formatting our 128-bit integer in the fixed-length representation
of 16 little-endian bytes instead of as an SSH-2 mpint. This is
harmless (perhaps even mildly beneficial, due to the length now not
depending on how long the PRNG has been running), but means I have to
update the PRNG tests as well.
2020-09-13 09:11:31 +01:00
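
The fixed-size-array approach is simple enough to sketch in full. This
stand-alone version (not the PuTTY code, which builds on its
BignumInt/BignumADC machinery) keeps the counter as two 64-bit words and
serialises it as 16 little-endian bytes, matching the new preimage
format described above:

    #include <stdint.h>

    typedef struct { uint64_t w[2]; } counter128;   /* w[0] = low word */

    static void counter128_inc(counter128 *c)
    {
        c->w[0]++;
        if (c->w[0] == 0)       /* carry out of the low 64 bits */
            c->w[1]++;
    }

    /* Fixed-length encoding: 16 little-endian bytes, independent of how
     * long the PRNG has been running. */
    static void counter128_encode_le(const counter128 *c, uint8_t out[16])
    {
        for (int i = 0; i < 16; i++)
            out[i] = (uint8_t)(c->w[i / 8] >> (8 * (i % 8)));
    }
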
Simon Tatham
2ec2b796ed Migrate all Python scripts to Python 3.
Most of them are now _mandatory_ P3 scripts, because I'm tired of
maintaining everything to be compatible with both versions.

The current exceptions are gdb.py (which has to live with whatever gdb
gives it), and kh2reg.py (which is actually designed for other people
to use, and some of them might still be stuck on P2 for the moment).
2020-03-04 21:23:49 +00:00
Simon Tatham
289d8873ec Fix mp_{eq,hs}_integer(tiny, huge).
The comparison functions between an mp_int and an integer worked by
walking along the mp_int, comparing each of its words to the
corresponding word of the integer. When they ran out of mp_int, they'd
stop.

But this overlooks the possibility that they might not have run out of
_integer_ yet! If BIGNUM_INT_BITS is defined to be less than the size
of a uintmax_t, then comparing (say) the uintmax_t 0x8000000000000001
against a one-word mp_int containing 0x0001 would return equality,
because it would never get as far as spotting the high bit of the
integer.

Fixed by iterating up to the max of the number of BignumInts in the
mp_int and the number needed to cover a uintmax_t. That means we have to
use mp_word() instead of a direct array lookup to get the mp_int words
to compare against, since now the word indices might be out of range.
2020-03-02 18:42:31 +00:00
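
The corrected loop shape looks roughly like this; the types are
deliberately stand-ins (a 16-bit word, mirroring the failing
small-BignumInt configuration) rather than PuTTY's mp_int, and the
accessor plays the role of mp_word():

    #include <stddef.h>
    #include <stdint.h>

    typedef uint16_t word_t;   /* like a 16-bit BignumInt build */
    #define WORD_BITS 16

    typedef struct { size_t nw; const word_t *w; } bignum;

    static word_t bignum_word(const bignum *x, size_t i)
    {
        return i < x->nw ? x->w[i] : 0;   /* safe beyond the end */
    }

    static int bignum_eq_integer(const bignum *x, uintmax_t n)
    {
        size_t int_words =
            (sizeof(uintmax_t) * 8 + WORD_BITS - 1) / WORD_BITS;
        size_t limit = x->nw > int_words ? x->nw : int_words;
        word_t diff = 0;

        for (size_t i = 0; i < limit; i++) {
            word_t xi = bignum_word(x, i);
            word_t ni = i < int_words
                ? (word_t)(n >> (i * WORD_BITS)) : 0;
            diff |= (word_t)(xi ^ ni);    /* no early exit, mpint-style */
        }
        return diff == 0;   /* 0x8000000000000001 != 0x0001 now */
    }
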
Simon Tatham
a085acbadf Support the new "ssh-ed448" key type.
This is standardised by RFC 8709 at SHOULD level, and for us it's not
too difficult (because we use general-purpose elliptic-curve code). So
let's be up to date for a change, and add it.

This implementation uses all the formats defined in the RFC. But we
also have to choose a wire format for the public+private key blob sent
to an agent, and since the OpenSSH agent protocol is the de facto
standard but not (yet?) handled by the IETF, OpenSSH themselves get to
say what the format for a key should or shouldn't be. So if they don't
support a particular key method, what do you do?

I checked with them, and they agreed that there's an obviously right
format for Ed448 keys, which is to do them exactly like Ed25519 except
that you have a 57-byte string everywhere Ed25519 had a 32-byte
string. So I've done that.
2020-03-02 07:09:08 +00:00
Simon Tatham
b8a08f9321 Implement the SHA-3 family.
These aren't used _directly_ by SSH at present, but an instance of
SHAKE-256 is required by the recently standardised Ed448.
2020-03-02 06:55:48 +00:00
Simon Tatham
31e5b621b5 Implement "curve448-sha512" kex, from RFC 8731.
With all the preparation now in place, this is more or less trivial.
We add a new curve setup function in sshecc.c, and an ssh_kex linking
to it; we add the curve parameters to the reference / test code
eccref.py, and use them to generate the list of low-order input values
that should be rejected by the sanity check on the kex output; we add
the standard test vectors from RFC 7748 in cryptsuite.py, and the
low-order values we just generated.
2020-03-01 21:13:59 +00:00
Simon Tatham
2be70baa0d New 'Pockle' object, for verifying primality.
This implements an extended form of primality verification using
certificates based on Pocklington's theorem. You make a Pockle object,
and then try to convince it that one number after another is prime, by
means of providing it with a list of prime factors of p-1 and a
primitive root. (Or just by saying 'this prime is small enough for you
to check yourself'.)

Pocklington's theorem requires you to have factors of p-1 whose
product is at least the square root of p. I've extended that to
support factorisations only as big as the cube root, via an extension
of the theorem given in Maurer's paper on generating provable primes.

The Pockle object is more or less write-only: it has no methods for
reading out its contents. Its only output channel is the return value
when you try to insert a prime into it: if it isn't sufficiently
convinced that your prime is prime, it will return an error code. So
anything for which it returns POCKLE_OK you can be confident of.

I'm going to use this for provable prime generation. But exposing this
part of the system as an object in its own right means I can write a
set of unit tests for this specifically. My negative tests exercise
all the different ways a certification can be erroneous or inadequate;
the positive tests include proofs of primality of various primes used
in elliptic-curve crypto. The Poly1305 proof in particular is taken
from a proof in DJB's paper, which has exactly the form of a
Pocklington certificate only written in English.
2020-03-01 20:09:01 +00:00
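
For reference, the condition such a certificate establishes is the
standard Pocklington criterion, stated here in its usual textbook form
rather than quoted from the PuTTY source: write p - 1 = F R with the
complete factorisation of F known and F at least the square root of p
(the cube-root extension mentioned above needs further checks beyond
this basic form). Then, for a witness a,

    a^{p-1} \equiv 1 \pmod{p}
    \quad\text{and}\quad
    \gcd\!\left(a^{(p-1)/q} - 1,\ p\right) = 1
    \quad\text{for every prime } q \mid F
    \;\Longrightarrow\; p \text{ is prime.}

The 'primitive root' supplied to the Pockle object plays the role of the
witness a.
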
Simon Tatham
20a9912c7c Add mp_copy_integer_into function.
Even simpler than the existing mp_add_integer_into.
2020-03-01 20:09:01 +00:00
Simon Tatham
6b27999500 Add mp_nthroot function.
This takes ordinary integer square and cube roots (i.e. not mod
anything) of mp_ints.
2020-03-01 20:09:01 +00:00
Simon Tatham
63b8f537f2 New API for primegen(), using PrimeCandidateSource.
The more features and options I add to PrimeCandidateSource, the more
cumbersome it will be to replicate each one in a command-line option
to the ultimate primegen() function. So I'm moving to an API in which
the client of primegen() constructs a PrimeCandidateSource themself,
and passes it in to primegen().

Also, changed the API for pcs_new() so that you don't have to pass
'firstbits' unless you really want to. The net effect is that even
though we've added flexibility, we've also simplified the call sites
of primegen() in the simple case: if you want a 1234-bit prime, you
just need to pass pcs_new(1234) as the argument to primegen, and
you're done.

The new declaration of primegen() lives in ssh_keygen.h, along with
all the types it depends on. So I've had to #include that header in a
few new files.
2020-02-29 13:55:41 +00:00
Simon Tatham
7751657811 Reject all low-order points in Montgomery key exchange.
This expands our previous check for the public value being zero, to
take in all the values that will _become_ zero after not many steps.

The actual check at run time is done using the new is_infinite query
method for Montgomery curve points. Test cases in cryptsuite.py cover
all the dangerous values I generated via all that fiddly quartic-
solving code.

(DJB's page http://cr.yp.to/ecdh.html#validate also lists these same
constants. But working them out again for myself makes me confident I
can do it again for other similar curves, such as Curve448.)

In particular, this makes us fully compliant with RFC 7748's demand to
check we didn't generate a trivial output key, which can happen if the
other end sends any of those low-order values.

I don't actually see why this is a vital check to perform for security
purposes, for the same reason that we didn't classify the bug
'diffie-hellman-range-check' as a vulnerability: I can't really see
what the other end's incentive might be to deliberately send one of
these nonsense values (and you can't do it by accident - none of these
values is a power of the canonical base point). It's not that a DH
participant couldn't possibly want to secretly expose the session
traffic - but there are plenty of more subtle (and less subtle!) ways
to do it, so you don't really gain anything by forcing them to use one
of those instead. But the RFC says to check, so we check.
2020-02-28 20:48:52 +00:00
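
In outline the run-time check amounts to the following; the type and
function names are stand-ins for PuTTY's Montgomery-curve API, not the
real identifiers:

    #include <stdbool.h>

    typedef struct MontgomeryPoint MontgomeryPoint;   /* opaque stand-in */

    /* Stand-in declarations for the curve operations: */
    MontgomeryPoint *curve_scalarmult(const MontgomeryPoint *pub,
                                      const unsigned char *scalar);
    bool curve_point_is_identity(const MontgomeryPoint *p);

    /* Reject the trivial output key RFC 7748 warns about: any of the
     * low-order public values collapses to the identity here. */
    bool derive_shared_secret(const MontgomeryPoint *peer_public,
                              const unsigned char *our_scalar,
                              MontgomeryPoint **out)
    {
        MontgomeryPoint *k = curve_scalarmult(peer_public, our_scalar);
        if (curve_point_is_identity(k)) {
            *out = NULL;
            return false;   /* abandon the key exchange */
        }
        *out = k;
        return true;
    }
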
Simon Tatham
c9a8fa639e New query function ecc_montgomery_is_identity.
To begin with, this allows me to add a regression test for the change
in the previous commit.
2020-02-28 20:40:08 +00:00
Simon Tatham
da3bc3d927 Refactor generation of candidate integers in primegen.
I've replaced the random number generation and small delta-finding
loop in primegen() with a much more elaborate system in its own source
file, with unit tests and everything.

Immediate benefits:

 - fixes a theoretical possibility of overflowing the target number of
   bits, if the random number was so close to the top of the range
   that the addition of delta * factor pushed it over. However, this
   only happened with negligible probability.

 - fixes a directional bias in delta-finding. The previous code
   incremented the number repeatedly until it found a value coprime to
   all the right things, which meant that a prime preceded by a
   particularly long sequence of numbers with tiny factors was more
   likely to be chosen. Now that we select candidate delta values at
   random, that bias should be eliminated.

 - changes the semantics of the outermost primegen() function to make
   them easier to use, because now the caller specifies the 'bits' and
   'firstbits' values for the actual returned prime, rather than
   having to account for the factor you're multiplying it by in DSA.
   DSA client code is correspondingly adjusted.

Future benefits:

 - having the candidate generation in a separate function makes it
   easy to reuse in alternative prime generation strategies

 - the available constraints support applications such as Maurer's
   algorithm for generating provable primes, or strong primes for RSA
   in which both p-1 and p+1 have a large factor. So those become
   things we could experiment with in future.
2020-02-23 15:47:44 +00:00
Simon Tatham
dfddd1381b testcrypt: allow random_read() to use a full PRNG.
This still isn't the true random generator used in the live tools:
it's deterministic, for repeatable testing. The Python side of
testcrypt can now call random_make_prng(), which will instantiate a
PRNG with the given seed. random_clear() still gets rid of it.

So I can still have some tests control the precise random numbers
received by the function under test, but for others (especially key
generation, with its uncertainty about how much randomness it will
actually use) I can just say 'here, have a seed, generate as much
stuff from that seed as you need'.
2020-02-23 15:01:55 +00:00
Simon Tatham
2debb352b0 mpint: add a gcd function.
This is another application of the existing mp_bezout_into, which
needed a tweak or two to cope with the numbers not necessarily being
coprime, plus a wrapper function to deal with shared factors of 2.

It reindents the entire second half of mp_bezout_into, so the patch is
best viewed with whitespace differences ignored.
2020-02-23 14:49:54 +00:00
Simon Tatham
18678ba9bc mpint: add mp_[lr]shift_safe_into functions.
There was previously no safe left shift at all, which is an omission.
And rshift_safe_into was an odd thing to be missing, so while I'm
here, I've added it on the basis that it will probably be useful
sooner or later.
2020-02-23 14:49:54 +00:00
Simon Tatham
82df83719a Test passing null pointers to mp_divmod_into.
I've got opt_val_mpint already in the test system, so it makes
sense to use it.
2020-02-23 12:02:44 +00:00
Simon Tatham
921118dbea Fix handling of large RHS in mp_add_integer_into.
While looking over the code for other reasons, I happened to notice
that the internal function mp_add_masked_integer_into was using a
totally wrong condition to check whether it was about to do an
out-of-range right shift: it was comparing a shift count measured in
bits against BIGNUM_INT_BYTES.

The resulting bug hasn't shown up in the code so far, which I assume
is just because no caller is passing any RHS to mp_add_integer_into
bigger than about 1 or 2. And it doesn't show up in the test suite
because I hadn't tested those functions. Now I am testing them, and
the newly added test fails when built for 16-bit BignumInt if you back
out the actual fix in this commit.
2020-02-22 18:51:43 +00:00
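
The essence of the fix is a guard of this shape (illustrative code with
a 16-bit word, matching the build that exposed the bug): the shift count
is measured in bits, so it must be compared against the word width in
bits, and an out-of-range count must yield zero rather than being handed
to the C shift operator, which would be undefined behaviour.

    #include <stdint.h>

    typedef uint16_t word_t;
    #define WORD_BITS 16   /* the width in bits, not sizeof(word_t) */

    static word_t rshift_word(word_t w, unsigned bits)
    {
        /* comparing against a BYTES constant here was the original bug */
        return bits < WORD_BITS ? (word_t)(w >> bits) : 0;
    }
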
Simon Tatham
404f558705 sshprng.c: remove pointless pending_output buffer.
In an early draft of the new PRNG, before I decided to get rid of
random_byte() and replace it with random_read(), it was important
after generating a hash-worth of PRNG output to buffer it so as to
return it a byte at a time. So the PRNG data structure itself had to
keep a hash-sized buffer of pending output, and be able to return the
next byte from it on every random_byte() call.

But when random_read() came in, there was no need to do that any more,
because at the end of a read, the generator is re-seeded and the
remains of any generated data is deliberately thrown away. So the
pending_output buffer has no need to live in the persistent prng
object; it can be relegated to a local variable inside random_read
(and a couple of other functions that used the same buffer since it
was conveniently there).

A side effect of this is that we're no longer yielding the bytes of
each hash in reverse order, because only the previous silly code
structure made it convenient. Fortunately, of course, nothing is
depending on that - except the cryptsuite tests, which I've updated.
2020-01-26 16:37:48 +00:00
Simon Tatham
8c7b0a787f New test script 'agenttest.py' for testing Pageant.
Well, actually, two new test programs. agenttest.py is the actual
test; it depends on agenttestgen.py which generates a collection of
test private keys, using the newly exposed testcrypt interface to our
key generation code.

In this commit I've also factored out some Python SSH marshalling code
from cryptsuite, and moved it into a module ssh.py which the agent
tests can reuse.
2020-01-09 19:57:35 +00:00
Simon Tatham
d30387c780 Add cryptsuite tests for key file load and save.
This adds stability tests (of the form 'make sure this behaves
tomorrow the same way it behaved today, taking on faith that the
latter was right') for all the new in-memory APIs for public and
private key load/save.
2020-01-09 19:57:35 +00:00
Simon Tatham
859c81e838 Add a test for RSA key exchange.
It demonstrates a successful round trip from a source integer to
ciphertext and back, and also I've hardcoded the ciphertext I got from
the first attempt so that future changes to the code won't be able to
change it without me noticing.
2019-12-15 20:21:50 +00:00
Simon Tatham
5d718ef64b Whitespace rationalisation of entire code base.
The number of people who read our source code with an editor that
thinks tab stops are 4 spaces apart, as opposed to the traditional
tty-derived 8 that the PuTTY code expects, has been steadily increasing.

So I've been wondering for ages about just fixing it, and switching to
a spaces-only policy throughout the code. And I recently found out
about 'git blame -w', which should make this change not too disruptive
for the purposes of source-control archaeology; so perhaps now is the
time.

While I'm at it, I've also taken the opportunity to remove all the
trailing spaces from source lines (on the basis that git dislikes
them, and is the only thing that seems to have a strong opinion one
way or the other).
    
Apologies to anyone downstream of this code who has complicated patch
sets to rebase past this change. I don't intend it to be needed again.
2019-09-08 20:29:21 +01:00
Simon Tatham
1cd935e6c9 cryptsuite: add a test of rsa_verify.
This includes a regression test for the p=1 bug I just fixed, and also
adds some more basic tests just because it seemed silly not to.
2019-04-28 10:00:56 +01:00
Simon Tatham
b5597cc833 Fix indentation goof in CRC test suite.
In crypt.testCRC32(), I had intended to test every input byte with
each of several previous states, but I mis-indented what should have
been the inner loop (over bytes), with the effect that instead I
silently tested the input bytes with only the last of those states.
2019-04-12 23:41:28 +01:00
Simon Tatham
a956da6e5b cryptsuite: add a general test of ssh_key methods.
This is the test that would have caught the bug described in 867e69187
if I'd got round to writing it before releasing 0.71. Stable door now
shut.
2019-03-24 10:20:44 +00:00
Simon Tatham
6ecc16fc4b cryptsuite: clean up exit handling.
Now we only run the final memory-leak check if we didn't already have
some other error to report, or some other exception that terminated
the process.

Also, we wait for the subprocess to terminate before returning control
to the shell, so that any last-minute complaints from Leak Sanitiser
appear before rather than after the shell prompt comes back.

While I'm here, I've also made check_return_status tolerate the case
in which the child process never got started at all. That way, if a
failure manages to occur before even getting _that_ far, there won't
be a cascade failure from check_return_status getting confused
afterwards.
2019-03-24 10:18:16 +00:00
Simon Tatham
c0e62e97bb Curve25519: add test vectors from RFC 7748.
My API for ECDH KEX doesn't provide a function to input the random
bytes from which the private key is derived, but conveniently, the
existing call to random_read() in ssh_ecdhkex_m_setup treats the
provided bytes in exactly the way that these test vectors expect.

One of these tests also exercises the 'reduction mod 2^255' case that
I just added.
2019-03-23 08:42:21 +00:00
Simon Tatham
8957e613bc Add missing sanity checks in ssh_dss_verify.
The standard says we should be checking that both r,s are in the range
[1,q-1]. Previously we were effectively reducing s mod q in the course
of inversion, and modinv() was guaranteeing never to return zero; the
remaining missing checks were benign. But the change from Bignum to
mp_int altered the error behaviour, and combined with the missing
upper bound check on s, made it possible to continue verification with
w == 0 mod q, which is a bad case.

Added a small DSA test case, including a check that none of these
types of signatures validates.
2019-02-10 20:10:41 +00:00
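
The added checks reduce to the following shape (stand-in big-integer
helpers here, standing in for PuTTY's mp_int comparison functions):

    #include <stdbool.h>

    typedef struct bigint bigint;                 /* stand-in for mp_int */
    bool big_is_zero(const bigint *x);            /* stand-in helpers */
    bool big_greater_or_equal(const bigint *x, const bigint *y);

    /* r and s must both lie in [1, q-1] before s is inverted mod q,
     * so w = s^-1 can never be formed from s == 0 mod q. */
    static bool dsa_sig_in_range(const bigint *r, const bigint *s,
                                 const bigint *q)
    {
        if (big_is_zero(r) || big_is_zero(s))
            return false;
        if (big_greater_or_equal(r, q) || big_greater_or_equal(s, q))
            return false;
        return true;
    }
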
Simon Tatham
bfae3ee96e mpint: add a few simple bitwise operations.
I want to use mp_xor_into as part of an upcoming test program, and
while I'm at it, I thought I'd add a few other obvious bitops too.
2019-02-09 14:10:30 +00:00
Simon Tatham
bd84c5e4b3 mp_modmul: cope with oversized base values.
Previously, I checked by assertion that the base was less than the
modulus. There were two things wrong with this policy. Firstly, it's
perfectly _meaningful_ to want to raise a large number to a power mod
a smaller number, even if it doesn't come up often in cryptography;
secondly, I didn't do it right, because the check was based on the
formal sizes (nw fields) of the mp_ints, which meant that it was
possible to have a failure of the assertion even in the case where the
numerical value of the base _was_ less than the modulus.

In particular, this could come up in Diffie-Hellman with a fixed
group, because the fixed group modulus was decoded from an MP_LITERAL
in sshdh.c which gave a minimal value of nw, but the base was the
public value sent by the other end of the connection, which would
sometimes be sent with the leading zero byte required by the SSH-2
mpint encoding, and would cause a value of nw one larger, failing the
assertion.

Fixed by simply using mp_modmul in monty_import, replacing the
previous clever-but-restricted strategy that I wrote when I thought I
could get away without having to write a general division-based
modular reduction at all.
2019-02-04 20:32:31 +00:00
Simon Tatham
10f80777de Add "cbc" suffix to ciphers in testcrypt's namespace.
This completes the conversion begun in commit be5c0e635: now every
CBC-mode cipher has "cbc" in its name, and doesn't leave it implicit.
Hopefully this will never confuse me again!
2019-02-04 20:32:31 +00:00
Simon Tatham
6e7df89316 Fix buffer overrun in mp_from_decimal("").
The loop over the input string assumed it could read _one_ byte safely
before reaching the initial termination test.
2019-01-29 20:54:19 +00:00
Simon Tatham
5017d0a6ca mpint.c: outlaw mp_ints with nw==0.
Some functions got confused if given one as input (particularly
mp_get_decimal, which assumed it could safely write at least one word
into the inv5 value it makes internally), and I've decided it's easier
to stop them ever being created than to teach everything to handle
them correctly. So now mp_make_sized enforces nw != 0 by assertion,
and I've added a max at any call site that looked as if it might
violate that precondition.

mp_from_hex("") could generate one of these, in particular, so now
I've fixed it, I've added a test to make sure it continues doing
something sensible.
2019-01-29 20:54:19 +00:00
Simon Tatham
d4ad7272fd Add functions mp_max and mp_max_into.
These are easy, and just like the existing mp_min family; I just
hadn't needed them before now.
2019-01-29 20:03:52 +00:00
Simon Tatham
de2667f951 cryptsuite: stop failing if hardware AES is unavailable.
In the new testSSHCiphers function, I forgot to put in the check for
None that I put in all the other functions that try to explicitly
instantiate hardware-accelerated AES.
2019-01-25 20:31:05 +00:00
Simon Tatham
ca361fd77f cryptsuite: switch #! line to Python 3.
Since I apparently can't reliably keep this script working on both
flavours of Python, I think these days I'd rather it broke on 2 than
on 3 due to my inattention. So let's default to 3.
2019-01-25 20:20:37 +00:00
Simon Tatham
0e9ad99c04 testcrypt / cryptsuite: another set of Python 3 fixes.
One of these days I'll manage not to mess this up in every new test
I add ... perhaps.
2019-01-23 23:40:32 +00:00