mirror of https://git.tartarus.org/simon/putty.git synced 2025-01-09 09:27:59 +00:00
Commit Graph

106 Commits

Author SHA1 Message Date
Simon Tatham
e9aa28fe02 Restore the ability to write out PPK v2.
This commit adds the capability in principle to ppk_save_sb, by adding
a fmt_version field in the save parameters structure. As yet it's not
connected up to any user interface in PuTTYgen, but I think I'll need
to, because currently there's no way at all to convert PPK v3 back to
v2, and surely people will need to interoperate with older
installations of PuTTY, or with other PPK-consuming software.
2021-02-22 20:53:18 +00:00
Simon Tatham
08d17140a0 Introduce PPK file format version 3.
This removes both uses of SHA-1 in the file format: it was used as the
MAC protecting the key file against tampering, and also used in
the key derivation step that converted the user's passphrase to cipher
and MAC keys.

The MAC is simply upgraded from HMAC-SHA-1 to HMAC-SHA-256; it is
otherwise unchanged in how it's applied (in particular, to what data).

The key derivation is totally reworked, to be based on Argon2, which
I've just added to the code base. This should make stolen encrypted
key files more resistant to brute-force attack.

Argon2 has assorted configurable parameters for memory and CPU usage;
the new key format includes all those parameters. So there's no reason
we can't have them under user control, if a user wants to be
particularly vigorous or particularly lightweight with their own key
files. They could even switch to one of the other flavours of Argon2,
if they thought side channels were an especially large or small risk
in their particular environment. In this commit I haven't added any UI
for controlling that kind of thing, but the PPK loading function is
all set up to cope, so that can all be added in a future commit
without having to change the file format.

While I'm at it, I've also switched the CBC encryption to using a
random IV (or rather, one derived from the passphrase along with the
cipher and MAC keys). That's more like normal SSH-2 practice.
2021-02-20 16:57:47 +00:00
Simon Tatham
0faeb82ccd Add implementation of the Argon2 password hash.
This is going to be used in the new version of the PPK file format. It
was the winner of the Password Hashing Competition, which I think makes it
a reasonable choice.

Argon2 comes in three flavours: one with no data dependency in its
memory addressing, one with _deliberate_ data dependency (intended to
serialise computation, to hinder parallel brute-forcing), and a hybrid
form that starts off data-independent and then switches over to the
dependent version once the sensitive input data has been adequately
mixed around. I test all three in the test suite; the side-channel
tester can only expect Argon2i to pass; and, following the spec's
recommendation, I'll be using Argon2id for the actual key file
encryption.
2021-02-20 16:51:29 +00:00
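
For reference, the three flavours and the tunable cost parameters can be
exercised from Python via the third-party argon2-cffi package. This is a
sketch of the primitive itself under made-up parameter values, not of
PuTTY's implementation or of the PPK key-derivation layout:

    from argon2.low_level import hash_secret_raw, Type

    # Illustrative parameter values only; they are not PuTTY's defaults.
    for flavour in (Type.D, Type.I, Type.ID):   # data-dependent, -independent, hybrid
        out = hash_secret_raw(
            secret=b'correct horse battery staple',
            salt=b'0123456789abcdef',
            time_cost=3,          # passes over the memory area
            memory_cost=8192,     # memory to use, in KiB
            parallelism=1,        # number of lanes
            hash_len=32,          # bytes of derived output
            type=flavour,
        )
        print(flavour, out.hex())
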
Simon Tatham
5c8f3bf924 Add an implementation of BLAKE2b.
I have no plans to use this directly, but it's a component of Argon2,
which I'm about to add in the next commit.
2021-02-20 16:49:52 +00:00
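
Python's standard hashlib includes BLAKE2b, which makes a convenient
independent implementation to cross-check test vectors against (a usage
sketch, not PuTTY's code):

    import hashlib

    # BLAKE2b with the full 64-byte (512-bit) digest, the size Argon2 builds on.
    print(hashlib.blake2b(b'abc', digest_size=64).hexdigest())
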
Simon Tatham
c61158aa34 Add an IV argument to aes_{en,de}crypt_pubkey.
No functional change: currently, the IV passed in is always zero
(except in the test suite). But this prepares to change that in a
future revision of the key file format.
2021-02-20 16:49:52 +00:00
Simon Tatham
8af1d90dca Test program for ancillary window updates.
I've just done a major rewrite of code structure and update policy for
most of the TermWin window-modification methods, and I wrote this test
program in the process to check that old and new versions of the
terminal still respond to all these escape sequences in the same way.
It's quite likely to come in useful again, so I'll commit it.
2021-02-07 19:59:21 +00:00
Simon Tatham
9e1ec093fd testcrypt: fix fake class methods on MACs.
I had the wrong function name prefix in the method_prefixes array: the
MAC functions all begin with ssh2_mac_* instead of ssh_mac_*. As a
result, MAC objects in the Python testcrypt system didn't provide
OO-like methods such as m.update() and m.genresult(); instead you had
to say ssh2_mac_update(m, ...) and ssh2_mac_genresult(m).
2021-02-02 18:17:46 +00:00
Simon Tatham
a9763ce4ed Hardware-accelerated SHA-512 on the Arm architecture.
The NEON support for SHA-512 acceleration looks very like SHA-256,
with a pair of chained instructions to generate a 128-bit vector
register full of message schedule, and another pair to update the hash
state based on those. But since SHA-512 is twice as big in all
dimensions, those four instructions between them only account for two
rounds of it, in place of four rounds of SHA-256.

Also, it's a tighter squeeze to fit all the data needed by those
instructions into their limited number of register operands. The NEON
SHA-256 implementation was able to keep its hash state and message
schedule stored as 128-bit vectors and then pass combinations of those
vectors directly to the instructions that did the work; for SHA-512,
in several places you have to make one of the input operands to the
main instruction by combining two halves of different vectors from
your existing state. But that operation is a quick single EXT
instruction, so no trouble.

The only other problem I've found is that clang - in particular the
version on M1 macOS, but as far as I can tell, even on current trunk -
doesn't seem to implement the NEON intrinsics for the SHA-512
extension. So I had to bodge my own versions with inline assembler in
order to get my implementation to compile under clang. Hopefully at
some point in the future the gap might be filled and I can relegate
that to a backwards-compatibility hack!

This commit adds the same kind of switching mechanism for SHA-512 that
we already had for SHA-256, SHA-1 and AES, and as with all of those,
plumbs it through to testcrypt so that you can explicitly ask for the
hardware or software version of SHA-512. So the test suite can run the
standard test vectors against both implementations in turn.

On M1 macOS, I'm testing at run time for the presence of SHA-512 by
checking a sysctl setting. You can perform the same test on the
command line by running "sysctl hw.optional.armv8_2_sha512".

As far as I can tell, on Windows there is not yet any flag to test for
this CPU feature, so for the moment, the new accelerated SHA-512 is
turned off unconditionally on Windows.
2020-12-24 15:39:54 +00:00
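
The same run-time check can be reproduced from a script; a small sketch of
querying that sysctl, assuming a macOS machine with the sysctl tool on the
PATH:

    import subprocess

    # Prints "1" on Apple silicon whose CPU reports the SHA-512 extension.
    result = subprocess.run(['sysctl', '-n', 'hw.optional.armv8_2_sha512'],
                            capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())
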
Simon Tatham
04c50b6cfd sclog: add missing instr_set_translation.
When we invent a movzx instruction as part of shift-count logging on
x86, we apparently need to set its 'translation' field to point at a
pre-existing instruction that it's logically related to. Later
versions of DynamoRIO than I was running with will complain if this
isn't done.
2020-12-16 09:27:40 +00:00
Simon Tatham
7aca274789 sclog: log the size of allocated memory regions.
This occurred to me recently as a (very small) hole in the logging
strategy: if the size of an allocated memory block depended on some
secret data, it certainly would change the control flow and memory
access pattern inside malloc, but since we disable logging inside
malloc, the log file from this test suite would never see the
difference.

Easily fixed by printing the size of each block in the code that
intercepts malloc and realloc. As expected, no test actually fails as
a result of filling in this gap.
2020-12-13 12:33:43 +00:00
Simon Tatham
e97a364d07 sclog: don't try to find libc functions outside libc.
On AArch64, ld.so unexpectedly contains its own malloc and free functions,
so the module-load function finds them there, wraps them, and then
misses the real versions in libc.
2020-11-26 18:04:49 +00:00
Simon Tatham
b3f2726b83 sclog: support AArch64 division and shift instructions.
These need to be logged for the same reasons as on x86.
2020-11-26 18:04:49 +00:00
Simon Tatham
f65153ab5b sclog: put x86-specific parts under ifdef.
This allows my side-channel test system to at least _compile_ on other
architectures without failing for the lack of OP_xxx enum constants,
although it now won't log all the things it needs to be a proper test.
2020-11-26 17:52:11 +00:00
Simon Tatham
7003b43963 Stop using mp_int in sshprng.c.
We keep an internal 128-bit counter that's used as part of the hash
preimages. There's no real need to import all the mp_int machinery in
order to implement that: we can do it by hand using a small fixed-size
array and a trivial use of BignumADC. This is another inter-module
dependency that's easy to remove, which is useful to spin-off programs.

This changes the hash preimage calculation in the PRNG, because we're
now formatting our 128-bit integer in the fixed-length representation
of 16 little-endian bytes instead of as an SSH-2 mpint. This is
harmless (perhaps even mildly beneficial, due to the length now not
depending on how long the PRNG has been running), but means I have to
update the PRNG tests as well.
2020-09-13 09:11:31 +01:00
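
The fixed-length form is easy to picture from Python: whatever the
counter's value, it is always encoded as exactly 16 bytes, least
significant byte first (an illustration of the encoding, not the PRNG
code):

    counter = 12345
    # b'\x39\x30' followed by fourteen zero bytes, regardless of the value's size.
    print(counter.to_bytes(16, 'little').hex())
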
Jacob Nevins
56132d69c6 Add SGR 9 strikethrough to test file. 2020-08-13 23:54:58 +01:00
Simon Tatham
4948b79114 test/numbertheory.py: fix comment wording.
The class for general rth-root finding started off as a cube-root
finder before I generalised it, and in one part of the top-level
explanatory comment, I still referred to a subgroup having index 3
rather than index r.

Also, in a later paragraph, I seem to have said 'index' several times
where I meant the concept of 'rank' I defined in the previous
paragraph.
2020-05-03 11:18:50 +01:00
Simon Tatham
bed4e12f15 testcrypt.py: marshal string literals more efficiently.
The testcrypt protocol expects a string literal to be a concatenation
of literal bytes other than '%' and '\n', and %-escaped hex digit
pairs. But testcrypt.py was only ever using the latter format, so even
a legible ASCII string like "123" was being sent to testcrypt as the
unreadable and needlessly long "%31%32%33".

When debugging, I often arrange to save the testcrypt input stream to
a file, and sometimes I use that file as the starting point for
editing. So it is actually useful to have the protocol exchange be
legible to humans. Hence, here's a change to testcrypt.py which makes
it only use the %-escape encoding for byte values that aren't
printable ASCII.
2020-03-08 11:09:06 +00:00
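
A minimal sketch of the friendlier encoding described above, with an
illustrative function name rather than the one in testcrypt.py: only bytes
outside printable ASCII, plus '%' itself, get the hex escape.

    def encode_string_literal(data: bytes) -> str:
        out = []
        for b in data:
            if 0x20 <= b < 0x7f and b != ord('%'):   # printable ASCII, not '%' (or '\n')
                out.append(chr(b))
            else:
                out.append('%%%02x' % b)             # fall back to %-escaped hex
        return ''.join(out)

    assert encode_string_literal(b'123') == '123'    # previously sent as %31%32%33
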
Simon Tatham
844e766b03 RSA generation: option to generate strong primes.
A 'strong' prime, as defined by the Handbook of Applied Cryptography,
is a prime p such that each of p-1 and p+1 has a large prime factor,
and that the large factor q of p-1 is such that q-1 in turn _also_ has
a large prime factor.

HoAC says that making your RSA key using primes of this form defeats
some factoring algorithms - but there are other faster algorithms to
which it makes no difference. So this is probably not a useful
precaution in practice. However, it has been recommended in the past
by some official standards, and it's easy to implement given the new
general facility in PrimeCandidateSource that lets you ask for your
prime to satisfy an arbitrary modular congruence. (And HoAC also says
there's no particular reason _not_ to use strong primes.) So I provide
it as an option, just in case anyone wants to select it.

The change to the key generation algorithm is entirely in sshrsag.c,
and is neatly independent of the prime-generation system in use. If
you're using Maurer provable prime generation, then the known factor q
of p-1 can be used to help certify p, and the one for q-1 to help with
q in turn; if you switch to probabilistic prime generation then you
still get an RSA key with the right structure, except that every time
the definition says 'prime factor' you just append '(probably)'.

(The probabilistic version of this procedure is described as 'Gordon's
algorithm' in HoAC section 4.4.2.)
2020-03-07 11:37:31 +00:00
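
For the curious, a compact sketch of Gordon's algorithm itself, leaning on
sympy for the primality tests; the sizes and names here are illustrative,
and this is not the sshrsag.c implementation:

    from sympy import isprime, randprime

    def gordon_strong_prime(bits=512):
        # 1. Choose two large random primes s and t.
        s = randprime(2**(bits // 2 - 1), 2**(bits // 2))
        t = randprime(2**(bits // 2 - 1), 2**(bits // 2))
        # 2. Find a prime r = 2*i*t + 1, so that r-1 has the large prime factor t.
        i = 1
        while not isprime(2 * i * t + 1):
            i += 1
        r = 2 * i * t + 1
        # 3. p0 = 2*(s^(r-2) mod r)*s - 1 is congruent to 1 mod r and to -1 mod s.
        p0 = 2 * pow(s, r - 2, r) * s - 1
        # 4. Find a prime p = p0 + 2*j*r*s: then r | p-1, s | p+1 and t | r-1.
        j = 0
        while not isprime(p0 + 2 * j * r * s):
            j += 1
        return p0 + 2 * j * r * s
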
Simon Tatham
365c1d2df7 Command-line prime-generation testing tool.
Since our prime-generation code contains facilities not used by the
main key generators - Sophie Germain primes, user-specified modular
congruences, and MPU certificate output - it's probably going to be
useful sooner or later to have a command-line tool to access those
facilities. So here's a simple script that glues a Python argparse
interface on to the front of it all.

It would be nice to put this in 'contrib' rather than 'test', on the
grounds that it's at least potentially useful for purposes other than
testing PuTTY during development. But it's a client of the testcrypt
system, so it can't live anywhere other than the same directory as
testcrypt.py without me first having to do a lot of faffing about with
Python module organisation. So it can live here for the moment.
2020-03-07 11:37:31 +00:00
Simon Tatham
18fd47b618 Generate MPU certificates for proven primes.
Conveniently checkable certificates of primality aren't a new concept.
I didn't invent them, and I wasn't the first to implement them. Given
that, I thought it might be useful to be able to independently verify
a prime generated by PuTTY's provable prime system. Then, even if you
don't trust _this_ code, you might still trust someone else's
verifier, or at least be less willing to believe that both were
colluding.

The Perl module Math::Prime::Util is the only free software I've found
that defines a specific text-file format for certificates of
primality. The MPU format (as it calls it) supports various different
methods of certifying the primality of a number (most of which, like
Pockle's, depend on having previously proved some smaller number(s) to
be prime). The system implemented by Pockle is on its list: MPU calls
it by the name "BLS5".

So this commit introduces extra stored data inside Pockle so that it
remembers not just _that_ it believes certain numbers to be prime, but
also _why_ it believed each one to be prime. Then there's an extra
method in the Pockle API to translate its internal data structures
into the text of an MPU certificate for any number it knows about.

Math::Prime::Util doesn't come with a command-line verification tool,
unfortunately; only a Perl function which you feed a string argument.
So also in this commit I add test/mpu-check.pl, which is a trivial
command-line client of that function.

At the moment, this new piece of API is only exposed via testcrypt. I
could easily put some user interface into the key generation tools
that would save a few primality certificates alongside the private
key, but I have yet to think of any good reason to do it. Mostly this
facility is intended for debugging and cross-checking of the
_algorithm_, not of any particular prime.
2020-03-07 11:24:12 +00:00
Simon Tatham
2ec2b796ed Migrate all Python scripts to Python 3.
Most of them are now _mandatory_ P3 scripts, because I'm tired of
maintaining everything to be compatible with both versions.

The current exceptions are gdb.py (which has to live with whatever gdb
gives it), and kh2reg.py (which is actually designed for other people
to use, and some of them might still be stuck on P2 for the moment).
2020-03-04 21:23:49 +00:00
Simon Tatham
289d8873ec Fix mp_{eq,hs}_integer(tiny, huge).
The comparison functions between an mp_int and an integer worked by
walking along the mp_int, comparing each of its words to the
corresponding word of the integer. When they ran out of mp_int, they'd
stop.

But this overlooks the possibility that they might not have run out of
_integer_ yet! If BIGNUM_INT_BITS is defined to be less than the size
of a uintmax_t, then comparing (say) the uintmax_t 0x8000000000000001
against a one-word mp_int containing 0x0001 would return equality,
because it would never get as far as spotting the high bit of the
integer.

Fixed by iterating up to the max of the number of BignumInts in the
mp_int and the number that cover a uintmax_t. That means we have to
use mp_word() instead of a direct array lookup to get the mp_int words
to compare against, since now the word indices might be out of range.
2020-03-02 18:42:31 +00:00
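
The shape of the fix is easy to model in a few lines of Python (the word
size and names here are illustrative): iterate over the longer of the two
operands, reading a zero word for any index beyond the end of the mp_int.

    BIGNUM_INT_BITS = 16
    MASK = (1 << BIGNUM_INT_BITS) - 1

    def eq_integer(words, n):
        """Compare a little-endian word array against the plain integer n."""
        nwords = max(len(words),
                     (n.bit_length() + BIGNUM_INT_BITS - 1) // BIGNUM_INT_BITS)
        for i in range(nwords):
            w = words[i] if i < len(words) else 0   # cf. mp_word(): out of range reads 0
            if w != ((n >> (i * BIGNUM_INT_BITS)) & MASK):
                return False
        return True

    # The case the old code got wrong: it stopped after the mp_int's only word.
    assert not eq_integer([0x0001], 0x8000000000000001)
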
Simon Tatham
a085acbadf Support the new "ssh-ed448" key type.
This is standardised by RFC 8709 at SHOULD level, and for us it's not
too difficult (because we use general-purpose elliptic-curve code). So
let's be up to date for a change, and add it.

This implementation uses all the formats defined in the RFC. But we
also have to choose a wire format for the public+private key blob sent
to an agent, and since the OpenSSH agent protocol is the de facto
standard but not (yet?) handled by the IETF, OpenSSH themselves get to
say what the format for a key should or shouldn't be. So if they don't
support a particular key method, what do you do?

I checked with them, and they agreed that there's an obviously right
format for Ed448 keys, which is to do them exactly like Ed25519 except
that you have a 57-byte string everywhere Ed25519 had a 32-byte
string. So I've done that.
2020-03-02 07:09:08 +00:00
Simon Tatham
b8a08f9321 Implement the SHA-3 family.
These aren't used _directly_ by SSH at present, but an instance of
SHAKE-256 is required by the recently standardised Ed448.
2020-03-02 06:55:48 +00:00
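
Both the fixed-length SHA-3 digests and the SHAKE extendable-output
functions are in Python's hashlib, which makes a handy independent
cross-check (a usage sketch, not PuTTY's implementation):

    import hashlib

    print(hashlib.sha3_256(b'abc').hexdigest())
    # SHAKE-256 lets the caller pick the output length; Ed448 uses 114 bytes.
    print(hashlib.shake_256(b'abc').hexdigest(114))
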
Simon Tatham
31e5b621b5 Implement "curve448-sha512" kex, from RFC 8731.
With all the preparation now in place, this is more or less trivial.
We add a new curve setup function in sshecc.c, and an ssh_kex linking
to it; we add the curve parameters to the reference / test code
eccref.py, and use them to generate the list of low-order input values
that should be rejected by the sanity check on the kex output; we add
the standard test vectors from RFC 7748 in cryptsuite.py, and the
low-order values we just generated.
2020-03-01 21:13:59 +00:00
Simon Tatham
2be70baa0d New 'Pockle' object, for verifying primality.
This implements an extended form of primality verification using
certificates based on Pocklington's theorem. You make a Pockle object,
and then try to convince it that one number after another is prime, by
means of providing it with a list of prime factors of p-1 and a
primitive root. (Or just by saying 'this prime is small enough for you
to check yourself'.)

Pocklington's theorem requires you to have factors of p-1 whose
product is at least the square root of p. I've extended that to
support factorisations only as big as the cube root, via an extension
of the theorem given in Maurer's paper on generating provable primes.

The Pockle object is more or less write-only: it has no methods for
reading out its contents. Its only output channel is the return value
when you try to insert a prime into it: if it isn't sufficiently
convinced that your prime is prime, it will return an error code. So
anything for which it returns POCKLE_OK you can be confident of.

I'm going to use this for provable prime generation. But exposing this
part of the system as an object in its own right means I can write a
set of unit tests for this specifically. My negative tests exercise
all the different ways a certification can be erroneous or inadequate;
the positive tests include proofs of primality of various primes used
in elliptic-curve crypto. The Poly1305 proof in particular is taken
from a proof in DJB's paper, which has exactly the form of a
Pocklington certificate only written in English.
2020-03-01 20:09:01 +00:00
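
A bare-bones sketch of the criterion in its classic square-root form
(Pockle's cube-root extension and the certificate bookkeeping are more
involved): N is prime if the fully factored part F of N-1 exceeds sqrt(N)
and each prime factor q of F has a witness a with a^(N-1) = 1 (mod N) and
gcd(a^((N-1)/q) - 1, N) = 1.

    from math import gcd, isqrt

    def pocklington_ok(N, factor_witnesses):
        """factor_witnesses maps each distinct prime q dividing the factored
        part of N-1 to a witness a_q.  Illustrative code, not Pockle itself."""
        F = 1
        for q in factor_witnesses:
            while (N - 1) % (F * q) == 0:
                F *= q
        if F <= isqrt(N):          # this form needs F > sqrt(N)
            return False
        for q, a in factor_witnesses.items():
            if pow(a, N - 1, N) != 1:
                return False
            if gcd(pow(a, (N - 1) // q, N) - 1, N) != 1:
                return False
        return True

    # 97 - 1 = 96 = 2^5 * 3; take F = 2^5 = 32 > sqrt(97), witness 5 (a non-residue).
    assert pocklington_ok(97, {2: 5})
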
Simon Tatham
20a9912c7c Add mp_copy_integer_into function.
Even simpler than the existing mp_add_integer_into.
2020-03-01 20:09:01 +00:00
Simon Tatham
6b27999500 Add mp_nthroot function.
This takes ordinary integer square and cube roots (i.e. not mod
anything) of mp_ints.
2020-03-01 20:09:01 +00:00
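
An illustrative pure-Python equivalent, not the mp_int implementation: the
r-th root of n is the largest x with x**r <= n, found here by bisection.

    def nthroot(n: int, r: int) -> int:
        """Largest integer x such that x**r <= n, for n >= 0 and r >= 1."""
        lo, hi = 0, 1
        while hi ** r <= n:        # find an upper bound by doubling
            hi <<= 1
        while hi - lo > 1:         # then bisect down to the root
            mid = (lo + hi) // 2
            if mid ** r <= n:
                lo = mid
            else:
                hi = mid
        return lo

    assert nthroot(27, 3) == 3 and nthroot(26, 3) == 2
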
Simon Tatham
ece788240c Introduce a vtable system for prime generation.
The functions primegen() and primegen_add_progress_phase() are gone.
In their place is a small vtable system with two methods corresponding
to them, plus the usual admin of allocating and freeing contexts.

This API change is the starting point for being able to drop in
different prime generation algorithms at run time in response to user
configuration.
2020-03-01 20:09:01 +00:00
Simon Tatham
63b8f537f2 New API for primegen(), using PrimeCandidateSource.
The more features and options I add to PrimeCandidateSource, the more
cumbersome it will be to replicate each one in a command-line option
to the ultimate primegen() function. So I'm moving to an API in which
the client of primegen() constructs a PrimeCandidateSource themself,
and passes it in to primegen().

Also, changed the API for pcs_new() so that you don't have to pass
'firstbits' unless you really want to. The net effect is that even
though we've added flexibility, we've also simplified the call sites
of primegen() in the simple case: if you want a 1234-bit prime, you
just need to pass pcs_new(1234) as the argument to primegen, and
you're done.

The new declaration of primegen() lives in ssh_keygen.h, along with
all the types it depends on. So I've had to #include that header in a
few new files.
2020-02-29 13:55:41 +00:00
Simon Tatham
809a4eb249 testcrypt.py: avoid restarting subprocess for frees.
I just ran into a bug in which the testcrypt child process was cleanly
terminated, but at least one Python object was left lying around
containing the identifier of a testcrypt object that had never been
freed. On program exit, the Python reference count on that object went
to zero, the __del__ method was invoked, and childprocess.funcall
started a _new_ instance of testcrypt just so it could tell it to free
the object identifier - which, of course, the new testcrypt had never
heard of!

We can already tell the difference between a ChildProcess object which
has no subprocess because it hasn't yet been started, and one which
has no subprocess because it's terminated: the latter has exitstatus
set to something other than None. So now we enforce by assertion that
we don't ever restart the child process, and the __del__ method avoids
doing anything if the child has already finished.
2020-02-29 12:13:32 +00:00
Simon Tatham
db7a314c38 testcrypt.py: fake some OO syntax.
When I'm writing Python using the testcrypt API, I keep finding that I
instinctively try to call vtable methods as if they were actual
methods of the object. For example, calling key.sign(msg, 0) instead
of ssh_key_sign(key, msg, 0).

So this change to the Python side of the testcrypt mechanism panders
to my inappropriate finger-macros by making them work! The idea is
that I define a set of pairs (type, prefix), such that any function
whose name begins with the prefix and whose first argument is of that
type will be automatically translated into a method on the Python
object wrapping a testcrypt value of that type. For example, any
function of the form ssh_key_foo(val_ssh_key, other args) will
automatically be exposed as a method key.foo(other args), simply
because (val_ssh_key, "ssh_key_") appears in the translation table.

This is particularly nice for the Python 3 REPL, which will let me
tab-complete the right set of method names by knowing the type I'm
trying to invoke one on. I haven't decided yet whether I want to
switch to using it throughout cryptsuite.py.

For namespace-cleanness, I've also renamed all the existing attributes
of the Python Value class wrapper so that they start with '_', to
leave the space of sensible names clear for the new OOish methods.
2020-02-29 12:13:32 +00:00
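
The mechanism is roughly the standard __getattr__ trick; a simplified
sketch whose names and structure are illustrative rather than the real
testcrypt.py:

    method_prefixes = {
        'val_ssh_key':  ['ssh_key_'],
        'val_ssh2_mac': ['ssh2_mac_'],     # the prefix corrected in 9e1ec093fd above
    }

    class Value:
        def __init__(self, vtype, functions):
            self._type = vtype             # e.g. 'val_ssh_key'
            self._functions = functions    # maps 'ssh_key_sign' -> callable

        def __getattr__(self, name):
            # So key.sign(msg, 0) resolves to ssh_key_sign(key, msg, 0).
            for prefix in method_prefixes.get(self._type, ()):
                fn = self._functions.get(prefix + name)
                if fn is not None:
                    return lambda *args: fn(self, *args)
            raise AttributeError(name)
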
Simon Tatham
7751657811 Reject all low-order points in Montgomery key exchange.
This expands our previous check for the public value being zero, to
take in all the values that will _become_ zero after not many steps.

The actual check at run time is done using the new is_infinite query
method for Montgomery curve points. Test cases in cryptsuite.py cover
all the dangerous values I generated via all that fiddly quartic-
solving code.

(DJB's page http://cr.yp.to/ecdh.html#validate also lists these same
constants. But working them out again for myself makes me confident I
can do it again for other similar curves, such as Curve448.)

In particular, this makes us fully compliant with RFC 7748's demand to
check we didn't generate a trivial output key, which can happen if the
other end sends any of those low-order values.

I don't actually see why this is a vital check to perform for security
purposes, for the same reason that we didn't classify the bug
'diffie-hellman-range-check' as a vulnerability: I can't really see
what the other end's incentive might be to deliberately send one of
these nonsense values (and you can't do it by accident - none of these
values is a power of the canonical base point). It's not that a DH
participant couldn't possibly want to secretly expose the session
traffic - but there are plenty of more subtle (and less subtle!) ways
to do it, so you don't really gain anything by forcing them to use one
of those instead. But the RFC says to check, so we check.
2020-02-28 20:48:52 +00:00
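
RFC 7748 phrases its check in terms of the output rather than the input: a
low-order peer value makes the shared secret come out all zeros, and
implementations are told they may abort in that case. A sketch of that
form of the check, not of the PuTTY code (which instead tests whether the
resulting point is the identity):

    def check_kex_output(shared_secret: bytes) -> bytes:
        # RFC 7748 section 6 (paraphrased): check whether the computed
        # Diffie-Hellman shared secret is the all-zero value and abort if so.
        if not any(shared_secret):
            raise ValueError('low-order public value from peer; aborting kex')
        return shared_secret
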
Simon Tatham
1cad3c8255 eccref.py: find low-order points on Montgomery curves.
This uses the new quartic-solver mod p to generate all the values in
Curve25519 that can end up at the curve identity by repeated
application of the doubling formula.
2020-02-28 20:40:08 +00:00
Simon Tatham
f82af9ffe2 numbertheory.py: cubic and quartic solver mod p.
I'm going to want to use this for finding special values in elliptic
curves' ground fields.

In order to solve cubics and quartics in F_p, you have to work in
F_{p^2}, for much the same reasons that you have to be willing to use
complex numbers if you want to solve general cubics over the reals
(even if all the eventual roots turn out to be real after all). So
I've also introduced another arithmetic class to work in that kind of
field, and a shim that glues that on to the cyclic-group root finder
from the previous commit.
2020-02-28 20:40:08 +00:00
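
The standard construction of F_{p^2} is to adjoin a square root of a
quadratic non-residue d and represent elements as a + b*sqrt(d); a minimal
sketch of that arithmetic, with a representation and names that are
illustrative rather than numbertheory.py's:

    class QuadExt:
        """Elements a + b*sqrt(d) of F_{p^2}, with d a fixed non-residue mod p."""
        def __init__(self, p, d, a, b=0):
            self.p, self.d, self.a, self.b = p, d, a % p, b % p

        def __add__(self, other):
            return QuadExt(self.p, self.d, self.a + other.a, self.b + other.b)

        def __mul__(self, other):
            # (a + b*s)(c + e*s) = (ac + bed) + (ae + bc)*s, using s*s = d.
            a, b, c, e = self.a, self.b, other.a, other.b
            return QuadExt(self.p, self.d, a*c + b*e*self.d, a*e + b*c)

    # Example in F_49 = F_7(sqrt(3)): (1 + sqrt(3))^2 = 4 + 2*sqrt(3).
    sq = QuadExt(7, 3, 1, 1) * QuadExt(7, 3, 1, 1)
    assert (sq.a, sq.b) == (4, 2)
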
Simon Tatham
072d3c665a numbertheory.py: generalise SqrtModP to do other roots.
I'm about to want to solve quartics mod a prime, which means I'll need
to be able to take cube roots mod p as well as square roots.

This commit introduces a more general class which can take rth roots
for any prime r, and moreover, it can do it in a general cyclic group.
(You have to tell it the group's order and give it some primitives for
doing arithmetic, plus a way of iterating over the group elements that
it can use to look for a non-rth-power and roots of unity.)

That system makes it nicely easy to test, because you can give it a
cyclic group represented as the integers under _addition_, and then
you obviously know what all the right answers are. So I've also added
a unit test system checking that.
2020-02-28 20:40:08 +00:00
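
One structural point worth spelling out: when gcd(r, group order) = 1, an
r-th root is just exponentiation by the inverse of r mod the order; the
machinery with non-r-th-powers and roots of unity is only needed when r
divides the order (as 2 always divides p-1 for square roots mod an odd
prime). A quick sketch of the easy case, illustrative rather than the
numbertheory.py class:

    def rth_root_coprime(x, r, p):
        """r-th root of x in the multiplicative group mod the prime p,
        valid only when gcd(r, p - 1) == 1."""
        n = p - 1                      # order of the cyclic group
        return pow(x, pow(r, -1, n), p)

    p = 11                             # group order 10, coprime to r = 3
    assert rth_root_coprime(pow(7, 3, p), 3, p) == 7
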
Simon Tatham
7be2e16023 numbertheory.py: make the ModP class hashable.
That will let me keep them in sets.
2020-02-28 20:40:08 +00:00
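
For reference, the usual recipe (a sketch, not the actual numbertheory.py
change): set membership needs a __hash__ that is consistent with __eq__.

    class ModP:
        def __init__(self, p, n):
            self.p, self.n = p, n % p

        def __eq__(self, other):
            return isinstance(other, ModP) and (self.p, self.n) == (other.p, other.n)

        def __hash__(self):
            # Equal values must hash equally, so hash the tuple __eq__ compares.
            return hash((self.p, self.n))

    assert len({ModP(7, 3), ModP(7, 10)}) == 1    # 10 = 3 mod 7: one element
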
Simon Tatham
3ee9b92935 numbertheory.py: factor out invert().
I'm about to want to reuse it.
2020-02-28 20:40:08 +00:00
Simon Tatham
122d785283 eccref.py: move support routines into a new file.
I'm about to want to expand the underlying number-theory code, so I'll
start by moving it into a file where it has room to grow without
swamping the main purpose of eccref.py.
2020-02-28 20:40:08 +00:00
Simon Tatham
c9a8fa639e New query function ecc_montgomery_is_identity.
To begin with, this allows me to add a regression test for the change
in the previous commit.
2020-02-28 20:40:08 +00:00
Simon Tatham
0645824e4d eccref.py: handle order-2 points in Montgomery curves.
If a point doubles to the identity, we should return the identity,
rather than throwing a Python divide-by-zero exception.
2020-02-28 20:40:08 +00:00
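
Concretely, the affine x-only doubling on a Montgomery curve
B*y^2 = x^3 + A*x^2 + x is x2 = (x^2 - 1)^2 / (4*x*(x^2 + A*x + 1)), and
the fix amounts to returning the identity when that denominator vanishes
instead of attempting the division. A sketch, not eccref.py itself:

    def montgomery_double_x(x, A, p):
        """x-coordinate of 2P given that of P; None stands for the identity."""
        num = (x * x - 1) % p
        den = (4 * x * (x * x + A * x + 1)) % p
        if den == 0:
            return None                # order-2 point: doubling hits the identity
        return (num * num * pow(den, -1, p)) % p
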
Simon Tatham
da3bc3d927 Refactor generation of candidate integers in primegen.
I've replaced the random number generation and small delta-finding
loop in primegen() with a much more elaborate system in its own source
file, with unit tests and everything.

Immediate benefits:

 - fixes a theoretical possibility of overflowing the target number of
   bits, if the random number was so close to the top of the range
   that the addition of delta * factor pushed it over. However, this
   only happened with negligible probability.

 - fixes a directional bias in delta-finding. The previous code
   incremented the number repeatedly until it found a value coprime to
   all the right things, which meant that a prime preceded by a
   particularly long sequence of numbers with tiny factors was more
   likely to be chosen. Now that we select candidate delta values at
   random, that bias should be eliminated.

 - changes the semantics of the outermost primegen() function to make
   them easier to use, because now the caller specifies the 'bits' and
   'firstbits' values for the actual returned prime, rather than
   having to account for the factor you're multiplying it by in DSA.
   DSA client code is correspondingly adjusted.

Future benefits:

 - having the candidate generation in a separate function makes it
   easy to reuse in alternative prime generation strategies

 - the available constraints support applications such as Maurer's
   algorithm for generating provable primes, or strong primes for RSA
   in which both p-1 and p+1 have a large factor. So those become
   things we could experiment with in future.
2020-02-23 15:47:44 +00:00
Simon Tatham
dfddd1381b testcrypt: allow random_read() to use a full PRNG.
This still isn't the true random generator used in the live tools:
it's deterministic, for repeatable testing. The Python side of
testcrypt can now call random_make_prng(), which will instantiate a
PRNG with the given seed. random_clear() still gets rid of it.

So I can still have some tests control the precise random numbers
received by the function under test, but for others (especially key
generation, with its uncertainty about how much randomness it will
actually use) I can just say 'here, have a seed, generate as much
stuff from that seed as you need'.
2020-02-23 15:01:55 +00:00
Simon Tatham
2debb352b0 mpint: add a gcd function.
This is another application of the existing mp_bezout_into, which
needed a tweak or two to cope with the numbers not necessarily being
coprime, plus a wrapper function to deal with shared factors of 2.

It reindents the entire second half of mp_bezout_into, so the patch is
best viewed with whitespace differences ignored.
2020-02-23 14:49:54 +00:00
Simon Tatham
18678ba9bc mpint: add mp_[lr]shift_safe_into functions.
There was previously no safe left shift at all, which is an omission.
And rshift_safe_into was an odd thing to be missing, so while I'm
here, I've added it on the basis that it will probably be useful
sooner or later.
2020-02-23 14:49:54 +00:00
Simon Tatham
82df83719a Test passing null pointers to mp_divmod_into.
I've got opt_val_mpint already in the test system, so it makes
sense to use it.
2020-02-23 12:02:44 +00:00
Simon Tatham
921118dbea Fix handling of large RHS in mp_add_integer_into.
While looking over the code for other reasons, I happened to notice
that the internal function mp_add_masked_integer_into was using a
totally wrong condition to check whether it was about to do an
out-of-range right shift: it was comparing a shift count measured in
bits against BIGNUM_INT_BYTES.

The resulting bug hasn't shown up in the code so far, which I assume
is just because no caller is passing any RHS to mp_add_integer_into
bigger than about 1 or 2. And it doesn't show up in the test suite
because I hadn't tested those functions. Now I am testing them, and
the newly added test fails when built for 16-bit BignumInt if you back
out the actual fix in this commit.
2020-02-22 18:51:43 +00:00
Simon Tatham
c18e5dc8fb cmdgen: add a --dump option.
Also spelled '-O text', this takes a public or private key as input,
and produces on standard output a dump of all the actual numbers
involved in the key: the exponent and modulus for RSA, the p,q,g,y
parameters for DSA, the affine x and y coordinates of the public
elliptic curve point for ECC keys, and all the extra bits and pieces
in the private keys too.

Partly I expect this to be useful to me for debugging: I've had to
paste key files a few too many times through base64 decoders and hex
dump tools, then manually decode SSH marshalling and paste the result
into the Python REPL to get an integer object. Now I should be able to
get _straight_ to text I can paste into Python.

But also, it's a way that other applications can use the key
generator: if you need to generate, say, an RSA key in some format I
don't support (I've recently heard of an XML-based one, for example),
then you can run 'puttygen -t rsa --dump' and have it print the
elements of a freshly generated keypair on standard output, and then
all you have to do is understand the output format.
2020-02-22 18:42:13 +00:00
Simon Tatham
8d186c3c93 Formatting change to braces around one case of a switch.
Sometimes, within a switch statement, you want to declare local
variables specific to the handler for one particular case. Until now
I've mostly been writing this in the form

    switch (discriminant) {
      case SIMPLE:
        do stuff;
        break;
      case COMPLICATED:
        {
            declare variables;
            do stuff;
        }
        break;
    }

which is ugly because the two pieces of essentially similar code
appear at different indent levels, and also inconvenient because you
have less horizontal space available to write the complicated case
handler in - particularly undesirable because _complicated_ case
handlers are the ones most likely to need all the space they can get!

After encountering a rather nicer idiom in the LLVM source code, and
after a bit of hackery this morning figuring out how to persuade
Emacs's auto-indent to do what I wanted with it, I've decided to move
to an idiom in which the open brace comes right after the case
statement, and the code within it is indented the same as it would
have been without the brace. Then the whole case handler (including
the break) lives inside those braces, and you get something that looks
more like this:

    switch (discriminant) {
      case SIMPLE:
        do stuff;
        break;
      case COMPLICATED: {
        declare variables;
        do stuff;
        break;
      }
    }

This commit is a big-bang change that reformats all the complicated
case handlers I could find into the new layout. This is particularly
nice in the Pageant main function, in which almost _every_ case
handler had a bundle of variables and was long and complicated. (In
fact that's what motivated me to get round to this.) Some of the
innermost parts of the terminal escape-sequence handling are also
breathing a bit easier now the horizontal pressure on them is
relieved.

(Also, in a few cases, I was able to remove the extra braces
completely, because the only variable local to the case handler was a
loop variable which our new C99 policy allows me to move into the
initialiser clause of its for statement.)

Viewed with whitespace ignored, this is not too disruptive a change.
Downstream patches that conflict with it may need to be reapplied
using --ignore-whitespace or similar.
2020-02-16 11:26:21 +00:00
Simon Tatham
404f558705 sshprng.c: remove pointless pending_output buffer.
In an early draft of the new PRNG, before I decided to get rid of
random_byte() and replace it with random_read(), it was important
after generating a hash-worth of PRNG output to buffer it so as to
return it a byte at a time. So the PRNG data structure itself had to
keep a hash-sized buffer of pending output, and be able to return the
next byte from it on every random_byte() call.

But when random_read() came in, there was no need to do that any more,
because at the end of a read, the generator is re-seeded and the
remains of any generated data are deliberately thrown away. So the
pending_output buffer has no need to live in the persistent prng
object; it can be relegated to a local variable inside random_read
(and a couple of other functions that used the same buffer since it
was conveniently there).

A side effect of this is that we're no longer yielding the bytes of
each hash in reverse order, because only the previous silly code
structure made it convenient. Fortunately, of course, nothing is
depending on that - except the cryptsuite tests, which I've updated.
2020-01-26 16:37:48 +00:00