putty-source/testcrypt.h
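
/*
 * Each FUNCn(rettype, name, arg1, ..., argn) line in this file declares
 * one function exported through the testcrypt API: its return type, its
 * name, and its n argument types. The same lines are read by both
 * testcrypt.c and the Python wrapper module, so they have to stay in this
 * rigid one-declaration-per-line form. As a rough sketch of the mapping
 * (the prototype quoted here is an assumption about mpint.h, not part of
 * this file), a line such as
 *
 *     FUNC2(uint, mp_get_bit, val_mpint, uint)
 *
 * corresponds to an underlying function along the lines of
 *
 *     unsigned mp_get_bit(mp_int *x, size_t bit);
 *
 * with the val_* types standing for handles to dynamically allocated
 * objects and uint for a plain integer.
 */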

/*
* mpint.h functions.
*/
FUNC1(val_mpint, mp_new, uint)
FUNC1(void, mp_clear, val_mpint)
FUNC1(val_mpint, mp_from_bytes_le, val_string_ptrlen)
FUNC1(val_mpint, mp_from_bytes_be, val_string_ptrlen)
FUNC1(val_mpint, mp_from_integer, uint)
FUNC1(val_mpint, mp_from_decimal_pl, val_string_ptrlen)
FUNC1(val_mpint, mp_from_decimal, val_string_asciz)
FUNC1(val_mpint, mp_from_hex_pl, val_string_ptrlen)
FUNC1(val_mpint, mp_from_hex, val_string_asciz)
FUNC1(val_mpint, mp_copy, val_mpint)
FUNC1(val_mpint, mp_power_2, uint)
FUNC2(uint, mp_get_byte, val_mpint, uint)
FUNC2(uint, mp_get_bit, val_mpint, uint)
FUNC3(void, mp_set_bit, val_mpint, uint, uint)
FUNC1(uint, mp_max_bytes, val_mpint)
FUNC1(uint, mp_max_bits, val_mpint)
FUNC1(uint, mp_get_nbits, val_mpint)
FUNC1(val_string_asciz, mp_get_decimal, val_mpint)
FUNC1(val_string_asciz, mp_get_hex, val_mpint)
FUNC1(val_string_asciz, mp_get_hex_uppercase, val_mpint)
FUNC2(uint, mp_cmp_hs, val_mpint, val_mpint)
FUNC2(uint, mp_cmp_eq, val_mpint, val_mpint)
FUNC2(uint, mp_hs_integer, val_mpint, uint)
FUNC2(uint, mp_eq_integer, val_mpint, uint)
FUNC3(void, mp_min_into, val_mpint, val_mpint, val_mpint)
FUNC3(void, mp_max_into, val_mpint, val_mpint, val_mpint)
FUNC2(val_mpint, mp_min, val_mpint, val_mpint)
FUNC2(val_mpint, mp_max, val_mpint, val_mpint)
FUNC2(void, mp_copy_into, val_mpint, val_mpint)
FUNC4(void, mp_select_into, val_mpint, val_mpint, val_mpint, uint)
FUNC3(void, mp_add_into, val_mpint, val_mpint, val_mpint)
FUNC3(void, mp_sub_into, val_mpint, val_mpint, val_mpint)
FUNC3(void, mp_mul_into, val_mpint, val_mpint, val_mpint)
FUNC2(val_mpint, mp_add, val_mpint, val_mpint)
FUNC2(val_mpint, mp_sub, val_mpint, val_mpint)
FUNC2(val_mpint, mp_mul, val_mpint, val_mpint)
FUNC3(void, mp_and_into, val_mpint, val_mpint, val_mpint)
FUNC3(void, mp_or_into, val_mpint, val_mpint, val_mpint)
FUNC3(void, mp_xor_into, val_mpint, val_mpint, val_mpint)
FUNC3(void, mp_bic_into, val_mpint, val_mpint, val_mpint)
FUNC2(void, mp_copy_integer_into, val_mpint, uint)
FUNC3(void, mp_add_integer_into, val_mpint, val_mpint, uint)
FUNC3(void, mp_sub_integer_into, val_mpint, val_mpint, uint)
FUNC3(void, mp_mul_integer_into, val_mpint, val_mpint, uint)
FUNC4(void, mp_cond_add_into, val_mpint, val_mpint, val_mpint, uint)
FUNC4(void, mp_cond_sub_into, val_mpint, val_mpint, val_mpint, uint)
FUNC3(void, mp_cond_swap, val_mpint, val_mpint, uint)
FUNC2(void, mp_cond_clear, val_mpint, uint)
FUNC4(void, mp_divmod_into, val_mpint, val_mpint, opt_val_mpint, opt_val_mpint)
FUNC2(val_mpint, mp_div, val_mpint, val_mpint)
FUNC2(val_mpint, mp_mod, val_mpint, val_mpint)
FUNC3(val_mpint, mp_nthroot, val_mpint, uint, opt_val_mpint)
FUNC2(void, mp_reduce_mod_2to, val_mpint, uint)
FUNC2(val_mpint, mp_invert_mod_2to, val_mpint, uint)
FUNC2(val_mpint, mp_invert, val_mpint, val_mpint)
FUNC5(void, mp_gcd_into, val_mpint, val_mpint, opt_val_mpint, opt_val_mpint, opt_val_mpint)
FUNC2(val_mpint, mp_gcd, val_mpint, val_mpint)
FUNC2(uint, mp_coprime, val_mpint, val_mpint)
FUNC2(val_modsqrt, modsqrt_new, val_mpint, val_mpint)
/* The modsqrt functions' 'success' pointer becomes a second return value */
FUNC3(val_mpint, mp_modsqrt, val_modsqrt, val_mpint, out_uint)
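
/*
 * Illustration of the out_uint convention (the underlying prototype
 * quoted here is an assumption about mpint.h, not part of this file):
 * the declaration above wraps something like
 *
 *     mp_int *mp_modsqrt(ModsqrtContext *sc, mp_int *x, unsigned *success);
 *
 * so a testcrypt call to mp_modsqrt hands back two values: the candidate
 * square root, and the flag the C function would have written through
 * 'success'.
 */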
FUNC1(val_monty, monty_new, val_mpint)
FUNC1(val_mpint, monty_modulus, val_monty)
FUNC1(val_mpint, monty_identity, val_monty)
FUNC3(void, monty_import_into, val_monty, val_mpint, val_mpint)
FUNC2(val_mpint, monty_import, val_monty, val_mpint)
FUNC3(void, monty_export_into, val_monty, val_mpint, val_mpint)
FUNC2(val_mpint, monty_export, val_monty, val_mpint)
FUNC4(void, monty_mul_into, val_monty, val_mpint, val_mpint, val_mpint)
FUNC3(val_mpint, monty_add, val_monty, val_mpint, val_mpint)
FUNC3(val_mpint, monty_sub, val_monty, val_mpint, val_mpint)
FUNC3(val_mpint, monty_mul, val_monty, val_mpint, val_mpint)
FUNC3(val_mpint, monty_pow, val_monty, val_mpint, val_mpint)
FUNC2(val_mpint, monty_invert, val_monty, val_mpint)
FUNC3(val_mpint, monty_modsqrt, val_modsqrt, val_mpint, out_uint)
FUNC3(val_mpint, mp_modpow, val_mpint, val_mpint, val_mpint)
FUNC3(val_mpint, mp_modmul, val_mpint, val_mpint, val_mpint)
FUNC3(val_mpint, mp_modadd, val_mpint, val_mpint, val_mpint)
FUNC3(val_mpint, mp_modsub, val_mpint, val_mpint, val_mpint)
FUNC3(void, mp_lshift_safe_into, val_mpint, val_mpint, uint)
FUNC3(void, mp_rshift_safe_into, val_mpint, val_mpint, uint)
FUNC2(val_mpint, mp_rshift_safe, val_mpint, uint)
FUNC3(void, mp_lshift_fixed_into, val_mpint, val_mpint, uint)
FUNC3(void, mp_rshift_fixed_into, val_mpint, val_mpint, uint)
FUNC2(val_mpint, mp_rshift_fixed, val_mpint, uint)
FUNC1(val_mpint, mp_random_bits, uint)
FUNC2(val_mpint, mp_random_in_range, val_mpint, val_mpint)
/*
* ecc.h functions.
*/
FUNC4(val_wcurve, ecc_weierstrass_curve, val_mpint, val_mpint, val_mpint, opt_val_mpint)
FUNC1(val_wpoint, ecc_weierstrass_point_new_identity, val_wcurve)
FUNC3(val_wpoint, ecc_weierstrass_point_new, val_wcurve, val_mpint, val_mpint)
FUNC3(val_wpoint, ecc_weierstrass_point_new_from_x, val_wcurve, val_mpint, uint)
FUNC1(val_wpoint, ecc_weierstrass_point_copy, val_wpoint)
FUNC1(uint, ecc_weierstrass_point_valid, val_wpoint)
FUNC2(val_wpoint, ecc_weierstrass_add_general, val_wpoint, val_wpoint)
FUNC2(val_wpoint, ecc_weierstrass_add, val_wpoint, val_wpoint)
FUNC1(val_wpoint, ecc_weierstrass_double, val_wpoint)
FUNC2(val_wpoint, ecc_weierstrass_multiply, val_wpoint, val_mpint)
FUNC1(uint, ecc_weierstrass_is_identity, val_wpoint)
/* The output pointers in get_affine all become extra output values */
FUNC3(void, ecc_weierstrass_get_affine, val_wpoint, out_val_mpint, out_val_mpint)
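
/*
 * Illustration of the out_val_mpint convention (again, the prototype
 * quoted here is an assumption about ecc.h): the declaration above wraps
 * something like
 *
 *     void ecc_weierstrass_get_affine(WeierstrassPoint *wp,
 *                                     mp_int **x, mp_int **y);
 *
 * so the testcrypt call takes only the point and returns the affine x
 * and y coordinates as two extra output values.
 */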
FUNC3(val_mcurve, ecc_montgomery_curve, val_mpint, val_mpint, val_mpint)
FUNC2(val_mpoint, ecc_montgomery_point_new, val_mcurve, val_mpint)
FUNC1(val_mpoint, ecc_montgomery_point_copy, val_mpoint)
FUNC3(val_mpoint, ecc_montgomery_diff_add, val_mpoint, val_mpoint, val_mpoint)
FUNC1(val_mpoint, ecc_montgomery_double, val_mpoint)
FUNC2(val_mpoint, ecc_montgomery_multiply, val_mpoint, val_mpint)
FUNC2(void, ecc_montgomery_get_affine, val_mpoint, out_val_mpint)
FUNC1(boolean, ecc_montgomery_is_identity, val_mpoint)
FUNC4(val_ecurve, ecc_edwards_curve, val_mpint, val_mpint, val_mpint, opt_val_mpint)
FUNC3(val_epoint, ecc_edwards_point_new, val_ecurve, val_mpint, val_mpint)
FUNC3(val_epoint, ecc_edwards_point_new_from_y, val_ecurve, val_mpint, uint)
FUNC1(val_epoint, ecc_edwards_point_copy, val_epoint)
FUNC2(val_epoint, ecc_edwards_add, val_epoint, val_epoint)
FUNC2(val_epoint, ecc_edwards_multiply, val_epoint, val_mpint)
FUNC2(uint, ecc_edwards_eq, val_epoint, val_epoint)
FUNC3(void, ecc_edwards_get_affine, val_epoint, out_val_mpint, out_val_mpint)
/*
* The ssh_hash abstraction. Note the 'consumed', indicating that
* ssh_hash_final puts its input ssh_hash beyond use.
*
* ssh_hash_update is an invention of testcrypt, handled in the real C
* API by the hash object also functioning as a BinarySink.
*/
FUNC1(opt_val_hash, ssh_hash_new, hashalg)
FUNC1(void, ssh_hash_reset, val_hash)
FUNC1(val_hash, ssh_hash_copy, val_hash)
FUNC1(val_string, ssh_hash_digest, val_hash)
FUNC1(val_string, ssh_hash_final, consumed_val_hash)
FUNC2(void, ssh_hash_update, val_hash, val_string_ptrlen)
FUNC1(opt_val_hash, blake2b_new_general, uint)
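
/*
 * Usage sketch for the conventions described above, in terms of the real
 * C API rather than the testcrypt one (ssh_sha256 and the put_data macro
 * are assumptions about the rest of the codebase, not declarations from
 * this file):
 *
 *     ssh_hash *h = ssh_hash_new(&ssh_sha256);  // opt_: may return NULL
 *     put_data(h, data, len);       // hash doubles as a BinarySink;
 *                                   // exposed here as ssh_hash_update
 *     unsigned char digest[32];
 *     ssh_hash_final(h, digest);    // consumes h, hence consumed_val_hash
 */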
/*
* The ssh2_mac abstraction. Note the optional ssh_cipher parameter
* to ssh2_mac_new. Also, again, I've invented an ssh2_mac_update so
* you can put data into the MAC.
*/
FUNC2(val_mac, ssh2_mac_new, macalg, opt_val_cipher)
FUNC2(void, ssh2_mac_setkey, val_mac, val_string_ptrlen)
FUNC1(void, ssh2_mac_start, val_mac)
FUNC2(void, ssh2_mac_update, val_mac, val_string_ptrlen)
FUNC1(val_string, ssh2_mac_genresult, val_mac)
FUNC1(val_string_asciz_const, ssh2_mac_text_name, val_mac)
/*
* The ssh_key abstraction. All the uses of BinarySink and
* BinarySource in parameters are replaced with ordinary strings for
* the testing API: new_priv_openssh just takes a string input, and
* all the functions that output key and signature blobs do it by
* returning a string.
*/
FUNC2(val_key, ssh_key_new_pub, keyalg, val_string_ptrlen)
FUNC3(opt_val_key, ssh_key_new_priv, keyalg, val_string_ptrlen, val_string_ptrlen)
FUNC2(opt_val_key, ssh_key_new_priv_openssh, keyalg, val_string_binarysource)
FUNC2(opt_val_string_asciz, ssh_key_invalid, val_key, uint)
FUNC4(void, ssh_key_sign, val_key, val_string_ptrlen, uint, out_val_string_binarysink)
FUNC3(boolean, ssh_key_verify, val_key, val_string_ptrlen, val_string_ptrlen)
FUNC2(void, ssh_key_public_blob, val_key, out_val_string_binarysink)
FUNC2(void, ssh_key_private_blob, val_key, out_val_string_binarysink)
FUNC2(void, ssh_key_openssh_blob, val_key, out_val_string_binarysink)
FUNC1(val_string_asciz, ssh_key_cache_str, val_key)
FUNC1(val_keycomponents, ssh_key_components, val_key)
FUNC2(uint, ssh_key_public_bits, keyalg, val_string_ptrlen)
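
/*
 * Illustrative sketch only (not part of the declarations above): one
 * plausible shape for a testcrypt-side wrapper that turns a
 * BinarySink-writing function such as ssh_key_public_blob into one
 * returning a string, assuming the usual strbuf and BinarySink
 * facilities from the main code base. The wrapper name is hypothetical
 * and testcrypt.c's real wrapper may differ in detail.
 */
#if 0
static strbuf *wrap_ssh_key_public_blob(ssh_key *key)
{
    strbuf *sb = strbuf_new();       /* a strbuf doubles as a BinarySink */
    ssh_key_public_blob(key, BinarySink_UPCAST(sb));
    return sb;                       /* blob handed back as a string */
}
#endif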
/*
* Accessors to retrieve the innards of a 'key_components'.
*/
FUNC1(uint, key_components_count, val_keycomponents)
FUNC2(opt_val_string_asciz_const, key_components_nth_name, val_keycomponents, uint)
FUNC2(opt_val_string_asciz_const, key_components_nth_str, val_keycomponents, uint)
FUNC2(opt_val_mpint, key_components_nth_mp, val_keycomponents, uint)
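
/*
 * Hypothetical illustration of the kind of trivial accessor the
 * declarations above imply on the testcrypt side, assuming the
 * key_components structure in ssh.h stores a count and an array of
 * named components. The field names here are assumptions for
 * illustration, not a definitive reading of the real structure.
 */
#if 0
static unsigned key_components_count(key_components *kc)
{
    return kc->ncomponents;          /* number of named components */
}

static const char *key_components_nth_name(key_components *kc, unsigned n)
{
    return n < kc->ncomponents ? kc->components[n].name : NULL;
}
#endif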
/*
* The ssh_cipher abstraction. The in-place encrypt and decrypt
* functions are wrapped to replace them with versions that take one
* string and return a separate string.
*/
FUNC1(opt_val_cipher, ssh_cipher_new, cipheralg)
FUNC2(void, ssh_cipher_setiv, val_cipher, val_string_ptrlen)
FUNC2(void, ssh_cipher_setkey, val_cipher, val_string_ptrlen)
FUNC2(val_string, ssh_cipher_encrypt, val_cipher, val_string_ptrlen)
FUNC2(val_string, ssh_cipher_decrypt, val_cipher, val_string_ptrlen)
FUNC3(val_string, ssh_cipher_encrypt_length, val_cipher, val_string_ptrlen, uint)
FUNC3(val_string, ssh_cipher_decrypt_length, val_cipher, val_string_ptrlen, uint)
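
/*
 * Illustrative sketch only: the wrapping strategy described in the
 * comment above, in which the in-place ssh_cipher_encrypt is given a
 * copy of the input and that modified copy is returned as a fresh
 * string. The wrapper name is hypothetical; testcrypt's real wrapper
 * may differ in detail (e.g. in block-length checking).
 */
#if 0
static strbuf *wrap_ssh_cipher_encrypt(ssh_cipher *c, ptrlen input)
{
    strbuf *sb = strbuf_new();
    put_datapl(sb, input);                  /* copy the plaintext */
    ssh_cipher_encrypt(c, sb->u, sb->len);  /* encrypt that copy in place */
    return sb;                              /* ciphertext as a separate string */
}
#endif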
/*
* Integer Diffie-Hellman.
*/
FUNC1(val_dh, dh_setup_group, dh_group)
FUNC2(val_dh, dh_setup_gex, val_mpint, val_mpint)
FUNC1(uint, dh_modulus_bit_size, val_dh)
FUNC2(val_mpint, dh_create_e, val_dh, uint)
FUNC2(boolean, dh_validate_f, val_dh, val_mpint)
FUNC2(val_mpint, dh_find_K, val_dh, val_mpint)
/*
* Elliptic-curve Diffie-Hellman.
*/
FUNC1(val_ecdh, ssh_ecdhkex_newkey, ecdh_alg)
FUNC2(void, ssh_ecdhkex_getpublic, val_ecdh, out_val_string_binarysink)
FUNC2(opt_val_mpint, ssh_ecdhkex_getkey, val_ecdh, val_string_ptrlen)
/*
* RSA key exchange, and also the BinarySource get function
 * get_rsa_ssh1_priv_agent, which is a convenient way to make an
* RSAKey for RSA kex testing purposes.
*/
FUNC1(val_rsakex, ssh_rsakex_newkey, val_string_ptrlen)
FUNC1(uint, ssh_rsakex_klen, val_rsakex)
FUNC3(val_string, ssh_rsakex_encrypt, val_rsakex, hashalg, val_string_ptrlen)
FUNC3(opt_val_mpint, ssh_rsakex_decrypt, val_rsakex, hashalg, val_string_ptrlen)
FUNC1(val_rsakex, get_rsa_ssh1_priv_agent, val_string_binarysource)
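
/*
 * Illustrative sketch only: a round trip through the two kex functions
 * above at the C level, assuming the ssh.h signatures in which
 * ssh_rsakex_encrypt returns a strbuf and ssh_rsakex_decrypt returns
 * an mp_int (or NULL on failure). The choice of ssh_sha256 as the OAEP
 * hash and the helper's name are assumptions for illustration.
 */
#if 0
static bool rsa_kex_roundtrip_ok(RSAKey *key, ptrlen secret)
{
    strbuf *ciphertext = ssh_rsakex_encrypt(key, &ssh_sha256, secret);
    mp_int *recovered = ssh_rsakex_decrypt(
        key, &ssh_sha256, ptrlen_from_strbuf(ciphertext));
    strbuf_free(ciphertext);
    if (!recovered)
        return false;                /* decryption rejected the input */
    mp_free(recovered);
    return true;
}
#endif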
/*
* Bare RSA keys as used in SSH-1. The construction API functions
* write into an existing RSAKey object, so I've invented an 'rsa_new'
* function to make one in the first place.
*/
FUNC0(val_rsa, rsa_new)
FUNC3(void, get_rsa_ssh1_pub, val_string_binarysource, val_rsa, rsaorder)
FUNC2(void, get_rsa_ssh1_priv, val_string_binarysource, val_rsa)
FUNC2(opt_val_string, rsa_ssh1_encrypt, val_string_ptrlen, val_rsa)
FUNC2(val_mpint, rsa_ssh1_decrypt, val_mpint, val_rsa)
FUNC2(val_string, rsa_ssh1_decrypt_pkcs1, val_mpint, val_rsa)
FUNC1(val_string_asciz, rsastr_fmt, val_rsa)
FUNC1(val_string_asciz, rsa_ssh1_fingerprint, val_rsa)
FUNC3(void, rsa_ssh1_public_blob, out_val_string_binarysink, val_rsa, rsaorder)
FUNC1(int, rsa_ssh1_public_blob_len, val_string_ptrlen)
FUNC2(void, rsa_ssh1_private_blob_agent, out_val_string_binarysink, val_rsa)
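
/*
 * Illustrative sketch only: the sort of definition the invented
 * 'rsa_new' above is expected to have on the testcrypt side, i.e. just
 * a zero-initialised RSAKey for the get_rsa_ssh1_* functions to fill
 * in. A minimal sketch, not necessarily the exact code in testcrypt.c.
 */
#if 0
static RSAKey *rsa_new(void)
{
    RSAKey *rsa = snew(RSAKey);      /* allocate one RSAKey */
    memset(rsa, 0, sizeof(RSAKey));  /* all fields empty until loaded */
    return rsa;
}
#endif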
/*
* The PRNG type. Similarly to hashes and MACs, I've invented an extra
* function prng_seed_update for putting seed data into the PRNG's
* exposed BinarySink.
*/
FUNC1(val_prng, prng_new, hashalg)
FUNC1(void, prng_seed_begin, val_prng)
FUNC2(void, prng_seed_update, val_prng, val_string_ptrlen)
FUNC1(void, prng_seed_finish, val_prng)
FUNC2(val_string, prng_read, val_prng, uint)
FUNC3(void, prng_add_entropy, val_prng, uint, val_string_ptrlen)
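
/*
 * Illustrative sketch only: one plausible definition of the invented
 * prng_seed_update described above, assuming the prng structure
 * exposes a BinarySink (as the standard put_* macros expect) between
 * prng_seed_begin and prng_seed_finish.
 */
#if 0
static void prng_seed_update(prng *pr, ptrlen data)
{
    put_datapl(pr, data);   /* write seed material into the exposed sink */
}
#endif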
/*
* Key load/save functions, or rather, the BinarySource / strbuf API
* that sits just inside the file I/O versions.
*/
FUNC2(boolean, ppk_encrypted_s, val_string_binarysource, out_opt_val_string_asciz)
FUNC2(boolean, rsa1_encrypted_s, val_string_binarysource, out_opt_val_string_asciz)
FUNC5(boolean, ppk_loadpub_s, val_string_binarysource, out_opt_val_string_asciz, out_val_string_binarysink, out_opt_val_string_asciz, out_opt_val_string_asciz_const)
FUNC4(int, rsa1_loadpub_s, val_string_binarysource, out_val_string_binarysink, out_opt_val_string_asciz, out_opt_val_string_asciz_const)
FUNC4(opt_val_key, ppk_load_s, val_string_binarysource, out_opt_val_string_asciz, opt_val_string_asciz, out_opt_val_string_asciz_const)
FUNC5(int, rsa1_load_s, val_string_binarysource, val_rsa, out_opt_val_string_asciz, opt_val_string_asciz, out_opt_val_string_asciz_const)
FUNC8(val_string, ppk_save_sb, val_key, opt_val_string_asciz, opt_val_string_asciz, uint, argon2flavour, uint, uint, uint)
FUNC3(val_string, rsa1_save_sb, val_rsa, opt_val_string_asciz, opt_val_string_asciz)
FUNC2(val_string_asciz, ssh2_fingerprint_blob, val_string_ptrlen, fptype)
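
/*
 * Illustrative sketch only: feeding an in-memory PPK file to the
 * BinarySource-level loader declared above, assuming the underlying
 * sshpubk.c signature mirrors that declaration's argument order and
 * returns an ssh2_userkey. The helper's name is hypothetical.
 */
#if 0
static ssh2_userkey *load_ppk_from_memory(ptrlen data, const char *passphrase)
{
    BinarySource src[1];
    BinarySource_BARE_INIT(src, data.ptr, data.len);
    char *comment = NULL;
    const char *error = NULL;
    return ppk_load_s(src, &comment, passphrase, &error);
}
#endif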
/*
* Password hashing.
*/
FUNC9(val_string, argon2, argon2flavour, uint, uint, uint, uint, val_string_ptrlen, val_string_ptrlen, val_string_ptrlen, val_string_ptrlen)
FUNC2(val_string, argon2_long_hash, uint, val_string_ptrlen)
/*
* Key generation functions.
*/
FUNC3(val_key, rsa_generate, uint, boolean, val_pgc)
FUNC2(val_key, dsa_generate, uint, val_pgc)
FUNC1(opt_val_key, ecdsa_generate, uint)
FUNC1(opt_val_key, eddsa_generate, uint)
FUNC3(val_rsa, rsa1_generate, uint, boolean, val_pgc)
FUNC1(val_pgc, primegen_new_context, primegenpolicy)
FUNC2(opt_val_mpint, primegen_generate, val_pgc, consumed_val_pcs)
FUNC2(val_string, primegen_mpu_certificate, val_pgc, val_mpint)
FUNC1(val_pcs, pcs_new, uint)
FUNC3(val_pcs, pcs_new_with_firstbits, uint, uint, uint)
FUNC3(void, pcs_require_residue, val_pcs, val_mpint, val_mpint)
FUNC2(void, pcs_require_residue_1, val_pcs, val_mpint)
FUNC2(void, pcs_require_residue_1_mod_prime, val_pcs, val_mpint)
FUNC3(void, pcs_avoid_residue_small, val_pcs, uint, uint)
FUNC1(void, pcs_try_sophie_germain, val_pcs)
FUNC1(void, pcs_set_oneshot, val_pcs)
FUNC1(void, pcs_ready, val_pcs)
FUNC4(void, pcs_inspect, val_pcs, out_val_mpint, out_val_mpint, out_val_mpint)
FUNC1(val_mpint, pcs_generate, val_pcs)
FUNC0(val_pockle, pockle_new)
FUNC1(uint, pockle_mark, val_pockle)
FUNC2(void, pockle_release, val_pockle, uint)
FUNC2(pocklestatus, pockle_add_small_prime, val_pockle, val_mpint)
FUNC4(pocklestatus, pockle_add_prime, val_pockle, val_mpint, mpint_list, val_mpint)
FUNC2(val_string, pockle_mpu, val_pockle, val_mpint)
FUNC1(val_millerrabin, miller_rabin_new, val_mpint)
FUNC2(mr_result, miller_rabin_test, val_millerrabin, val_mpint)
/*
* Miscellaneous.
*/
FUNC2(val_wpoint, ecdsa_public, val_mpint, keyalg)
FUNC2(val_epoint, eddsa_public, val_mpint, keyalg)
FUNC2(val_string, des_encrypt_xdmauth, val_string_ptrlen, val_string_ptrlen)
FUNC2(val_string, des_decrypt_xdmauth, val_string_ptrlen, val_string_ptrlen)
FUNC2(val_string, des3_encrypt_pubkey, val_string_ptrlen, val_string_ptrlen)
FUNC2(val_string, des3_decrypt_pubkey, val_string_ptrlen, val_string_ptrlen)
FUNC3(val_string, des3_encrypt_pubkey_ossh, val_string_ptrlen, val_string_ptrlen, val_string_ptrlen)
FUNC3(val_string, des3_decrypt_pubkey_ossh, val_string_ptrlen, val_string_ptrlen, val_string_ptrlen)
FUNC3(val_string, aes256_encrypt_pubkey, val_string_ptrlen, val_string_ptrlen, val_string_ptrlen)
FUNC3(val_string, aes256_decrypt_pubkey, val_string_ptrlen, val_string_ptrlen, val_string_ptrlen)
FUNC1(uint, crc32_rfc1662, val_string_ptrlen)
FUNC1(uint, crc32_ssh1, val_string_ptrlen)
FUNC2(uint, crc32_update, uint, val_string_ptrlen)
FUNC2(boolean, crcda_detect, val_string_ptrlen, val_string_ptrlen)
FUNC1(val_string, get_implementations_commasep, val_string_ptrlen)
/*
* These functions aren't part of PuTTY's own API, but are additions
* by testcrypt itself for administrative purposes.
*/
FUNC1(void, random_queue, val_string_ptrlen)
FUNC0(uint, random_queue_len)
FUNC2(void, random_make_prng, hashalg, val_string_ptrlen)
FUNC0(void, random_clear)
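
/*
 * As the names and signatures suggest, the random_* entry points above
 * let tests control the "random" data consumed by the code under test:
 * random_queue supplies bytes to be handed back on request,
 * random_queue_len reports how many remain, random_make_prng sets up a
 * deterministic PRNG seeded through the named hash, and random_clear
 * resets that state. The sketch below is illustrative only --
 * hypothetical names, not testcrypt's actual implementation -- showing
 * one way a byte queue like this can back a test harness's
 * random-read hook.
 */

#include <stddef.h>
#include <string.h>

/* Hypothetical fixed-size queue of pre-arranged "random" bytes. */
static unsigned char queued[4096];
static size_t queued_len;

/* Append bytes for later consumption (cf. random_queue). */
static void test_random_queue(const void *data, size_t len)
{
    if (len > sizeof(queued) - queued_len)
        len = sizeof(queued) - queued_len;   /* silently drop any excess */
    memcpy(queued + queued_len, data, len);
    queued_len += len;
}

/* Report how many queued bytes remain (cf. random_queue_len). */
static size_t test_random_queue_len(void)
{
    return queued_len;
}

/* Satisfy a request for random data from the queue, zero-filling if it
 * runs dry; a real harness would fall back to a seeded PRNG instead. */
static void test_random_read(void *buf, size_t len)
{
    size_t n = len < queued_len ? len : queued_len;
    memcpy(buf, queued, n);
    memmove(queued, queued + n, queued_len - n);
    queued_len -= n;
    memset((unsigned char *)buf + n, 0, len - n);
}

/* Discard any queued data (cf. random_clear). */
static void test_random_clear(void)
{
    queued_len = 0;
}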