"""Python wrapper module for the 'testcrypt' standalone test binary.

testcrypt incorporates all of PuTTY's crypto code, including the mp_int
and low-level elliptic curve layers but also going all the way up to
the implementations of the MAC, hash, cipher, public key and kex
abstractions.

The test program itself speaks a simple line-oriented protocol on
standard I/O in which you write the name of a function call followed
by some inputs, and it gives you back a list of outputs preceded by a
line telling you how many there are. Dynamically allocated objects are
assigned string ids in the protocol, and there's a 'free' function
that tells testcrypt when it can dispose of one.

It's possible to speak that protocol by hand, but cumbersome. This
module wraps it, by running testcrypt as a persistent subprocess and
gatewaying all the function calls into things that look reasonably
natural to call from Python. This module and testcrypt.c both read a
carefully formatted header file, testcrypt.h, which contains the name
and signature of every exported function, so it costs minimal effort
to expose a given function through this test API. In a few cases it's
necessary to write a wrapper in testcrypt.c that makes the function
look more friendly, but mostly you don't even need that. (Though that
is one of the motivations behind a lot of recent API cleanups!)

Python integration could instead have been done in the more obvious
way, by linking parts of the PuTTY code directly into a native-code
.so Python module. The separate-subprocess approach used here is more
flexible: you can run the testcrypt program on its own, or compile it
in a way that Python wouldn't play nicely with (compiling just that
.so with Leak Sanitiser probably wouldn't do what you wanted when
Python loaded it!), or attach a debugger to it. You can even recompile
testcrypt for a different CPU architecture (32- vs 64-bit, or even run
it on a different machine over ssh or under emulation) and still layer
the nice API on top of that via the local Python interpreter. All
that's needed is a bidirectional data channel.
"""
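
# A purely illustrative sketch of the wire protocol described above. The
# exact set of commands depends on the testcrypt build; 'mp_literal',
# 'mp_dump' and 'free' are commands this module itself uses below, but the
# id 'mpint0' and the values are hypothetical:
#
#   send: mp_literal 0x1234
#   recv: 1
#   recv: mpint0
#   send: mp_dump mpint0
#   recv: 1
#   recv: 1234
#   send: free mpint0
#   recv: 0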

import sys
import os
import numbers
import subprocess
import re
import struct
from binascii import hexlify

# Expect to be run from the 'test' subdirectory, one level down from
# the main source
putty_srcdir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

def unicode_to_bytes(arg):
    # Slightly fiddly way to do this which should work in Python 2 and 3
    if isinstance(arg, type(u'a')) and not isinstance(arg, type(b'a')):
        arg = arg.encode("UTF-8")
    return arg

def bytevals(b):
    return struct.unpack("{:d}B".format(len(b)), b)

def valbytes(b):
    b = list(b)
    return struct.pack("{:d}B".format(len(b)), *b)

class ChildProcess(object):
    def __init__(self):
        self.sp = None
        self.debug = None
        self.exitstatus = None

        dbg = os.environ.get("PUTTY_TESTCRYPT_DEBUG")
        if dbg is not None:
            if dbg == "stderr":
                self.debug = sys.stderr
            else:
                sys.stderr.write("Unknown value '{}' for PUTTY_TESTCRYPT_DEBUG"
                                 " (try 'stderr')\n".format(dbg))

    def start(self):
        assert self.sp is None
        override_command = os.environ.get("PUTTY_TESTCRYPT")
        if override_command is None:
            cmd = [os.path.join(putty_srcdir, "testcrypt")]
            shell = False
        else:
            cmd = override_command
            shell = True
        self.sp = subprocess.Popen(
            cmd, shell=shell, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def write_line(self, line):
        if self.debug is not None:
            self.debug.write("send: {}\n".format(line))
        self.sp.stdin.write(line + b"\n")
        self.sp.stdin.flush()

    def read_line(self):
        line = self.sp.stdout.readline().rstrip(b"\r\n")
        if self.debug is not None:
            self.debug.write("recv: {}\n".format(line))
        return line

    def funcall(self, cmd, args):
        if self.sp is None:
            self.start()
        self.write_line(unicode_to_bytes(cmd) + b" " + b" ".join(
            unicode_to_bytes(arg) for arg in args))
        argcount = int(self.read_line())
        return [self.read_line() for arg in range(argcount)]

    def wait_for_exit(self):
        if self.sp is not None:
            self.sp.stdin.close()
            self.exitstatus = self.sp.wait()
            self.sp = None

    def check_return_status(self):
        self.wait_for_exit()
        if self.exitstatus is not None and self.exitstatus != 0:
            raise Exception("testcrypt returned exit status {}"
                            .format(self.exitstatus))

childprocess = ChildProcess()

class Value(object):
    def __init__(self, typename, ident):
        self.typename = typename
        self.ident = ident

    def consumed(self):
        self.ident = None

    def __repr__(self):
        return "Value({!r}, {!r})".format(self.typename, self.ident)

    def __del__(self):
        if self.ident is not None:
            childprocess.funcall("free", [self.ident])

    def __long__(self):
        if self.typename != "val_mpint":
            raise TypeError("testcrypt values of types other than mpint"
                            " cannot be converted to integer")
        hexval = childprocess.funcall("mp_dump", [self.ident])[0]
        return 0 if len(hexval) == 0 else int(hexval, 16)

    def __int__(self):
        return int(self.__long__())

def make_argword(arg, argtype, fnname, argindex, to_preserve):
    typename, consumed = argtype
    if typename.startswith("opt_"):
        if arg is None:
            return "NULL"
        typename = typename[4:]
    if typename == "val_string":
        arg = unicode_to_bytes(arg)
        if isinstance(arg, bytes):
            retwords = childprocess.funcall(
                "newstring", ["".join("%{:02x}".format(b)
                                      for b in bytevals(arg))])
            arg = make_retvals([typename], retwords, unpack_strings=False)[0]
            to_preserve.append(arg)
    if typename == "val_mpint" and isinstance(arg, numbers.Integral):
        retwords = childprocess.funcall("mp_literal", ["0x{:x}".format(arg)])
        arg = make_retvals([typename], retwords)[0]
        to_preserve.append(arg)
    if isinstance(arg, Value):
        if arg.typename != typename:
            raise TypeError(
                "{}() argument {:d} should be {} ({} given)".format(
                    fnname, argindex, typename, arg.typename))
        ident = arg.ident
        if consumed:
            arg.consumed()
        return ident
    if typename == "uint" and isinstance(arg, numbers.Integral):
        return "0x{:x}".format(arg)
    if typename in {
            "hashalg", "macalg", "keyalg", "cipheralg",
            "dh_group", "ecdh_alg", "rsaorder"}:
        arg = unicode_to_bytes(arg)
        if isinstance(arg, bytes) and b" " not in arg:
            return arg
    raise TypeError(
        "Can't convert {}() argument {:d} to {} (value was {!r})".format(
            fnname, argindex, typename, arg))
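
# Note on the string encoding used just above and below: make_argword()
# escapes every byte of a val_string argument as a %xx hex pair before
# handing it to the 'newstring' command (so, for example, b"AB" goes on
# the wire as "newstring %41%42"), and make_retval() reverses that
# escaping when a string value is fetched back via 'getstring'.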

def make_retval(rettype, word, unpack_strings):
    if rettype.startswith("opt_"):
        if word == b"NULL":
            return None
        rettype = rettype[4:]
    if rettype == "val_string" and unpack_strings:
        retwords = childprocess.funcall("getstring", [word])
        childprocess.funcall("free", [word])
        return re.sub(b"%[0-9A-F][0-9A-F]",
                      lambda m: valbytes([int(m.group(0)[1:], 16)]),
                      retwords[0])
    if rettype.startswith("val_"):
        return Value(rettype, word)
    elif rettype == "uint":
        return int(word, 0)
    elif rettype == "boolean":
        assert word == b"true" or word == b"false"
        return word == b"true"
    raise TypeError("Can't deal with return value {!r} of type {!r}"
                    .format(rettype, word))

def make_retvals(rettypes, retwords, unpack_strings=True):
    assert len(rettypes) == len(retwords)  # FIXME: better exception
    return [make_retval(rettype, word, unpack_strings)
            for rettype, word in zip(rettypes, retwords)]

class Function(object):
    def __init__(self, fnname, rettypes, argtypes):
        self.fnname = fnname
        self.rettypes = rettypes
        self.argtypes = argtypes

    def __repr__(self):
        return "<Function {}>".format(self.fnname)

    def __call__(self, *args):
        if len(args) != len(self.argtypes):
            raise TypeError(
                "{}() takes exactly {} arguments ({} given)".format(
                    self.fnname, len(self.argtypes), len(args)))
        to_preserve = []
        retwords = childprocess.funcall(
            self.fnname, [make_argword(args[i], self.argtypes[i],
                                       self.fnname, i, to_preserve)
                          for i in range(len(args))])
        retvals = make_retvals(self.rettypes, retwords)
        if len(retvals) == 0:
            return None
        if len(retvals) == 1:
            return retvals[0]
        return tuple(retvals)
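
# _setup() below reads testcrypt.h and binds one Function object per
# exported function into the module's namespace. Illustratively (the exact
# declarations depend on the header), a line such as
#
#     FUNC2(val_mpint, mp_add, val_mpint, val_mpint)
#
# would become a module-level callable mp_add(x, y) returning a Value that
# wraps a val_mpint id.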

def _setup(scope):
    header_file = os.path.join(putty_srcdir, "testcrypt.h")

    linere = re.compile(r'^FUNC\d+\((.*)\)$')

    valprefix = "val_"
    outprefix = "out_"
    optprefix = "opt_"
    consprefix = "consumed_"

    def trim_argtype(arg):
        if arg.startswith(optprefix):
            return optprefix + trim_argtype(arg[len(optprefix):])

        if (arg.startswith(valprefix) and
            "_" in arg[len(valprefix):]):
            # Strip suffixes like val_string_asciz
            arg = arg[:arg.index("_", len(valprefix))]
        return arg

    with open(header_file) as f:
        for line in iter(f.readline, ""):
            line = line.rstrip("\r\n").replace(" ", "")
            m = linere.match(line)
            if m is not None:
                words = m.group(1).split(",")
                function = words[1]
                rettypes = []
                argtypes = []
                argsconsumed = []
                if words[0] != "void":
                    rettypes.append(trim_argtype(words[0]))
                for arg in words[2:]:
                    if arg.startswith(outprefix):
                        rettypes.append(trim_argtype(arg[len(outprefix):]))
                    else:
                        consumed = False
                        if arg.startswith(consprefix):
                            arg = arg[len(consprefix):]
                            consumed = True
                        arg = trim_argtype(arg)
                        argtypes.append((arg, consumed))
                scope[function] = Function(function, rettypes, argtypes)

_setup(globals())
del _setup
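
# A minimal, optional smoke test: run this module directly to round-trip a
# number through the low-level protocol commands the wrapper itself relies
# on ('mp_literal', 'mp_dump', 'free'). This assumes the testcrypt binary
# has been built in the source directory, or that PUTTY_TESTCRYPT points
# at one.
if __name__ == "__main__":
    ident = childprocess.funcall("mp_literal", ["0x12345"])[0]
    print(childprocess.funcall("mp_dump", [ident])[0].decode("ASCII"))
    childprocess.funcall("free", [ident])
    childprocess.check_return_status()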