# From commit 2019-01-01 ("New test system for mp_int and cryptography"):
#
# I've written a new standalone test program which incorporates all of
# PuTTY's crypto code, including the mp_int and low-level elliptic curve
# layers but also going all the way up to the implementations of the
# MAC, hash, cipher, public key and kex abstractions.
#
# The test program itself, 'testcrypt', speaks a simple line-oriented
# protocol on standard I/O in which you write the name of a function
# call followed by some inputs, and it gives you back a list of outputs
# preceded by a line telling you how many there are. Dynamically
# allocated objects are assigned string ids in the protocol, and there's
# a 'free' function that tells testcrypt when it can dispose of one.
#
# It's possible to speak that protocol by hand, but cumbersome. This
# Python module wraps it, by running testcrypt as a persistent
# subprocess and gatewaying all the function calls into things that look
# reasonably natural to call from Python. This module and testcrypt.c
# both read a carefully formatted header file, testcrypt.h, which
# contains the name and signature of every exported function, so it
# costs minimal effort to expose a given function through this test API.
# In a few cases it's necessary to write a wrapper in testcrypt.c that
# makes the function look more friendly, but mostly you don't even need
# that. (Though that is one of the motivations behind a lot of recent
# API cleanups!)
#
# I considered doing Python integration in the more obvious way, by
# linking parts of the PuTTY code directly into a native-code .so Python
# module. I decided against it because this way is more flexible: I can
# run the testcrypt program on its own, or compile it in a way that
# Python wouldn't play nicely with (I bet compiling just that .so with
# Leak Sanitiser wouldn't do what you wanted when Python loaded it!), or
# attach a debugger to it. I can even recompile testcrypt for a
# different CPU architecture (32- vs 64-bit, or even running it on a
# different machine over ssh or under emulation) and still layer the
# nice API on top of that via the local Python interpreter. All I need
# is a bidirectional data channel.
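# The line protocol described above is simple enough to sketch in a few
# lines. The helpers below are purely illustrative and are not used by
# this module; the command name in the test comments is hypothetical,
# not necessarily a real testcrypt function.

```python
# Illustrative sketch only (not used by the harness below): framing a
# request line and parsing a response of the documented shape, i.e. a
# count line followed by that many output values.
def frame_request(fn_name, args):
    # A request is one line: the function name, then its arguments,
    # space-separated.
    return b" ".join([fn_name] + list(args)) + b"\n"

def parse_response(lines):
    # A response is a line giving the number of outputs, then that many
    # lines of output values (which may be string ids referring to
    # dynamically allocated objects).
    count = int(lines[0])
    return lines[1:1 + count]
```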
import sys
import os
import numbers
import subprocess
import re
# From commit 2021-11-21 ("Rewrite the testcrypt.c macro system"):
#
# The function specifications in testcrypt.h now have to specify a
# _name_ for each parameter as well as a type, because the macros
# generating the C marshalling wrappers need a structure field for each
# parameter and cpp isn't flexible enough to generate names for those
# fields automatically. Rather than tediously naming them arg1, arg2
# etc, the parameter names from the prototypes of the underlying real
# functions are reused, which makes testcrypt.h a bit more
# self-documenting.
#
# The testcrypt.py end of the mechanism eats that format. Since it's got
# more complicated syntax and nested parens, it uses something a bit
# like a separated lexer/parser system in place of the previous crude
# regex matcher, which should enforce that the whole header file really
# does conform to the restricted syntax it has to fit into.
import string
import struct
from binascii import hexlify
assert sys.version_info[:2] >= (3,0), "This is Python 3 code"
# Expect to be run from the 'test' subdirectory, one level down from
# the main source
putty_srcdir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
def coerce_to_bytes(arg):
    return arg.encode("UTF-8") if isinstance(arg, str) else arg
# From commit 2019-03-24 ("Handle crashes in the testcrypt binary more
# cleanly"):
#
# Previously, if the testcrypt subprocess suffered any kind of crash or
# assertion failure during a run of the Python-based test system,
# ChildProcess.read_line() would get EOF, ignore it, and silently return
# the empty string for the rest of the program, leading to a long string
# of error reports in tests that were nowhere near the code that
# actually caused the crash. Now read_line() detects EOF and raises an
# exception, so the test suite won't heedlessly carry on once its
# subprocess has gone away.
#
# One wrinkle: sometimes that function is called while a Python __del__
# method is asking testcrypt to free something. An exception can't be
# propagated out of a __del__ (analogously to the rule that it's a
# really terrible idea for C++ destructors to throw), so you'd just get
# an annoying warning on standard error. Worse still, this can also
# happen if testcrypt has _already_ crashed, because the __del__ methods
# still run. To protect against that, ChildProcess caches the exception
# after throwing it, each subsequent write_line() rethrows it, and
# __del__ catches and explicitly ignores it.
#
# The combined result: if testcrypt crashes in normal (non-__del__)
# context, we get a single exception that terminates the run cleanly
# without cascade failures, and whose backtrace localises the problem to
# the operation that caused the crash. If testcrypt crashes in __del__,
# we can't do quite that well, but we still terminate with an exception
# at the next opportunity.
class ChildProcessFailure(Exception):
    pass
class ChildProcess(object):
    def __init__(self):
        self.sp = None
        self.debug = None
        self.exitstatus = None
        self.exception = None
        dbg = os.environ.get("PUTTY_TESTCRYPT_DEBUG")
        if dbg is not None:
            if dbg == "stderr":
                self.debug = sys.stderr
            else:
                sys.stderr.write("Unknown value '{}' for PUTTY_TESTCRYPT_DEBUG"
                                 " (try 'stderr')\n".format(dbg))
    def start(self):
        assert self.sp is None
        override_command = os.environ.get("PUTTY_TESTCRYPT")
        if override_command is None:
            cmd = [os.path.join(putty_srcdir, "testcrypt")]
            shell = False
        else:
            cmd = override_command
            shell = True
        self.sp = subprocess.Popen(
            cmd, shell=shell, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    def write_line(self, line):
        if self.exception is not None:
            # Re-raise our fatal-error exception, if it previously
            # occurred in a context where it couldn't be propagated (a
            # __del__ method).
            raise self.exception
        if self.debug is not None:
            self.debug.write("send: {}\n".format(line))
        self.sp.stdin.write(line + b"\n")
        self.sp.stdin.flush()
    def read_line(self):
        line = self.sp.stdout.readline()
        if len(line) == 0:
            self.exception = ChildProcessFailure("received EOF from testcrypt")
            raise self.exception
        line = line.rstrip(b"\r\n")
        if self.debug is not None:
            self.debug.write("recv: {}\n".format(line))
        return line
    def already_terminated(self):
        return self.sp is None and self.exitstatus is not None
    def funcall(self, cmd, args):
        if self.sp is None:
            assert self.exitstatus is None
self.start()
self.write_line(coerce_to_bytes(cmd) + b" " + b" ".join(
coerce_to_bytes(arg) for arg in args))
argcount = int(self.read_line())
return [self.read_line() for arg in range(argcount)]
def wait_for_exit(self):
if self.sp is not None:
self.sp.stdin.close()
self.exitstatus = self.sp.wait()
self.sp = None
def check_return_status(self):
self.wait_for_exit()
if self.exitstatus is not None and self.exitstatus != 0:
raise ChildProcessFailure("testcrypt returned exit status {}"
.format(self.exitstatus))
childprocess = ChildProcess()
# Map each testcrypt value type to the prefix(es) of the C functions
# that operate on it, so that those functions can be exposed as
# methods on the corresponding Value objects.
method_prefixes = {
'val_wpoint': ['ecc_weierstrass_'],
'val_mpoint': ['ecc_montgomery_'],
'val_epoint': ['ecc_edwards_'],
'val_hash': ['ssh_hash_'],
'val_mac': ['ssh2_mac_'],
'val_key': ['ssh_key_'],
'val_cipher': ['ssh_cipher_'],
'val_dh': ['dh_'],
'val_ecdh': ['ssh_ecdhkex_'],
'val_rsakex': ['ssh_rsakex_'],
'val_prng': ['prng_'],
'val_pcs': ['pcs_'],
'val_pockle': ['pockle_'],
'val_ntruencodeschedule': ['ntru_encode_schedule_', 'ntru_'],
}
method_lists = {t: [] for t in method_prefixes}
checked_enum_values = {}
class Value(object):
def __init__(self, typename, ident):
self._typename = typename
self._ident = ident
for methodname, function in method_lists.get(self._typename, []):
setattr(self, methodname,
(lambda f: lambda *args: f(self, *args))(function))
def _consumed(self):
self._ident = None
def __repr__(self):
return "Value({!r}, {!r})".format(self._typename, self._ident)
def __del__(self):
if self._ident is not None and not childprocess.already_terminated():
try:
childprocess.funcall("free", [self._ident])
except ChildProcessFailure:
# If we see this exception now, we can't do anything
# about it, because exceptions don't propagate out of
# __del__ methods. Squelch it to prevent the annoying
# runtime warning from Python, and the
# 'self.exception' mechanism in the ChildProcess class
# will raise it again at the next opportunity.
#
# (This covers both the case where testcrypt crashes
# _during_ one of these free operations, and the
# silencing of cascade failures when we try to send a
# "free" command to testcrypt after it had already
# crashed for some other reason.)
pass
def __long__(self):
if self._typename != "val_mpint":
raise TypeError("testcrypt values of types other than mpint"
" cannot be converted to integer")
hexval = childprocess.funcall("mp_dump", [self._ident])[0]
return 0 if len(hexval) == 0 else int(hexval, 16)
def __int__(self):
return int(self.__long__())
# Encode a byte string for the testcrypt wire protocol: printable
# ASCII other than '%' is sent literally; every other byte is
# %-escaped as two hex digits.
def marshal_string(val):
val = coerce_to_bytes(val)
assert isinstance(val, bytes), "Bad type for val_string input"
return "".join(
chr(b) if (0x20 <= b < 0x7F and b != 0x25)
else "%{:02x}".format(b)
for b in val)
def make_argword(arg, argtype, fnname, argindex, argname, to_preserve):
typename, consumed = argtype
if typename.startswith("opt_"):
if arg is None:
return "NULL"
typename = typename[4:]
if typename == "val_string":
retwords = childprocess.funcall("newstring", [marshal_string(arg)])
arg = make_retvals([typename], retwords, unpack_strings=False)[0]
to_preserve.append(arg)
    if typename == "val_mpint" and isinstance(arg, numbers.Integral):
        retwords = childprocess.funcall("mp_literal", ["0x{:x}".format(arg)])
        arg = make_retvals([typename], retwords)[0]
        to_preserve.append(arg)
    if isinstance(arg, Value):
        if arg._typename != typename:
            raise TypeError(
                "{}() argument #{:d} ({}) should be {} ({} given)".format(
                    fnname, argindex, argname, typename, arg._typename))
        ident = arg._ident
        if consumed:
            arg._consumed()
        return ident
    if typename == "uint" and isinstance(arg, numbers.Integral):
        return "0x{:x}".format(arg)
    if typename == "boolean":
        return "true" if arg else "false"
    if typename in {
            "hashalg", "macalg", "keyalg", "cipheralg",
            "dh_group", "ecdh_alg", "rsaorder", "primegenpolicy",
            "argon2flavour", "fptype", "httpdigesthash", "mlkem_params"}:
        arg = coerce_to_bytes(arg)
        if isinstance(arg, bytes) and b" " not in arg:
            dictkey = (typename, arg)
            if dictkey not in checked_enum_values:
                retwords = childprocess.funcall("checkenum", [typename, arg])
                assert len(retwords) == 1
                checked_enum_values[dictkey] = (retwords[0] == b"ok")
            if checked_enum_values[dictkey]:
                return arg
    if typename == "mpint_list":
        sublist = [make_argword(len(arg), ("uint", False),
                                fnname, argindex, argname, to_preserve)]
        for val in arg:
            sublist.append(make_argword(val, ("val_mpint", False),
                                        fnname, argindex, argname, to_preserve))
        return b" ".join(coerce_to_bytes(sub) for sub in sublist)
    if typename == "int16_list":
        sublist = [make_argword(len(arg), ("uint", False),
                                fnname, argindex, argname, to_preserve)]
        for val in arg:
            sublist.append(make_argword(val & 0xFFFF, ("uint", False),
                                        fnname, argindex, argname, to_preserve))
        return b" ".join(coerce_to_bytes(sub) for sub in sublist)
    raise TypeError(
        "Can't convert {}() argument #{:d} ({}) to {} (value was {!r})".format(
            fnname, argindex, argname, typename, arg))

# Retrieve a string value from the testcrypt subprocess, decoding the %XX
# escape sequences used by its line-oriented protocol, and free the id.
def unpack_string(identifier):
    retwords = childprocess.funcall("getstring", [identifier])
    childprocess.funcall("free", [identifier])
    return re.sub(b"%[0-9A-F][0-9A-F]",
                  lambda m: bytes([int(m.group(0)[1:], 16)]),
                  retwords[0])

# Retrieve an mp_int value from the testcrypt subprocess as a Python int,
# via its hex dump, and free the id.
def unpack_mp(identifier):
    retwords = childprocess.funcall("mp_dump", [identifier])
    childprocess.funcall("free", [identifier])
    return int(retwords[0], 16)

# Convert a single return word from the testcrypt protocol into a Python
# value of the given return type.
def make_retval(rettype, word, unpack_strings):
    if rettype.startswith("opt_"):
        if word == b"NULL":
            return None
        rettype = rettype[4:]
    if rettype == "val_string" and unpack_strings:
        return unpack_string(word)
    if rettype == "val_keycomponents":
        kc = {}
        retwords = childprocess.funcall("key_components_count", [word])
        for i in range(int(retwords[0], 0)):
            args = [word, "{:d}".format(i)]
            retwords = childprocess.funcall("key_components_nth_name", args)
            kc_key = unpack_string(retwords[0])
            retwords = childprocess.funcall("key_components_nth_str", args)
            if retwords[0] != b"NULL":
                kc_value = unpack_string(retwords[0]).decode("ASCII")
            else:
                retwords = childprocess.funcall("key_components_nth_mp", args)
                kc_value = unpack_mp(retwords[0])
            kc[kc_key.decode("ASCII")] = kc_value
childprocess.funcall("free", [word])
return kc
if rettype.startswith("val_"):
return Value(rettype, word)
elif rettype == "int" or rettype == "uint":
return int(word, 0)
elif rettype == "boolean":
assert word == b"true" or word == b"false"
return word == b"true"
elif rettype in {"pocklestatus", "mr_result"}:
return word.decode("ASCII")
elif rettype == "int16_list":
return list(map(int, word.split(b',')))
raise TypeError("Can't deal with return value {!r} of type {!r}"
.format(word, rettype))
def make_retvals(rettypes, retwords, unpack_strings=True):
assert len(rettypes) == len(retwords), \
    "testcrypt returned {:d} values, expected {:d}".format(
        len(retwords), len(rettypes))
return [make_retval(rettype, word, unpack_strings)
for rettype, word in zip(rettypes, retwords)]
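# Worked example of the conversion rules in make_retval() above (the words
# here are hypothetical protocol tokens, not output from a real testcrypt
# run):
#
#   make_retvals(["uint", "boolean", "int16_list"],
#                [b"0x20", b"true", b"1,-2,3"])
#
# would yield [32, True, [1, -2, 3]]: "uint"/"int" words go through
# int(word, 0), so hex and octal prefixes are honoured; "boolean" words must
# be exactly b"true" or b"false"; and "int16_list" words are comma-separated
# decimal integers.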
class Function(object):
def __init__(self, fnname, rettypes, retnames, argtypes, argnames):
self.fnname = fnname
self.rettypes = rettypes
self.retnames = retnames
self.argtypes = argtypes
self.argnames = argnames
def __repr__(self):
return "<Function {}({}) -> ({})>".format(
self.fnname,
", ".join(("consumed " if c else "")+t+" "+n
for (t,c),n in zip(self.argtypes, self.argnames)),
", ".join((t+" "+n if n is not None else t)
for t,n in zip(self.rettypes, self.retnames)),
)
def __call__(self, *args):
if len(args) != len(self.argtypes):
raise TypeError(
"{}() takes exactly {} arguments ({} given)".format(
self.fnname, len(self.argtypes), len(args)))
to_preserve = []
retwords = childprocess.funcall(
self.fnname, [make_argword(args[i], self.argtypes[i],
self.fnname, i, self.argnames[i],
to_preserve)
for i in range(len(args))])
retvals = make_retvals(self.rettypes, retwords)
if len(retvals) == 0:
return None
if len(retvals) == 1:
return retvals[0]
return tuple(retvals)
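# Note the flattening convention at the end of __call__ above: a testcrypt
# function with no return values comes back as None, one value comes back
# bare, and several come back as a tuple. So a (hypothetical) call whose
# only rettype is "boolean" returns plain True, not the 1-tuple (True,).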
def _lex_testcrypt_header(header):
pat = re.compile(
# Skip any combination of whitespace and comments
'(?:{})*'.format('|'.join((
'[ \t\n]', # whitespace
'/\\*(?:.|\n)*?\\*/', # C90-style /* ... */ comment, ended eagerly
'//[^\n]*\n', # C99-style comment to end-of-line
))) +
# And then match a token
'({})'.format('|'.join((
# Punctuation
r'\(',
r'\)',
',',
# Identifier
'[A-Za-z_][A-Za-z0-9_]*',
# End of string
'$',
)))
)
pos = 0
end = len(header)
while pos < end:
m = pat.match(header, pos)
assert m is not None, (
"Failed to lex testcrypt-func.h at byte position {:d}".format(pos))
pos = m.end()
tok = m.group(1)
if len(tok) == 0:
assert pos == end, (
"Empty token should only be returned at end of string")
yield tok, m.start(1)
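# For example, lexing a declaration in the testcrypt-func.h style (this
# particular declaration is hypothetical), such as
#     FUNC(val_mp, mp_add, ARG(val_mp, x), ARG(val_mp, y))
# yields the token stream
#     FUNC ( val_mp , mp_add , ARG ( val_mp , x ) , ARG ( val_mp , y ) )
# with whitespace and comments silently skipped, followed by one final
# empty token (from the '$' alternative) marking end-of-string.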
def _parse_testcrypt_header(tokens):
def is_id(tok):
return tok[0] in string.ascii_letters+"_"
def expect(what, why, eof_ok=False):
tok, pos = next(tokens)
if tok == '' and eof_ok:
return None
if hasattr(what, '__call__'):
description = lambda: ""
ok = what(tok)
elif isinstance(what, set):
description = lambda: " or ".join("'"+x+"' " for x in sorted(what))
ok = tok in what
else:
description = lambda: "'"+what+"' "
ok = tok == what
if not ok:
sys.exit("testcrypt-func.h:{:d}: expected {}{}".format(
pos, description(), why))
return tok
while True:
tok = expect({"FUNC", "FUNC_WRAPPED"},
"at start of function specification", eof_ok=True)
if tok is None:
break
expect("(", "after FUNC")
rettype = expect(is_id, "return type")
expect(",", "after return type")
funcname = expect(is_id, "function name")
expect(",", "after function name")
args = []
firstargkind = expect({"ARG", "VOID"}, "at start of argument list")
if firstargkind == "VOID":
expect(")", "after VOID")
else:
while True:
# Every time we come back to the top of this loop, we've
# just seen 'ARG'
expect("(", "after ARG")
argtype = expect(is_id, "argument type")
expect(",", "after argument type")
argname = expect(is_id, "argument name")
args.append((argtype, argname))
expect(")", "at end of ARG")
punct = expect({",", ")"}, "after argument")
if punct == ")":
break
expect("ARG", "to begin next argument")
yield funcname, rettype, args
def _setup(scope):
valprefix = "val_"
outprefix = "out_"
optprefix = "opt_"
consprefix = "consumed_"
def trim_argtype(arg):
if arg.startswith(optprefix):
return optprefix + trim_argtype(arg[len(optprefix):])
if (arg.startswith(valprefix) and
"_" in arg[len(valprefix):]):
# Strip suffixes like val_string_asciz
arg = arg[:arg.index("_", len(valprefix))]
return arg
with open(os.path.join(putty_srcdir, "test", "testcrypt-func.h")) as f:
header = f.read()
tokens = _lex_testcrypt_header(header)
for function, rettype, arglist in _parse_testcrypt_header(tokens):
rettypes = []
retnames = []
if rettype != "void":
rettypes.append(trim_argtype(rettype))
retnames.append(None)
argtypes = []
argnames = []
argsconsumed = []
for arg, argname in arglist:
if arg.startswith(outprefix):
rettypes.append(trim_argtype(arg[len(outprefix):]))
retnames.append(argname)
else:
consumed = False
if arg.startswith(consprefix):
arg = arg[len(consprefix):]
consumed = True
arg = trim_argtype(arg)
argtypes.append((arg, consumed))
argnames.append(argname)
func = Function(function, rettypes, retnames,
argtypes, argnames)
scope[function] = func
if len(argtypes) > 0:
t = argtypes[0][0]
if t in method_prefixes:
for prefix in method_prefixes[t]:
if function.startswith(prefix):
methodname = function[len(prefix):]
method_lists[t].append((methodname, func))
break
_setup(globals())
del _setup