Some new certificate-related options weren't documented in the usage
message and/or man page; and the longer-standing "-E fptype" option
was entirely omitted from the usage message.
As of the cyclic-dependency fix in b01173c6b7, building from our tarball
using the instructions in its README (using the source tree as build
tree), in the absence of Halibut, would lead to the pre-built man pages
not being installed.
(Also, it produced a load of "Could not build man page" complaints
at cmake generation time, which is how I actually noticed.)
With the new larger fixed-group methods, group exchange is less
clearly always the right answer. (Really it seems more sensible to
use ECDH over any of the integer DH methods, these days.)
Also, reword other kex descriptions a bit.
If Halibut is not available to build the docs, but on the other hand
pre-built man pages already exist (e.g. because you unpacked a source
zip file with them already provided), then docs/CMakeLists.txt creates
a set of build rules that copy the pre-built man pages from the source
directory to the build directory.
However, if the source and build directories are the _same_, this
creates a set of cyclic dependencies, i.e. files which depend directly
on themselves. Some build tools (in particular 'ninja') will report
this as an error.
In that situation, the simple fix is to leave off the build rules
completely: if the man pages are already where the build will want
them to end up, there need not be any build rule to do anything about
them.
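
As a rough sketch of the shape of that fix (illustrative, not the
exact code in docs/CMakeLists.txt - the file name and structure here
are invented), the copy rules can simply be guarded by a check that
the two directories differ:

    if(NOT CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_CURRENT_BINARY_DIR)
      # Only an out-of-tree build needs the pre-built man page copied;
      # in-tree, it's already where the build wants it, so emitting
      # this rule would make the file depend on itself.
      add_custom_command(
        OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/putty.1
        COMMAND ${CMAKE_COMMAND} -E copy
                ${CMAKE_CURRENT_SOURCE_DIR}/putty.1
                ${CMAKE_CURRENT_BINARY_DIR}/putty.1
        DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/putty.1)
    endif()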
In recent months I've had two requests from different people to build
support into PuTTY for automatically handling complicated third-party
auth protocols layered on top of keyboard-interactive - the kind of
thing where you're asked to enter some auth response, and you have to
refer to some external source like a web server to find out what the
right response _is_, which is a pain to do by hand, so you'd prefer it
to be automated in the SSH client.
That seems like a reasonable thing for an end user to want, but I
didn't think it was a good idea to build support for specific
protocols of that kind directly into PuTTY, where there would no doubt
be an ever-lengthening list, and maintenance needed on all of them.
So instead, in collaboration with one of my correspondents, I've
designed and implemented a protocol to be spoken between PuTTY and a
plugin running as a subprocess. The plugin can opt to handle the
keyboard-interactive authentication loop on behalf of the user, in
which case PuTTY passes on all the INFO_REQUEST packets to it, and
lets it make up responses. It can also ask questions of the user if
necessary.
The protocol spec is provided in a documentation appendix. The entire
configuration for the end user consists of providing a full command
line to use as the subprocess.
In the contrib directory I've provided an example plugin written in
Python. It gives a set of fixed responses suitable for getting through
Uppity's made-up k-i system, because that was a reasonable thing I
already had lying around to test against. But it also provides example
code that someone else could pick up and insert their own live
response-provider into the middle of, assuming they were happy with it
being in Python.
We've occasionally had reports of SSH servers disconnecting as soon as
they receive PuTTY's KEXINIT. I think all such reports have involved
the kind of simple ROM-based SSH server software you find in small
embedded devices.
I've never been able to prove it, but I've always suspected that one
possible cause of this is simply that PuTTY's KEXINIT is _too long_,
either in number of algorithms listed or in total length (especially
given all the ones that end in @very.long.domain.name suffixes).
If I'm right about either of those being the cause, then it's just
become even more likely to happen, because of all the extra
Diffie-Hellman groups and GSSAPI algorithms we just threw into our
already-long list in the previous few commits.
A workaround I've had in mind for ages is to wait for the server's
KEXINIT, and then filter our own down to just the algorithms the
server also mentioned. Then our KEXINIT is no longer than that of the
server, and hence, presumably fits in whatever buffer it has. So I've
implemented that workaround, in anticipation of it being needed in the
near future.
(Well ... it's not _quite_ true that our KEXINIT is at most the same
length as the server's. In fact I had to leave in one KEXINIT item that
won't match anything in the server's list, namely "ext-info-c" which
gates access to SHA-2 based RSA. So if we turn out to support
absolutely everything on all the server's lists, then our KEXINIT
would be a few bytes longer than the server's, even with this
workaround. But that would only cause trouble if the server's outgoing
KEXINIT was skating very close to whatever buffer size it has for the
incoming one, and I'm guessing that's not very likely.)
((Another possible cause of this kind of disconnection would be a
server that simply objects to seeing any KEXINIT string it doesn't
know how to speak. But _surely_ no such server would have survived
initial testing against any full-featured client at all!))
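
As a minimal sketch of the filtering idea, assuming the algorithm
lists are ordinary comma-separated SSH name-lists (this is an
illustration, not PuTTY's actual code):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return true if comma-separated 'list' contains 'name'. */
    static bool list_contains(const char *list, const char *name)
    {
        size_t n = strlen(name);
        while (*list) {
            const char *comma = strchr(list, ',');
            size_t len = comma ? (size_t)(comma - list) : strlen(list);
            if (len == n && memcmp(list, name, n) == 0)
                return true;
            list = comma ? comma + 1 : list + len;
        }
        return false;
    }

    /* Keep only the names 'theirs' also lists, preserving our
     * preference order, but always retain "ext-info-c". Caller
     * frees the result. */
    static char *filter_kexinit(const char *ours, const char *theirs)
    {
        char *copy = strdup(ours), *out = malloc(strlen(ours) + 1);
        char *w = out;
        for (char *tok = strtok(copy, ","); tok; tok = strtok(NULL, ",")) {
            if (!strcmp(tok, "ext-info-c") || list_contains(theirs, tok)) {
                if (w != out)
                    *w++ = ',';
                memcpy(w, tok, strlen(tok));
                w += strlen(tok);
            }
        }
        *w = '\0';
        free(copy);
        return out;
    }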
I recently noticed a mysterious delay at connection startup while
using an SSH jump host, and investigated it in case it was a bug in
the new jump host code that ought to be fixed before 0.77 goes out.
strace showed that at the time of the delay PuTTY was doing a DNS
lookup for the destination host, which was hanging due to the
authoritative DNS server in question not being reachable. But that was
odd, because I'd configured it to leave DNS lookup to the proxy,
anticipating exactly that problem.
But on closer investigation, the _proxy_ code was doing exactly what
I'd told it. The DNS lookup was coming from somewhere else: namely, an
(unsuccessful) attempt to set up a GSSAPI context. The GSSAPI library
had called gethostbyname, completely separately from PuTTY's own use
of DNS.
Simple workaround for me: turn off GSSAPI, which doesn't work for that
particular SSH connection anyway, and there's no point spending 30
seconds faffing just to find that out.
But also, if that puzzled me, it's worth documenting!
There's now a command-line option to make Pageant open an AF_UNIX
socket at a pathname of your choice. This allows it to act as an SSH
agent for any client program willing to use a WinSock AF_UNIX socket.
In particular, this allows WSL 1 processes to talk directly to Windows
Pageant without needing any intermediate process, because the AF_UNIX
sockets in the WSL 1 world interoperate with WinSock's ones.
(However, not WSL 2, which isn't very surprising.)
I tried setting this up on a different Windows machine today and had
some slightly different experiences. I found that in at least some
situations the command 'Include c:\...\pageant.conf' will cause
OpenSSH to emit a log message saying it's trying to open the file
'~/.ssh/c:\...\pageant.conf', which it then doesn't find. But 'Include
pageant.conf' works, because that's interpreted relative to the .ssh
directory that it's already found.
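
That is, in the OpenSSH config file, the robust form is the relative
one:

    Include pageant.conf

rather than one spelling out the absolute Windows path, at least on
versions with this quirk.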
(I don't know why this happened on one Windows machine and not
another, since I only have a sample size of two. But an obvious guess
would be a bug fix in the Windows OpenSSH port, present in the version
on one of the machines I tried, and not in the other. Certainly that
failure mode looks to me like 'apply Unix instead of Windows rules to
decide what's an absolute pathname'.)
Also, clarified that all of this only works with the version of
OpenSSH that's available as a Windows optional feature, and not with
the MSYS-based one that ships with Windows git.
When I was writing the documentation for the new command-line options,
I wondered why there was an existing section for the corresponding GUI
setting for each option I'd added except strong primes. Now I've found
it: strong primes are discussed in the same section as
prime-generation methods. So I can replace the second explanation with a
cross-reference.
// comments were forbidden so that we could still compile on pre-C99
C compilers. But now we expect C99 everywhere (or at least most of
it, excluding the parts that MSVC never implemented and C11 made
optional), so they aren't forbidden any more.
Most of the comments in this code base are still old-style, but that's
now a matter of stylistic consistency rather than hard requirement.
Correcting a source file name in the docs just now reminded me that
I've seen a lot of outdated source file names elsewhere in the code,
due to all the reorganisation since we moved to cmake. Here's a giant
pass of trying to make them all accurate again.
It's been so long since Windows Plink kept its stdio subthreads in its
own main source file that I'd forgotten it had ever done so! They've
lived in a separate module for managing Windows HANDLE-based I/O for
ages. That module has recently changed its filename, but this piece of
documentation was so out of date that the old filename wasn't in there
- it was still mentioning the filename _before_ that.
Jacob reported that on Debian buster, the command sequence
    cmake $srcdir
    cmake --build .
    cmake --build . --target doc
would fail at the third step, with the make error "No rule to make
target 'doc/cmake_version.but', needed by 'doc/html/index.html'".
That seems odd, because the file ${VERSION_BUT} _was_ declared as a
dependency of the rule that builds doc/html/*.html, and _cmake_ knew
what rule built it (namely the custom target 'cmake_version_but'). I
suspect this is a bug in cmake 3.13, because the same command sequence
works fine with cmake 3.20.
However, it's possible to work around, by means of adding the cmake
_target name_ to the dependencies for any rule that uses that file,
instead of relying on it to map the output _file_ name to that target.
While I'm at it, I've transformed the rules that build copy.but and
licence.but in the same way, turning those too into custom targets
instead of custom commands (I've found that the former are more
generally reliable across a range of cmake versions), and including
the target names themselves as dependencies.
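
In cmake terms, the workaround looks roughly like this (the script
name is invented, and this isn't the exact code in
doc/CMakeLists.txt):

    add_custom_target(cmake_version_but
      COMMAND ${CMAKE_COMMAND} -P write_version_but.cmake
      BYPRODUCTS cmake_version.but)

    add_custom_command(
      OUTPUT html/index.html
      COMMAND halibut --html cmake_version.but putty.but
      DEPENDS cmake_version.but   # the output _file_...
              cmake_version_but)  # ...and also the _target_, by name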
After a discussion with a user recently, I investigated the Windows
native ssh.exe, and found it uses a Windows named pipe to talk to its
ssh-agent, in exactly the same way Pageant does. So if you tell
ssh.exe where to find Pageant's pipe, it can talk directly to Pageant,
and then you can have just one SSH agent.
The slight problem is that Pageant's pipe name is not stable. It's
generated using the same system as connection-sharing pipe names, and
contains a hex hash value whose preimage was fed through
CryptProtectData. And the problem with _that_ is that CryptProtectData
apparently reinitialises its seed between login sessions (though it's
stable within a login session), which I hadn't fully realised when I
reused the same pipe-name construction code.
One possibility, of course, would be to change Pageant so that it uses
a fixed pipe name. But after a bit of thought, I think I actually like
this feature, because the Windows named pipe namespace isn't
segregated into areas writable by only particular users, so anyone
using that namespace on a multiuser Windows box is potentially
vulnerable to someone else squatting on the name you wanted to use.
Using this system makes that harder, because the squatter won't be
able to predict what the name is going to be! (Unless you shut down
Pageant and start it up again within one login session - but there's
only so much we can do. And squatting is at most a DoS, because
PuTTY's named-pipe client code checks ownership of the other end of
the pipe in all cases.)
So instead I've gone for a different approach. Windows Pageant now
supports an extra command-line option to write out a snippet of
OpenSSH config file format on startup, containing an 'IdentityAgent'
directive which points at the location of its named pipe. So you can
use the 'Include' directive in your main .ssh/config to include this
extra snippet, and then ssh.exe invocations will be able to find
wherever the current Pageant has put its pipe.
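
So the setup ends up looking roughly like this, with a pipe name
invented for illustration (the real one contains that unpredictable
hash):

    # snippet written out by Pageant at startup, say pageant.conf:
    IdentityAgent \\.\pipe\pageant.simon.0123456789abcdef

    # and in the main .ssh/config:
    Include pageant.conf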
This imports the following options from command-line PuTTYgen, which
all correspond to controls in Windows PuTTYgen's GUI, and let you set
the GUI controls to initial values of your choice:
    -t <key type>
    -b <bits>
    -E <fingerprint type>
    --primes <prime gen policy>
    --strong-rsa
    --ppk-param <KDF parameters or PPK version etc>
The idea is that if someone generates a lot of keys and has standard
non-default preferences, they can make a shortcut that passes those
preferences on the command line.
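
For instance, a shortcut whose target is something like this (option
values chosen purely for illustration) would start PuTTYgen with all
of those preferences preloaded:

    puttygen.exe -t rsa -b 4096 -E sha256 --primes proven --strong-rsa --ppk-param version=3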
This update covers several recently added features: SSH proxying, HTTP
Digest proxy auth, and interactive prompting for proxy auth in general.
Also, downplayed the use of 'plink -nc' as a Local-type proxy command.
It still works, but it's no longer the recommended way of tunnelling
SSH over SSH, so there's no need to explain it quite so
enthusiastically.
I have _no_ idea how I managed to leave this out of the list of
examples when I first wrote this man page. It should have been the
very first one I thought of, since Cygwin was the platform I wrote
cygtermd for, and one of psusan's primary purposes was to be a
productised and improved replacement for cygtermd!
Oh well, better late than never.
After this change, the cmake setup now works even on Debian stretch
(oldoldstable), which runs cmake 3.7.
In order to support a version that early I had to:
- write a fallback implementation of 'add_compile_definitions' for
older cmakes, which is easy, because add_compile_definitions(FOO)
is basically just add_compile_options(-DFOO) (see the sketch after
this list)
- stop using list(TRANSFORM) and string(JOIN), of which I had one
case each, and they were easily replaced with simple foreach loops
- stop putting OBJECT libraries in the target_link_libraries command
for executable targets, in favour of adding $<TARGET_OBJECTS:foo>
to the main sources list for the same target. That matches what I
do with library targets, so it's probably more sensible anyway.
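
A sketch of the kind of fallback meant in the first item above
(illustrative; add_compile_definitions arrived in cmake 3.12):

    if(CMAKE_VERSION VERSION_LESS 3.12)
      function(add_compile_definitions)
        foreach(def IN LISTS ARGN)
          add_compile_options(-D${def})
        endforeach()
      endfunction()
    endif()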
I tried going back by another Debian release and getting this cmake
setup to work on jessie, but that runs CMake 3.0.1, and in _that_
version of cmake the target_sources command is missing, and I didn't
find any alternative way to add extra sources to a target after having
first declared it. Reorganising to cope with _that_ omission would be
too much upheaval without a very good reason.
This is the same as the previous FUNKY_XTERM mode if you don't press
any modifier keys, but now Shift or Ctrl or Alt with function keys
adds an extra bitmap parameter. The bitmaps are the same as the ones
used by the new SHARROW_BITMAP arrow key mode.
This commit introduces a new config option for how to handle shifted
arrow keys.
In the default mode (SHARROW_APPLICATION), we do what we've always
done: Ctrl flips the arrow keys between sending their most usual
escape sequences (ESC [ A ... ESC [ D) and sending the 'application
cursor keys' sequences (ESC O A ... ESC O D). Whichever of those modes
is currently configured, Ctrl+arrow sends the other one.
In the new mode (SHARROW_BITMAP), application cursor key mode is
unaffected by any shift keys, but the default sequences acquire two
numeric arguments. The first argument is 1 (reflecting the fact that a
shifted arrow key still notionally moves just 1 character cell); the
second is the bitmap (1 for Shift) + (2 for Alt) + (4 for Ctrl),
offset by 1. (Except that if _none_ of those modifiers is pressed,
both numeric arguments are simply omitted.)
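
Concretely, applying that rule to the up-arrow key:

    Up               ESC [ A
    Shift+Up         ESC [ 1 ; 2 A
    Alt+Up           ESC [ 1 ; 3 A
    Ctrl+Up          ESC [ 1 ; 5 A
    Ctrl+Shift+Up    ESC [ 1 ; 6 A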
The new bitmap mode is what current xterm generates, and also what
Windows ConPTY seems to expect. If you start an ordinary Command
Prompt and launch into WSL, those are the sequences it will generate
for shifted arrow keys; conversely, if you run a Command Prompt within
a ConPTY, then these sequences for Ctrl+arrow will have the effect you
expect in cmd.exe command-line editing (going backward or forward a
word). For that reason, I enable this mode unconditionally when
launching Windows pterm.
Similarly to cmdgen's passphrase options, this replaces the password
on the command line with a filename to read the password out of, which
means it can't show up in 'ps' or the Windows task manager.
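
So, assuming the option is spelled '-pwfile' as in the released
tools, instead of putting the password itself on the command line
you'd write something like:

    plink -pwfile C:\secrets\server-pw.txt user@host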
In the section about our ad-hoc trait idioms, I described a code
sample as containing a set of 'static inline' wrapper functions, which
indeed it should have done - but I forgot to put the 'inline' keyword
in the code sample itself.
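
For the record, the shape of the idiom in question, with the keyword
present this time (the type names are invented for illustration):

    typedef struct Greeter Greeter;
    struct GreeterVtable {
        void (*greet)(Greeter *g);
    };
    struct Greeter {
        const struct GreeterVtable *vt;
    };

    /* the wrapper function, now correctly marked 'static inline' */
    static inline void greeter_greet(Greeter *g) { g->vt->greet(g); }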
In most Halibut man pages I write, I have a standard convention of
referring to another man page by wrapping the page name in \cw and the
section number in \e, leaving the parentheses un-marked-up. Apparently
I forgot in this particular collection.
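
So a typical cross-reference in that convention comes out as
\cw{psusan}(\e{1}).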
When UML terminates, it kills its entire process group. The way PuTTY
invokes proxy processes, they are part of its process group. So if UML
is used directly as the proxy process, it will commit patricide on
termination.
Wrapping it in 'setsid' is overkill (it doesn't need to be part of a
separate _session_, only a separate pgrp), but it's good enough to
work around this problem, and give PuTTY the opportunity to shut down
cleanly when the UML it's talking to vanishes.
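
In other words, where the proxy command was previously the UML
invocation itself, it just gains a prefix (details elided; this only
shows the shape):

    setsid linux [kernel arguments as before]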
Ian Jackson recently tried to use the recipe in the psusan manpage for
talking to UML, and found that the connection was not successfully set
up, because at some point during startup, UML read the SSH greeting
(ok, the bare-ssh-connection greeting) from its input fd and threw it
away. So by the time psusan was run by the guest init process, the
greeting wasn't there to be read.
Ian's report: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991958
I was also able to reproduce this locally, which makes me wonder why I
_didn't_ notice it when I originally wrote that part of the psusan man
page. It worked for me before, honest! But now it doesn't.
Anyway. The ssh verstring module already has a mode switch to decide
whether we ought to send our greeting before or after waiting for the
other side's greeting (because that decision varies between client and
server, and between SSH-1 and SSH-2). So it's easy to implement an
override that forces it to 'wait for the server greeting first'.
I've added this as yet another bug workaround flag. But unlike all the
others, it can't be autodetected from the server's version string,
because, of course, we have to act on it _before_ seeing the server's
greeting and version string! So it's a manual-only flag.
However, I've mentioned it in the UML section of the psusan man page,
since that's the place where I _know_ people are likely to need to use
this flag.
Following the same pattern as the previous one (commit 6c924ba862),
except that this time, I don't have to _set up_ the pattern in the
front-end code of presenting the current and previous key details -
just change over the actual string literals in putty.h.
But the rest is the same: new keys at the top of pgpkeys.but, old ones
relegated to the historical appendix, key ids in sign.sh switched over.
Suggested by Manfred Kaiser, who also wrote most of this patch
(although outlying parts, like documentation and SSH-1 support, are by
me).
This is a second line of defence against the kind of spoofing attacks
in which a malicious or compromised SSH server rushes the client
through the userauth phase of SSH without actually requiring any auth
inputs (passwords or signatures or whatever), and then at the start of
the connection phase it presents something like a spoof prompt,
intended to be taken for part of userauth by the user but in fact with
some more sinister purpose.
Our existing line of defence against this is the trust sigil system,
and as far as I know, that's still working. This option allows a bit of
extra defence in depth: if you don't expect your SSH server to
trivially accept authentication in the first place, then enabling this
option will cause PuTTY to disconnect if it unexpectedly does so,
without the user having to spot the presence or absence of a fiddly
little sigil anywhere.
Several types of authentication count as 'trivial'. The obvious one is
the SSH-2 "none" method, which clients always try first so that the
failure message will tell them what else they can try, and which a
server can instead accept in order to authenticate you unconditionally.
But there are two other ways to do it that we know of: one is to run
keyboard-interactive authentication and send an empty INFO_REQUEST
packet containing no actual prompts for the user, and another even
weirder one is to send USERAUTH_SUCCESS in response to the user's
preliminary *offer* of a public key (instead of sending the usual PK_OK
to request an actual signature from the key).
This new option detects all of those, by clearing the 'is_trivial_auth'
flag only when we send some kind of substantive authentication response
(be it a password, a k-i prompt response, a signature, or a GSSAPI
token). So even if there's a further path through the userauth maze we
haven't spotted, that somehow avoids sending anything substantive, this
strategy should still pick it up.
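
The shape of that strategy, as a deliberately over-simplified sketch
(not the real PuTTY code, which threads this flag through the SSH
userauth layer):

    #include <stdbool.h>

    typedef struct UserauthState {
        bool is_trivial_auth;  /* starts true at userauth start */
        bool reject_trivial;   /* the new user-configurable option */
    } UserauthState;

    /* Call whenever we send something substantive: a password, a k-i
     * prompt response, a signature, or a GSSAPI token. */
    static void sent_substantive_response(UserauthState *s)
    {
        s->is_trivial_auth = false;
    }

    /* Call on receiving USERAUTH_SUCCESS: returns true if we should
     * disconnect instead of proceeding to the connection phase. */
    static bool success_is_suspicious(const UserauthState *s)
    {
        return s->reject_trivial && s->is_trivial_auth;
    }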
(cherry picked from commit 5f5c710cf3)
It's silly to require all the time-consuming cmake configuration for
the source code, if all you want to do is to build the documentation.
My own website update script will like this optimisation, and so will
Buildscr.
In order to make doc/CMakeLists.txt work standalone, I had to add a
'project' header (citing no languages, so that cmake won't even bother
looking for a C compiler); include FindGit, which cmake/setup.cmake
now won't be doing for it; change all references to CMAKE_SOURCE_DIR
to CMAKE_CURRENT_SOURCE_DIR/.. (since now the former will be defined
differently in a nested or standalone doc build); and spot whether
we're nested or not in order to conditionalise things designed to
interoperate with the parent CMakeLists.
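
The nested-or-standalone detection is the usual cmake trick, sketched
here rather than quoted verbatim:

    if(CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
      # We're the top-level project, so this is a standalone doc
      # build: declare a project with no languages, and find Git
      # ourselves instead of relying on the parent build.
      project(putty-documentation LANGUAGES)
      include(FindGit)
    endif()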
It's always the same as the cwd when the script is invoked, and by
having the script get it _from_ its own cwd, we arrange a bit of
automatic normalisation in situations where you need to invoke it with
some non-canonical path like one ending in "/.." - which I'll do in
the next commit.
doc/CMakeLists.txt now sets a variable indicating that we either have,
or can build, each individual man page. And when we call our
installed_program() function to mark a program as official enough to
put in 'make install', that function also installs the man page
similarly if it exists, and warns if not.
For the convenience of people building-and-installing from the .tar.gz
we ship, I've arranged that they can still get the man pages installed
without needing Halibut: the previous commit ensured that the prebuilt
man pages are still in the tarball, and this one arranges that if we
don't have Halibut but we do have prebuilt man pages, then we can
'build' them by copying from the prebuilt versions.
The standalone separate doc/Makefile is gone, replaced by a
CMakeLists.txt that makes 'doc' function as a subdirectory of the main
CMake build system. This auto-detects Halibut, and if it's present,
uses it to build the man pages and the various forms of the main
manual, including the Windows CHM help file in particular.
One awkward thing I had to do was to move just one config directive in
blurb.but into its own file: the one that cites a relative path to the
stylesheet file to put into the CHM. CMake builds often like to be
out-of-tree, so there's no longer a fixed relative path between the
build directory and chm.css. And Halibut has no concept of an include
path to search for files cited by other files, so I can't fix that
with an -I option on the Halibut command line. So I moved that single
config directive into its own file, and had CMake write out a custom
version of that file in the build directory citing the right path.
(Perhaps in the longer term I should fix that omission in Halibut;
out-of-tree friendliness seems like a useful feature. But even if I
do, I still need this build to work now.)
Since the previous commit is causing an RC2 build of 0.75 anyway,
let's take the opportunity to bring in updates to the docs from main,
so that the release will have the most up-to-date version available.
This is a combined cherry-pick of:
    f6142ba29b
    7c1bea59a3
    f5d1d4ce4b
I've just spent the afternoon playing with it (rather belatedly - this
is the first time I've tried it out since it was first announced!),
and quickly decided that on the one hand it looks quite useful, but on
the other hand, running it in a Windows console is not for me and I'd
prefer to talk to it via PuTTY and psusan, for nicer copy-paste
controls and the ability to forward Pageant into it.
That turns out to be very easy and (I think) useful, so in it goes as
another psusan use case.
Suggested by Jacob: if this dialog box is going to pop up
_unexpectedly_ - perhaps when people have momentarily forgotten
they're even running Pageant, or at least forgotten they added a key
encrypted, or maybe haven't found out yet that their IT installed it
- then it could usefully come with a help button that pops up further
explanation of what the dialog box means, and from which you can find
your way to the rest of the help.
It's no longer a hard requirement, because now we're on cmake rather
than mkfiles.pl, we _can_ compile the same source file multiple times
with different ifdefs.
I still think it's a better idea not to: I'd prefer that most of this
code base remained in the form of libraries reused between
applications, with parametrisation done by choice of what other
objects to link them to rather than by recompiling the library modules
themselves with different settings. But the latter is now a
possibility at need.
This brings various concrete advantages over the previous system:
- consistent support for out-of-tree builds on all platforms
- more thorough support for Visual Studio IDE project files
- support for Ninja-based builds, which is particularly useful on
Windows where the alternative nmake has no parallel option
- a really simple set of build instructions that work the same way on
all the major platforms (look how much shorter README is!)
- better decoupling of the project configuration from the toolchain
configuration, so that my Windows cross-building doesn't need
(much) special treatment in CMakeLists.txt
- configure-time tests on Windows as well as Linux, so that a lot of
ad-hoc #ifdefs second-guessing a particular feature's presence from
the compiler version can now be replaced by tests of the feature
itself
Also some longer-term software-engineering advantages:
- other people have actually heard of CMake, so they'll be able to
produce patches to the new build setup more easily
- unlike the old mkfiles.pl, CMake is not my personal problem to
maintain
- most importantly, mkfiles.pl was just a horrible pile of
unmaintainable cruft, which even I found painful to make changes
to or to use, and desperately needed throwing in the bin. I've
already thrown away all the variants of it I had in other projects
of mine, and was only delaying this one so we could make the 0.75
release branch first.
This change comes with a noticeable build-level restructuring. The
previous Recipe worked by compiling every object file exactly once,
and then making each executable by linking a precisely specified
subset of the same object files. But in CMake, that's not the natural
way to work - if you write the obvious command that puts the same
source file into two executable targets, CMake generates a makefile
that compiles it once per target. That can be an advantage, because it
gives you the freedom to compile it differently in each case (e.g.
with a #define telling it which program it's part of). But in a
project that has many executable targets and had carefully contrived
to _never_ need to build any module more than once, all it does is
bloat the build time pointlessly!
To avoid slowing down the build by a large factor, I've put most of
the modules of the code base into a collection of static libraries
organised vaguely thematically (SSH, other backends, crypto, network,
...). That means all those modules can still be compiled just once
each, because once each library is built it's reused unchanged for all
the executable targets.
One upside of this library-based structure is that now I don't have to
manually specify exactly which objects go into which programs any more
- it's enough to specify which libraries are needed, and the linker
will figure out the fine detail automatically. So there's less
maintenance to do in CMakeLists.txt when the source code changes.
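
In CMakeLists terms the new shape is roughly this (library and file
names are illustrative, not an exact quotation):

    add_library(crypto STATIC aes.c sha256.c ecc.c)
    add_library(network STATIC socket.c proxy.c)

    add_executable(plink plink.c)
    target_link_libraries(plink crypto network)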
But that reorganisation also adds fragility, because of the trad Unix
linker semantics of walking along the library list once each, so that
cyclic references between your libraries will provoke link errors. The
current setup builds successfully, but I suspect it only just manages
it.
(In particular, I've found that MinGW is the most finicky on this
score of the Windows compilers I've tried building with. So I've
included a MinGW test build in the new-look Buildscr, because
otherwise I think there'd be a significant risk of introducing
MinGW-only build failures due to library search order, which wasn't a
risk in the previous library-free build organisation.)
In the longer term I hope to be able to reduce the risk of that, via
gradual reorganisation (in particular, breaking up too-monolithic
modules, to reduce the risk of knock-on references when you included a
module for function A and it also contains function B with an
unsatisfied dependency you didn't really need). Ideally I want to
reach a state in which the libraries all have sensibly described
purposes, a clearly documented (partial) order in which they're
permitted to depend on each other, and a specification of what stubs
you have to put where if you're leaving one of them out (e.g.
nocrypto) and what callbacks you have to define in your non-library
objects to satisfy dependencies from things low in the stack (e.g.
out_of_memory()).
One thing that's gone completely missing in this migration,
unfortunately, is the unfinished MacOS port linked against Quartz GTK.
That's because it turned out that I can't currently build it myself,
on my own Mac: my previous installation of GTK had bit-rotted as a
side effect of an Xcode upgrade, and I haven't yet been able to
persuade jhbuild to make me a new one. So I can't even build the MacOS
port with the _old_ makefiles, and hence, I have no way of checking
that the new ones also work. I hope to bring that port back to life at
some point, but I don't want it to block the rest of this change.