In the section about our ad-hoc trait idioms, I described a code
sample as containing a set of 'static inline' wrapper functions, which
indeed it should have done - but I forgot to put the 'inline' keyword
in the code sample itself.
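For reference, here is a minimal sketch of the idiom in question (the
names are invented for illustration, not taken from the real source): a
'trait' is a struct of function pointers, and each method gets a
'static inline' wrapper so that call sites read like ordinary function
calls.

    typedef struct Greeter Greeter;
    struct GreeterVtable {
        void (*greet)(Greeter *g, const char *name);
        void (*free)(Greeter *g);
    };
    struct Greeter {
        const struct GreeterVtable *vt; /* each instance points at its vtable */
    };

    /* The wrappers the text is talking about: 'static inline' so they
     * can live in a header without multiple-definition trouble. */
    static inline void greeter_greet(Greeter *g, const char *name)
    {
        g->vt->greet(g, name);
    }
    static inline void greeter_free(Greeter *g)
    {
        g->vt->free(g);
    }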
In most Halibut man pages I write, I have a standard convention of
referring to another man page by wrapping the page name in \cw and the
section number in \e, leaving the parentheses un-marked-up. Apparently
I forgot in this particular collection.
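(For example, under that convention a reference to the psusan man page
would be written along the lines of \cw{psusan}(\e{1}), with the
parentheses left as plain text.)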
When UML terminates, it kills its entire process group. The way PuTTY
invokes proxy processes, they are part of its process group. So if UML
is used directly as the proxy process, it will commit patricide on
termination.
Wrapping it in 'setsid' is overkill (it doesn't need to be part of a
separate _session_, only a separate pgrp), but it's good enough to
work around this problem, and give PuTTY the opportunity to shut down
cleanly when the UML it's talking to vanishes.
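To illustrate the distinction, here's a sketch of the underlying system
calls (illustration only - the actual recipe just prefixes the proxy
command with setsid):

    #include <unistd.h>

    /* Run in the child before exec'ing UML. This takes the child out of
     * PuTTY's process group, so UML's kill of its own process group at
     * exit no longer reaches PuTTY. */
    void leave_puttys_pgrp(void)
    {
        setpgid(0, 0);  /* new process group, same session: sufficient */
        /* ... or setsid(), which makes a new session too - that's what
         * the setsid(1) command does, and is harmless overkill here */
    }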
Ian Jackson recently tried to use the recipe in the psusan manpage for
talking to UML, and found that the connection was not successfully set
up, because at some point during startup, UML read the SSH greeting
(ok, the bare-ssh-connection greeting) from its input fd and threw it
away. So by the time psusan was run by the guest init process, the
greeting wasn't there to be read.
Ian's report: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991958
I was also able to reproduce this locally, which makes me wonder why I
_didn't_ notice it when I originally wrote that part of the psusan man
page. It worked for me before, honest! But now it doesn't.
Anyway. The ssh verstring module already has a mode switch to decide
whether we ought to send our greeting before or after waiting for the
other side's greeting (because that decision varies between client and
server, and between SSH-1 and SSH-2). So it's easy to implement an
override that forces it to 'wait for the server greeting first'.
I've added this as yet another bug workaround flag. But unlike all the
others, it can't be autodetected from the server's version string,
because, of course, we have to act on it _before_ seeing the server's
greeting and version string! So it's a manual-only flag.
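The shape of the decision is roughly as follows (a sketch only - the
identifiers here are illustrative, not the real ones from the verstring
module):

    #include <stdbool.h>

    struct verstring {
        bool send_early;             /* normal rule: varies by client/server
                                      * role and by SSH-1 vs SSH-2 */
        bool bug_wait_for_greeting;  /* the new manual-only workaround flag */
    };

    static bool send_our_greeting_first(const struct verstring *vs)
    {
        if (vs->bug_wait_for_greeting)
            return false;            /* always wait for the peer's greeting */
        return vs->send_early;
    }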
However, I've mentioned it in the UML section of the psusan man page,
since that's the place where I _know_ people are likely to need to use
this flag.
Following the same pattern as the previous one (commit 6c924ba862),
except that this time, I don't have to _set up_ the pattern in the
front-end code of presenting the current and previous key details -
just change over the actual string literals in putty.h.
But the rest is the same: new keys at the top of pgpkeys.but, old ones
relegated to the historical appendix, key ids in sign.sh switched over.
Suggested by Manfred Kaiser, who also wrote most of this patch
(although outlying parts, like documentation and SSH-1 support, are by
me).
This is a second line of defence against the kind of spoofing attacks
in which a malicious or compromised SSH server rushes the client
through the userauth phase of SSH without actually requiring any auth
inputs (passwords or signatures or whatever), and then at the start of
the connection phase it presents something like a spoof prompt,
intended to be taken for part of userauth by the user but in fact with
some more sinister purpose.
Our existing line of defence against this is the trust sigil system,
and as far as I know, that's still working. This option allows a bit of
extra defence in depth: if you don't expect your SSH server to
trivially accept authentication in the first place, then enabling this
option will cause PuTTY to disconnect if it unexpectedly does so,
without the user having to spot the presence or absence of a fiddly
little sigil anywhere.
Several types of authentication count as 'trivial'. The obvious one is
the SSH-2 "none" method, which clients always try first so that the
failure message will tell them what else they can try, and which a
server can instead accept in order to authenticate you unconditionally.
But there are two other ways to do it that we know of: one is to run
keyboard-interactive authentication and send an empty INFO_REQUEST
packet containing no actual prompts for the user, and another even
weirder one is to send USERAUTH_SUCCESS in response to the user's
preliminary *offer* of a public key (instead of sending the usual PK_OK
to request an actual signature from the key).
This new option detects all of those, by clearing the 'is_trivial_auth'
flag only when we send some kind of substantive authentication response
(be it a password, a k-i prompt response, a signature, or a GSSAPI
token). So even if there's a further path through the userauth maze we
haven't spotted, that somehow avoids sending anything substantive, this
strategy should still pick it up.
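In outline, the strategy looks something like this (a sketch only; the
field and option names are approximations rather than the exact ones in
the source):

    #include <stdbool.h>

    struct userauth {
        bool is_trivial_auth;      /* starts true at the beginning of userauth */
        bool reject_trivial_auth;  /* the new manually-enabled option */
    };

    /* Called whenever we send something substantive: a password, a
     * keyboard-interactive prompt response, a public-key signature, or
     * a GSSAPI token. */
    static void sent_substantive_auth(struct userauth *s)
    {
        s->is_trivial_auth = false;
    }

    /* Checked when USERAUTH_SUCCESS arrives. */
    static bool disconnect_on_trivial_success(const struct userauth *s)
    {
        return s->reject_trivial_auth && s->is_trivial_auth;
    }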
(cherry picked from commit 5f5c710cf3)
It's silly to require all the time-consuming cmake configuration for
the source code, if all you want to do is to build the documentation.
My own website update script will like this optimisation, and so will
Buildscr.
In order to make doc/CMakeLists.txt work standalone, I had to add a
'project' header (citing no languages, so that cmake won't even bother
looking for a C compiler); include FindGit, which cmake/setup.cmake
now won't be doing for it; change all references to CMAKE_SOURCE_DIR
to CMAKE_CURRENT_SOURCE_DIR/.. (since now the former will be defined
differently in a nested or standalone doc build); and spot whether
we're nested or not in order to conditionalise things designed to
interoperate with the parent CMakeLists.
It's always the same as the cwd when the script is invoked, and by
having the script get it _from_ its own cwd, we arrange a bit of
automatic normalisation in situations where you need to invoke it with
some non-canonical path like one ending in "/.." - which I'll do in
the next commit.
doc/CMakeLists.txt now sets a variable indicating that we either have,
or can build, each individual man page. And when we call our
installed_program() function to mark a program as official enough to
put in 'make install', that function also installs the man page
similarly if it exists, and warns if not.
For the convenience of people building-and-installing from the .tar.gz
we ship, I've arranged that they can still get the man pages installed
without needing Halibut: the previous commit ensured that the prebuilt
man pages are still in the tarball, and this one arranges that if we
don't have Halibut but we do have prebuilt man pages, then we can
'build' them by copying from the prebuilt versions.
The standalone separate doc/Makefile is gone, replaced by a
CMakeLists.txt that makes 'doc' function as a subdirectory of the main
CMake build system. This auto-detects Halibut, and if it's present,
uses it to build the man pages and the various forms of the main
manual, including the Windows CHM help file in particular.
One awkward thing I had to do was to move just one config directive in
blurb.but into its own file: the one that cites a relative path to the
stylesheet file to put into the CHM. CMake builds often like to be
out-of-tree, so there's no longer a fixed relative path between the
build directory and chm.css. And Halibut has no concept of an include
path to search for files cited by other files, so I can't fix that
with an -I option on the Halibut command line. So I moved that single
config directive into its own file, and had CMake write out a custom
version of that file in the build directory citing the right path.
(Perhaps in the longer term I should fix that omission in Halibut;
out-of-tree friendliness seems like a useful feature. But even if I
do, I still need this build to work now.)
Since the previous commit is causing an RC2 build of 0.75 anyway,
let's take the opportunity to bring in updates to the docs from main,
so that the release will have the most up-to-date version available.
This is a combined cherry-pick of:
f6142ba29b7c1bea59a3f5d1d4ce4b
I've just spent the afternoon playing with it (rather belatedly - this
is the first time I've tried it out since it was first announced!),
and quickly decided that on the one hand it looks quite useful, but on
the other hand, running it in a Windows console is not for me and I'd
prefer to talk to it via PuTTY and psusan, for nicer copy-paste
controls and the ability to forward Pageant into it.
That turns out to be very easy and (I think) useful, so in it goes as
another psusan use case.
Suggested by Jacob: if this dialog box is going to pop up
_unexpectedly_ - perhaps when people have momentarily forgotten
they're even running Pageant, or at least forgotten they added a key
encrypted, or maybe haven't found out yet that their IT installed it
- then it could usefully come with a help button that pops up further
explanation of what the dialog box means, and from which you can find
your way to the rest of the help.
It's no longer a hard requirement, because now that we're on cmake
rather than mkfiles.pl, we _can_ compile the same source file multiple
times with different ifdefs.
I still think it's a better idea not to: I'd prefer that most of this
code base remained in the form of libraries reused between
applications, with parametrisation done by choice of what other
objects to link them to rather than by recompiling the library modules
themselves with different settings. But the latter is now a
possibility at need.
This brings various concrete advantages over the previous system:
- consistent support for out-of-tree builds on all platforms
- more thorough support for Visual Studio IDE project files
- support for Ninja-based builds, which is particularly useful on
  Windows where the alternative nmake has no parallel option
- a really simple set of build instructions that work the same way on
  all the major platforms (look how much shorter README is!)
- better decoupling of the project configuration from the toolchain
  configuration, so that my Windows cross-building doesn't need
  (much) special treatment in CMakeLists.txt
- configure-time tests on Windows as well as Linux, so that a lot of
  ad-hoc #ifdefs second-guessing a particular feature's presence from
  the compiler version can now be replaced by tests of the feature
  itself
Also some longer-term software-engineering advantages:
- other people have actually heard of CMake, so they'll be able to
  produce patches to the new build setup more easily
- unlike the old mkfiles.pl, CMake is not my personal problem to
  maintain
- most importantly, mkfiles.pl was just a horrible pile of
  unmaintainable cruft, which even I found painful to make changes to
  or to use, and desperately needed throwing in the bin. I've already
  thrown away all the variants of it I had in other projects of mine,
  and was only delaying this one so we could make the 0.75 release
  branch first.
This change comes with a noticeable build-level restructuring. The
previous Recipe worked by compiling every object file exactly once,
and then making each executable by linking a precisely specified
subset of the same object files. But in CMake, that's not the natural
way to work - if you write the obvious command that puts the same
source file into two executable targets, CMake generates a makefile
that compiles it once per target. That can be an advantage, because it
gives you the freedom to compile it differently in each case (e.g.
with a #define telling it which program it's part of). But in a
project that has many executable targets and had carefully contrived
to _never_ need to build any module more than once, all it does is
bloat the build time pointlessly!
To avoid slowing down the build by a large factor, I've put most of
the modules of the code base into a collection of static libraries
organised vaguely thematically (SSH, other backends, crypto, network,
...). That means all those modules can still be compiled just once
each, because once each library is built it's reused unchanged for all
the executable targets.
One upside of this library-based structure is that now I don't have to
manually specify exactly which objects go into which programs any more
- it's enough to specify which libraries are needed, and the linker
will figure out the fine detail automatically. So there's less
maintenance to do in CMakeLists.txt when the source code changes.
But that reorganisation also adds fragility, because of the trad Unix
linker semantics of walking the library list just once, so that
cyclic references between your libraries will provoke link errors. The
current setup builds successfully, but I suspect it only just manages
it.
(In particular, I've found that MinGW is the most finicky on this
score of the Windows compilers I've tried building with. So I've
included a MinGW test build in the new-look Buildscr, because
otherwise I think there'd be a significant risk of introducing
MinGW-only build failures due to library search order, which wasn't a
risk in the previous library-free build organisation.)
In the longer term I hope to be able to reduce the risk of that, via
gradual reorganisation (in particular, breaking up too-monolithic
modules, to reduce the risk of knock-on references when you include a
module for function A and it also contains function B with an
unsatisfied dependency you didn't really need). Ideally I want to
reach a state in which the libraries all have sensibly described
purposes, a clearly documented (partial) order in which they're
permitted to depend on each other, and a specification of what stubs
you have to put where if you're leaving one of them out (e.g.
nocrypto) and what callbacks you have to define in your non-library
objects to satisfy dependencies from things low in the stack (e.g.
out_of_memory()).
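As a purely hypothetical illustration of that last point, a non-library
object satisfying such a low-level dependency might contain no more
than:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical example only: a trivially simple definition that a
     * standalone tool could provide to satisfy a dependency from low in
     * the stack, if it has no better way to report the failure. */
    void out_of_memory(void)
    {
        fprintf(stderr, "out of memory\n");
        exit(1);
    }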
One thing that's gone completely missing in this migration,
unfortunately, is the unfinished MacOS port linked against Quartz GTK.
That's because it turned out that I can't currently build it myself,
on my own Mac: my previous installation of GTK had bit-rotted as a
side effect of an Xcode upgrade, and I haven't yet been able to
persuade jhbuild to make me a new one. So I can't even build the MacOS
port with the _old_ makefiles, and hence, I have no way of checking
that the new ones also work. I hope to bring that port back to life at
some point, but I don't want it to block the rest of this change.
Now you can run it with --header, --copyrightdoc or --licencedoc
depending on which file you want it to generate. mkfiles.pl only runs
the header mode; the other two modes have become rules in
Makefile.doc.
If we're publishing the server, then we should say something about the
fact that this option exists to talk to it. Also, if the option exists
on the front page at all in a released version of PuTTY, it behooves
us to document it slightly more usefully than just a handwave at 'this
is specialist and experimental'.
SUPDUP came, at my insistence, with a history section in the docs
for people who hadn't heard of it. It seems only fair that the
other obsolete network protocols (or, at least, the ones we *wish*
were obsolete :-) should have the same kind of treatment.
Moved the Raw protocol to below Serial, so that the first two
sections are SSH and Serial, matching the (now very emphatic)
priority order in the config UI.
Similarly, reordered the bullet points in \k{config-hostname}.
I've filled in some text about prime generation methods and Ed448,
which were all the things marked as 'review before release'.
While I'm at it, also filled in a reasonable enough DSA key length
recommendation, because the FIXME comment in that section was within
sight of one of the places I was editing. FIPS 186-4 seemed to think
that RSA and DSA had comparable relationships between the key length
and practical security level, so I see no reason not to use the same
recommendation for both key types.