An irate user complained today that they wished we'd documented
firewalls as a possible cause of WSAECONNREFUSED, because it took them
ages to think of checking that. Fair enough.
Someone just asked us a question which suggests they might have thought
they needed to supply both files in the 'Public-key authentication' box in
the config dialog, to use public-key authentication at all. I can see
why someone might think that, anyway.
For serial connections, &H generally expanded to the empty string.
Expanding it to the serial line seems more useful.
(It so happens that &H _could_ expand to the serial line if it came from
the command-line, but that's accidental.)
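(For illustration only: a toy sketch of the intended effect, using a
made-up helper and a hypothetical log file name template, not PuTTY's
actual substitution code.)

    # Toy sketch of the intended effect; helper and template are made up.
    def expand_H(template, host, serial_line=None):
        # Prefer the host name; for a serial session, fall back to the
        # serial line instead of the empty string.
        value = host or serial_line or ""
        return template.replace("&H", value)

    print(expand_H("putty-&H.log", host="example.com"))
    # -> putty-example.com.log  (network session, as before)

    print(expand_H("putty-&H.log", host="", serial_line="COM3"))
    # -> putty-COM3.log  (serial session; previously this was putty-.log)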
Index the older format as 'PEM-style', since PEM is how it's referred to
in OpenSSH's own docs; and justify why you might want to use the newer
format.
When you send a "publickey" USERAUTH_REQUEST containing a certified
RSA key, and you want to use a SHA-2 based RSA algorithm, modern
OpenSSH expects you to send the algorithm string as
rsa-sha2-NNN-cert-v01@openssh.com. But 7.7 and earlier didn't
recognise those names, and expected the algorithm string in the
userauth request packet to be ssh-rsa-cert-v01@..., followed by an
rsa-sha2-NNN signature.
OpenSSH itself has a bug workaround for its own older versions. Follow
suit.
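(A sketch of the naming quirk, assuming the signature algorithm has
already been chosen; the helper name and the bug flag are illustrative,
not PuTTY's internals.)

    # Pick the algorithm string for the "publickey" USERAUTH_REQUEST
    # containing a certified RSA key. 'sig_alg' is "rsa-sha2-256" or
    # "rsa-sha2-512"; 'old_openssh_bug' means the server is OpenSSH 7.7
    # or earlier.
    def cert_algorithm_name(sig_alg, old_openssh_bug):
        if old_openssh_bug:
            # Keep the old certificate name in the packet, even though
            # the signature itself is still rsa-sha2-NNN.
            return "ssh-rsa-cert-v01@openssh.com"
        # Modern servers expect the matching SHA-2 certificate name.
        return sig_alg + "-cert-v01@openssh.com"

    assert cert_algorithm_name("rsa-sha2-512", False) == \
        "rsa-sha2-512-cert-v01@openssh.com"
    assert cert_algorithm_name("rsa-sha2-512", True) == \
        "ssh-rsa-cert-v01@openssh.com"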
For years I've been following the principle that before I'll add
auto-detection of an SSH server bug, I want the server maintainer to
have fixed the bug, so that the list of affected version numbers
triggering the workaround is complete, and to provide an incentive for
implementations to gradually converge on rightness.
*Finally*, I've got round to documenting that policy in public, for
the Feedback page!
(cherry picked from commit b5645f79dd)
We still don't build or ship a PDF PuTTY manual by default, but we may
as well conveniently expose Halibut's ability to do so.
(I don't guarantee the resulting PDF is particularly pretty -- some of
our overlong code lines do go off the right margin currently.)
(cherry picked from commit f9a8213d95)
Mainly to try to clarify that if you're sat at this warning dialog/
prompt, no response you make to it will cause a new CA to be trusted for
signing arbitrary host keys.
The 'certified host key' variant of the host key warning always comes
with a scary 'POTENTIAL SECURITY BREACH!' message. So the error message
section with the scary title should acknowledge that variant, and
the section about that variant should mention the scary warning.
To try to prime readers learning the often-seen "unknown host key"
warning to recognise the rarer and scarier "wrong host key" warning, if
they see it.
The previous name, which included '(quantum-resistant)', was too long to
be completely seen in the Windows config dialog's kex list (which is
narrower than the Gtk one, due to the Up/Down buttons). No point
including that explanation if people can't actually read it, so we'll
have to rely on docs to explain it.
(I did try squashing the rest of the name to "SNTRUP/X25519 hybrid", but
that wasn't enough.)
As some sort of compensation, index it more thoroughly in the docs, and
while I'm there, tweak the indexing of other key exchange algorithms
too.
GUI Pageant stopped using SSH identifiers for key types in fea08bb244,
but the docs were still referring to them.
As part of this, ensure that the term "NIST" is thoroughly
cross-referenced and indexed, since it now appears so prominently in
Pageant.
(While I'm there, reword the "it's OK that elliptic-curve keys are
smaller than RSA ones" note, as I kept tripping over the old wording.)
Some new cert-related stuff wasn't documented in the usage message
and/or man page; and the longer-standing "-E fptype" was entirely
omitted from the usage message.
As of the cyclic-dependency fix in b01173c6b7, building from our tarball
using the instructions in its README (using the source tree as build
tree), in the absence of Halibut, would lead to the pre-built man pages
not being installed.
(Also, a load of "Could not build man page" complaints at cmake
generation time, which is how I actually noticed.)
With the new larger fixed-group methods, it's less clearly always the
right answer. (Really it seems more sensible to use ECDH over any of
the integer DH, these days.)
Also, reword other kex descriptions a bit.
If Halibut is not available to build the docs, but on the other hand
pre-built man pages already exist (e.g. because you unpacked a source
zip file with them already provided), then docs/CMakeLists.txt creates
a set of build rules that copy the pre-built man pages from the source
directory to the build directory.
However, if the source and build directories are the _same_, this
creates a set of cyclic dependencies, i.e. files which depend directly
on themselves. Some build tools (in particular 'ninja') will report
this as an error.
In that situation, the simple fix is to leave off the build rules
completely: if the man pages are already where the build will want
them to end up, there need not be any build rule to do anything about
them.
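(Schematically, the decision now looks like this; a Python-flavoured
sketch of the docs/CMakeLists.txt logic with made-up names, not the
actual CMake code.)

    # Decide what man-page build rules to emit.
    def man_page_rules(have_halibut, source_dir, build_dir, prebuilt):
        if have_halibut:
            return ["build each man page from source with Halibut"]
        if not prebuilt:
            return []  # nothing to build or copy; complain elsewhere
        if source_dir == build_dir:
            # The pre-built pages are already where the build wants
            # them; a copy rule would make each file depend on itself.
            return []
        return ["copy %s from %s to %s" % (page, source_dir, build_dir)
                for page in prebuilt]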
In recent months I've had two requests from different people to build
support into PuTTY for automatically handling complicated third-party
auth protocols layered on top of keyboard-interactive - the kind of
thing where you're asked to enter some auth response, and you have to
refer to some external source like a web server to find out what the
right response _is_, which is a pain to do by hand, so you'd prefer it
to be automated in the SSH client.
That seems like a reasonable thing for an end user to want, but I
didn't think it was a good idea to build support for specific
protocols of that kind directly into PuTTY, where there would no doubt
be an ever-lengthening list, and maintenance needed on all of them.
So instead, in collaboration with one of my correspondents, I've
designed and implemented a protocol to be spoken between PuTTY and a
plugin running as a subprocess. The plugin can opt to handle the
keyboard-interactive authentication loop on behalf of the user, in
which case PuTTY passes on all the INFO_REQUEST packets to it, and
lets it make up responses. It can also ask questions of the user if
necessary.
The protocol spec is provided in a documentation appendix. The entire
configuration for the end user consists of providing a full command
line to use as the subprocess.
In the contrib directory I've provided an example plugin written in
Python. It gives a set of fixed responses suitable for getting through
Uppity's made-up k-i system, because that was a reasonable thing I
already had lying around to test against. But it also provides example
code that someone else could pick up and insert their own live
response-provider into the middle of, assuming they were happy with it
being in Python.
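(For flavour, a heavily simplified sketch of the _shape_ of a plugin: a
subprocess exchanging length-prefixed messages on stdin/stdout. The
framing details, message types and field layouts below are placeholder
assumptions; the real protocol is the one in the documentation
appendix, and the contrib example shows it done properly.)

    import struct, sys

    def read_message(f):
        # One message: 4-byte big-endian length, then that many bytes.
        header = f.read(4)
        if len(header) < 4:
            return None              # PuTTY closed the pipe
        (length,) = struct.unpack(">I", header)
        return f.read(length)

    def write_message(f, body):
        f.write(struct.pack(">I", len(body)) + body)
        f.flush()

    def handle(msg):
        # Placeholder: a real plugin would decode the message type and,
        # for each k-i INFO_REQUEST PuTTY passes on, compute or look up
        # the responses (perhaps consulting a web server, or asking the
        # user via further protocol messages).
        return b""

    def main():
        stdin, stdout = sys.stdin.buffer, sys.stdout.buffer
        while True:
            msg = read_message(stdin)
            if msg is None:
                break
            write_message(stdout, handle(msg))

    if __name__ == "__main__":
        main()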
We've occasionally had reports of SSH servers disconnecting as soon as
they receive PuTTY's KEXINIT. I think all such reports have involved
the kind of simple ROM-based SSH server software you find in small
embedded devices.
I've never been able to prove it, but I've always suspected that one
possible cause of this is simply that PuTTY's KEXINIT is _too long_,
either in number of algorithms listed or in total length (especially
given all the ones that end in @very.long.domain.name suffixes).
If I'm right about either of those being the cause, then it's just
become even more likely to happen, because of all the extra
Diffie-Hellman groups and GSSAPI algorithms we just threw into our
already-long list in the previous few commits.
A workaround I've had in mind for ages is to wait for the server's
KEXINIT, and then filter our own down to just the algorithms the
server also mentioned. Then our KEXINIT is no longer than that of the
server, and hence, presumably fits in whatever buffer it has. So I've
implemented that workaround, in anticipation of it being needed in the
near future.
(Well ... it's not _quite_ true that our KEXINIT is at most the same
length as the server's. In fact I had to leave in one KEXINIT item that
won't match anything in the server's list, namely "ext-info-c" which
gates access to SHA-2 based RSA. So if we turn out to support
absolutely everything on all the server's lists, then our KEXINIT
would be a few bytes longer than the server's, even with this
workaround. But that would only cause trouble if the server's outgoing
KEXINIT was skating very close to whatever buffer size it has for the
incoming one, and I'm guessing that's not very likely.)
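(The filtering itself is simple; a sketch, assuming the server's
KEXINIT has already been parsed into name-lists, with illustrative
names rather than PuTTY's internals.)

    # Trim our algorithm list to entries the server also advertised,
    # preserving our own preference order, but keep indicator strings
    # like "ext-info-c" that by design never appear in a server's list.
    def filter_kexinit(ours, theirs, always_keep=("ext-info-c",)):
        server_set = set(theirs)
        return [alg for alg in ours
                if alg in server_set or alg in always_keep]

    ours = ["sntrup761x25519-sha512@openssh.com", "curve25519-sha256",
            "diffie-hellman-group14-sha256", "ext-info-c"]
    theirs = ["diffie-hellman-group14-sha256",
              "diffie-hellman-group1-sha1"]
    print(filter_kexinit(ours, theirs))
    # -> ['diffie-hellman-group14-sha256', 'ext-info-c']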
((Another possible cause of this kind of disconnection would be a
server that simply objects to seeing any KEXINIT string it doesn't
know how to speak. But _surely_ no such server would have survived
initial testing against any full-featured client at all!))