When you send a "publickey" USERAUTH_REQUEST containing a certified
RSA key, and you want to use a SHA-2 based RSA algorithm, modern
OpenSSH expects you to send the algorithm string as
rsa-sha2-NNN-cert-v01@openssh.com. But 7.7 and earlier didn't
recognise those names, and expected the algorithm string in the
userauth request packet to be ssh-rsa-cert-v01@... and would then
follow it with an rsa-sha2-NNN signature.
OpenSSH itself has a bug workaround for its own older versions. Follow
suit.
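Schematically, the resulting choice looks something like this (a sketch
only; the flag name is invented, and only the algorithm strings above
are real):
    #include <stdbool.h>

    /* The signature blob itself is made with rsa-sha2-256 (or -512) in
     * both cases; only the advertised algorithm name changes. */
    static const char *cert_userauth_alg(bool server_has_old_cert_naming)
    {
        return server_has_old_cert_naming
            ? "ssh-rsa-cert-v01@openssh.com"        /* OpenSSH 7.7 and older */
            : "rsa-sha2-256-cert-v01@openssh.com";  /* modern servers */
    }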
During the transition to cmake, commit b00e5fb129 renamed
unix/unix.h to unix/platform.h, and for visual consistency, also
renamed the guard macro PUTTY_UNIX_H to PUTTY_UNIX_PLATFORM_H.
But I had failed to notice that that guard macro is re-tested in
settings.c, as a convenient method of knowing whether we're building
the Windows or Unix version of PuTTY in order to store some settings
differently. So all those '#ifdef PUTTY_UNIX_H' statements silently
became equivalent to '#if 0', because PUTTY_UNIX_H is _never_ defined
any more.
Specifically, these ifdefs were causing the time intervals relating to
bell overloads to be off by a factor of 1000, because for some reason
I can't remember, we were storing those intervals using a different
time unit on Unix and Windows. In my own configuration, for example,
~/.putty/sessions/Default%20Settings contains "BellOverloadT=2000000"
and "BellOverloadS=5000000", which originally meant that too many
bells within 2 seconds would silence the bell until there were 5
seconds of silence - but current PuTTY shows it in the configurer as
2000 and 5000 seconds!
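A sketch of the kind of conditional involved (variable names are
placeholders; the unit conversion and its direction are as described
above):
    /* The saved value is in platform-native units, so settings.c scales
     * it on load under a Unix-only guard. The old test now never fires: */
    #ifdef PUTTY_UNIX_H                /* silently '#if 0' since the rename */
        i /= 1000;                     /* microseconds -> milliseconds */
    #endif
    /* whereas the fixed version tests the macro that actually exists: */
    #ifdef PUTTY_UNIX_PLATFORM_H
        i /= 1000;
    #endif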
This commit belatedly rewrites the ifdefs in settings.c, so that saved
sessions from before 0.77 will now be interpreted correctly. Saved
sessions from after that may need a rewrite. (But you have to have one
or the other.)
(cherry picked from commit 62b69a4f16)
In the initial commit 031d86ed5b that introduced these new
Diffie-Hellman groups, I accidentally put them below the 'warn about
insecurity' line, which I didn't mean to. Moved them up to just above
the existing group14.
Also, I've arranged them in a slightly weird order, so that the most
preferred group of this collection is the medium-sized group16,
followed by the larger ones (17 and 18) and then the smaller 15.
Rationale: larger is better _until_ it starts costing way too much CPU
time, and group18 can grind quite painfully on a slow machine. (And of
course users are free to reconfigure if they have different
preferences.)
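So the relevant slice of the preference list ends up in this order,
shown here by the standard wire names (a sketch of the ordering, not an
excerpt from the source):
    diffie-hellman-group16-sha512    <- preferred: big, but not painfully slow
    diffie-hellman-group17-sha512
    diffie-hellman-group18-sha512
    diffie-hellman-group15-sha512
    (the existing group14 entries come immediately below these, and the
    'warn below here' line further down still)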
This isn't really ideal, of course. The idea that you might not want
to use group18 *because it's slow* contradicts the basic concept of
PuTTY's current crypto-preferences UI, which assumes that you rank
things by security, which is why there's a dividing line below which
things are assumed insecure. I hope that in a future release we'll
rework the UI so that you can express more subtle ideas of what crypto
you do and don't like. But this will do for the moment.
The GSS versions of the same DH methods are reordered similarly.
In recent months I've had two requests from different people to build
support into PuTTY for automatically handling complicated third-party
auth protocols layered on top of keyboard-interactive - the kind of
thing where you're asked to enter some auth response, and you have to
refer to some external source like a web server to find out what the
right response _is_, which is a pain to do by hand, so you'd prefer it
to be automated in the SSH client.
That seems like a reasonable thing for an end user to want, but I
didn't think it was a good idea to build support for specific
protocols of that kind directly into PuTTY, where there would no doubt
be an ever-lengthening list, and maintenance needed on all of them.
So instead, in collaboration with one of my correspondents, I've
designed and implemented a protocol to be spoken between PuTTY and a
plugin running as a subprocess. The plugin can opt to handle the
keyboard-interactive authentication loop on behalf of the user, in
which case PuTTY passes on all the INFO_REQUEST packets to it, and
lets it make up responses. It can also ask questions of the user if
necessary.
The protocol spec is provided in a documentation appendix. The entire
configuration for the end user consists of providing a full command
line to use as the subprocess.
In the contrib directory I've provided an example plugin written in
Python. It gives a set of fixed responses suitable for getting through
Uppity's made-up k-i system, because that was a reasonable thing I
already had lying around to test against. But it also provides example
code that someone else could pick up and insert their own live
response-provider into the middle of, assuming they were happy with it
being in Python.
Like "dh-gex-sha1", this string used in session storage really covers
both SHA-256 and SHA-1 variants (since a624786333), with the former
preferred; but backward-compatibility makes it fiddly to change (and
it's mostly not visible to users).
We've occasionally had reports of SSH servers disconnecting as soon as
they receive PuTTY's KEXINIT. I think all such reports have involved
the kind of simple ROM-based SSH server software you find in small
embedded devices.
I've never been able to prove it, but I've always suspected that one
possible cause of this is simply that PuTTY's KEXINIT is _too long_,
either in number of algorithms listed or in total length (especially
given all the ones that end in @very.long.domain.name suffixes).
If I'm right about either of those being the cause, then it's just
become even more likely to happen, because of all the extra
Diffie-Hellman groups and GSSAPI algorithms we just threw into our
already-long list in the previous few commits.
A workaround I've had in mind for ages is to wait for the server's
KEXINIT, and then filter our own down to just the algorithms the
server also mentioned. Then our KEXINIT is no longer than that of the
server, and hence, presumably fits in whatever buffer it has. So I've
implemented that workaround, in anticipation of it being needed in the
near future.
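The filtering itself is conceptually just a stable intersection of
name-lists. A self-contained sketch of the idea (function names
invented; not the real implementation):
    #include <string.h>

    /* True if 'name' appears in a comma-separated SSH name-list. */
    static int in_namelist(const char *name, const char *list)
    {
        size_t n = strlen(name);
        while (*list) {
            const char *comma = strchr(list, ',');
            size_t len = comma ? (size_t)(comma - list) : strlen(list);
            if (len == n && memcmp(list, name, n) == 0)
                return 1;
            list += len + (comma != NULL);
        }
        return 0;
    }

    /* Keep only our algorithms that the server also listed, preserving
     * our own preference order; returns the new count. */
    static size_t filter_our_kexinit(const char **ours, size_t count,
                                     const char *server_list)
    {
        size_t i, j = 0;
        for (i = 0; i < count; i++)
            if (in_namelist(ours[i], server_list))
                ours[j++] = ours[i];
        return j;
    }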
(Well ... it's not _quite_ true that our KEXINIT is at most the same
length as the server. In fact I had to leave in one KEXINIT item that
won't match anything in the server's list, namely "ext-info-c" which
gates access to SHA-2 based RSA. So if we turn out to support
absolutely everything on all the server's lists, then our KEXINIT
would be a few bytes longer than the server's, even with this
workaround. But that would only cause trouble if the server's outgoing
KEXINIT was skating very close to whatever buffer size it has for the
incoming one, and I'm guessing that's not very likely.)
((Another possible cause of this kind of disconnection would be a
server that simply objects to seeing any KEXINIT string it doesn't
know how to speak. But _surely_ no such server would have survived
initial testing against any full-featured client at all!))
I only recently found out that OpenSSH defined their own protocol IDs
for AES-GCM, defined to work the same as the standard ones except that
they fixed the semantics for how you select the linked cipher+MAC pair
during key exchange.
(RFC 5647 defines protocol ids for AES-GCM in both the cipher and MAC
namespaces, and requires that you MUST select both or neither - but
this contradicts the selection policy set out in the base SSH RFCs,
and there's no discussion of how you resolve a conflict between them!
OpenSSH's answer is to do it the same way ChaCha20-Poly1305 works,
because that will ensure the two suites don't fight.)
People do occasionally ask us for this linked cipher/MAC pair, and now
I know it's actually feasible, I've implemented it, including a pair
of vector implementations for x86 and Arm using their respective
architecture extensions for multiplying polynomials over GF(2).
Unlike ChaCha20-Poly1305, I've kept the cipher and MAC implementations
in separate objects, with an arm's-length link between them that the
MAC uses when it needs to encrypt single cipher blocks to use as the
inputs to the MAC algorithm. That enables the cipher and the MAC to be
independently selected from their hardware-accelerated versions, just
in case someone runs on a system that has polynomial multiplication
instructions but not AES acceleration, or vice versa.
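A sketch of that arm's-length link (type and member names are invented
here):
    /* The MAC side sees only an opaque context plus a callback that
     * encrypts one cipher block, so either side can independently be a
     * hardware-accelerated or software implementation. */
    typedef void (*encrypt_block_fn)(void *cipher_ctx, unsigned char blk[16]);

    struct gcm_mac_link {
        void *cipher_ctx;          /* whichever AES implementation won */
        encrypt_block_fn encrypt;  /* supplies the blocks the MAC needs */
    };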
There's a fourth implementation of the GCM MAC, which is a pure
software implementation of the same algorithm used in the vectorised
versions. It's too slow to use live, but I've kept it in the code for
future testing needs, and because it's a convenient place to dump my
design comments.
The vectorised implementations are fairly crude as far as optimisation
goes. I'm sure serious x86 _or_ Arm optimisation engineers would look
at them and laugh. But GCM is a fast MAC compared to HMAC-SHA-256
(indeed compared to HMAC-anything-at-all), so it should at least be
good enough to use. And we've got a working version with some tests
now, so if someone else wants to improve them, they can.
This is triggered by a new config option, or alternatively a -cert
command-line option. You provide a certificate file (i.e. a public key
containing one of the cert key formats), and then, whenever you
authenticate with a private key that matches the public key inside
that certificate, the certificate will be sent to the server in place
of whatever public key it would have used before.
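As a usage illustration (the file names here are made up):
    plink -i mykey.ppk -cert mykey-cert.pub user@host.example.com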
I expect this to be more convenient for some users than the approach
of baking the certificate into a modified version of the PPK file -
especially users who want to use different certificates on the same
key, either in sequence (if a CA continually reissues certificates
with short lifetimes) or in parallel (if different hosts trust
different CAs).
In particular, this substitution is applied consistently, even when
doing authentication via an agent. So if your bare private key is held
in Pageant, you can _still_ specify a detached certificate, and PuTTY
will spot that the key it's picked from Pageant matches that
certificate, and do the same substitution.
The detached certificate also overrides an existing certificate, if
there was one on the public key already.
This consists of DJB's 'Streamlined NTRU Prime' quantum-resistant
cryptosystem, currently in round 3 of the NIST post-quantum key
exchange competition; it's run in parallel with ordinary Curve25519,
and generates a shared secret combining the output of both systems.
(Hence, even if you don't trust this newfangled NTRU Prime thing at
all, it's at least no _less_ secure than the kex you were using
already.)
As the OpenSSH developers point out, key exchange is the most urgent
thing to make quantum-resistant, even before working quantum computers
big enough to break crypto become available, because a break of the
kex algorithm can be applied retroactively to recordings of your past
sessions. By contrast, authentication is a real-time protocol, and can
only be broken by a quantum computer if there's one available to
attack you _already_.
I've implemented both sides of the mechanism, so that PuTTY and Uppity
both support it. In my initial testing, the two sides can both
interoperate with the appropriate half of OpenSSH, and also (of
course, but it would be embarrassing to mess it up) with each other.
Using a new screenshot-taking module I just added in windows/utils,
these new options allow me to start up one of the tools with
demonstration window contents and automatically save a .BMP screenshot
to disk. This will allow me to keep essentially the same set of demo
images and update them easily to keep pace with the current appearance
of the real tools as PuTTY - and Windows itself - both evolve.
Correcting a source file name in the docs just now reminded me that
I've seen a lot of outdated source file names elsewhere in the code,
due to all the reorganisation since we moved to cmake. Here's a giant
pass of trying to make them all accurate again.
Since it's a manually-enabled bug compatibility mode, AUTO isn't one of
the available UI options.
This was causing Windows PuTTY to display a blank entry in the drop-down
for "Discards data sent before its greeting".
(It is possible that this unhelpful default has escaped into saved
sessions of snapshot users, which would have the same effect, but since
the actual using code can cope with it, I've not done anything to clean
that up.)
This commit introduces a new config option for how to handle shifted
arrow keys.
In the default mode (SHARROW_APPLICATION), we do what we've always
done: Ctrl flips the arrow keys between sending their most usual
escape sequences (ESC [ A ... ESC [ D) and sending the 'application
cursor keys' sequences (ESC O A ... ESC O D). Whichever of those modes
is currently configured, Ctrl+arrow sends the other one.
In the new mode (SHARROW_BITMAP), application cursor key mode is
unaffected by any shift keys, but the default sequences acquire two
numeric arguments. The first argument is 1 (reflecting the fact that a
shifted arrow key still notionally moves just 1 character cell); the
second is the bitmap (1 for Shift) + (2 for Alt) + (4 for Ctrl),
offset by 1. (Except that if _none_ of those modifiers is pressed,
both numeric arguments are simply omitted.)
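For example, taking the Up arrow, whose unmodified sequence is ESC [ A:
    Up               ESC [ A            (no modifiers: arguments omitted)
    Shift+Up         ESC [ 1 ; 2 A      (bitmap 1, offset by 1)
    Alt+Up           ESC [ 1 ; 3 A      (bitmap 2, offset by 1)
    Ctrl+Up          ESC [ 1 ; 5 A      (bitmap 4, offset by 1)
    Ctrl+Shift+Up    ESC [ 1 ; 6 A      (bitmap 4+1, offset by 1)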
The new bitmap mode is what current xterm generates, and also what
Windows ConPTY seems to expect. If you start an ordinary Command
Prompt and launch into WSL, those are the sequences it will generate
for shifted arrow keys; conversely, if you run a Command Prompt within
a ConPTY, then these sequences for Ctrl+arrow will have the effect you
expect in cmd.exe command-line editing (going backward or forward a
word). For that reason, I enable this mode unconditionally when
launching Windows pterm.
Ian Jackson recently tried to use the recipe in the psusan manpage for
talking to UML, and found that the connection was not successfully set
up, because at some point during startup, UML read the SSH greeting
(ok, the bare-ssh-connection greeting) from its input fd and threw it
away. So by the time psusan was run by the guest init process, the
greeting wasn't there to be read.
Ian's report: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991958
I was also able to reproduce this locally, which makes me wonder why I
_didn't_ notice it when I originally wrote that part of the psusan man
page. It worked for me before, honest! But now it doesn't.
Anyway. The ssh verstring module already has a mode switch to decide
whether we ought to send our greeting before or after waiting for the
other side's greeting (because that decision varies between client and
server, and between SSH-1 and SSH-2). So it's easy to implement an
override that forces it to 'wait for the server greeting first'.
I've added this as yet another bug workaround flag. But unlike all the
others, it can't be autodetected from the server's version string,
because, of course, we have to act on it _before_ seeing the server's
greeting and version string! So it's a manual-only flag.
However, I've mentioned it in the UML section of the psusan man page,
since that's the place where I _know_ people are likely to need to use
this flag.
Suggested by Manfred Kaiser, who also wrote most of this patch
(although outlying parts, like documentation and SSH-1 support, are by
me).
This is a second line of defence against the kind of spoofing attacks
in which a malicious or compromised SSH server rushes the client
through the userauth phase of SSH without actually requiring any auth
inputs (passwords or signatures or whatever), and then at the start of
the connection phase it presents something like a spoof prompt,
intended to be taken for part of userauth by the user but in fact with
some more sinister purpose.
Our existing line of defence against this is the trust sigil system,
and as far as I know, that's still working. This option allows a bit of
extra defence in depth: if you don't expect your SSH server to
trivially accept authentication in the first place, then enabling this
option will cause PuTTY to disconnect if it unexpectedly does so,
without the user having to spot the presence or absence of a fiddly
little sigil anywhere.
Several types of authentication count as 'trivial'. The obvious one is
the SSH-2 "none" method, which clients always try first so that the
failure message will tell them what else they can try, and which a
server can instead accept in order to authenticate you unconditionally.
But there are two other ways to do it that we know of: one is to run
keyboard-interactive authentication and send an empty INFO_REQUEST
packet containing no actual prompts for the user, and another even
weirder one is to send USERAUTH_SUCCESS in response to the user's
preliminary *offer* of a public key (instead of sending the usual PK_OK
to request an actual signature from the key).
This new option detects all of those, by clearing the 'is_trivial_auth'
flag only when we send some kind of substantive authentication response
(be it a password, a k-i prompt response, a signature, or a GSSAPI
token). So even if there's a further path through the userauth maze we
haven't spotted, that somehow avoids sending anything substantive, this
strategy should still pick it up.
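Schematically (the flag is the field named above; 's' stands for the
userauth layer's state structure, and the Conf option name is an
assumption here, not a quotation from the code):
    /* Cleared at the few places where we send something substantive,
     * e.g. just before a password, a k-i answer, a signature or a
     * GSSAPI token: */
    s->is_trivial_auth = false;

    /* ...so that on USERAUTH_SUCCESS we can tell whether the server
     * ever required anything of us: */
    if (s->is_trivial_auth &&
        conf_get_bool(s->conf, CONF_ssh_no_trivial_userauth)) {
        /* disconnect with an error instead of starting the session */
    }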
This clears up another large pile of clutter at the top level, and in
the process, allows me to rename source files to things that don't all
have that annoying 'ssh' prefix at the top.
Two minor memory-leak fixes on 0.74 seem not to be needed on master:
the fix in an early exit path of pageant_add_keyfile is done already
on master in a different way, and the missing sfree(fdlist) in
uxsftp.c is in code that's been completely rewritten in the uxcliloop
refactoring.
Other minor conflicts: the rework in commit b52641644905 of
ssh1login.c collided with the change from FLAG_VERBOSE to
seat_verbose(), and master and 0.74 each added an unrelated extra
field to the end of struct SshServerConfig.
This mitigates CVE-2020-14002: if you're in the habit of clicking OK
to unknown host keys (the TOFU policy - trust on first use), then an
active attacker looking to exploit that policy to substitute their own
host key in your first connection to a server can use the host key
algorithm order in your KEXINIT to (not wholly reliably) detect
whether you have a key already stored for this host, and if so, abort
their attack to avoid giving themself away.
However, for users who _don't_ use the TOFU policy and instead check
new host keys out of band, the dynamic policy is more useful. So it's
provided as a configurable option.
This is standardised by RFC 8709 at SHOULD level, and for us it's not
too difficult (because we use general-purpose elliptic-curve code). So
let's be up to date for a change, and add it.
This implementation uses all the formats defined in the RFC. But we
also have to choose a wire format for the public+private key blob sent
to an agent, and since the OpenSSH agent protocol is the de facto
standard but not (yet?) handled by the IETF, OpenSSH themselves get to
say what the format for a key should or shouldn't be. So if they don't
support a particular key method, what do you do?
I checked with them, and they agreed that there's an obviously right
format for Ed448 keys, which is to do them exactly like Ed25519 except
that you have a 57-byte string everywhere Ed25519 had a 32-byte
string. So I've done that.
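So the agent's view of such a key looks like this, by analogy with
Ed25519 (a sketch of the layout, not quoted from any spec):
    string   "ssh-ed448"
    string   public point                    (57 bytes)
    string   private key || public point     (57 + 57 = 114 bytes)
followed by the usual comment string - exactly the Ed25519 shape with
the 32-byte strings grown to 57 bytes.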
The previous 'name' field was awkwardly serving both purposes: it was
a machine-readable identifier for the backend used in the saved
session format, and it was also used in error messages when Plink
wanted to complain that it didn't support a particular backend. Now
there are two separate name fields for those purposes.
Sometimes, within a switch statement, you want to declare local
variables specific to the handler for one particular case. Until now
I've mostly been writing this in the form
    switch (discriminant) {
      case SIMPLE:
        do stuff;
        break;
      case COMPLICATED:
        {
            declare variables;
            do stuff;
        }
        break;
    }
which is ugly because the two pieces of essentially similar code
appear at different indent levels, and also inconvenient because you
have less horizontal space available to write the complicated case
handler in - particularly undesirable because _complicated_ case
handlers are the ones most likely to need all the space they can get!
After encountering a rather nicer idiom in the LLVM source code, and
after a bit of hackery this morning figuring out how to persuade
Emacs's auto-indent to do what I wanted with it, I've decided to move
to an idiom in which the open brace comes right after the case
statement, and the code within it is indented the same as it would
have been without the brace. Then the whole case handler (including
the break) lives inside those braces, and you get something that looks
more like this:
    switch (discriminant) {
      case SIMPLE:
        do stuff;
        break;
      case COMPLICATED: {
        declare variables;
        do stuff;
        break;
      }
    }
This commit is a big-bang change that reformats all the complicated
case handlers I could find into the new layout. This is particularly
nice in the Pageant main function, in which almost _every_ case
handler had a bundle of variables and was long and complicated. (In
fact that's what motivated me to get round to this.) Some of the
innermost parts of the terminal escape-sequence handling are also
breathing a bit easier now the horizontal pressure on them is
relieved.
(Also, in a few cases, I was able to remove the extra braces
completely, because the only variable local to the case handler was a
loop variable which our new C99 policy allows me to move into the
initialiser clause of its for statement.)
Viewed with whitespace ignored, this is not too disruptive a change.
Downstream patches that conflict with it may need to be reapplied
using --ignore-whitespace or similar.
These global variables are only ever used by load_settings, which uses
them to vary the default protocol and port number in the absence of
any specification elsewhere. So there's no real need for them to be
universally accessible via the awkward GLOBAL mechanism: they can be
statics inside settings.c, with accessor functions that can set them.
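The shape of the change is roughly this (the accessor names here are
illustrative):
    /* in settings.c */
    static int default_protocol = PROT_SSH;
    static int default_port = 22;

    void settings_set_default_protocol(int newval) { default_protocol = newval; }
    void settings_set_default_port(int newval) { default_port = newval; }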
That was the last GLOBAL in putty.h, so I've removed the definition of
the macro GLOBAL itself as well. There are still some GLOBALs in the
Windows subdirectory, though.
Up until now, it's been a variadic _function_, whose argument list
consists of 'const char *' ASCIZ strings to concatenate, terminated by
one containing a null pointer. Now, that function is dupcat_fn(), and
it's wrapped by a C99 variadic _macro_ called dupcat(), which
automatically suffixes the null-pointer terminating argument.
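So the wrapper amounts to something like:
    #define dupcat(...) dupcat_fn(__VA_ARGS__, (const char *)NULL)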
This has three benefits. Firstly, it's just less effort at every call
site. Secondly, it protects against the risk of accidentally leaving
off the NULL, causing arbitrary words of stack memory to be
dereferenced as char pointers. And thirdly, it protects against the
more subtle risk of writing a bare 'NULL' as the terminating argument,
instead of casting it explicitly to a pointer. That last one is
necessary because C permits the macro NULL to expand to an integer
constant such as 0, so NULL by itself may not have pointer type, and
worse, it may not be marshalled in a variadic argument list in the
same way as a pointer. (For example, on a 64-bit machine it might only
occupy 32 bits. And yet, on another 64-bit platform, it might work
just fine, so that you don't notice the mistake!)
I was inspired to do this by happening to notice one of those bare
NULL terminators, and thinking I'd better check if there were any
more. Turned out there were quite a few. Now there are none.
The number of people has been steadily increasing who read our source
code with an editor that thinks tab stops are 4 spaces apart, as
opposed to the traditional tty-derived 8 that the PuTTY code expects.
So I've been wondering for ages about just fixing it, and switching to
a spaces-only policy throughout the code. And I recently found out
about 'git blame -w', which should make this change not too disruptive
for the purposes of source-control archaeology; so perhaps now is the
time.
While I'm at it, I've also taken the opportunity to remove all the
trailing spaces from source lines (on the basis that git dislikes
them, and is the only thing that seems to have a strong opinion one
way or the other).
Apologies to anyone downstream of this code who has complicated patch
sets to rebase past this change. I don't intend it to be needed again.
This reverts commit 1b2f39c24b.
The intention of that commit was to support the development of Uppity,
by arranging that I could get a Conf populated with completely default
values by calling load_open_settings(NULL,conf), with no risk of
interference from the normal PuTTY saved sessions full of client-side
configuration (which would have been confusing to apply unexpectedly
in a server).
So I arranged that a NULL session handle was never passed to the
low-level read_setting_[type] functions, in case it caused a segfault.
But I overlooked two things.
Firstly, on Unix, read_setting_* is where we check the tree234 of data
derived from X resources and/or -xrm command-line options. So if you
don't call those functions at all (e.g. if you have no on-disk PuTTY
saved configuration at all, not even Default Settings), you also don't
get your X defaults honoured.
Secondly, those functions themselves already all checked their
argument for NULL before doing anything dangerous with it. So the
thing I wanted to make possible was already possible _anyway_, without
me having to do anything!
So I'm exactly reverting that commit, because the _only_ thing it did
was to introduce a bug in X resource handling.
Those two flags had the opposite sense to what you might expect: each
one is the value of the Conf entry corresponding to the checkbox that
_disables_ the corresponding terminal feature. So term->bidi is true
if and only if bidi is _off_.
I think that confusion of naming probably contributed to the control-
flow error fixed in the previous commit, just by increasing cognitive
load until I couldn't remember which flags were set where any more! So
now I've renamed the two fields of Terminal, and the corresponding
Conf keywords, to be called "no_bidi" and "no_arabicshaping", in line
with other 'disable this feature' flags, so that it's clear what the
sense should be.
This is a fairly shallow patch, which removes the UI and interactions
with external libraries. Some other machinery (which is dead code in
this configuration) is left in place.
Adapted by me from a patch by Jeroen Roovers.
After all the side-channel removal work I've done, I'm demoting
Blowfish and Arcfour right to the bottom of the pile, because they're
the ciphers that still have cache and timing leaks in them. (Neither
one can be sanitised without making it _extremely_ slow - the factor
of 4.5 slowdown in DES would be small by comparison).
Single-DES is also still below the warning threshold for simply being
too weak (irrespective of side channels), but it's above the other two.
Mostly noticed in passing while using Address / Leak Sanitiser to
check over the previous commit. One highlight here is freeing of the
previous iqmp value in rsa_verify, which was actually a potentially
sensitive leak, introduced in the mp_int rewrite (commit 25b034ee3).
For a start, they now have different names on Windows and Unix,
reflecting their different roles: on Windows they apply escaping to
any string that's going to be used as a registry key (be it a session
name, or a host name for host key storage), whereas on Unix they're
for constructing saved-session file names in particular (and also
handle the special case of filling in "Default Settings" for NULL).
Also, they now produce output by writing to a strbuf, which simplifies
a lot of the call sites. In particular, the strbuf output idiom is
passed on to enum_settings_next, which is especially nice because its
only actual caller was doing an ad-hoc realloc loop that I can now get
rid of completely.
Thirdly, on Windows they're centralised into winmisc.c instead of
living in winstore.c, because that way Pageant can use the unescape
function too. (It was spotting the duplication there that made me
think of doing this in the first place, but once I'd started, I had to
keep unravelling the thread...)
My normal habit these days, in new code, is to treat int and bool as
_almost_ completely separate types. I'm still willing to use C's
implicit test for zero on an integer (e.g. 'if (!blob.len)' is fine,
no need to spell it out as blob.len != 0), but generally, if a
variable is going to be conceptually a boolean, I like to declare it
bool and assign to it using 'true' or 'false' rather than 0 or 1.
PuTTY is an exception, because it predates the C99 bool, and I've
stuck to its existing coding style even when adding new code to it.
But it's been annoying me more and more, so now that I've decided C99
bool is an acceptable thing to require from our toolchain in the first
place, here's a quite thorough trawl through the source doing
'boolification'. Many variables and function parameters are now typed
as bool rather than int; many assignments of 0 or 1 to those variables
are now spelled 'true' or 'false'.
I managed this thorough conversion with the help of a custom clang
plugin that I wrote to trawl the AST and apply heuristics to point out
where things might want changing. So I've even managed to do a decent
job on parts of the code I haven't looked at in years!
To make the plugin's work easier, I pushed platform front ends
generally in the direction of using standard 'bool' in preference to
platform-specific boolean types like Windows BOOL or GTK's gboolean;
I've left the platform booleans in places they _have_ to be for the
platform APIs to work right, but variables only used by my own code
have been converted wherever I found them.
In a few places there are int values that look very like booleans in
_most_ of the places they're used, but have a rarely-used third value,
or a distinction between different nonzero values that most users
don't care about. In these cases, I've _removed_ uses of 'true' and
'false' for the return values, to emphasise that there's something
more subtle going on than a simple boolean answer:
- the 'multisel' field in dialog.h's list box structure, for which
the GTK front end in particular recognises a difference between 1
and 2 but nearly everything else treats as boolean
- the 'urgent' parameter to plug_receive, where 1 vs 2 tells you
something about the specific location of the urgent pointer, but
most clients only care about 0 vs 'something nonzero'
- the return value of wc_match, where -1 indicates a syntax error in
the wildcard.
- the return values from SSH-1 RSA-key loading functions, which use
-1 for 'wrong passphrase' and 0 for all other failures (so any
caller which already knows it's not loading an _encrypted private_
key can treat them as boolean)
- term->esc_query, and the 'query' parameter in toggle_mode in
terminal.c, which _usually_ hold 0 for ESC[123h or 1 for ESC[?123h,
but can also hold -1 for some other intervening character that we
don't support.
In a few places there's an integer that I haven't turned into a bool
even though it really _can_ only take values 0 or 1 (and, as above,
tried to make the call sites consistent in not calling those values
true and false), on the grounds that I thought it would make it more
confusing to imply that the 0 value was in some sense 'negative' or
bad and the 1 positive or good:
- the return value of plug_accepting uses the POSIXish convention of
0=success and nonzero=error; I think if I made it bool then I'd
also want to reverse its sense, and that's a job for a separate
piece of work.
- the 'screen' parameter to lineptr() in terminal.c, where 0 and 1
represent the default and alternate screens. There's no obvious
reason why one of those should be considered 'true' or 'positive'
or 'success' - they're just indices - so I've left it as int.
ssh_scp_recv had particularly confusing semantics for its previous int
return value: its call sites used '<= 0' to check for error, but it
never actually returned a negative number, just 0 or 1. Now the
function and its call sites agree that it's a bool.
In a couple of places I've renamed variables called 'ret', because I
don't like that name any more - it's unclear whether it means the
return value (in preparation) for the _containing_ function or the
return value received from a subroutine call, and occasionally I've
accidentally used the same variable for both and introduced a bug. So
where one of those got in my way, I've renamed it to 'toret' or 'retd'
(the latter short for 'returned') in line with my usual modern
practice, but I haven't done a thorough job of finding all of them.
Finally, one amusing side effect of doing this is that I've had to
separate quite a few chained assignments. It used to be perfectly fine
to write 'a = b = c = TRUE' when a,b,c were int and TRUE was just a
macro for 1; but now that those variables are bool and the right-hand
side is the 'true' defined by stdbool.h, that idiom provokes a warning
from gcc: 'suggest parentheses around assignment used as truth value'!
I think this is the full set of things that ought logically to be
boolean.
One annoyance is that quite a few radio-button controls in config.c
address Conf fields that are now bool rather than int, which means
that the shared handler function can't just access them all with
conf_{get,set}_int. Rather than back out the rigorous separation of
int and bool in conf.c itself, I've just added a similar alternative
handler function for the bool-typed ones.
This commit includes <stdbool.h> from defs.h and deletes my
traditional definitions of TRUE and FALSE, but other than that, it's a
100% mechanical search-and-replace transforming all uses of TRUE and
FALSE into the C99-standardised lowercase spellings.
No actual types are changed in this commit; that will come next. This
is just getting the noise out of the way, so that subsequent commits
can have a higher proportion of signal.
All the lowest-level helper functions in settings.c that read a single
setting from a settings_r are now prepared to tolerate being passed a
null settings_r pointer, which will be treated as if reading from it
always failed. This means you can call load_open_settings(NULL, conf)
to populate a Conf with all of the _built-in_ internal defaults,
without ever loading from the saved-session storage at all (not even
Default Settings).
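The pattern in each low-level reader is just an early bail-out, along
these lines (the wrapper name is invented for illustration):
    static char *read_one_setting(settings_r *sesskey, const char *key)
    {
        if (!sesskey)
            return NULL;   /* behave exactly as if the key were absent */
        return read_setting_s(sesskey, key);   /* the usual storage path */
    }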
(Doing this will still call the platform_default_foo function family,
if nothing else because Filenames and FontSpecs can't be constructed
in any platform-independent way at all.)
Commit 733fcca2c introduced named types settings_r and settings_w and
made the per-platform storage abstraction use those in place of 'void
*'. But a lot of the intermediate helper functions in the centralised
settings.c, like gpps() and wmap(), were still taking 'void *'. It
still worked, because of C's implicit casting, but it was avoiding the
type-checking advantages from making that change in the first place.
It is useful to be able to exclude the header so that the log file
can be used for realtime input to other programs such as Kst for
plotting live data from sensors.
Most of these were 'void *' because they weren't even reliably a
structure type underneath - the per-OS storage systems would directly
cast read/write/enum settings handles to and from random things like
FILE *, Unix DIR *, or Windows HKEY. So I've wrapped them in tiny
structs for the sake of having a sensible structure tag visible
elsewhere in the code.
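For instance (a sketch), the Unix write handle now wraps what used to
be passed around as a bare FILE *:
    struct settings_w {
        FILE *fp;
    };
with the Windows equivalents wrapping HKEYs, and the enumeration
handles wrapping DIR * or registry enumeration state in the same way.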
Nearly every part of the code that ever handles a full backend
structure has historically done it using a pair of pointer variables,
one pointing at a constant struct full of function pointers, and the
other pointing to a 'void *' state object that's passed to each of
those.
While I'm modernising the rest of the code, this seems like a good
time to turn that into the same more or less type-safe and less
cumbersome system as I'm using for other parts of the code, such as
Socket, Plug, BinaryPacketProtocol and so forth: the Backend structure
contains a vtable pointer, and a system of macro wrappers handles
dispatching through that vtable.
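In outline (member names here are illustrative, not exhaustive):
    #include <stddef.h>

    typedef struct Backend Backend;
    typedef struct BackendVtable BackendVtable;

    struct Backend {
        const BackendVtable *vt;   /* embedded in each backend's own state */
    };

    struct BackendVtable {
        void (*free)(Backend *be);
        size_t (*sendbuffer)(Backend *be);
        /* ...one function pointer per backend operation... */
    };

    #define backend_free(be) ((be)->vt->free(be))
    #define backend_sendbuffer(be) ((be)->vt->sendbuffer(be))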
Thanks to Jiri Kaspar for sending this patch (apart from the new docs
section, which is in my own words), which implements a feature we've
had as a wishlist item ('utf8-plus-vt100') for a long time.
I was actually surprised it was possible to implement it in so few
lines of code! I'd forgotten, or possibly never noticed in the first
place, that even in UTF-8 mode PuTTY not only accepts but still
_processes_ all the ISO 2022 control sequences and shift characters,
and keeps running track of all the same state in term->cset and
term->cset_attrs that it tracks in ISO-2022-enabled modes. It's just
that in UTF-8 mode, at the very last minute when a character+attribute
pair is about to be written into the terminal's character buffer, it
deliberately ignores the contents of those variables.
So all that was needed was a new flag checked at that last moment
which causes it not quite to ignore them after all, and bingo,
utf8-plus-vt100 is supported. And it works no matter which ISO 2022
sequences you're using; whether you're using ESC ( 0 to select the
line drawing set directly into GL and ESC ( B to get back when you're
done, or whether you send a preliminary ESC ( B ESC ) 0 to get GL/GR
to be ASCII and line drawing respectively so you can use SI and SO as
one-byte mode switches thereafter, both work just as well.
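For a quick test from inside the terminal, either of these fragments
should produce a run of line-drawing characters rather than the literal
letters 'lqqqk' once the mode is enabled (the escapes are the ones
described above):
    printf("\033(0lqqqk\033(B\n");           /* ESC ( 0 ... ESC ( B directly */
    printf("\033(B\033)0\016lqqqk\017\n");   /* ESC ( B ESC ) 0, then SO/SI  */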
This implementation strategy has a couple of consequences, which I
don't think matter very much one way or the other but I document them
just in case they turn out to be important later:
- if an application expecting this mode has already filled your
terminal window with lqqqqqqqqk, then enabling this mode in Change
Settings won't retroactively turn them into the line drawing
characters you wanted, because no memory is preserved in the screen
buffer of what the ISO 2022 state was when they were printed. So
the application still has to do a screen refresh.
- on the other hand, if you already sent the ESC ( 0 or whatever to
put the terminal _into_ line drawing mode, and then you turn on
this mode in Change Settings, you _will_ still be in line drawing
mode, because the system _does_ remember your current ISO 2022
state at all times, whether it's currently applying it to output
printing characters or not.
Commit d515e4f1a went through a lot of very different shapes before it
was finally pushed. In some of them, GSS kex had its own value in the
kex enumeration, but it was used in ssh.c but not in config.c
(because, as in the final version, it wasn't configured by the same
drag-list system as the rest of them). So we had to distinguish the
set of key exchange ids known to the program as a whole from the set
controllable in the configuration.
In the final version, GSS kex ended up even more separated from the
kex enumeration than that: the enum value KEX_GSS_SHA1_K5 isn't used
at all. Instead, GSS key exchange appears in the list at the point of
translation from the list of enum values into the list of pointers to
data structures full of kex methods.
But after all the changes, everyone involved forgot to revert the part
of the patch which split KEX_MAX in two and introduced the pointless
value KEX_GSS_SHA1_K5! Better late than never: I'm reverting it now,
to avoid confusion, and because I don't have any reason to think the
distinction will be useful for any other purpose.
Simon tells me it was left over from an abandoned configuration design
for GSS key exchange. Let's get rid of it before it starts cluttering
snapshot users' saved sessions.
The former has advantages in terms of keeping Kerberos credentials up
to date, but it also does something sufficiently weird to the usual
SSH host key system that I think it's worth making sure users have a
means of turning it off separately from the less intrusive GSS
userauth.
This is a heavily edited (by me) version of a patch originally due to
Nico Williams and Viktor Dukhovni. Their comments:
* Don't delegate credentials when rekeying unless there's a new TGT
or the old service ticket is nearly expired.
* Check for the above conditions more frequently (every two minutes
by default) and rekey when we would delegate credentials.
* Do not rekey with very short service ticket lifetimes; some GSSAPI
libraries may lose the race to use an almost expired ticket. Adjust
the timing of rekey checks to try to avoid this possibility.
My further comments:
The most interesting thing about this patch to me is that the use of
GSS key exchange causes a switch over to a completely different model
of what host keys are for. This comes from RFC 4462 section 2.1: the
basic idea is that when your session is mostly bidirectionally
authenticated by the GSSAPI exchanges happening in initial kex and
every rekey, host keys become more or less vestigial, and their
remaining purpose is to allow a rekey to happen if the requirements of
the SSH protocol demand it at an awkward moment when the GSS
credentials are not currently available (e.g. timed out and haven't
been renewed yet). As such, there's no need for host keys to be
_permanent_ or to be a reliable identifier of a particular host, and
RFC 4462 allows for the possibility that they might be purely
transient and only for this kind of emergency fallback purpose.
Therefore, once PuTTY has done a GSS key exchange, it disconnects
itself completely from the permanent host key cache functions in
storage.h, and instead switches to a _transient_ host key cache stored
in memory with the lifetime of just that SSH session. That cache is
populated with keys received from the server as a side effect of GSS
kex (via the optional SSH2_MSG_KEXGSS_HOSTKEY message), and used if
later in the session we have to fall back to a non-GSS key exchange.
However, in practice servers we've tested against do not send a host
key in that way, so we also have a fallback method of populating the
transient cache by triggering an immediate non-GSS rekey straight
after userauth (reusing the code path we also use to turn on OpenSSH
delayed encryption without the race condition).