DIT, for 'Data-Independent Timing', is a bit you can set in the
processor state on sufficiently new Arm CPUs, which promises that a
long list of instructions will deliberately avoid varying their timing
based on the input register values. Just what you want for keeping
your constant-time crypto primitives constant-time.
As far as I'm aware, no CPU has _yet_ implemented any data-dependent
optimisations, so DIT is a safety precaution against them doing so in
future. It would be embarrassing to be caught without it if a future
CPU does do that, so we now turn on DIT in the PuTTY process state.
I've put a call to the new enable_dit() function at the start of every
main() and WinMain() belonging to a program that might do
cryptography (even testcrypt, in case someone uses it for something!),
and in case I missed one there, also added a second call at the first
moment that any cryptography-using part of the code looks as if it
might become active: when an instance of the SSH protocol object is
configured, when the system PRNG is initialised, and when selecting
any cryptographic authentication protocol in an HTTP or SOCKS proxy
connection. With any luck those precautions between them should ensure
it's on whenever we need it.
Arm's own recommendation is that you should carefully choose the
granularity at which you enable and disable DIT: there's a potential
time cost to turning it on and off (I'm not sure what, but plausibly
something of the order of a pipeline flush), so it's a performance hit
to do it _inside_ each individual crypto function, but if CPUs start
supporting significant data-dependent optimisation in future, then it
will also become a noticeable performance hit to just leave it on
across the whole process. So you'd like to do it somewhere in the
middle: for example, you might turn on DIT once around the whole
process of verifying and decrypting an SSH packet, instead of once for
decryption and once for MAC.
With all respect to that recommendation as a strategy for maximum
performance, I'm not following it here. I turn on DIT at the start of
the PuTTY process, and then leave it on. Rationale:
1. PuTTY is not otherwise a performance-critical application: it's
not likely to max out your CPU for any purpose _other_ than
cryptography. The most CPU-intensive non-cryptographic thing I can
imagine a PuTTY process doing is the complicated computation of
font rendering in the terminal, and that will normally be cached
(you don't recompute each glyph from its outline and hints every
time you display it).
2. I think a bigger risk lies in accidental side channels from having
DIT turned off when it should have been on. I can imagine lots of
causes for that. Missing a crypto operation in some unswept corner
of the code; confusing control flow (like my coroutine macros)
jumping with DIT clear into the middle of a region of code that
expected DIT to have been set at the beginning; having a reference
counter of DIT requests and getting it out of sync.
In a more sophisticated programming language, it might be possible to
avoid the risk in #2 by cleverness with the type system. For example,
in Rust, you could have a zero-sized type that acts as a proof token
for DIT being enabled (it would be constructed by a function that also
sets DIT, have a Drop implementation that clears DIT, and be !Send so
you couldn't use it in a thread other than the one where DIT was set),
and then you could require all the actual crypto functions to take a
DitToken as an extra parameter, at zero runtime cost. Then "oops I
forgot to set DIT around this piece of crypto" would become a compile
error. Even so, you'd have to take some care with coroutine-structured
code (what happens if a Rust async function yields while holding a DIT
token?) and with nesting (if you have two DIT tokens, you don't want
dropping the inner one to clear DIT while the outer one is still there
to wrongly convince callees that it's set). Maybe in Rust you could
get this all to work reliably. But not in C!
DIT is an optional feature of the Arm architecture, so we must first
test to see if it's supported. This is done the same way as we already
do for the various Arm crypto accelerators: on ELF-based systems,
check the appropriate bit in the 'hwcap' words in the ELF aux vector;
on Mac, look for an appropriate sysctl flag.
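For the record, the Linux/ELF side of that boils down to roughly the
following (a sketch, not the exact code; the s3_3_c4_c2_5 spelling is
the raw architectural encoding of the DIT register, used in case the
assembler is too old to know it by the name 'dit'):

    #include <sys/auxv.h>

    #ifndef HWCAP_DIT
    #define HWCAP_DIT (1 << 24)   /* as defined in <asm/hwcap.h> on arm64 */
    #endif

    void enable_dit(void)
    {
        if (!(getauxval(AT_HWCAP) & HWCAP_DIT))
            return;               /* CPU doesn't implement DIT at all */
        /* PSTATE.DIT lives at bit 24 of the DIT special register */
        asm volatile("msr s3_3_c4_c2_5, %0" : : "r"(1UL << 24));
    }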
On Windows I don't know of a way to query the DIT feature, _or_ of a
way to write the necessary enabling instruction in an MSVC-compatible
way. I've _heard_ that it might not be necessary, because Windows
might just turn on DIT unconditionally and leave it on, in an even
more extreme version of my own strategy. I don't have a source for
that - I heard it by word of mouth - but I _hope_ it's true, because
that would suit me very well! Certainly I can't write code to enable
DIT without knowing (a) how to do it, and (b) how to tell whether it's
safe.
Nonetheless, I've put the enable_dit() call in all the right places in
the Windows main programs as well as the Unix and cross-platform code,
so that if I later find out that I _can_ put in an explicit enable of
DIT in some way, I'll only have to arrange to set HAVE_ARM_DIT and
compile the enable_dit() function appropriately.
In 0.81 and before, we put an application manifest (XML-formatted
Windows resource) into all the GUI tools on purpose, and the CLI tools
like Plink didn't have one. But in 0.82, the CLI tools do have one,
and it's a small default one we didn't write ourselves, inserted by
some combination of cmake and clang-imitating-MSVC (I haven't checked
which of those is the cause).
This appears to have happened as a side effect of a build-tools
update, not on purpose. And its effect is that Windows XP now objects
to our plink.exe, because it's very picky about manifest format (we
have an old 'xp-wont-run' bug record about that).
Since it seemed to work fine to not have a manifest at all in 0.81,
let's go back to that. We were already passing /manifest:no to inhibit
the default manifest in the GUI tools, to stop it fighting with our
custom one; now I've moved /manifest:no into the global linker flags,
so it's applied to _all_ binaries, whether we're putting our own
manifest in or not.
A user reported that when a PuTTY window is resized by the
'FancyZones' tool included in Microsoft PowerToys, the terminal itself
knows the new size ('stty' showed that it had sent a correct SIGWINCH
to the SSH server), but the next invocation of the Change Settings
dialog box still has the old size entered in it, leading to confusing
behaviour when you press Apply.
Inside PuTTY, this must mean that we updated the actual terminal's
size, but didn't update the main Conf object to match it, which is
where Change Settings populates its initial dialog state from.
It looks as if this is because FancyZones resizes the window by
sending it one single WM_SIZE, without wrapping it in the
WM_ENTERSIZEMOVE and WM_EXITSIZEMOVE messages that signal the start
and end of an interactive dragging resize operation. And the update of
Conf in wm_size_resize_term was in only one branch of the if statement
that checks whether we're in an interactive resize. Now it's outside
the if, so Conf will be updated in both cases.
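Schematically, the relevant part of wm_size_resize_term() now has this
shape (a sketch with the geometry calculation elided, and the variable
names approximate):

    /* rows and cols have been computed from the new window size */
    if (resizing_interactively) {
        /* mid-drag: defer the full relayout until WM_EXITSIZEMOVE */
        need_backend_resize = true;
    } else {
        term_size(term, rows, cols, conf_get_int(conf, CONF_savelines));
    }

    /* Now outside the if: a bare WM_SIZE (as sent by FancyZones) also
     * updates Conf, so Change Settings starts from the real size. */
    conf_set_int(conf, CONF_height, rows);
    conf_set_int(conf, CONF_width, cols);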
I was about to try to debug a window resizing issue, and it looked as
if this patch had a plausible set of diagnostics already. But in fact
when I turned on this #ifdef it failed to compile, so I'm getting rid
of it.
Perhaps there is a use for having types of diagnostic living
permanently in the source code and easy to enable in themed sets, but
if so, I think they'd be better compiled in and enabled by an option
than compiled out and enabled by #ifdef. That way they're less likely
to rot, and also, you can ask a user to turn one on really easily and
get extra logs for whatever is bothering them!
This centralises into windows/utils/request_file.c all of the code
that deals with the OPENFILENAME structure, and decides centrally
whether to use the Unicode or ANSI version of that structure and its
associated APIs. Now the output of any request_file function is our
own 'Filename' abstract type, instead of a raw char or wchar_t buffer,
which means that _any_ file dialog can produce a full Unicode filename
if the user wants to select one - and yet, in the w32old build, they
all uniformly fall back to the ANSI version, which is the only one
that works at all pre-NT.
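Schematically, the central decision now looks like this (a sketch: the
Filename constructors named here are stand-ins rather than necessarily
the real functions):

    Filename *request_file_open(HWND hwnd)
    {
        if (unicode_dialogs_available) {   /* i.e. not the w32old build */
            wchar_t buf[MAX_PATH] = L"";
            OPENFILENAMEW of = { .lStructSize = sizeof(of),
                                 .hwndOwner = hwnd,
                                 .lpstrFile = buf, .nMaxFile = MAX_PATH };
            return GetOpenFileNameW(&of) ? filename_from_wstr(buf) : NULL;
        } else {
            char buf[MAX_PATH] = "";
            OPENFILENAMEA of = { .lStructSize = sizeof(of),
                                 .hwndOwner = hwnd,
                                 .lpstrFile = buf, .nMaxFile = MAX_PATH };
            return GetOpenFileNameA(&of) ? filename_from_str(buf) : NULL;
        }
    }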
A side effect: I've turned the FILTER_FOO_FILES family of definitions
from platform-specific #defines into a reasonably sensible enum. This
didn't affect the GTK side of things, because I'd never got round to
figuring out how to filter a file dialog down to a subset of files in
GTK, and still haven't. So I've just moved the existing FIXME comment
from platform.h to dialog.c.
This reverts the change to is_interactive() by commit 80aed96286,
which switched it to using the new conio system. Now we're back to
doing it the same way as we used to: we check if stdin is a console.
The only use of is_interactive() on Windows is deciding whether to
present the console antispoof prompt. (On Unix it has an additional
use in cmdgen, for deciding whether to emit progress reports, but on
Windows that doesn't come up.)
On Unix this is based on stdin being a tty, which means that a
command such as "plink host do stuff </dev/null" omits the antispoof
prompt. That's deliberate: the prompt is to defend against attacks
where the user sends interactive input to the SSH session channel
believing it to be directed at userauth, but if the input _isn't_
coming from the interactive terminal where the user is answering
userauth prompts, then they can't do that even if they are fooled.
On Windows, I think the same argument applies, now that we're reading
userauth prompts from the console in the same way as Unix. So
is_interactive() now does the analogous thing on Windows.
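On Windows, 'is stdin a console?' comes down to a very small check,
along these lines (a sketch of the idea):

    bool is_interactive(void)
    {
        DWORD mode;
        /* GetConsoleMode succeeds only if the handle is a console */
        return GetConsoleMode(GetStdHandle(STD_INPUT_HANDLE), &mode) != 0;
    }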
Conveniently, this _also_ means is_interactive() is back to exactly
how it was before the conio rewrite, which means it's one fewer thing
that can unexpectedly change and break someone's workflow. (Otherwise
I might also have wanted to change its behaviour based on
-legacy-stdio-handling, which would be extra ugly.)
In 0.81, some prompts (such as host-key prompts) went to stderr, while
others (such as username and password prompts) went to stdout.
With -legacy-stdio-prompts (or if we otherwise couldn't get console
handles), we were sending all such prompts to stdout, which might have
messed up someone's workflow if they were interacting with the
non-username/password prompts programmatically.
The fix in commit 31ab5b8e30 was incomplete. It works when you
maximise the window by pressing the maximise button in the title bar,
but not when you drag the window to the very top of the screen. In the
latter case, the resize at the instant of maximisation works right,
but then we get a WM_EXITSIZEMOVE when the drag is released,
triggering one final resize which ignores the window border again.
That was because wm_size_resize_term() was deliberately being given a
flag telling it to ignore the border: apparently, at some point in the
past, ignoring the border even for a normally maximised window (not
just a full-screen one) was intentional. But for the reasons given in
the previous commit, I'm changing that.
Just as with other recent changes like usernames, this allows the
remote command line to include characters outside the system code
page, encoding as UTF-8 on the wire (as the SSH protocol has wanted
all along).
The UCS-2 translation of the access-mode string ("r", "wb" etc) was
not being freed, because the free call appeared after an unconditional
return statement.
Spotted by Coverity, although now I wonder why an ordinary compiler
warning hadn't long since pointed this one out!
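The shape of the bug, schematically (not the literal code;
convert_mode_to_ucs2() is a made-up stand-in for the real conversion
call):

    wchar_t *wmode = convert_mode_to_ucs2(mode);   /* allocated copy */
    FILE *fp = _wfopen(wfilename, wmode);
    return fp;      /* the function returns here unconditionally... */
    sfree(wmode);   /* ...so this free was dead code, and wmode leaked */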
When Windows console prompting moved away from stdin/stderr by default
(80aed96286), 0.78 was the most recent release; but we've made several
more releases off branches since then.
During startup, calculate the top/left position of the window so that
the whole window is visible. Without this change, sometimes the window
is clipped vertically and you can't see the bottom.
Thanks to Bruno Kraychete da Costa for the original version of this
patch; I've tweaked it slightly, but the basic strategy of adjusting
the output of GetWindowRect against the monitor's working area is all
his.
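The adjustment amounts to something like this (a sketch of the
strategy rather than the patch itself):

    RECT wr;
    GetWindowRect(hwnd, &wr);
    MONITORINFO mi;
    mi.cbSize = sizeof(mi);
    GetMonitorInfo(MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST), &mi);
    int x = wr.left, y = wr.top;
    if (wr.right > mi.rcWork.right)     /* clipped on the right: move left */
        x -= wr.right - mi.rcWork.right;
    if (wr.bottom > mi.rcWork.bottom)   /* clipped at the bottom: move up */
        y -= wr.bottom - mi.rcWork.bottom;
    if (x < mi.rcWork.left) x = mi.rcWork.left;
    if (y < mi.rcWork.top)  y = mi.rcWork.top;
    /* ...and (x, y) becomes the window's top/left position */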
Debian #1084583 points out that Python 3.13 is going to get rid of the
'pipes' module completely. shlex.quote has been available as a
replacement for ages.
(Not that Debian actually cares, since they don't re-run our wobbly
edifice of MSI build bodges! But thanks to some bug reporter for
pointing it out anyway.)
In most keyboard modes, the escape sequence for Alt-Fn is the same as
for Fn alone, but with an additional ESC prepended.
However, this additional ESC should not be sent in keyboard modes that
send a different escape sequence when Alt is pressed, such as Xterm
216+, because the server-side application does not expect it.
Signed-off-by: Anders Larsen <al@alarsen.net>
The immediate usefulness of this is in pterm.exe: when the user uses
-e to specify a command to run in the pterm, we retrieve the command
in Unicode, store it in CONF_remote_cmd as UTF-8, and then in conpty.c
we can extract it in the same form and convert it back to Unicode to
pass losslessly to CreateProcessW. So now non-ACP Unicode works in
that part of the pterm command line.
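The last step of that path looks roughly like this (a sketch; the
surrounding conpty plumbing is omitted, and cmd_utf8 stands for the
command retrieved from CONF_remote_cmd):

    int wlen = MultiByteToWideChar(CP_UTF8, 0, cmd_utf8, -1, NULL, 0);
    wchar_t *cmd_w = snewn(wlen, wchar_t);   /* PuTTY's checked malloc */
    MultiByteToWideChar(CP_UTF8, 0, cmd_utf8, -1, cmd_w, wlen);
    /* cmd_w becomes the lpCommandLine argument to CreateProcessW, so
     * nothing is forced through the system code page on the way */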
While testing the new command-line handling, I tried actually using
the option that prints the PGP Master Key fingerprints for the first
time in a long time, and saw this:
    These are the fingerprints of the PuTTY PGP Master Keys. They
    can
    be used to establish a trust path from this executable to another
    one. See the manual for more information.
because Windows MessageBox() had done its own wrapping on the text, at
a slightly narrower width than the one implied by the hard newlines in
the input string. Removed the mid-paragraph newlines, so that
Windows's wrapping will be the only wrapping.
Printing the PGP fingerprints is not a failure outcome, so we
shouldn't exit(1) afterwards. The user asked for some information; we
printed it; nothing went wrong. Mission successful, so exit(0)!
I noticed this because it was sitting right next to some of the
usage() calls modified in the previous commit. Those also had the
misfeature of exiting with failure after successfully printing the
help, possibly due to confusion arising from the way that usage() was
_sometimes_ printed on error as well. But pgp_fingerprints() has no
such excuse. That one's just silly.
In the course of debugging the command-line argument refactoring in
previous commits, I found I wasn't quite sure whether PSCP thought I'd
given it too many arguments, or too few, because it didn't print an
error message saying which: it just printed its giant usage message.
Over the last few years I've come to the belief that this is Just
Wrong anyway. Printing the whole of a giant help message should only
be done when the user asked for it: otherwise, print a short and
to-the-point error, and maybe _suggest_ how to get help, but scrolling
everything else off the user's screen is not a good response to a
typo. I wrote this thought up more fully last year:
https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/stop-helping/
So, time to practise what I preach! The PuTTY tools now follow the
'Stop helping!' principle. You can get full help by saying --help.
Also, when we do print the help, we now exit(0) rather than exit(1),
because there's no reason to report failure: we successfully did what
the user asked us for.
Converting a CmdlineArg straight to a Filename allows us to make the
filename out of the wide-character version of the string on Windows.
So now filenames specified on the command line should generally be
able to handle pathnames containing Unicode characters not in the
system code page.
This change also involves making some char pointers _into_ Filename
structs where they weren't previously: for example, the
'openssh_config_file' variable in Windows Pageant's WinMain().
This begins the process of enabling our Windows applications to handle
Unicode characters on their command lines which don't fit in the
system code page.
Instead of passing plain strings to cmdline_process_param, we now pass
a partially opaque and platform-specific thing called a CmdlineArg.
This has a method that extracts the argument word as a default-encoded
string, and another one that tries to extract it as UTF-8 (though it
may fail if the UTF-8 isn't available).
On Windows, the command line is now constructed by calling
split_into_argv_w on the Unicode command line returned by
GetCommandLineW(), and the UTF-8 method returns text converted
directly from that wide-character form, not going via the system code
page. So it _can_ include UTF-8 characters that wouldn't have
round-tripped via CP_ACP.
This commit introduces the abstraction and switches over the
cross-platform and Windows argv-handling code to use it, with minimal
functional change. Nothing yet tries to call cmdline_arg_get_utf8().
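As a sketch of the shape of the interface (cmdline_arg_get_utf8() is
the name mentioned above; the other names here are illustrative):

    typedef struct CmdlineArg CmdlineArg;  /* opaque, platform-specific */

    /* The argument in the platform's default encoding; always available. */
    const char *cmdline_arg_to_str(CmdlineArg *arg);

    /* The argument as UTF-8, or NULL if no lossless UTF-8 form is
     * available on this platform. */
    const char *cmdline_arg_get_utf8(CmdlineArg *arg);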
I say 'cross-platform and Windows' because on the Unix side there's
still a lot of use of plain old argv which I haven't converted. That
would be a much larger project, and isn't currently needed: the
_current_ aim of this abstraction is to get the right things to happen
relating to Unicode on Windows, so for code that doesn't run on
Windows anyway, it's not adding value. (Also there's a tension with
GTK, which wants to talk to standard argv and extract arguments _it_
knows about, so at the very least we'd have to let it munge argv
before importing it into this new system.)
Even though I wrote the split_into_argv functions, I keep forgetting
their semantics. In
particular I can never quite remember off the top of my head whether
they modify their input command line, or allocate it fresh. To make
that clearer, I've made it a const parameter.
(That means the output argstart pointers have the same awkward const
-> mutable conversion as strchr - but then, strchr is itself precedent
for that being the usual way to not-quite-handle that annoyance in C!)
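In other words, the declarations now have roughly this shape (a
sketch; the real prototypes may differ in detail):

    /* cmdline is read-only; the optional argstart pointers refer back
     * into it, hence the strchr-style const-to-mutable caveat. */
    void split_into_argv(const char *cmdline,
                         int *argc, char ***argv, char ***argstart);
    void split_into_argv_w(const wchar_t *cmdline,
                           int *argc, wchar_t ***argv, wchar_t ***argstart);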
There's no difficulty with implementing these, on either platform.
Windows has native Unicode support for its edit boxes: we can set and
retrieve the text as a wide-character string, and then converting it
to and from UTF-8 is easy. And GTK has specified its edit boxes as
being UTF-8 all along, no matter what the system locale.
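On the Windows side, the core of it is just the usual wide-character
dialog calls plus one conversion, roughly like this (the control id
and buffer sizes are illustrative):

    wchar_t wtext[512];
    GetDlgItemTextW(hwnd, IDC_EDITBOX, wtext, 512);
    char utf8[1024];
    WideCharToMultiByte(CP_UTF8, 0, wtext, -1,
                        utf8, sizeof(utf8), NULL, NULL);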
The previous mb_to_wc and wc_to_mb had horrible and also buggy APIs.
This commit introduces a fresh pair of functions to replace them,
which generate output by writing to a BinarySink. So it's now up to
the caller to decide whether it wants the output written to a
fixed-size buffer with overflow checking (via buffer_sink), or
dynamically allocated, or even written directly to some other output
channel.
Nothing uses the new functions yet. I plan to migrate things over in
upcoming commits.
What was wrong with the old APIs: they had that awkward undocumented
Windows-specific 'flags' parameter that I described in the previous
commit and took out of the dup_X_to_Y wrappers. But much worse, the
semantics for buffer overflow were not just undocumented but actually
inconsistent. dup_wc_to_mb() in utils assumed that the underlying
wc_to_mb would fill the buffer nearly full and return the size of data
it wrote. In fact, this was untrue in the case where wc_to_mb called
WideCharToMultiByte: that returns straight-up failure, setting the
Windows error code to ERROR_INSUFFICIENT_BUFFER. It _does_ partially
fill the output buffer, but doesn't tell you how much it wrote!
What's wrong with the new API: it's a bit awkward to write a sequence
of wchar_t in native byte order to a byte-oriented BinarySink, so
people using put_mb_to_wc directly have to do some annoying pointer
casting. But I think that's less horrible than the previous APIs.
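For example, a caller wanting a dynamically allocated result ends up
doing something like this (the parameter order of put_mb_to_wc() here
is a guess, not the real signature):

    strbuf *sb = strbuf_new();
    put_mb_to_wc(BinarySink_UPCAST(sb), CP_UTF8, input, strlen(input));
    /* the strbuf holds wchar_t values in native byte order, so the
     * caller does the slightly annoying cast itself */
    const wchar_t *wide = (const wchar_t *)sb->s;
    size_t wide_len = sb->len / sizeof(wchar_t);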
Another change: in the new API for wc_to_mb, defchr can be "", but not
NULL.
This parameter was undocumented, and Windows-specific: its semantics
date from before PuTTY was cross-platform, and are "Pass this flags
parameter straight through to the Win32 API's conversion functions".
So in Windows platform code you can pass flags like MB_USEGLYPHCHARS,
but in cross-platform code, you dare not pass anything nonzero at all
because the Unix frontend won't recognise it (or, likely, even
compile).
I've kept the flag for now in the underlying mb_to_wc / wc_to_mb
functions. Partly that's because there's one place in the Windows code
where the parameter _is_ used; mostly, it's because I'm about to
replace those functions anyway, so there's no point in editing all the
call sites twice.
The code in the 'if (IsZoomed)' statement in reset_window() was
failing to take account of the user-configured gap between the text
and the window edge, so that the requested border was lost. Now it
does take that into account.
In this commit, this change of behaviour applies both to a normally
maximised window (with the window frame still visible round the edge)
and to a full-screen window (nothing visible on the whole monitor
except PuTTY).
I'm not 100% sure whether that's the right behaviour: perhaps the
purpose of this configurable border is to space the text away from the
window furniture, so that there's no need for it if there isn't any
furniture? But on the other hand, one thing _I_ use this border for is
to make space round the edge of a terminal window for the green border
Zoom superimposes when sharing the window. And that's a use case that
would still make sense when the window is full-screened.
These actions were already available in the Ctrl + right-click context
menu (or just right-click, if you shifted the mouse-button actions
into Windows mode). But a user might not know about Ctrl + right-click,
and only know to look in the system menu. So it makes them easier to
find if they're in that menu too. Also, I don't really see any reason
why the two menus _should_ be different.
Thanks to Will Bond for spotting this. Using STD_INPUT_HANDLE as an
output handle apparently works often enough that I didn't immediately
notice the mistake, but it's not what I _meant_ to do. Obviously we
should have used STD_OUTPUT_HANDLE instead.
I've had more than one conversation recently in which users have
mentioned finding this mode inconvenient. I don't know whether any of
them would want to turn it off completely, but it seems likely that
_somebody_ will, sooner or later. So here's an option to do that.
In the previous few commits I noticed some repeated work in the form
of pointless empty implementations of Plug's log method, plus some
existing (and some new) empty cases of Socket's endpoint_info. As a
cleanup, I'm replacing as many as I can find with uses of a central
null implementation in the stubs directory.
This enables plug_log to run query methods on the socket in order to
find out useful information to log. I don't expect it's sensible to do
anything else with it.
The peer_info method in the Socket vtable is replaced with
endpoint_info, which takes a boolean indicating which end you're
asking about.
sk_peer_info still exists, as a wrapper on the new sk_endpoint_info.
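So the relationship is just (a sketch: the struct name and the sense
of the boolean are assumptions, not taken from the real code):

    SocketEndpointInfo *sk_peer_info(Socket *s)
    {
        return sk_endpoint_info(s, true);   /* true = the remote end */
    }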
I'm preparing to be able to ask about the other end of the connection
too, so the first step is to give this data structure a neutral name
that can refer to either. No functional change yet.
As I mentioned in the previous commit, this merge was nontrivial
enough that it wasn't a good idea for the release checklist to suggest
doing it in a hurry. And indeed, now I look at it again this morning,
there are mistakes: a memory leak of ConsoleIO on the abort path from
both affected functions, and a missing space in one of the prompt
messages. Now both fixed.
This involved a trivial merge conflict fix in terminal.c because of
the way the cherry-pick 73b41feba5 differed from its original
bdbd5f429c.
But a more significant rework was needed in windows/console.c, because
the updates to confirm_weak_* conflicted with the changes on main to
abstract out the ConsoleIO system.
These handy wrappers on the verbose underlying Win32 registry API have
to lose some expressiveness, and one thing they lost was the ability
to open a registry key without asking for both read and write access.
This meant they couldn't be used for accessing keys not owned by the
calling user.
So far, I've only used them for accessing PuTTY's own saved data,
which means that hasn't been a problem. But I want to use them
elsewhere in an upcoming commit, so I need to fix that.
The obvious thing would be to change the meaning of the existing
'create' boolean flag so that if it's false, we also don't request
write access. The rationale would be that you're either reading or
writing, and if you're writing you want both RW access and to create
keys that don't already exist. But in fact that's not true: you do
want to set create==false and have write access in the case where
you're _deleting_ things from the key (or the whole key). So we really
do need three ways to call the wrapper function.
Rather than add another boolean field to every call site or mess about
with an 'access type' enum, I've taken an in-between route: the
underlying open_regkey_fn *function* takes a 'create' and a 'write'
flag, but at call sites, it's wrapped with a macro anyway (to append
NULL to the variadic argument list), so I've just made three macros
whose names request different access. That makes call sites marginally
_less_ verbose, while still making it clear what kind of access each
call site is asking for.
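For illustration, the arrangement is roughly this (the macro names and
exact argument order here are illustrative, not the real ones):

    /* Each wrapper fixes the (create, write) pair and appends the
     * terminating NULL to the variadic path components. */
    #define open_regkey_ro(...) \
        open_regkey_fn(false, false, __VA_ARGS__, (const char *)NULL)
    #define open_regkey_rw(...) \
        open_regkey_fn(false, true, __VA_ARGS__, (const char *)NULL)
    #define open_regkey_rw_or_create(...) \
        open_regkey_fn(true, true, __VA_ARGS__, (const char *)NULL)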
(cherry picked from commit 7339e00f4a)
This centralises the messages for weak crypto algorithms (general, and
host keys in particular, the latter including a list of all the other
available host key types) into ssh/common.c, in much the same way as
we previously did for ordinary host key warnings.
The reason is the same too: I'm about to want to vary the text in one
of those dialog boxes, so it's convenient to start by putting it
somewhere that I can modify just once.
This aims to be a reasonably exhaustive test of what happens if you
set Conf values to various things, and then save your session, and
find out what ends up in the storage. Or vice versa.
Currently, the test program is written to match the existing
behaviour. The idea is that I can refactor the code that does the
loading and saving, and if this test still passes, I've probably done
it right.
However, in the long term, this test will be a liability: it's yet
another place you have to add every new config option. So my plan is
to get rid of it again once the refactorings I'm planning are
finished.
Or rather, I'll get rid of _that_ part of its functionality. I also
suspect I'll have added new kinds of consistency check by then, which
won't be a liability in the same way, and which I'll want to keep.
FontSpec is completely different per platform; not only is the
structure type different, not only are the behind-the-scenes
implementations of copy and free functions different, but even the API
of the constructor function is different. Cross-platform code can't
construct a FontSpec at all. This makes it hard to write a
cross-platform test program that works with them.
So, as a nasty bodge, I'll allow test programs to #define
SUPERSEDE_FONTSPEC_FOR_TESTING before including putty.h. Then they can
provide their own definition of FontSpec, and they also take
responsibility for superseding all the other functions that work with
one.
A user reports, _just_ in time to make the 0.79 release, that changes
in the Windows port of OpenSSH from 8.9.x have made it unhappy with
the use of \ as a path separator in the 'IdentityAgent' config
directive. Switch to /, which is also accepted by earlier versions, so
it should work everywhere.
The pathname of Pageant's named pipe includes the name of the user
running it. And Windows usernames are allowed to have spaces in! So
the pipe pathname may also have a space, in which case Windows OpenSSH
will interpret the spacey pathname as an invalid first half followed
by a trailing garbage word.
A user reports that quoting the filename makes this work. Since double
quotes are an illegal Windows filename character, I think it should
therefore do no harm to quote it unconditionally, which is the easiest
fix.
CONF_bold_style is a pair of bit flags rather than an enum, so its
values aren't just BOLD_STYLE_FONT and BOLD_STYLE_COLOUR but also the
bitwise OR of them. (Hopefully not neither.)
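In other words, code reading it should test each bit rather than
compare for equality; roughly:

    int bs = conf_get_int(conf, CONF_bold_style);
    bool bold_affects_font   = (bs & BOLD_STYLE_FONT) != 0;
    bool bold_affects_colour = (bs & BOLD_STYLE_COLOUR) != 0;
    /* both may be set at once; neither being set is not expected */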
The Windows version of the Filename structure now contains three
versions of the pathname, in UTF-16, UTF-8 and the system code page.
Callers can use whichever is most convenient.
All uses of filenames for actually opening files now use the UTF-16
version, which means they can tolerate 'exotic' filenames, by which I
mean those including Unicode characters outside the host system's
CP_ACP default code page.
Other uses of Filename structures inside the 'windows' subdirectory do
something appropriate, e.g. when printing a filename inside a message
box or a console message, we use the UTF-8 version of the filename
with the UTF-8 version of the appropriate API.
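So, schematically, the Windows Filename now looks something like this
(field names here are illustrative, not necessarily the real ones):

    struct Filename {
        wchar_t *wpath;   /* UTF-16: what we pass to CreateFileW etc */
        char *utf8path;   /* UTF-8: for display via Unicode-aware APIs */
        char *cpath;      /* system code page (CP_ACP): legacy fallback */
    };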
There are three remaining pieces to full Unicode filename support:
One is that the cross-platform code has many calls to
filename_to_str(), embodying the assumption that a file name can be
reliably converted into the unspecified current character set; those
will all need changing in some way.
Another is that write_setting_filename(), in windows/storage.c, still
saves filenames to the Registry as an ordinary REG_SZ in the system
code page. So even if an exotic filename were stored in a Conf, that
Conf couldn't round-trip via the Registry and back without corrupting
that filename by coercing it back to a string that fits in CP_ACP and
therefore doesn't represent the same file. This can't be fixed without
a compatibility break in the storage format, and I don't want to make
a minimal change in that area: if we're going to break compatibility,
then we should break it good and hard (the Nanny Ogg principle), and
devise a completely fresh storage representation that fixes as many
other legacy problems as possible at the same time. So that's my plan,
not yet started.
The final point, much more obviously, is that we're still short of
methods to _construct_ any Filename structures using a Unicode input
string! It should now work to enter one in the GUI configurer (either
by manual text input or via the file selector), but it won't
round-trip through a save and load (as discussed above), and there's
still no way to specify one on the command line (the groundwork is
laid by commit 10e1ac7752 but not yet linked up).
But this is a start.