Now that we use gdk_window_create_similar_surface() to make our backing
surface, most of the code to handle scale factors is unnecessary and
indeed slightly harmful. gdk_window_create_similar_surface() creates a
surface suitable for use as backing for a window, so it already has the
scale factor applied. This means that it should be sized in nominal
pixels, we can paint to it in nominal pixels, and the fact that it has
the same extra resolution as the actual window is entirely transparent
to us.
Now the only reason we pay attention to the scale factor at all is to
detect changes and use them as a prompt to re-create the backing
surface.
CreatePixmap returns a Pixmap with undefined contents, and ImageText16
doesn't quite erase the whole rectangle covered by the text (and hence
the whole Pixmap). So to be on the safe side we should make sure to
erase the entire Pixmap before drawing the text.
Conveniently, ImageText16 ignores the function specified in the GC, so
we can set that to GXclear and avoid needing to change the GC
thereafter.
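
A minimal sketch of the trick, with hypothetical variable names:

    /* Erase the scratch Pixmap via GXclear, then draw the text.
     * XDrawImageString16 ignores the GC's function, so the GC never
     * needs switching back afterwards. */
    XGCValues gcv;
    gcv.function = GXclear;
    XChangeGC(disp, gc, GCFunction, &gcv);
    XFillRectangle(disp, pixmap, gc, 0, 0, width, height);
    XDrawImageString16(disp, pixmap, gc, 0, font_ascent, str, nchars);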
This is a fairly radical change of how X bitmap fonts are handled when
using Cairo for rendering. Before, we would download each glyph to the
client on first use and then composite those glyphs into the terminal's
backing surface. This worked pretty well when we were keeping an image
of the whole screen on the client anyway, but once I'd pushed all the
other Cairo rendering onto the X server, it meant that the character
bitmaps had to be repeatedly pushed to the X server.
The new arrangement just renders each string into a temporary Pixmap
using the usual X text-drawing calls and then asks Cairo to paste it
into the main backing Pixmap. It's tempting to draw the text straight
into the backing Pixmap, but that would require dealing directly with
X colour management. This way, we get to leave colours in the hands
of Cairo (and hence the Render extension).
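
The flow looks roughly like this (a sketch with assumed names,
consistent with the depth-1 alpha-mask approach described in later
commits):

    /* Draw the string into a depth-1 scratch Pixmap, wrap it as a
     * Cairo surface, and use it as an alpha mask, so the actual
     * colours come from the Cairo source. */
    Pixmap scratch = XCreatePixmap(disp, win, w, h, 1);
    /* ...erase it, then XDrawImageString16 the text into it... */
    cairo_surface_t *mask =
        cairo_xlib_surface_create_for_bitmap(disp, scratch, screen, w, h);
    cairo_set_source_rgb(cr, fg_r, fg_g, fg_b); /* colour via Render */
    cairo_mask_surface(cr, mask, x, y);
    cairo_surface_destroy(mask);
    XFreePixmap(disp, scratch);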
There are still fragments of the old system around. Those should go
in the next commit.
If gdk_window_create_similar_surface() isn't available, we now fall back
to cairo_surface_create_similar(). This is relevant only on GTK
versions between 2.0 and 2.22 with deprecated calls disabled.
By default, when we're asked to draw a GTK widget, GTK creates a
temporary surface (a Pixmap under X) and redirects our rendering into
that. Then it blits that into the actual window. This is silly for
PuTTY because all that PuTTY does to render its drawing area is to
blit into it from _another_ surface. So now PuTTY asks GTK not to do
that. According to the GTK documentation, GTK as of 3.10 has
completely restructured its drawing routines so that turning off
double-buffering actually makes things worse and slower, so we turn it
off only in GTK 2.
Still, this now means that painting text on the screen in GTK 2 causes
precisely one CopyArea operation, which is what we want.
This removes the case where we draw into a Cairo surface and then copy
the results into a GdkPixmap. Now, if we've got a GdkPixmap, we just
draw into it directly using Cairo. This vastly reduces the number of
CopyArea operations needed to draw on the screen.
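
In GTK2 terms the direct approach is simply (a sketch):

    /* A Cairo context can target any GdkDrawable, including a
     * GdkPixmap, so no intermediate surface or CopyArea is needed. */
    cairo_t *cr = gdk_cairo_create(GDK_DRAWABLE(pixmap));
    /* ...all drawing happens here... */
    cairo_destroy(cr);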
I just found cairo_paint() while wandering the Cairo documentation.
It fills the entire clip region with data from the current source,
which is precisely what draw_area() wants to do. This is simpler for us
than requesting the bounding rectangle of the clipping region and then
filling it, and as far as I can tell the clipping rectangle generally
covers the whole window anyway.
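
So draw_area() reduces to something like this (sketch, names
assumed):

    cairo_set_source_surface(cr, backing_surface, 0, 0);
    cairo_paint(cr);   /* fills the entire clip from the source */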
Rather than constructing a transformation matrix piece by piece (with
very branchy code), draw_stretch_before now just calls cairo_translate()
and cairo_scale() with values that are almost-obviously correct.
Also, rather than stashing and restoring the transformation matrix
ourselves, it seems simpler to use cairo_save() and cairo_restore().
That requires that draw_stretch_before() and draw_stretch_after() be
called strictly in pairs, but they are so that's OK.
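
Roughly (a sketch; parameter names assumed):

    static void draw_stretch_before(cairo_t *cr, double x, double y,
                                    double wfactor, double hfactor)
    {
        cairo_save(cr);
        /* Scale about (x, y): move it to the origin, scale, and
         * move it back. */
        cairo_translate(cr, x, y);
        cairo_scale(cr, wfactor, hfactor);
        cairo_translate(cr, -x, -y);
    }

    static void draw_stretch_after(cairo_t *cr)
    {
        cairo_restore(cr);   /* must pair with _before */
    }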
According to the X specs, WhitePixel and BlackPixel refer to permanent
entries in the default colourmap. This means that they're not
necessarily appropriate for use with a Drawable with a different depth
than the root window. When drawing to a Pixmap that will be used as a
1-bit alpha mask by Cairo, the correct values are simply 0
(transparent) and 1 (opaque).
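
That is (sketch):

    /* On a depth-1 Pixmap destined to be a Cairo alpha mask, use
     * literal pixel values, not the default colourmap's entries. */
    XSetForeground(disp, gc, 1);   /* opaque: glyph ink */
    XSetBackground(disp, gc, 0);   /* transparent: everything else */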
This commit fixes a problem that Simon observed when using an X bitmap
font with Cairo and making a line double-width or double-size. When
using Cairo, PuTTY implements double-width and double-size by just
asking Cairo to scale all its drawing operations. This works fine
with outline fonts, but when using a bitmap font the results are a bit
fuzzy. This appears to be because Cairo's default is to use bilinear
interpolation when scaling an image, which is fine for photos but not
so good for fonts.
In this commit, I decompose PuTTY's cairo_mask_surface() call into its
component parts so that I can set the mask pattern's filter to
CAIRO_FILTER_NEAREST before using it. That solves the problem, but it
suggests that maybe we should be caching the pattern rather than the
surface.
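
The decomposition is roughly this (a sketch, following the documented
equivalence of cairo_mask_surface()):

    cairo_pattern_t *pattern = cairo_pattern_create_for_surface(surf);
    cairo_pattern_set_filter(pattern, CAIRO_FILTER_NEAREST);
    cairo_matrix_t matrix;
    cairo_matrix_init_translate(&matrix, -x, -y);
    cairo_pattern_set_matrix(pattern, &matrix);
    cairo_mask(cr, pattern);  /* was cairo_mask_surface(cr, surf, x, y) */
    cairo_pattern_destroy(pattern);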
When using an X server-side font with Cairo rendering, PuTTY takes the
rather horrible approach of rendering each glyph it uses into a depth-1
Pixmap and then copying the result into a Cairo surface that it uses
every time it wants to display that glyph.
Heretofore, the conversion of the Pixmap into a Cairo surface was done
by downloading it using XGetImage() and then manually re-arranging the
bits into a suitable form for Cairo. But Cairo has a way of turning an
X Drawable (including a Pixmap) into a surface, and then it's just a
case of copying one surface to another using cairo_paint(). So that's
what PuTTY does now and the process is a little less unpleasant than it
was.
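
A sketch of the new conversion (names assumed):

    /* Wrap the depth-1 Pixmap in a Cairo surface and copy it with a
     * single paint, replacing XGetImage plus manual bit-shuffling. */
    cairo_surface_t *src =
        cairo_xlib_surface_create_for_bitmap(disp, pixmap, screen, w, h);
    cairo_t *cr = cairo_create(glyph_surface);
    cairo_set_source_surface(cr, src, 0, 0);
    cairo_paint(cr);
    cairo_destroy(cr);
    cairo_surface_destroy(src);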
The Cairo documentation is clear that cairo_set_antialias() only
affects shape drawing and not text rendering. To change
anti-aliasing settings for font rendering you need
cairo_font_options_set_antialias() instead. Therefore the comment
can be a bit more certain than just describing what Cairo "appears"
to do.
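
That is (sketch):

    /* Text antialiasing is controlled via font options, not
     * cairo_set_antialias(). */
    cairo_font_options_t *fopts = cairo_font_options_create();
    cairo_font_options_set_antialias(fopts, CAIRO_ANTIALIAS_NONE);
    cairo_set_font_options(cr, fopts);
    cairo_font_options_destroy(fopts);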
The terminal code doesn't yet do anything with the text other than feed
it to a debugging printf. The call uses UTF-8 and expects the terminal
to copy the string because that's compatible with
gtk_im_context_get_preedit_string().
GtkIMContext has focus_in and focus_out methods for telling it when the
corresponding widget gains or loses keyboard focus. It's not obvious to
me why these are necessary, but PuTTY now calls them when it sees
focus-in and focus-out events for the terminal window. Somehow, this
has caused Hangul input to start working in PuTTY. I can't yet
see what I'm typing for lack of proper preedit support, though.
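
The wiring is roughly this shape (a sketch; handler name
hypothetical):

    static gboolean focus_event(GtkWidget *widget, GdkEventFocus *event,
                                gpointer data)
    {
        GtkIMContext *imc = (GtkIMContext *)data;
        if (event->in)
            gtk_im_context_focus_in(imc);
        else
            gtk_im_context_focus_out(imc);
        return FALSE;   /* let other handlers see the event too */
    }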
"server:fixed" was a good default when GTK1 was common and non-X11
environments were rare. Now it's the other way round - Wayland is very
common and the GTK1 configuration of PuTTY is legacy - so it's time to
make the default GTK font a client-side one.
Of course, anyone with an existing saved session (including Default
Settings) won't be affected by this change; it only helps new users
without an existing ~/.putty at all. That's why we _also_ need the
fallbacks introduced by the previous couple of commits. But we can at
least start making it sensible for new users.
(I considered keeping the #if, and switching it round so that it tests
GTK_CHECK_VERSION(2,0,0) rather than NOT_X_WINDOWS, i.e. selects the
client-side default whenever client-side fonts _are_ available,
instead of only when server-side fonts _aren't_. That way, in GTK1
builds, the Conf default font would _still_ be "server:fixed". But I
think this is firstly too marginal to worry about, and secondly, it's
more futureproof to make the default the same everywhere: if anyone
still stuck on a GTK1 environment later manages to update it, then
their saved settings are less likely to have had a legacy thing
written into them. And the GTK1 build will still run out of the box
because of the last-ditch fallback mechanism I've just added.)
I'd forgotten that I'd already chosen a default client-side font, for
NOT_X_WINDOWS builds. I should have made the two defaults match! Now
both default font names are defined in the header file.
If the user's choice of fonts can't be instantiated during initial
terminal-window setup, then we now automatically try two fallback
options, "client:Monospace 10" and "server:fixed", and only give a
fatal error if _no_ option allows us to open a terminal window.
Previously, on Wayland, PuTTY and pterm with default configuration
would completely fail to open a terminal window at all, because our
default font configuration is still the trad X11 "server:fixed", and
on Wayland, X11-style server-side bitmap fonts don't exist at all.
Conversely, in the GTK1 build of PuTTY which we're still supporting,
_client-side_ fonts aren't supported, so if a user had configured one
in their normal PuTTY configuration, then the GTK1 version would
similarly fail to launch.
Now both of those cases should work, because the fallbacks include a
client-side font _and_ a server-side one, and I hope that any usable
Pango system will make "Monospace" map to _some_ locally available
font, and that any remotely sensible X server has 'fixed'.
I think it would be even better if there was a mechanism for the Conf
to specify a fallback list of fonts. For example, this might be
specified via a new multifont prefix along the lines of
    choices:client:Monospace 10:server:fixed
with the semantics that the "choices:" prefix means that the rest of
the string is split up at every other colon to find a list of fonts to
try to make. Then we could not only set PuTTY's default to a list of
possibilities likely to find a usable font everywhere, but also, users
could configure their own list of preferred fallbacks.
But I haven't thought of a good answer to the design question of what
should happen if a Conf font setting looks like that and the user
triggers the GUI font selector! (Also, you'd need to figure out what
happened if a 'choices:' string had another 'choices' in it...)
DIT, for 'Data-Independent Timing', is a bit you can set in the
processor state on sufficiently new Arm CPUs, which promises that a
long list of instructions will deliberately avoid varying their timing
based on the input register values. Just what you want for keeping
your constant-time crypto primitives constant-time.
As far as I'm aware, no CPU has _yet_ implemented any data-dependent
optimisations, so DIT is a safety precaution against them doing so in
future. It would be embarrassing to be caught without it if a future
CPU does do that, so we now turn on DIT in the PuTTY process state.
I've put a call to the new enable_dit() function at the start of every
main() and WinMain() belonging to a program that might do
cryptography (even testcrypt, in case someone uses it for something!),
and in case I missed one there, also added a second call at the first
moment that any cryptography-using part of the code looks as if it
might become active: when an instance of the SSH protocol object is
configured, when the system PRNG is initialised, and when selecting
any cryptographic authentication protocol in an HTTP or SOCKS proxy
connection. With any luck those precautions between them should ensure
it's on whenever we need it.
Arm's own recommendation is that you should carefully choose the
granularity at which you enable and disable DIT: there's a potential
time cost to turning it on and off (I'm not sure what, but plausibly
something of the order of a pipeline flush), so it's a performance hit
to do it _inside_ each individual crypto function, but if CPUs start
supporting significant data-dependent optimisation in future, then it
will also become a noticeable performance hit to just leave it on
across the whole process. So you'd like to do it somewhere in the
middle: for example, you might turn on DIT once around the whole
process of verifying and decrypting an SSH packet, instead of once for
decryption and once for MAC.
With all respect to that recommendation as a strategy for maximum
performance, I'm not following it here. I turn on DIT at the start of
the PuTTY process, and then leave it on. Rationale:
1. PuTTY is not otherwise a performance-critical application: it's
not likely to max out your CPU for any purpose _other_ than
cryptography. The most CPU-intensive non-cryptographic thing I can
imagine a PuTTY process doing is the complicated computation of
font rendering in the terminal, and that will normally be cached
(you don't recompute each glyph from its outline and hints for
every time you display it).
2. I think a bigger risk lies in accidental side channels from having
DIT turned off when it should have been on. I can imagine lots of
causes for that. Missing a crypto operation in some unswept corner
of the code; confusing control flow (like my coroutine macros)
jumping with DIT clear into the middle of a region of code that
expected DIT to have been set at the beginning; having a reference
counter of DIT requests and getting it out of sync.
In a more sophisticated programming language, it might be possible to
avoid the risk in #2 by cleverness with the type system. For example,
in Rust, you could have a zero-sized type that acts as a proof token
for DIT being enabled (it would be constructed by a function that also
sets DIT, have a Drop implementation that clears DIT, and be !Send so
you couldn't use it in a thread other than the one where DIT was set),
and then you could require all the actual crypto functions to take a
DitToken as an extra parameter, at zero runtime cost. Then "oops I
forgot to set DIT around this piece of crypto" would become a compile
error. Even so, you'd have to take some care with coroutine-structured
code (what happens if a Rust async function yields while holding a DIT
token?) and with nesting (if you have two DIT tokens, you don't want
dropping the inner one to clear DIT while the outer one is still there
to wrongly convince callees that it's set). Maybe in Rust you could
get this all to work reliably. But not in C!
DIT is an optional feature of the Arm architecture, so we must first
test to see if it's supported. This is done the same way as we already
do for the various Arm crypto accelerators: on ELF-based systems,
check the appropriate bit in the 'hwcap' words in the ELF aux vector;
on Mac, look for an appropriate sysctl flag.
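
For the ELF case, detection and enablement might look like this (a
sketch: HWCAP_DIT is bit 24 of AT_HWCAP on AArch64 Linux, and older
assemblers may need the encoded system-register name instead of
'dit'):

    #include <sys/auxv.h>
    #ifndef HWCAP_DIT
    #define HWCAP_DIT (1UL << 24)
    #endif

    void enable_dit(void)
    {
        if (!(getauxval(AT_HWCAP) & HWCAP_DIT))
            return;                 /* no DIT on this CPU */
        asm volatile("msr dit, %0" :: "r"(1UL));
    }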
On Windows I don't know of a way to query the DIT feature, _or_ of a
way to write the necessary enabling instruction in an MSVC-compatible
way. I've _heard_ that it might not be necessary, because Windows
might just turn on DIT unconditionally and leave it on, in an even
more extreme version of my own strategy. I don't have a source for
that - I heard it by word of mouth - but I _hope_ it's true, because
that would suit me very well! Certainly I can't write code to enable
DIT without knowing (a) how to do it, (b) how to know if it's safe.
Nonetheless, I've put the enable_dit() call in all the right places in
the Windows main programs as well as the Unix and cross-platform code,
so that if I later find out that I _can_ put in an explicit enable of
DIT in some way, I'll only have to arrange to set HAVE_ARM_DIT and
compile the enable_dit() function appropriately.
When running on Wayland, gdk_display_get_name() can return things like
"wayland-0" rather than valid X display names. PuTTY nonetheless
treated them as X display names, meaning that when running under
Wayland, pterm would set DISPLAY to "wayland-0" in subprocesses, and
PuTTY's X forwarding wouldn't work properly.
To fix this, places that call gdk_display_get_name() now only do so on
displays for which GDK_IS_X_DISPLAY() is true. As with
GDK_IS_X_WINDOW(), this requires some backward-compatibility for GDK
versions where everything is implicitly running on X.
To make this work usefully, pterm now also won't unset DISPLAY if it
can't get an X display name and instead will pass through whatever value
of DISPLAY it received. I think that's better behaviour anyway.
There are two separate parts of PuTTY that call gdk_display_get_name().
platform_get_x_display() in unix/putty.c is used for X forwarding, while
gtk_seat_get_x_display() in unix/window.c is used for setting DISPLAY
and recording in utmp. I've updated both of them.
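
The guard both call sites now apply looks roughly like this (a
sketch; function name hypothetical, GDK_IS_X11_DISPLAY being GDK3's
spelling of the macro):

    #ifndef GDK_IS_X11_DISPLAY
    #define GDK_IS_X11_DISPLAY(d) (1)   /* pre-GDK3: always X */
    #endif

    static const char *get_x_display_name(GdkDisplay *disp)
    {
        if (!GDK_IS_X11_DISPLAY(disp))
            return NULL;    /* Wayland etc: no valid X display name */
        return gdk_display_get_name(disp);
    }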
This centralises into windows/utils/request_file.c all of the code
that deals with the OPENFILENAME structure, and decides centrally
whether to use the Unicode or ANSI version of that structure and its
associated APIs. Now the output of any request_file function is our
own 'Filename' abstract type, instead of a raw char or wchar_t buffer,
which means that _any_ file dialog can produce a full Unicode filename
if the user wants to select one - and yet, in the w32old build, they
all uniformly fall back to the ANSI version, which is the only one
that works at all pre-NT.
A side effect: I've turned the FILTER_FOO_FILES family of definitions
from platform-specific #defines into a reasonably sensible enum. This
didn't affect the GTK side of things, because I'd never got round to
figuring out how to filter a file dialog down to a subset of files in
GTK, and still haven't. So I've just moved the existing FIXME comment
from platform.h to dialog.c.
I only observed this in the GTK1 build, but I don't know for sure it
can't happen in other situations, so there's no reason not to be
careful.
What seems to happen is that when the user clicks Cancel on the Change
Settings dialog box, we call gtk_widget_destroy on the window, which
emits the "destroy" signal on the window, our handler for which frees
the whole dlgparam. But _then_ GTK goes through and cleans up all the
sub-widgets of the dialog box, and some of those generate extra
events. In particular, destroying a list box is done by first deleting
all the list entries - and if one of those is selected, the list box's
selection changes, triggering an event which calls our callback that
tries to look up the control in the dlgparam we just freed.
My simple workaround is to defer actually freeing the dlgparam, via a
toplevel callback. Then it's still lying around empty while all those
random events are firing.
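
The shape of the workaround (a sketch, assuming PuTTY's
queue_toplevel_callback mechanism and a free function like this):

    static void dlgparam_free_cb(void *ctx)
    {
        struct dlgparam *dp = (struct dlgparam *)ctx;
        /* ...free dp's contents... */
        sfree(dp);
    }

    /* in the window's "destroy" handler, instead of freeing now: */
    queue_toplevel_callback(dlgparam_free_cb, dp);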
In commit 20f818af1201277 I renamed a lot of variables called 'ret',
by using clang-rename to do the heavy lifting. But clang-rename only
saw instances of the variable name that the _compiler_ saw. The ones
that never got through the preprocessor weren't renamed, and I didn't
eyeball the patch hard enough to find instances in the #else branch of
ifdefs that should also have been renamed.
Thanks to Lars Wendler for the report and the fixes.
If the CmdlineArgList is freed before cmdline_run_saved, then the
latter will use its saved pointers to arguments in the freed list.
Affects uses of PuTTY without a saved session (like 'putty -ssh
foohost'), and a very small number of pterm options, in particular
-sessionlog.
This is the simplest possible fix: just remove the free completely,
so that the parsed command-line arguments leak. There's at most one
instance of them per process, so it doesn't matter.
The immediate usefulness of this is in pterm.exe: when the user uses
-e to specify a command to run in the pterm, we retrieve the command
in Unicode, store it in CONF_remote_cmd as UTF-8, and then in conpty.c
we can extract it in the same form and convert it back to Unicode to
pass losslessly to CreateProcessW. So now non-ACP Unicode works in
that part of the pterm command line.
That's not a failure outcome. The user asked for some information; we
printed it; nothing went wrong. Mission successful, so exit(0)!
I noticed this because it was sitting right next to some of the
usage() calls modified in the previous commit. Those also had the
misfeature of exiting with failure after successfully printing the
help, possibly due to confusion arising from the way that usage() was
_sometimes_ printed on error as well. But pgp_fingerprints() has no
such excuse. That one's just silly.
In the course of debugging the command-line argument refactoring in
previous commits, I found I wasn't quite sure whether PSCP thought I'd
given it too many arguments, or too few, because it didn't print an
error message saying which: it just printed its giant usage message.
Over the last few years I've come to the belief that this is Just
Wrong anyway. Printing the whole of a giant help message should only
be done when the user asked for it: otherwise, print a short and
to-the-point error, and maybe _suggest_ how to get help, but scrolling
everything else off the user's screen is not a good response to a
typo. I wrote this thought up more fully last year:
https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/stop-helping/
So, time to practise what I preach! The PuTTY tools now follow the
'Stop helping!' principle. You can get full help by saying --help.
Also, when we do print the help, we now exit(0) rather than exit(1),
because there's no reason to report failure: we successfully did what
the user asked us for.
Converting a CmdlineArg straight to a Filename allows us to make the
filename out of the wide-character version of the string on Windows.
So now filenames specified on the command line should generally be
able to handle pathnames containing Unicode characters not in the
system code page.
This change also involves making some char pointers _into_ Filename
structs where they weren't previously: for example, the
'openssh_config_file' variable in Windows Pageant's WinMain().
This is the pathfinding change that proves it's possible for _one_
Conf setting to become Unicode-capable.
That seems like quite a small reward for all the refactoring in the
previous patches this week! But changing over one configuration
setting is enough to get started with: once all the bugs are out of
this one, we can try switching over some more.
Changing the type to CONF_TYPE_STR_AMBI is enough by itself to make
the configuration dialog box write it into Conf as UTF-8, because
conf_editbox_handler automatically checks whether that possibility is
available. However, setting the same Conf entry from the command line
isn't automatic: I had to add code in the handler for the -l
command-line option in cmdline.c.
This commit also doesn't yet handle the _other_ way to specify a
username on the command line: including it as part of the hostname
argument via "putty user@host" or similar. That's more difficult,
because it also requires deciding what to do about UTF-8 in the actual
hostname.
(That looks as if it ought to be possible: Windows should be able to
handle looking up Unicode hostnames if you use GetAddrInfoW() in place
of getaddrinfo(). But plumbing it through everything in between
cmdline.c and windows/network.c is a bigger job than I'm prepared to
do in this proof-of-concept commit.)
This begins the process of enabling our Windows applications to handle
Unicode characters on their command lines which don't fit in the
system code page.
Instead of passing plain strings to cmdline_process_param, we now pass
a partially opaque and platform-specific thing called a CmdlineArg.
This has a method that extracts the argument word as a default-encoded
string, and another one that tries to extract it as UTF-8 (though it
may fail if the UTF-8 isn't available).
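
The abstraction's shape, roughly (declarations assumed, not copied
from the header):

    typedef struct CmdlineArg CmdlineArg;

    /* The argument in the default encoding: always available. */
    const char *cmdline_arg_get_str(CmdlineArg *arg);

    /* The argument as UTF-8, or NULL if that isn't available. */
    const char *cmdline_arg_get_utf8(CmdlineArg *arg);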
On Windows, the command line is now constructed by calling
split_into_argv_w on the Unicode command line returned by
GetCommandLineW(), and the UTF-8 method returns text converted
directly from that wide-character form, not going via the system code
page. So it _can_ include UTF-8 characters that wouldn't have
round-tripped via CP_ACP.
This commit introduces the abstraction and switches over the
cross-platform and Windows argv-handling code to use it, with minimal
functional change. Nothing yet tries to call cmdline_arg_get_utf8().
I say 'cross-platform and Windows' because on the Unix side there's
still a lot of use of plain old argv which I haven't converted. That
would be a much larger project, and isn't currently needed: the
_current_ aim of this abstraction is to get the right things to happen
relating to Unicode on Windows, so for code that doesn't run on
Windows anyway, it's not adding value. (Also there's a tension with
GTK, which wants to talk to standard argv and extract arguments _it_
knows about, so at the very least we'd have to let it munge argv
before importing it into this new system.)
There's no difficulty with implementing these, on either platform.
Windows has native Unicode support for its edit boxes: we can set and
retrieve the text as a wide-character string, and then converting it
to and from UTF-8 is easy. And GTK has specified its edit boxes as
being UTF-8 all along, no matter what the system locale.
This begins the process of making PuTTY more able to handle Unicode
strings as a first-class type in its configuration. One of the new
types, CONF_TYPE_UTF8, looks physically just like CONF_TYPE_STR but
the semantics are that it's definitely encoded in UTF-8, instead of
'shrug, whatever the system locale's encoding is'.
Unfortunately, we can't yet switch over any Conf items to having that
type, because our data representations in saved configuration (both on
Unix and Windows) store char strings in the system encoding. So we'll
have to change that representation at the same time, which risks
breaking backwards compatibility with old PuTTYs reading the same
configuration.
So the other new type, CONF_TYPE_STR_AMBI, is intended as a
transitional form, recording a configuration setting that _might_ be
explicitly UTF-8 or might have the legacy 'shrug, whatever' semantics,
depending on where we got it from.
My general migration plan is that first I _enable_ Unicode support in
a Conf item, by turning it into STR_AMBI; the Unicode version of the
string (if any) is saved in a new location, and a best-effort
local-charset version is saved where it's always been. That way new
PuTTY can read the Unicode version, and old PuTTY reading that
configuration will behave no worse than it would have done already.
It would be nice to think that in the far future we've migrated
everything to STR_AMBI and can move them all to mandatory UTF-8,
obsoleting the old configuration. I think it's more likely we'll never
get there. But at least _new_ Conf items, with no backwards
compatibility requirement in the first place, can be CONF_TYPE_UTF8
where appropriate.
(In conf_get_str_ambi(), I considered making it mandatory via assert()
to pass the 'utf8' output pointer as non-NULL, to defend against lazy
adaptation of existing code by just changing the function call. But in
fact I think there's a legitimate use case for not caring if the
output is UTF-8 or not, because some of the existing SSH code
currently just shoves strings like usernames directly on to the wire
whether they're in the right encoding or not; so if you want to do the
correct UTF-8 thing where possible and preserve legacy behaviour if
not, then treating both classes of string the same _is_ the right
thing to do.)
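
Usage might look like this (a sketch; the exact signature is
assumed):

    bool utf8;
    const char *username = conf_get_str_ambi(conf, CONF_username, &utf8);
    /* utf8 reports whether the string is known to be UTF-8; passing
     * NULL instead means the caller doesn't care, per the rationale
     * above. */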
This also requires linking the Unicode support into many Unix
applications that hadn't previously needed it.
The previous mb_to_wc and wc_to_mb had horrible and also buggy APIs.
This commit introduces a fresh pair of functions to replace them,
which generate output by writing to a BinarySink. So it's now up to
the caller to decide whether it wants the output written to a
fixed-size buffer with overflow checking (via buffer_sink), or
dynamically allocated, or even written directly to some other output
channel.
Nothing uses the new functions yet. I plan to migrate things over in
upcoming commits.
What was wrong with the old APIs: they had that awkward undocumented
Windows-specific 'flags' parameter that I described in the previous
commit and took out of the dup_X_to_Y wrappers. But much worse, the
semantics for buffer overflow were not just undocumented but actually
inconsistent. dup_wc_to_mb() in utils assumed that the underlying
wc_to_mb would fill the buffer nearly full and return the size of data
it wrote. In fact, this was untrue in the case where wc_to_mb called
WideCharToMultiByte: that returns straight-up failure, setting the
Windows error code to ERROR_INSUFFICIENT_BUFFER. It _does_ partially
fill the output buffer, but doesn't tell you how much it wrote!
What's wrong with the new API: it's a bit awkward to write a sequence
of wchar_t in native byte order to a byte-oriented BinarySink, so
people using put_mb_to_wc directly have to do some annoying pointer
casting. But I think that's less horrible than the previous APIs.
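
A sketch of the new calling convention (signatures assumed, not
copied from the source):

    /* Convert into a fixed-size buffer with overflow checking. The
     * sink is byte-oriented, so the wchar_t buffer is presented to
     * it as raw bytes. */
    wchar_t wbuf[256];
    buffer_sink bs;
    buffer_sink_init(&bs, wbuf, sizeof(wbuf));
    put_mb_to_wc(BinarySink_UPCAST(&bs), codepage, input, inlen);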
Another change: in the new API for wc_to_mb, defchr can be "", but not
NULL.
This parameter was undocumented, and Windows-specific: its semantics
date from before PuTTY was cross-platform, and are "Pass this flags
parameter straight through to the Win32 API's conversion functions".
So in Windows platform code you can pass flags like MB_USEGLYPHCHARS,
but in cross-platform code, you dare not pass anything nonzero at all
because the Unix frontend won't recognise it (or, likely, even
compile).
I've kept the flag for now in the underlying mb_to_wc / wc_to_mb
functions. Partly that's because there's one place in the Windows code
where the parameter _is_ used; mostly, it's because I'm about to
replace those functions anyway, so there's no point in editing all the
call sites twice.
This is a small refinement of my own to Marco Ricci's new mode
introduced by the previous commit. If Pageant is being run by a parent
process intending to make requests to it, then it's probably put a
pipe on Pageant's stdout, and will be reading from that pipe to
retrieve the environment setup commands. So it needs to know when it's
read enough.
Closing stdout immediately makes this as easy as possible, freeing the
parent process of the need to count lines of output (and also know how
many lines to expect): it can simply read until there's no more data.
This also means there's no need to make stdout line-buffered, of
course – the fclose will flush it anyway.
This new mode makes it easy to run Pageant as a "supervised" instance,
e.g. as part of a test harness for other programs interacting with an
SSH agent, which is the original use case. Because Pageant is then
running as a child process of the supervisor, the operating system
notifies the supervisor of the child's aliveness without resorting to
PIDs or socket addresses, both of which can in principle go stale
and/or get recycled.
My normal usage of --debug is to run it in a terminal, where it starts
by printing its SSH_AUTH_SOCK setting for me to paste into another
terminal to run test commands, and then follows that with diagnostic
logging of the requests it's receiving.
But if you'd rather get that diagnostic information in some location
other than a terminal – say, sent to a file which you're viewing in
'less' so that you can search back and forth in it, or piped to
another machine because your test requests are going to come from
somewhere out of sight of your monitor – then you might run 'pageant
--debug' with its stdout being a pipe or a file rather than a
terminal, in which case the standard stdio policy will make it
unbuffered, and the diagnostics won't show up in a timely manner.
The one-line code change is due to Marco Ricci, who had a rather
different motivation.
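
Something of this shape (a sketch, not necessarily the actual
one-line diff):

    setvbuf(stdout, NULL, _IOLBF, 0); /* line-buffered even on a pipe */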
Jacob reports that bullseye objected to the change from
G_APPLICATION_FLAGS_NONE to G_APPLICATION_DEFAULT_FLAGS, on the
grounds that it only has the former defined. Sigh. Added a cmake
check.
In the previous few commits I noticed some repeated work in the form
of pointless empty implementations of Plug's log method, plus some
existing (and some new) empty cases of Socket's endpoint_info. As a
cleanup, I'm replacing as many as I can find with uses of a central
null implementation in the stubs directory.