/*
* handle-io.c: Module to give Windows front ends the general
* ability to deal with consoles, pipes, serial ports, or any other
* type of data stream accessed through a Windows API HANDLE rather
* than a WinSock SOCKET.
*
* We do this by spawning a subthread to continuously try to read
* from the handle. Every time a read successfully returns some
* data, the subthread sets an event object which is picked up by
* the main thread, and the main thread then sets an event in
* return to instruct the subthread to resume reading.
*
* Output works precisely the other way round, in a second
* subthread. The output subthread should not be attempting to
* write all the time, because it hasn't always got data _to_
* write; so the output thread waits for an event object notifying
* it to _attempt_ a write, and then it sets an event in return
* when one completes.
*
* (It's terribly annoying having to spawn a subthread for each
* direction of each handle. Technically it isn't necessary for
* serial ports, since we could use overlapped I/O within the main
* thread and wait directly on the event objects in the OVERLAPPED
* structures. However, we can't use this trick for some types of
* file handle at all - for some reason Windows restricts use of
* OVERLAPPED to files which were opened with the overlapped flag -
* and so we must use threads for those. This being the case, it's
* simplest just to use threads for everything rather than trying
* to keep track of multiple completely separate mechanisms.)
*/
#include <assert.h>
#include "putty.h"
/* ----------------------------------------------------------------------
* Generic definitions.
*/
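/*
 * All I/O subthreads in this module share a single event object with
 * which to signal readiness back to the main thread, so that the
 * number of HANDLEs the event loop must wait on stays well below
 * Windows' MAXIMUM_WAIT_OBJECTS limit. A subthread with something for
 * the main thread to process links its handle on to a ready list
 * (protected by a critical section) via add_to_ready_list(), which
 * also sets that shared event to indicate the list is now non-empty;
 * the main thread then processes every handle on the list.
 */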
typedef struct handle_list_node handle_list_node;
struct handle_list_node {
handle_list_node *next, *prev;
};
static void add_to_ready_list(handle_list_node *node);
/*
* Maximum amount of backlog we will allow to build up on an input
* handle before we stop reading from it.
*/
#define MAX_BACKLOG 32768
struct handle_generic {
/*
* Initial fields common to both handle_input and handle_output
* structures.
*
     * The two HANDLEs are set up at initialisation time and are
* thereafter read-only to both main thread and subthread.
* `moribund' is only used by the main thread; `done' is
* written by the main thread before signalling to the
* subthread. `defunct' and `busy' are used only by the main
* thread.
*/
HANDLE h; /* the handle itself */
handle_list_node ready_node; /* for linking on to the ready list */
HANDLE ev_from_main; /* event used to signal back to us */
bool moribund; /* are we going to kill this soon? */
bool done; /* request subthread to terminate */
bool defunct; /* has the subthread already gone? */
bool busy; /* operation currently in progress? */
void *privdata; /* for client to remember who they are */
};
typedef enum { HT_INPUT, HT_OUTPUT } HandleType;
/* ----------------------------------------------------------------------
* Input threads.
*/
/*
* Data required by an input thread.
*/
struct handle_input {
/*
* Copy of the handle_generic structure.
*/
HANDLE h; /* the handle itself */
handle_list_node ready_node; /* for linking on to the ready list */
HANDLE ev_from_main; /* event used to signal back to us */
bool moribund; /* are we going to kill this soon? */
bool done; /* request subthread to terminate */
bool defunct; /* has the subthread already gone? */
bool busy; /* operation currently in progress? */
void *privdata; /* for client to remember who they are */
/*
* Data set at initialisation and then read-only.
*/
int flags;
/*
* Data set by the input thread before marking the handle ready,
* and read by the main thread after receiving that signal.
*/
char buffer[4096]; /* the data read from the handle */
DWORD len; /* how much data that was */
int readerr; /* lets us know about read errors */
/*
* Callback function called by this module when data arrives on
* an input handle.
*/
handle_inputfn_t gotdata;
};
/*
* The actual thread procedure for an input thread.
*/
static DWORD WINAPI handle_input_threadfunc(void *param)
{
struct handle_input *ctx = (struct handle_input *) param;
OVERLAPPED ovl, *povl;
HANDLE oev;
bool readret, finished;
int readlen;
if (ctx->flags & HANDLE_FLAG_OVERLAPPED) {
povl = &ovl;
oev = CreateEvent(NULL, true, false, NULL);
} else {
povl = NULL;
}
if (ctx->flags & HANDLE_FLAG_UNITBUFFER)
readlen = 1;
else
readlen = sizeof(ctx->buffer);
while (1) {
if (povl) {
memset(povl, 0, sizeof(OVERLAPPED));
povl->hEvent = oev;
}
        readret = ReadFile(ctx->h, ctx->buffer, readlen, &ctx->len, povl);
if (!readret)
ctx->readerr = GetLastError();
else
ctx->readerr = 0;
if (povl && !readret && ctx->readerr == ERROR_IO_PENDING) {
WaitForSingleObject(povl->hEvent, INFINITE);
readret = GetOverlappedResult(ctx->h, povl, &ctx->len, false);
if (!readret)
ctx->readerr = GetLastError();
else
ctx->readerr = 0;
}
if (!readret) {
/*
* Windows apparently sends ERROR_BROKEN_PIPE when a
* pipe we're reading from is closed normally from the
* writing end. This is ludicrous; if that situation
* isn't a natural EOF, _nothing_ is. So if we get that
* particular error, we pretend it's EOF.
*/
if (ctx->readerr == ERROR_BROKEN_PIPE)
ctx->readerr = 0;
ctx->len = 0;
}
if (readret && ctx->len == 0 &&
(ctx->flags & HANDLE_FLAG_IGNOREEOF))
continue;
/*
* If we just set ctx->len to 0, that means the read operation
* has returned end-of-file. Telling that to the main thread
* will cause it to set its 'defunct' flag and dispose of the
* handle structure at the next opportunity, in which case we
* mustn't touch ctx at all after the SetEvent. (Hence we do
* even _this_ check before the SetEvent.)
*/
finished = (ctx->len == 0);
add_to_ready_list(&ctx->ready_node);
if (finished)
break;
WaitForSingleObject(ctx->ev_from_main, INFINITE);
if (ctx->done) {
/*
* The main thread has asked us to shut down. Send back an
* event indicating that we've done so. Hereafter we must
* not touch ctx at all, because the main thread might
* have freed it.
*/
add_to_ready_list(&ctx->ready_node);
break;
}
}
if (povl)
CloseHandle(oev);
return 0;
}
/*
* This is called after a successful read, or from the
* `unthrottle' function. It decides whether or not to begin a new
* read operation.
*/
static void handle_throttle(struct handle_input *ctx, int backlog)
{
if (ctx->defunct)
return;
/*
* If there's a read operation already in progress, do nothing:
* when that completes, we'll come back here and be in a
* position to make a better decision.
*/
if (ctx->busy)
return;
/*
* Otherwise, we must decide whether to start a new read based
* on the size of the backlog.
*/
if (backlog < MAX_BACKLOG) {
SetEvent(ctx->ev_from_main);
ctx->busy = true;
}
}
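/*
 * Worked example of the decision above: if the client still has, say,
 * 40000 bytes of unconsumed data queued, then backlog >= MAX_BACKLOG
 * (32768), so no event is set and the subthread stays blocked in its
 * WaitForSingleObject. Once the client has drained its buffer below
 * 32768 and calls the unthrottle function, handle_throttle runs again
 * with the smaller backlog, sets ev_from_main, and the subthread
 * starts its next read.
 */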
/* ----------------------------------------------------------------------
* Output threads.
*/
/*
* Data required by an output thread.
*/
struct handle_output {
/*
* Copy of the handle_generic structure.
*/
HANDLE h; /* the handle itself */
handle_list_node ready_node; /* for linking on to the ready list */
HANDLE ev_from_main; /* event used to signal back to us */
bool moribund; /* are we going to kill this soon? */
bool done; /* request subthread to terminate */
bool defunct; /* has the subthread already gone? */
bool busy; /* operation currently in progress? */
void *privdata; /* for client to remember who they are */
/*
* Data set at initialisation and then read-only.
*/
int flags;
/*
* Data set by the main thread before signalling ev_from_main,
     * and read by the output thread after receiving that signal.
*/
const char *buffer; /* the data to write */
DWORD len; /* how much data there is */
/*
     * Data set by the output thread before marking this handle as
* ready, and read by the main thread after receiving that signal.
*/
DWORD lenwritten; /* how much data we actually wrote */
int writeerr; /* return value from WriteFile */
/*
* Data only ever read or written by the main thread.
*/
bufchain queued_data; /* data still waiting to be written */
enum { EOF_NO, EOF_PENDING, EOF_SENT } outgoingeof;
/*
* Callback function called when the backlog in the bufchain
* drops.
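     * When an EOF requested via handle_write_eof has been fully
     * flushed, the same callback is invoked with its 'close' flag set,
     * asking the client to close the handle itself: this module does
     * not own the handle, so it no longer calls CloseHandle on the
     * client's behalf.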
*/
handle_outputfn_t sentdata;
struct handle *sentdata_param;
};
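
/*
 * Subthread function handling the output side of a handle. It sleeps
 * until the main thread signals ev_from_main, then either shuts down
 * (if 'done' is set) or performs one WriteFile, completing it with
 * GetOverlappedResult if the write is overlapped and still pending,
 * and reports the outcome by putting itself on the ready list.
 */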
static DWORD WINAPI handle_output_threadfunc(void *param)
{
struct handle_output *ctx = (struct handle_output *) param;
OVERLAPPED ovl, *povl;
HANDLE oev;
bool writeret;
if (ctx->flags & HANDLE_FLAG_OVERLAPPED) {
povl = &ovl;
oev = CreateEvent(NULL, true, false, NULL);
} else {
povl = NULL;
}
while (1) {
WaitForSingleObject(ctx->ev_from_main, INFINITE);
if (ctx->done) {
/*
* The main thread has asked us to shut down. Send back an
* event indicating that we've done so. Hereafter we must
* not touch ctx at all, because the main thread might
* have freed it.
*/
add_to_ready_list(&ctx->ready_node);
break;
}
if (povl) {
memset(povl, 0, sizeof(OVERLAPPED));
povl->hEvent = oev;
}
writeret = WriteFile(ctx->h, ctx->buffer, ctx->len,
&ctx->lenwritten, povl);
if (!writeret)
ctx->writeerr = GetLastError();
else
ctx->writeerr = 0;
if (povl && !writeret && GetLastError() == ERROR_IO_PENDING) {
writeret = GetOverlappedResult(ctx->h, povl,
&ctx->lenwritten, true);
if (!writeret)
ctx->writeerr = GetLastError();
else
ctx->writeerr = 0;
}
add_to_ready_list(&ctx->ready_node);
if (!writeret) {
/*
* The write operation has suffered an error. Telling that
* to the main thread will cause it to set its 'defunct'
* flag and dispose of the handle structure at the next
* opportunity, so we must not touch ctx at all after
* this.
*/
break;
}
}
if (povl)
CloseHandle(oev);
return 0;
}
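
/*
 * Called from the main thread to start an output operation if one is
 * needed and the subthread is idle: either hand the thread the next
 * chunk of queued data, or, if the queue is empty and an EOF is
 * pending, deliver the EOF by asking the client (via the 'sentdata'
 * callback with its close flag set) to close the handle itself.
 */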
static void handle_try_output(struct handle_output *ctx)
{
if (!ctx->busy && bufchain_size(&ctx->queued_data)) {
ptrlen data = bufchain_prefix(&ctx->queued_data);
ctx->buffer = data.ptr;
ctx->len = min(data.len, ~(DWORD)0);
SetEvent(ctx->ev_from_main);
ctx->busy = true;
} else if (!ctx->busy && bufchain_size(&ctx->queued_data) == 0 &&
ctx->outgoingeof == EOF_PENDING) {
ctx->sentdata(ctx->sentdata_param, 0, 0, true);
ctx->h = INVALID_HANDLE_VALUE;
ctx->outgoingeof = EOF_SENT;
}
}
/* ----------------------------------------------------------------------
* Unified code handling both input and output threads.
*/
struct handle {
HandleType type;
union {
struct handle_generic g;
struct handle_input i;
struct handle_output o;
} u;
};
/*
* Linked list storing the current list of handles ready to have
* something done to them by the main thread.
*/
static handle_list_node ready_head[1];
static CRITICAL_SECTION ready_critsec[1];
/*
* Event object used by all subthreads to signal that they've just put
* something on the ready list, i.e. that the ready list is non-empty.
*/
static HANDLE ready_event = INVALID_HANDLE_VALUE;
static void add_to_ready_list(handle_list_node *node)
{
/*
* Called from subthreads, when their handle has done something
* that they need the main thread to respond to. We append the
* given list node to the end of the ready list, and set
* ready_event to signal to the main thread that the ready list is
* now non-empty.
*/
EnterCriticalSection(ready_critsec);
node->next = ready_head;
node->prev = ready_head->prev;
node->next->prev = node->prev->next = node;
SetEvent(ready_event);
LeaveCriticalSection(ready_critsec);
}
static void remove_from_ready_list(handle_list_node *node)
{
/*
* Called from the main thread, just before destroying a 'struct
* handle' completely: as a precaution, we make absolutely sure
* it's not linked on the ready list, just in case somehow it
* still was.
*/
EnterCriticalSection(ready_critsec);
node->next->prev = node->prev;
node->prev->next = node->next;
node->next = node->prev = node;
LeaveCriticalSection(ready_critsec);
}
static void handle_ready(struct handle *h); /* process one handle (below) */
static void handle_ready_callback(void *vctx)
{
/*
* Called when the main thread detects ready_event, indicating
* that at least one handle is on the ready list. We empty the
* whole list and process the handles one by one.
*
* It's possible that other handles may be destroyed, and hence
* taken _off_ the ready list, during this processing. That
* shouldn't cause a deadlock, because according to the API docs,
* it's safe to call EnterCriticalSection twice in the same thread
* - the second call will return immediately because that thread
* already owns the critsec. (And then it takes two calls to
* LeaveCriticalSection to release it again, which is just what we
* want here.)
*/
EnterCriticalSection(ready_critsec);
while (ready_head->next != ready_head) {
handle_list_node *node = ready_head->next;
node->prev->next = node->next;
node->next->prev = node->prev;
node->next = node->prev = node;
handle_ready(container_of(node, struct handle, u.g.ready_node));
}
LeaveCriticalSection(ready_critsec);
}
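
/*
 * Lazily set up the shared ready-list machinery the first time any
 * handle is created: initialise the list head and critical section,
 * create the auto-reset ready_event, and register it with the
 * handle-wait system so that the main event loop will call
 * handle_ready_callback when it fires.
 */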
static inline void ensure_ready_event_setup(void)
{
if (ready_event == INVALID_HANDLE_VALUE) {
ready_head->prev = ready_head->next = ready_head;
InitializeCriticalSection(ready_critsec);
ready_event = CreateEvent(NULL, false, false, NULL);
add_handle_wait(ready_event, handle_ready_callback, NULL);
}
}
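
/*
 * Create a 'struct handle' managing the input side of an OS handle:
 * allocate and initialise the structure, create the event the main
 * thread uses to prod the reader subthread, spawn that subthread,
 * and mark the handle as busy.
 */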
struct handle *handle_input_new(HANDLE handle, handle_inputfn_t gotdata,
void *privdata, int flags)
{
struct handle *h = snew(struct handle);
DWORD in_threadid; /* required for Win9x */
h->type = HT_INPUT;
h->u.i.h = handle;
h->u.i.ev_from_main = CreateEvent(NULL, false, false, NULL);
h->u.i.gotdata = gotdata;
h->u.i.defunct = false;
h->u.i.moribund = false;
h->u.i.done = false;
h->u.i.privdata = privdata;
h->u.i.flags = flags;
ensure_ready_event_setup();
HANDLE hThread = CreateThread(NULL, 0, handle_input_threadfunc,
&h->u.i, 0, &in_threadid);
if (hThread)
CloseHandle(hThread); /* we don't need the thread handle */
h->u.i.busy = true;
return h;
}
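
/*
 * Create a 'struct handle' managing the output side of an OS handle:
 * allocate and initialise the structure, including the bufchain of
 * queued data and the outgoing-EOF state, record the client's
 * 'sentdata' callback, and spawn the writer subthread, which stays
 * idle until handle_write or handle_write_eof gives it work.
 */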
struct handle *handle_output_new(HANDLE handle, handle_outputfn_t sentdata,
void *privdata, int flags)
{
struct handle *h = snew(struct handle);
DWORD out_threadid; /* required for Win9x */
h->type = HT_OUTPUT;
h->u.o.h = handle;
h->u.o.ev_from_main = CreateEvent(NULL, false, false, NULL);
h->u.o.busy = false;
h->u.o.defunct = false;
h->u.o.moribund = false;
h->u.o.done = false;
h->u.o.privdata = privdata;
bufchain_init(&h->u.o.queued_data);
h->u.o.outgoingeof = EOF_NO;
h->u.o.sentdata = sentdata;
h->u.o.sentdata_param = h;
h->u.o.flags = flags;
ensure_ready_event_setup();
HANDLE hThread = CreateThread(NULL, 0, handle_output_threadfunc,
&h->u.o, 0, &out_threadid);
if (hThread)
CloseHandle(hThread); /* we don't need the thread handle */
return h;
}
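
/*
 * Queue data to be written on an output handle: append it to the
 * bufchain, kick off a write if the subthread is idle, and return
 * the size of the remaining backlog of unwritten data.
 */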
size_t handle_write(struct handle *h, const void *data, size_t len)
{
assert(h->type == HT_OUTPUT);
assert(h->u.o.outgoingeof == EOF_NO);
bufchain_add(&h->u.o.queued_data, data, len);
handle_try_output(&h->u.o);
return bufchain_size(&h->u.o.queued_data);
}
void handle_write_eof(struct handle *h)
{
/*
* This function is called when we want to proactively send an
* end-of-file notification on the handle. We can only do this by
* actually closing the handle - so never call this on a
* bidirectional handle if we're still interested in its incoming
* direction!
*/
assert(h->type == HT_OUTPUT);
if (h->u.o.outgoingeof == EOF_NO) {
h->u.o.outgoingeof = EOF_PENDING;
handle_try_output(&h->u.o);
}
}
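
/*
 * Sketch of typical client usage of the output side (illustrative
 * only; 'my_sentdata', 'my_ctx', 'pipe_handle', 'buf' and 'len' are
 * hypothetical client-side names, not part of this module):
 *
 *     struct handle *hout = handle_output_new(pipe_handle,
 *                                             my_sentdata, my_ctx, 0);
 *     handle_write(hout, buf, len);   // queue data; returns backlog
 *     handle_write_eof(hout);         // request EOF once drained
 *     // Later, my_sentdata is called with its 'close' flag set; the
 *     // client then closes pipe_handle itself and, when it no longer
 *     // needs the wrapper, calls handle_free(hout).
 */

/*
 * Final destruction of a handle structure, called only from the main
 * thread once no subthread can still be using it: discard any queued
 * output data, close the event object, make sure the node is off the
 * ready list, and free the memory.
 */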
static void handle_destroy(struct handle *h)
{
if (h->type == HT_OUTPUT)
bufchain_clear(&h->u.o.queued_data);
CloseHandle(h->u.g.ev_from_main);
remove_from_ready_list(&h->u.g.ready_node);
sfree(h);
}
void handle_free(struct handle *h)
{
assert(h && !h->u.g.moribund);
if (h->u.g.busy) {
/*
* If the handle is currently busy, we cannot immediately free
* it, because its subthread is in the middle of something.
*
* Instead we must wait until it's finished its current
* operation, because otherwise the subthread will write to
* invalid memory after we free its context from under it. So
* we set the moribund flag, which will be noticed next time
* an operation completes.
*/
h->u.g.moribund = true;
} else if (h->u.g.defunct) {
/*
* There isn't even a subthread; we can go straight to
* handle_destroy.
*/
handle_destroy(h);
} else {
/*
* The subthread is alive but not busy, so we now signal it
* to die. Set the moribund flag to indicate that it will
* want destroying after that.
*/
h->u.g.moribund = true;
h->u.g.done = true;
h->u.g.busy = true;
SetEvent(h->u.g.ev_from_main);
}
}
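/*
 * Illustrative sketch (not part of this module's code) of the
 * lifecycle handle_free() above is designed for. The creation call
 * and the callback below are assumptions based on the rest of this
 * API ('handle_input_new' and the gotdata shape used in
 * handle_ready()); the point is only that handle_free() may be
 * called while the reader subthread is still busy, and the real
 * destruction is deferred via the moribund/defunct flags.
 *
 *   static size_t example_gotdata(struct handle *h, const void *data,
 *                                 size_t len, int err)
 *   {
 *       // consume 'len' bytes; return the client's backlog in bytes
 *       return 0;
 *   }
 *
 *   void example_client(HANDLE oshandle)
 *   {
 *       struct handle *h = handle_input_new(oshandle, example_gotdata,
 *                                           NULL, 0);
 *       // ... run the event loop, receive data via the callback ...
 *       handle_free(h);    // safe even if a read is still in flight
 *   }
 */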
static void handle_ready(struct handle *h)
{
if (h->u.g.moribund) {
/*
* A moribund handle is one which we have either already
* signalled to die, or are waiting until its current I/O op
* completes to do so. Either way, it's treated as already
* dead from the external user's point of view, so we ignore
* the actual I/O result. We just signal the thread to die if
* we haven't yet done so, or destroy the handle if not.
*/
if (h->u.g.done) {
handle_destroy(h);
} else {
h->u.g.done = true;
h->u.g.busy = true;
SetEvent(h->u.g.ev_from_main);
}
return;
}
switch (h->type) {
        size_t backlog;
case HT_INPUT:
h->u.i.busy = false;
/*
* A signal on an input handle means data has arrived.
*/
if (h->u.i.len == 0) {
/*
* EOF, or (nearly equivalently) read error.
*/
h->u.i.defunct = true;
h->u.i.gotdata(h, NULL, 0, h->u.i.readerr);
} else {
backlog = h->u.i.gotdata(h, h->u.i.buffer, h->u.i.len, 0);
handle_throttle(&h->u.i, backlog);
}
break;
case HT_OUTPUT:
h->u.o.busy = false;
/*
* A signal on an output handle means we have completed a
* write. Call the callback to indicate that the output
* buffer size has decreased, or to indicate an error.
*/
if (h->u.o.writeerr) {
/*
             * Write error. Pass the error code through to the
             * callback, and mark the handle's output side as defunct
             * (because the output thread is terminating by now).
*/
h->u.o.defunct = true;
h->u.o.sentdata(h, 0, h->u.o.writeerr, false);
} else {
bufchain_consume(&h->u.o.queued_data, h->u.o.lenwritten);
noise_ultralight(NOISE_SOURCE_IOLEN, h->u.o.lenwritten);
h->u.o.sentdata(h, bufchain_size(&h->u.o.queued_data), 0, false);
handle_try_output(&h->u.o);
}
break;
}
}
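/*
 * Sketch of the callback contract that handle_ready() above relies
 * on, as a hypothetical client might satisfy it. The parameter
 * layouts are taken from the calls above (gotdata gets a NULL/0
 * buffer plus the read error on EOF or failure, and returns a
 * backlog; sentdata gets the remaining queued byte count, a write
 * error code, and a final bool). The function names and the meaning
 * of that bool as a "please close the HANDLE yourself" request are
 * assumptions here.
 *
 *   static size_t example_gotdata(struct handle *h, const void *data,
 *                                 size_t len, int err)
 *   {
 *       if (len == 0)
 *           return 0;       // EOF (err == 0) or read error (err != 0)
 *       // process 'len' bytes of 'data'
 *       return 0;           // bytes we still haven't dealt with
 *   }
 *
 *   static void example_sentdata(struct handle *h, size_t new_backlog,
 *                                int err, bool close)
 *   {
 *       if (err) {
 *           // the write failed; tear the channel down
 *       }
 *       if (close) {
 *           // we own the underlying HANDLE: close it ourselves
 *       }
 *   }
 */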
void handle_unthrottle(struct handle *h, size_t backlog)
{
assert(h->type == HT_INPUT);
handle_throttle(&h->u.i, backlog);
}
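/*
 * Hypothetical usage of the throttle/unthrottle pair: the gotdata
 * callback returns a nonzero backlog when the client cannot keep up,
 * and handle_throttle() presumably uses that to pause further reads;
 * once the client has drained its own buffer, it reports the reduced
 * backlog here so that reading can resume.
 *
 *   // in the client, once its own output buffer has emptied:
 *   handle_unthrottle(h, 0);
 */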
size_t handle_backlog(struct handle *h)
{
assert(h->type == HT_OUTPUT);
return bufchain_size(&h->u.o.queued_data);
}
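/*
 * A writer might use this (hypothetical sketch; MY_BACKLOG_LIMIT and
 * the producer-pausing logic are the client's own) to apply
 * back-pressure to whatever is producing the data, rather than
 * letting queued_data grow without bound:
 *
 *   if (handle_backlog(h) < MY_BACKLOG_LIMIT)
 *       handle_write(h, buf, len);
 *   else {
 *       // tell the producer to stop until sentdata reports progress
 *   }
 */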
void *handle_get_privdata(struct handle *h)
{
return h->u.g.privdata;
}
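/*
 * The privdata pointer is opaque to this module: it is presumably
 * stored when the handle is created earlier in this file and handed
 * back verbatim here. A typical (hypothetical) pattern is to stash a
 * per-connection context in it and recover it inside a callback:
 *
 *   static size_t example_gotdata(struct handle *h, const void *data,
 *                                 size_t len, int err)
 *   {
 *       struct example_conn *conn = handle_get_privdata(h);
 *       // ... use conn ...
 *       return 0;
 *   }
 */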
static void handle_sink_write(BinarySink *bs, const void *data, size_t len)
{
handle_sink *sink = BinarySink_DOWNCAST(bs, handle_sink);
handle_write(sink->h, data, len);
}
void handle_sink_init(handle_sink *sink, struct handle *h)
{
sink->h = h;
BinarySink_INIT(sink, handle_sink_write);
}
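/*
 * Hypothetical caller of the BinarySink wrapper above: once
 * initialised, the sink can be handed to any code that writes to a
 * BinarySink, and everything emitted is forwarded to handle_write().
 * (put_data and BinarySink_UPCAST are the standard PuTTY BinarySink
 * macros; their use here is an assumption about the wider codebase.)
 *
 *   handle_sink sink;
 *   handle_sink_init(&sink, h);
 *   put_data(BinarySink_UPCAST(&sink), buf, len);
 */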