mirror of https://git.tartarus.org/simon/putty.git synced 2025-01-09 17:38:00 +00:00
putty-source/windows/CMakeLists.txt


Replace mkfiles.pl with a CMake build system.

This brings various concrete advantages over the previous system:

- consistent support for out-of-tree builds on all platforms
- more thorough support for Visual Studio IDE project files
- support for Ninja-based builds, which is particularly useful on Windows where the alternative nmake has no parallel option
- a really simple set of build instructions that work the same way on all the major platforms (look how much shorter README is!)
- better decoupling of the project configuration from the toolchain configuration, so that my Windows cross-building doesn't need (much) special treatment in CMakeLists.txt
- configure-time tests on Windows as well as Linux, so that a lot of ad-hoc #ifdefs second-guessing a particular feature's presence from the compiler version can now be replaced by tests of the feature itself

Also some longer-term software-engineering advantages:

- other people have actually heard of CMake, so they'll be able to produce patches to the new build setup more easily
- unlike the old mkfiles.pl, CMake is not my personal problem to maintain
- most importantly, mkfiles.pl was just a horrible pile of unmaintainable cruft, which even I found painful to make changes to or to use, and which desperately needed throwing in the bin. I've already thrown away all the variants of it I had in other projects of mine, and was only delaying this one so we could make the 0.75 release branch first.

This change comes with a noticeable build-level restructuring. The previous Recipe worked by compiling every object file exactly once, and then making each executable by linking a precisely specified subset of the same object files. But in CMake, that's not the natural way to work - if you write the obvious command that puts the same source file into two executable targets, CMake generates a makefile that compiles it once per target. That can be an advantage, because it gives you the freedom to compile it differently in each case (e.g. with a #define telling it which program it's part of). But in a project that has many executable targets and had carefully contrived to _never_ need to build any module more than once, all it does is bloat the build time pointlessly!

To avoid slowing down the build by a large factor, I've put most of the modules of the code base into a collection of static libraries organised vaguely thematically (SSH, other backends, crypto, network, ...). That means all those modules can still be compiled just once each, because once each library is built it's reused unchanged for all the executable targets.

One upside of this library-based structure is that now I don't have to manually specify exactly which objects go into which programs any more - it's enough to specify which libraries are needed, and the linker will figure out the fine detail automatically. So there's less maintenance to do in CMakeLists.txt when the source code changes.

But that reorganisation also adds fragility, because of the trad Unix linker semantics of walking along the library list just once, so that cyclic references between your libraries will provoke link errors. The current setup builds successfully, but I suspect it only just manages it. (In particular, I've found that MinGW is the most finicky on this score of the Windows compilers I've tried building with. So I've included a MinGW test build in the new-look Buildscr, because otherwise I think there'd be a significant risk of introducing MinGW-only build failures due to library search order, which wasn't a risk in the previous library-free build organisation.)

In the longer term I hope to be able to reduce that risk, via gradual reorganisation (in particular, breaking up too-monolithic modules, to reduce the risk of knock-on references when you include a module for function A and it also contains function B with an unsatisfied dependency you didn't really need). Ideally I want to reach a state in which the libraries all have sensibly described purposes, a clearly documented (partial) order in which they're permitted to depend on each other, and a specification of what stubs you have to put where if you're leaving one of them out (e.g. nocrypto) and what callbacks you have to define in your non-library objects to satisfy dependencies from things low in the stack (e.g. out_of_memory()).

One thing that's gone completely missing in this migration, unfortunately, is the unfinished MacOS port linked against Quartz GTK. That's because it turned out that I can't currently build it myself, on my own Mac: my previous installation of GTK had bit-rotted as a side effect of an Xcode upgrade, and I haven't yet been able to persuade jhbuild to make me a new one. So I can't even build the MacOS port with the _old_ makefiles, and hence, I have no way of checking that the new ones also work. I hope to bring that port back to life at some point, but I don't want it to block the rest of this change.
2021-04-10 14:21:11 +00:00
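The 'callbacks you have to define in your non-library objects' point above can be made concrete with a minimal hypothetical sketch in C (not PuTTY's actual code): a library low in the stack declares out_of_memory() but deliberately leaves it undefined, and each executable that links the library supplies its own definition.

/*
 * Hypothetical sketch: a console program satisfying a callback that a
 * low-level library declares but does not define. The message text
 * and layout here are illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>

/* Declared in the library's header; every executable must define it. */
void out_of_memory(void);

void out_of_memory(void)
{
    fprintf(stderr, "fatal: out of memory\n");
    exit(1);
}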
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR})
add_sources_from_current_dir(utils
Windows Pageant: make atomic client/server decision.

In the previous state of the code, we first tested agent_exists() to decide whether to be the long-running Pageant server or a short-lived client; then, during the command-line parsing loop, we prompted for passphrases to add keys presented on the command line (to ourself or the server, respectively); *then* we set up the named pipe and WM_COPYDATA receiver window to actually start functioning as a server, if we decided that was our role.

A consequence is that if a user started up two Pageants each with an encrypted key on the command line, there would be a race condition: each one would decide that it was _going_ to be the server, then prompt for a passphrase, and then try to set itself up as the server once the passphrase is entered. So whichever one's passphrase prompt was answered second would add its key to its own internal data structures, then fail to set up the server's named pipe, terminate with an error, and end up not having added its key to the _surviving_ server.

This change reorders the setup steps so that the command-line parsing loop does not add the keys immediately; instead it merely caches the key filenames provided. Then we make the decision about whether we're the server, and set up both the named pipe and WM_COPYDATA window if we are; and finally, we go back to our list of key filenames and actually add them, either to ourself (if we're the server) or to some other Pageant (if we're a client).

Moreover, the decision about whether to be the server is now wrapped in an interprocess mutex similar to the one used in connection sharing, which means that even if two or more Pageants are started up as close to simultaneously as possible, they should avoid a race condition and successfully manage to agree on exactly one of themselves to be the server. For example, a user reported that this could occur if you put shortcuts to multiple private key files in your Windows Startup folder, so that they were all launched simultaneously at startup.

One slightly odd behaviour that remains: if the server Pageant has to prompt for private key passphrases at startup, then it won't actually start _servicing_ requests from other Pageants until it's finished dealing with its own prompts. As a result, if you do start up two Pageants at once each with an encrypted key file on its command line, the second one won't even manage to present its passphrase prompt until the first one's prompt is dismissed, because it will block waiting for the initial check of the key list. But it will get there in the end, so that's only a cosmetic oddity.

It would be nice to arrange that Pageant GUI operations don't block unrelated agent requests (e.g. by having the GUI and the agent code run in separate threads). But that's a bigger problem, not specific to startup time - the same thing happens if you interactively load a key via Pageant's file dialog. And it would require a major reorganisation to fix that fully, because currently the GUI code depends on being able to call _internal_ Pageant query functions like pageant_count_ssh2_keys() that don't work by constructing an agent request at all.
2022-01-03 12:17:15 +00:00
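The interprocess mutex described above can be sketched with standard Windows primitives. This is a hedged illustration, not Pageant's actual code: agent_exists() is the existing check named in the commit message, while MUTEX_NAME, become_server() and decide_server_role() are hypothetical names.

/*
 * Sketch: serialise the client/server decision across processes so
 * that exactly one starting Pageant wins the election.
 */
#include <windows.h>
#include <stdbool.h>

#define MUTEX_NAME "Local\\example-pageant-startup-mutex" /* hypothetical */

bool agent_exists(void);   /* true if a Pageant server is already running */
bool become_server(void);  /* create the named pipe + WM_COPYDATA window */

bool decide_server_role(void)
{
    bool we_are_server;
    HANDLE mutex = CreateMutexA(NULL, FALSE, MUTEX_NAME);
    WaitForSingleObject(mutex, INFINITE);
    /* Inside the critical region, the existence check and the server
     * setup are atomic with respect to other starting Pageants. */
    we_are_server = !agent_exists() && become_server();
    ReleaseMutex(mutex);
    CloseHandle(mutex);
    return we_are_server;
}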
utils/agent_mutex_name.c
utils/agent_named_pipe_name.c
utils/arm_arch_queries.c
utils/aux_match_opt.c
utils/centre_window.c
New abstraction for command-line arguments.

This begins the process of enabling our Windows applications to handle Unicode characters on their command lines which don't fit in the system code page.

Instead of passing plain strings to cmdline_process_param, we now pass a partially opaque and platform-specific thing called a CmdlineArg. This has a method that extracts the argument word as a default-encoded string, and another one that tries to extract it as UTF-8 (though it may fail if the UTF-8 isn't available).

On Windows, the command line is now constructed by calling split_into_argv_w on the Unicode command line returned by GetCommandLineW(), and the UTF-8 method returns text converted directly from that wide-character form, not going via the system code page. So it _can_ include UTF-8 characters that wouldn't have round-tripped via CP_ACP.

This commit introduces the abstraction and switches over the cross-platform and Windows argv-handling code to use it, with minimal functional change. Nothing yet tries to call cmdline_arg_get_utf8().

I say 'cross-platform and Windows' because on the Unix side there's still a lot of use of plain old argv which I haven't converted. That would be a much larger project, and isn't currently needed: the _current_ aim of this abstraction is to get the right things to happen relating to Unicode on Windows, so for code that doesn't run on Windows anyway, it's not adding value. (Also there's a tension with GTK, which wants to talk to standard argv and extract arguments _it_ knows about, so at the very least we'd have to let it munge argv before importing it into this new system.)
2024-09-25 09:18:38 +00:00
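The shape of the CmdlineArg abstraction described above might look like the following declarations. cmdline_arg_get_utf8() and CmdlineArg are named in the commit message; the other identifiers are illustrative guesses, not PuTTY's actual API.

/* Partially opaque, platform-specific representation of one argument. */
typedef struct CmdlineArg CmdlineArg;

/* The argument word in the platform's default encoding (the system
 * code page on Windows). Always available. */
const char *cmdline_arg_to_str(CmdlineArg *arg);

/* The argument word as UTF-8, converted directly from the
 * wide-character command line on Windows (via GetCommandLineW() and
 * split_into_argv_w), so it can contain characters that would not
 * round-trip through CP_ACP. Returns NULL if UTF-8 isn't available. */
const char *cmdline_arg_get_utf8(CmdlineArg *arg);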
utils/cmdline_arg.c
utils/cryptoapi.c
utils/defaults.c
utils/dll_hijacking_protection.c
utils/dputs.c
utils/escape_registry_key.c
utils/filename.c
utils/fontspec.c
utils/getdlgitemtext_alloc.c
utils/get_system_dir.c
utils/get_username.c
utils/gui-timing.c
utils/interprocess_mutex.c
utils/is_console_handle.c
utils/load_system32_dll.c
utils/ltime.c
utils/makedlgitemborderless.c
Richer data type for interactive prompt results.

All the seat functions that request an interactive prompt of some kind to the user - both the main seat_get_userpass_input and the various confirmation dialogs for things like host keys - were using a simple int return value, with the general semantics of 0 = "fail", 1 = "proceed" (and in the case of seat_get_userpass_input, answers to the prompts were provided), and -1 = "request in progress, wait for a callback".

In this commit I change all those functions' return types to a new struct called SeatPromptResult, whose primary field is an enum replacing those simple integer values.

The main purpose is that the enum has not three but _four_ values: the "fail" result has been split into 'user abort' and 'software abort'. The distinction is that a user abort occurs as a result of an interactive UI action, such as the user clicking 'cancel' in a dialog box or hitting ^D or ^C at a terminal password prompt - and therefore, there's no need to display an error message telling the user that the interactive operation has failed, because the user already knows, because they _did_ it. 'Software abort' is from any other cause, where PuTTY is the first to know there was a problem, and has to tell the user.

We already had this 'user abort' vs 'software abort' distinction in other parts of the code - the SSH backend has separate termination functions which protocol layers can call. But we assumed that any failure from an interactive prompt request fell into the 'user abort' category, which is not true.

A couple of examples: if you configure a host key fingerprint in your saved session via the SSH > Host keys pane, and the server presents a host key that doesn't match it, then verify_ssh_host_key would report that the user had aborted the connection, and feel no need to tell the user what had gone wrong! Similarly, if a password provided on the command line was not accepted, then (after I fixed the semantics of that in the previous commit) the same wrong handling would occur.

So now, those Seat prompt functions too can communicate whether the user or the software originated a connection abort. And in the latter case, we also provide an error message to present to the user. Result: in those two example cases (and others), error messages should no longer go missing.

Implementation note: to avoid the hassle of having the error message in a SeatPromptResult being a dynamically allocated string (and hence, every recipient of one must always check whether it's non-NULL and free it on every exit path, plus being careful about copying the struct around), I've instead arranged that the structure contains a function pointer and a couple of parameters, so that the string form of the message can be constructed on demand. That way, the only users who need to free it are the ones who actually _asked_ for it in the first place, which is a much smaller set.

(This is one of the rare occasions that I regret not having C++'s extra features available in this code base - a unique_ptr or shared_ptr to a string would have been just the thing here, and the compiler would have done all the hard work for me of remembering where to insert the frees!)
2021-12-28 17:52:00 +00:00
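A hedged sketch of the SeatPromptResult shape described above; the enum constants and field names here are guesses for illustration, not PuTTY's actual declarations.

typedef enum SeatPromptResultKind {
    SPRK_INCOMPLETE,  /* request in progress; wait for a callback */
    SPRK_USER_ABORT,  /* user cancelled; no error message needed */
    SPRK_SW_ABORT,    /* software-detected failure; message needed */
    SPRK_OK           /* prompt answered; proceed */
} SeatPromptResultKind;

typedef struct SeatPromptResult {
    SeatPromptResultKind kind;
    /*
     * For SPRK_SW_ABORT: rather than a dynamically allocated message
     * that every recipient would have to free on every exit path, a
     * constructor function plus its parameters, so the string form is
     * built - and owned - only by whoever actually asks for it.
     */
    char *(*errfn)(struct SeatPromptResult *spr); /* caller frees result */
    const char *errdata;  /* e.g. a literal or format string */
    unsigned erruint;     /* e.g. an OS error code */
} SeatPromptResult;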
utils/make_spr_sw_abort_winerror.c
utils/message_box.c
utils/minefield.c
utils/open_for_write_would_lose_data.c
utils/pgp_fingerprints_msgbox.c
utils/platform_get_x_display.c
utils/registry.c
utils/request_file.c
utils/screenshot.c
utils/security.c
utils/shinydialogbox.c
utils/split_into_argv.c
utils/split_into_argv_w.c
utils/version.c
utils/win_strerror.c
unicode.c)
if(NOT HAVE_STRTOUMAX)
add_sources_from_current_dir(utils utils/strtoumax.c)
endif()
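HAVE_STRTOUMAX is one of the configure-time feature tests mentioned in the first commit message above: when the C library lacks strtoumax(), a substitute is compiled in. A hedged sketch of such a fallback, assuming strtoull() is available (PuTTY's actual utils/strtoumax.c may differ):

/* Fallback strtoumax() for toolchains whose headers don't provide it. */
#include <stdint.h>
#include <stdlib.h>

uintmax_t strtoumax(const char *nptr, char **endptr, int base)
{
    return strtoull(nptr, endptr, base);
}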
add_sources_from_current_dir(eventloop
Reorganise Windows HANDLE management.

Before commit 6e69223dc262755, Pageant would stop working after a certain number of PuTTYs were active at the same time. (At most about 60, but maybe fewer - see below.) This was because of two separate bugs. The easy one, fixed in 6e69223dc262755 itself, was that PuTTY left each named-pipe connection to Pageant open for the rest of its lifetime. So the real problem was that Pageant had too many active connections at once. (And since a given PuTTY might make multiple connections during userauth - one to list keys, and maybe another to actually make a signature - that was why the number of _PuTTYs_ might vary.)

It was clearly a bug that PuTTY was leaving connections to Pageant needlessly open. But it was _also_ a bug that Pageant couldn't handle more than about 60 at once. In this commit, I fix that secondary bug.

The cause of the bug is that the WaitForMultipleObjects function family in the Windows API has a limit on the number of HANDLE objects it can select between. The limit is MAXIMUM_WAIT_OBJECTS, defined to be 64. And handle-io.c was using a separate event object for each I/O subthread to communicate back to the main thread, so as soon as all those event objects (plus a handful of other HANDLEs) added up to more than 64, we'd start passing an overlarge handle array to WaitForMultipleObjects, and it would start not doing what we wanted.

To fix this, I've reorganised handle-io.c so that all its subthreads share just _one_ event object to signal readiness back to the main thread. There's now a linked list of 'struct handle' objects that are ready to be processed, protected by a CRITICAL_SECTION. Each subthread signals readiness by adding itself to the linked list, and setting the event object to indicate that the list is now non-empty. When the main thread receives the event, it iterates over the whole list processing all the ready handles. (Each 'struct handle' still has a separate event object for the main thread to use to communicate _to_ the subthread. That's OK, because no thread is ever waiting on all those events at once: each subthread only waits on its own.)

The previous HT_FOREIGN system didn't really fit into this framework. So I've moved it out into its own system. There's now a handle-wait.c which deals with the relatively simple job of managing a list of handles that need to be waited for, each with a callback function; that's what communicates a list of HANDLEs to event loops, and receives the notification when the event loop notices that one of them has done something. And handle-io.c is now just one client of handle-wait.c, providing a single HANDLE to the event loop, and dealing internally with everything that needs to be done when that handle fires.

The new top-level handle-wait.c system *still* can't deal with more than MAXIMUM_WAIT_OBJECTS. At the moment, I'm reasonably convinced it doesn't need to: the only kind of HANDLE that any of our tools could previously have needed to wait on more than one of was the one in handle-io.c that I've just removed. But I've left some assertions and a TODO comment in there just in case we need to change that in future.
2021-05-24 12:06:10 +00:00
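The shared-readiness scheme described above can be sketched as follows. This is a hedged illustration, not PuTTY's actual handle-io.c code: ready_lock must be initialised with InitializeCriticalSection() and ready_event created with CreateEvent() at startup (omitted here); an auto-reset event keeps the main-thread logic simple.

#include <windows.h>

struct handle {
    struct handle *next_ready;       /* link in the ready list */
    /* ... per-handle I/O state ... */
};

static CRITICAL_SECTION ready_lock;
static struct handle *ready_list;
static HANDLE ready_event;           /* ONE event for all subthreads */

/* Called by an I/O subthread when its handle has something to report. */
static void signal_ready(struct handle *h)
{
    EnterCriticalSection(&ready_lock);
    h->next_ready = ready_list;
    ready_list = h;
    LeaveCriticalSection(&ready_lock);
    SetEvent(ready_event);           /* wake the main thread once */
}

/* Main thread, when WaitForMultipleObjects reports ready_event: drain
 * the whole list, so one HANDLE suffices no matter how many
 * subthreads exist - staying safely under MAXIMUM_WAIT_OBJECTS. */
static void process_ready_handles(void (*process)(struct handle *))
{
    struct handle *list;
    EnterCriticalSection(&ready_lock);
    list = ready_list;
    ready_list = NULL;
    LeaveCriticalSection(&ready_lock);
    while (list) {
        struct handle *h = list;
        list = h->next_ready;
        process(h);
    }
}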
cliloop.c handle-wait.c)
add_sources_from_current_dir(console
select-cli.c nohelp.c console.c)
add_sources_from_current_dir(settings
storage.c)
add_sources_from_current_dir(network
network.c handle-socket.c named-pipe-client.c named-pipe-server.c local-proxy.c x11.c)
add_sources_from_current_dir(sshcommon
noise.c)
add_sources_from_current_dir(sshclient
agent-client.c gss.c sharing.c)
add_sources_from_current_dir(sftpclient
sftp.c)
add_sources_from_current_dir(otherbackends
serial.c)
add_sources_from_current_dir(agent
agent-client.c)
add_sources_from_current_dir(guiterminal
dialog.c controls.c config.c printing.c jump-list.c sizetip.c)
add_dependencies(guiterminal generated_licence_h) # dialog.c uses licence.h
# This object awkwardly needs to live in the network library as well
# as the eventloop library, in case it didn't get pulled in from the
# latter before handle-socket.c needed it.
add_library(handle-io OBJECT
handle-io.c)
target_sources(eventloop PRIVATE $<TARGET_OBJECTS:handle-io>)
target_sources(network PRIVATE $<TARGET_OBJECTS:handle-io>)
add_library(guimisc STATIC
select-gui.c)
add_executable(pageant
pageant.c
help.c
pageant.rc)
add_dependencies(pageant generated_licence_h)
target_link_libraries(pageant
guimisc eventloop agent network crypto utils
${platform_libraries})
set_target_properties(pageant PROPERTIES WIN32_EXECUTABLE ON)
installed_program(pageant)
add_sources_from_current_dir(plink no-jump-list.c nohelp.c plink.rc)
add_dependencies(plink generated_licence_h)
add_sources_from_current_dir(pscp no-jump-list.c nohelp.c pscp.rc)
add_dependencies(pscp generated_licence_h)
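
# generated_licence_h is assumed to be the custom target that regenerates
# the licence.h header from the top-level LICENCE file, so every program
# that embeds the licence text declares a dependency on it.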
add_sources_from_current_dir(psftp no-jump-list.c nohelp.c psftp.rc)
add_dependencies(psftp generated_licence_h)
add_sources_from_current_dir(psocks nohelp.c)
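
# add_sources_from_current_dir(), used above, is a project helper defined
# in the shared cmake setup rather than in this file. A minimal sketch of
# its assumed behaviour: prefix each listed file with the current source
# directory and attach it to the named target.
#
#   function(add_sources_from_current_dir target)
#     set(sources ${ARGN})
#     list(TRANSFORM sources PREPEND "${CMAKE_CURRENT_SOURCE_DIR}/")
#     target_sources(${target} PRIVATE ${sources})
#   endfunction()
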
add_executable(putty
window.c
putty.c
help.c
${CMAKE_SOURCE_DIR}/stubs/no-console.c
putty.rc)
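
# be_list() is defined in setup.cmake: it compiles the single
# ifdef-controlled backend-list module once per program, defining a macro
# for each class of backends the program supports, and sets the program's
# appname (here, PuTTY gets SSH, serial and the other network backends).
# A rough sketch of the assumed shape:
#
#   function(be_list target appname)
#     set(backends ${ARGN})
#     add_library(${target}-be-list OBJECT ${CMAKE_SOURCE_DIR}/be_list.c)
#     foreach(backend SSH SERIAL OTHERBACKENDS)
#       if("${backend}" IN_LIST backends)
#         target_compile_definitions(${target}-be-list PRIVATE ${backend}=1)
#       endif()
#     endforeach()
#     target_compile_definitions(${target}-be-list PRIVATE APPNAME=${appname})
#     target_sources(${target} PRIVATE $<TARGET_OBJECTS:${target}-be-list>)
#   endfunction()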
be_list(putty PuTTY SSH SERIAL OTHERBACKENDS)
add_dependencies(putty generated_licence_h)
target_link_libraries(putty
guiterminal guimisc eventloop sshclient otherbackends settings network crypto
utils
${platform_libraries})
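# WIN32_EXECUTABLE marks this as a GUI-subsystem binary: it is linked with
# WinMain() as its entry point and opens no console window.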
set_target_properties(putty PROPERTIES WIN32_EXECUTABLE ON)
installed_program(putty)
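
# puttytel is the Telnet- and serial-only variant: be_list() below gives it
# no SSH backend, and it links in no crypto code at all, so each
# crypto-dependent hook in the shared sources is satisfied by a stub
# translation unit instead (no-gss.c, no-rand.c and friends). no-dit.c is
# assumed to stub out enable_dit(), the call with which crypto-capable
# builds set Arm's Data-Independent Timing bit before doing cryptography.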
add_executable(puttytel
window.c
putty.c
help.c
${CMAKE_SOURCE_DIR}/stubs/no-gss.c
${CMAKE_SOURCE_DIR}/stubs/no-ca-config.c
${CMAKE_SOURCE_DIR}/stubs/no-console.c
${CMAKE_SOURCE_DIR}/stubs/no-rand.c
${CMAKE_SOURCE_DIR}/stubs/no-dit.c
${CMAKE_SOURCE_DIR}/proxy/nocproxy.c
${CMAKE_SOURCE_DIR}/proxy/nosshproxy.c
puttytel.rc)
be_list(puttytel PuTTYtel SERIAL OTHERBACKENDS)
add_dependencies(puttytel generated_licence_h)
target_link_libraries(puttytel
guiterminal guimisc eventloop otherbackends settings network utils
${platform_libraries})
set_target_properties(puttytel PROPERTIES WIN32_EXECUTABLE ON)
installed_program(puttytel)
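
# puttygen is the GUI key-generation utility. It links the keygen and
# crypto libraries but no network stack; noise.c and storage.c are
# compiled in directly, presumably for collecting and persisting random
# seed data, and the no-timing stub stands in for the event-loop timers
# that a key generator does not run.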
add_executable(puttygen
puttygen.c
${CMAKE_SOURCE_DIR}/stubs/no-timing.c
noise.c
no-jump-list.c
storage.c
help.c
${CMAKE_SOURCE_DIR}/sshpubk.c
${CMAKE_SOURCE_DIR}/sshrand.c
controls.c
puttygen.rc)
add_dependencies(puttygen generated_licence_h)
target_link_libraries(puttygen
keygen guimisc crypto utils
${platform_libraries})
set_target_properties(puttygen PROPERTIES WIN32_EXECUTABLE ON)
installed_program(puttygen)
New application: a Windows version of 'pterm'!

This fulfills our long-standing Mayhem-difficulty wishlist item 'win-command-prompt': this is a Windows pterm in the sense that when you run it you get a local cmd.exe running inside a PuTTY-style window.

Advantages of this: you get the same free choice of fonts as PuTTY has (no restriction to a strange subset of the system's available fonts); you get the same copy-paste gestures as PuTTY (no mental gear-shifting when you have command prompts and SSH sessions open on the same desktop); you get scrollback with the PuTTY semantics (scrolling to the bottom gets you to where the action is, as opposed to the way you could accidentally find yourself 500 lines past the end of the action in a real console).

'win-command-prompt' was at Mayhem difficulty ('Probably impossible') basically on the grounds that with Windows's old APIs for accessing the contents of consoles, there was no way I could find to get this to work sensibly. What was needed to make it feasible was a major piece of re-engineering work inside Windows itself.

But, of course, that's exactly what happened! In 2019, the new ConPTY API arrived, which lets you create an object that behaves like a Windows console at one end, and round the back, emits a stream of VT-style escape sequences as the screen contents evolve, and accepts a VT-style input stream in return which it will parse function and arrow keys out of in the usual way.

So now it's actually _easy_ to get this to basically work. The new backend, in conpty.c, has to do a handful of magic Windows API calls to set up the pseudo-console and its feeder pipes and start a subprocess running in it, a further magic call every time the PuTTY window is resized, and detect the end of the session by watching for the subprocess terminating. But apart from that, all it has to do is pass data back and forth unmodified between those pipes and the backend's associated Seat!

That said, this is new and experimental, and there will undoubtedly be issues. One that I already know about is that you can't copy and paste a word that has wrapped between lines without getting an annoying newline in the middle of it. As far as I can see this is a fundamental limitation: the ConPTY system sends the _same_ escape sequence stream for a line that wrapped as it would send for a line that had a logical \n at what would have been the wrap point. Probably the best we can do to mitigate this is to adopt a different heuristic for newline elision that's right more often than it's wrong.

For the moment, that experimental-ness is indicated by the fact that Buildscr will build, sign and deliver a copy of pterm.exe for each flavour of Windows, but won't include it in the .zip file or in the installer. (In fact, that puts it in exactly the same ad-hoc category as PuTTYtel, although for completely different reasons.)
2021-05-08 16:24:13 +00:00
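For concreteness, here is a minimal sketch of the kind of ConPTY setup described above, assuming a Windows SDK new enough to declare the pseudo-console API (Windows 10 1809 onwards). This is illustrative, not PuTTY's actual conpty.c: start_conpty() is a made-up name, and all error handling is omitted.

    #include <windows.h>

    /* Start cmd.exe attached to a fresh pseudo-console. Returns the
     * HPCON; *to_child and *from_child are the pipe ends we write VT
     * input to and read VT output from. Error checks elided. */
    static HPCON start_conpty(HANDLE *to_child, HANDLE *from_child,
                              PROCESS_INFORMATION *pi)
    {
        HANDLE in_r, in_w, out_r, out_w;
        CreatePipe(&in_r, &in_w, NULL, 0);   /* our writes -> console input */
        CreatePipe(&out_r, &out_w, NULL, 0); /* console output -> our reads */

        COORD size = { 80, 24 };
        HPCON hpc;
        CreatePseudoConsole(size, in_r, out_w, 0, &hpc);

        /* The pseudo-console is attached to the child via a
         * process-thread attribute, not via inherited standard handles. */
        SIZE_T attr_size = 0;
        InitializeProcThreadAttributeList(NULL, 1, 0, &attr_size);
        STARTUPINFOEXW si;
        ZeroMemory(&si, sizeof(si));
        si.StartupInfo.cb = sizeof(si);
        si.lpAttributeList = HeapAlloc(GetProcessHeap(), 0, attr_size);
        InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0,
                                          &attr_size);
        UpdateProcThreadAttribute(si.lpAttributeList, 0,
                                  PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE,
                                  hpc, sizeof(hpc), NULL, NULL);

        wchar_t cmdline[] = L"cmd.exe";
        CreateProcessW(NULL, cmdline, NULL, NULL, FALSE,
                       EXTENDED_STARTUPINFO_PRESENT, NULL, NULL,
                       &si.StartupInfo, pi);

        CloseHandle(in_r);   /* these ends now belong to the console */
        CloseHandle(out_w);

        *to_child = in_w;
        *from_child = out_r;
        return hpc;
    }

On a window resize the backend would call ResizePseudoConsole(hpc, new_size); it can detect the end of the session by waiting on pi->hProcess, and ClosePseudoConsole(hpc) tears the console down.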
if(HAVE_CONPTY)
add_executable(pterm
window.c
pterm.c
help.c
conpty.c
${CMAKE_SOURCE_DIR}/stubs/no-gss.c
${CMAKE_SOURCE_DIR}/stubs/no-ca-config.c
${CMAKE_SOURCE_DIR}/stubs/no-console.c
${CMAKE_SOURCE_DIR}/stubs/no-rand.c
Arm: turn on PSTATE.DIT if available and needed.

DIT, for 'Data-Independent Timing', is a bit you can set in the processor state on sufficiently new Arm CPUs, which promises that a long list of instructions will deliberately avoid varying their timing based on the input register values. Just what you want for keeping your constant-time crypto primitives constant-time.

As far as I'm aware, no CPU has _yet_ implemented any data-dependent optimisations, so DIT is a safety precaution against them doing so in future. It would be embarrassing to be caught without it if a future CPU does do that, so we now turn on DIT in the PuTTY process state.

I've put a call to the new enable_dit() function at the start of every main() and WinMain() belonging to a program that might do cryptography (even testcrypt, in case someone uses it for something!), and in case I missed one there, also added a second call at the first moment that any cryptography-using part of the code looks as if it might become active: when an instance of the SSH protocol object is configured, when the system PRNG is initialised, and when selecting any cryptographic authentication protocol in an HTTP or SOCKS proxy connection. With any luck those precautions between them should ensure it's on whenever we need it.

Arm's own recommendation is that you should carefully choose the granularity at which you enable and disable DIT: there's a potential time cost to turning it on and off (I'm not sure what, but plausibly something of the order of a pipeline flush), so it's a performance hit to do it _inside_ each individual crypto function, but if CPUs start supporting significant data-dependent optimisation in future, then it will also become a noticeable performance hit to just leave it on across the whole process. So you'd like to do it somewhere in the middle: for example, you might turn on DIT once around the whole process of verifying and decrypting an SSH packet, instead of once for decryption and once for MAC.

With all respect to that recommendation as a strategy for maximum performance, I'm not following it here. I turn on DIT at the start of the PuTTY process, and then leave it on. Rationale:

1. PuTTY is not otherwise a performance-critical application: it's not likely to max out your CPU for any purpose _other_ than cryptography. The most CPU-intensive non-cryptographic thing I can imagine a PuTTY process doing is the complicated computation of font rendering in the terminal, and that will normally be cached (you don't recompute each glyph from its outline and hints for every time you display it).

2. I think a bigger risk lies in accidental side channels from having DIT turned off when it should have been on. I can imagine lots of causes for that. Missing a crypto operation in some unswept corner of the code; confusing control flow (like my coroutine macros) jumping with DIT clear into the middle of a region of code that expected DIT to have been set at the beginning; having a reference counter of DIT requests and getting it out of sync.

In a more sophisticated programming language, it might be possible to avoid the risk in #2 by cleverness with the type system. For example, in Rust, you could have a zero-sized type that acts as a proof token for DIT being enabled (it would be constructed by a function that also sets DIT, have a Drop implementation that clears DIT, and be !Send so you couldn't use it in a thread other than the one where DIT was set), and then you could require all the actual crypto functions to take a DitToken as an extra parameter, at zero runtime cost. Then "oops I forgot to set DIT around this piece of crypto" would become a compile error. Even so, you'd have to take some care with coroutine-structured code (what happens if a Rust async function yields while holding a DIT token?) and with nesting (if you have two DIT tokens, you don't want dropping the inner one to clear DIT while the outer one is still there to wrongly convince callees that it's set). Maybe in Rust you could get this all to work reliably. But not in C!

DIT is an optional feature of the Arm architecture, so we must first test to see if it's supported. This is done the same way as we already do for the various Arm crypto accelerators: on ELF-based systems, check the appropriate bit in the 'hwcap' words in the ELF aux vector; on Mac, look for an appropriate sysctl flag. (A sketch of such a check-and-enable function appears after this source list.)

On Windows I don't know of a way to query the DIT feature, _or_ of a way to write the necessary enabling instruction in an MSVC-compatible way. I've _heard_ that it might not be necessary, because Windows might just turn on DIT unconditionally and leave it on, in an even more extreme version of my own strategy. I don't have a source for that - I heard it by word of mouth - but I _hope_ it's true, because that would suit me very well! Certainly I can't write code to enable DIT without knowing (a) how to do it, (b) how to know if it's safe. Nonetheless, I've put the enable_dit() call in all the right places in the Windows main programs as well as the Unix and cross-platform code, so that if I later find out that I _can_ put in an explicit enable of DIT in some way, I'll only have to arrange to set HAVE_ARM_DIT and compile the enable_dit() function appropriately.
2024-12-19 08:47:08 +00:00
${CMAKE_SOURCE_DIR}/stubs/no-dit.c
Merge be_*.c into one ifdef-controlled module.

This commit replaces all those fiddly little linking modules (be_all.c, be_none.c, be_ssh.c etc) with a single source file controlled by ifdefs, and introduces a function be_list() in setup.cmake that makes it easy to compile a version of it appropriate to each application. (A sketch of the shape of such a module appears after the be_list() call below.)

This is a net reduction in code according to 'git diff --stat', even though I've introduced more comments. It also gets rid of another pile of annoying little source files in the top-level directory that didn't deserve to take up so much room in 'ls'.

More concretely, doing this has some maintenance advantages. Centralisation means less to maintain (e.g. n_ui_backends is worked out once in a way that makes sense everywhere), and also, 'appname' can now be reliably set per program. Previously, some programs got the wrong appname due to sharing the same linking module (e.g. Plink had appname="PuTTY"), which was a latent bug that would have manifested if I'd wanted to reuse the same string in another context.

One thing I've changed in this rework is that Windows pterm no longer has the ConPTY backend in its backends[]: it now has an empty one. The special be_conpty.c module shouldn't really have been there in the first place: it was used in the very earliest uncommitted drafts of the ConPTY work, where I was using another method of selecting that backend, but now that Windows pterm has a dedicated backend_vt_from_conf() that refers to conpty_backend by name, it has no need to live in backends[] at all, just as it doesn't have to in Unix pterm.
2021-11-26 17:58:55 +00:00
${CMAKE_SOURCE_DIR}/proxy/nosshproxy.c
pterm.rc)
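As promised above, here is a sketch of the shape of a check-and-enable function for DIT, assuming AArch64 on Linux or macOS. enable_dit() is the function name from the commit message, but the helper, the macro layout, and the inline assembly are illustrative rather than PuTTY's exact code; the HWCAP_DIT bit and the hw.optional.arm.FEAT_DIT sysctl are the real platform names.

    #include <stdbool.h>

    #if defined(__aarch64__) && defined(__linux__)
    #include <sys/auxv.h>
    #ifndef HWCAP_DIT
    #define HWCAP_DIT (1 << 24)            /* AT_HWCAP bit for FEAT_DIT */
    #endif
    static bool dit_available(void)
    {
        return getauxval(AT_HWCAP) & HWCAP_DIT;
    }
    #define CAN_ENABLE_DIT
    #elif defined(__aarch64__) && defined(__APPLE__)
    #include <sys/sysctl.h>
    static bool dit_available(void)
    {
        int val = 0;
        size_t len = sizeof(val);
        return sysctlbyname("hw.optional.arm.FEAT_DIT",
                            &val, &len, NULL, 0) == 0 && val != 0;
    }
    #define CAN_ENABLE_DIT
    #endif

    void enable_dit(void)
    {
    #ifdef CAN_ENABLE_DIT
        if (dit_available()) {
            /* Write 1 to PSTATE.DIT via the MSR (register) form; the
             * symbolic 'dit' name needs an Armv8.4-aware assembler. */
            asm volatile("msr dit, %0" :: "r"(1));
        }
    #endif
    }

Because enable_dit() is safe to call repeatedly, it can simply be invoked at every entry point and every point where crypto might first become active, matching the belt-and-braces strategy described above.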
be_list(pterm pterm)
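And here is the promised sketch of the kind of ifdef-controlled backends module that a be_list() call like the one above would compile. This is hypothetical: the macro names (APPNAME, SSH, SERIAL) and exact declarations are illustrative guesses at the scheme the commit message describes, not the real file's contents.

    /* Compiled once per program by be_list(), with APPNAME and one
     * macro per backend defined on the compiler command line. */
    #include <stddef.h>

    struct BackendVtable;                  /* really from putty.h */
    extern const struct BackendVtable ssh_backend;
    extern const struct BackendVtable serial_backend;

    const char *const appname = APPNAME;   /* e.g. -DAPPNAME="\"Plink\"" */

    const struct BackendVtable *const backends[] = {
    #ifdef SSH
        &ssh_backend,
    #endif
    #ifdef SERIAL
        &serial_backend,
    #endif
        /* Windows pterm defines no backend macros at all, so its list
         * is empty: conpty_backend is reached via backend_vt_from_conf()
         * rather than via backends[]. */
        NULL,
    };

The payoff is visible in the call above: be_list(pterm pterm) gives the pterm target a module with appname "pterm" and an empty backend list, without needing a dedicated be_*.c file.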
add_dependencies(pterm generated_licence_h)
target_link_libraries(pterm
guiterminal guimisc eventloop settings network utils
${platform_libraries})
set_target_properties(pterm PROPERTIES WIN32_EXECUTABLE ON)
installed_program(pterm)
else()
message("ConPTY not available; cannot build Windows pterm")
endif()
add_executable(test_split_into_argv
test/test_split_into_argv.c)
target_compile_definitions(test_split_into_argv PRIVATE TEST)
target_link_libraries(test_split_into_argv utils ${platform_libraries})
add_executable(test_screenshot
test/test_screenshot.c)
target_link_libraries(test_screenshot utils ${platform_libraries})
add_executable(test_lineedit
${CMAKE_SOURCE_DIR}/test/test_lineedit.c
${CMAKE_SOURCE_DIR}/stubs/no-gss.c
${CMAKE_SOURCE_DIR}/stubs/no-logging.c
${CMAKE_SOURCE_DIR}/stubs/no-printing.c
${CMAKE_SOURCE_DIR}/stubs/no-storage.c
${CMAKE_SOURCE_DIR}/stubs/no-timing.c
no-jump-list.c)
target_link_libraries(test_lineedit
guiterminal settings eventloop utils ${platform_libraries})
add_executable(test_terminal
${CMAKE_SOURCE_DIR}/test/test_terminal.c
${CMAKE_SOURCE_DIR}/stubs/no-gss.c
${CMAKE_SOURCE_DIR}/stubs/no-storage.c
${CMAKE_SOURCE_DIR}/stubs/no-timing.c
no-jump-list.c)
target_link_libraries(test_terminal
guiterminal settings eventloop utils ${platform_libraries})
add_sources_from_current_dir(test_conf no-jump-list.c handle-wait.c)