
Impose an upper bound on incoming SFTP packet length.

Coverity was unhappy that I'd used the packet length as a loop bound
without sanitising it first (on the basis that it had decided anything
coming from GET_32BIT_MSB_FIRST was potentially tainted).
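
(For context, here is a minimal standalone sketch of the pattern a taint checker like Coverity objects to. It is not the actual sftp.c code it pointed at; get_u32_be() is just a stand-in for GET_32BIT_MSB_FIRST, and the hostile four-byte length is made up for illustration.)

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for GET_32BIT_MSB_FIRST: decode a 32-bit big-endian value. */
static uint32_t get_u32_be(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(void)
{
    /* Four bytes "received from the network", i.e. attacker-controlled. */
    unsigned char wire[4] = { 0xFF, 0xFF, 0xFF, 0xFF };

    uint32_t len = get_u32_be(wire);    /* tainted value */
    unsigned char *buf = malloc(len);   /* unbounded allocation: ~4 GB here */
    if (!buf) {
        fprintf(stderr, "allocation of %u bytes failed\n", (unsigned)len);
        return 1;
    }
    for (uint32_t i = 0; i < len; i++)  /* tainted value used as a loop bound */
        buf[i] = 0;
    free(buf);
    return 0;
}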

I think this is not a security issue: all that will happen if the
server sends a huge packet length is that we'll try to allocate space
for it. On a 64-bit machine we might even _succeed_; on 32-bit, we'll
fail, and snewn() will abort the program rather than return NULL. So
*technically* this is a remote-triggered crash. But it can only happen
in a situation where the same server could have triggered the
termination of the SFTP connection just as easily by simply closing it
- the only difference is that the client would die with a different
fatal error message.

(In particular, it isn't even a DoS against other processes
participating in a connection-shared SSH session. The upstream will
pass the SFTP data stream through without parsing it, so it and the
other downstreams will be unaffected. Only the particular downstream
operating the SFTP client will run into this problem.)
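
To make the reasoning above concrete, here is a minimal standalone sketch of the failure mode and the new check. It is not PuTTY's actual code: abort_on_fail_malloc() is an illustrative stand-in for the allocator behind snewn(), and the hostile length value is made up.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the allocator behind snewn(): abort instead of
 * returning NULL, so allocation failure is a clean fatal error rather
 * than a source of NULL-pointer dereferences later on. */
static void *abort_on_fail_malloc(size_t n)
{
    void *p = malloc(n);
    if (!p) {
        fprintf(stderr, "out of memory\n");
        abort();
    }
    return p;
}

int main(void)
{
    /* Hostile length field: 0xFFFFFFFF, i.e. just under 4 GB. */
    uint32_t pktlen = 0xFFFFFFFFu;

    /* The commit's check: refuse anything over 1 MiB up front, so we
     * never even attempt the huge allocation. */
    if (pktlen > (1u << 20)) {
        fprintf(stderr, "packet length %u rejected\n", (unsigned)pktlen);
        return 1;
    }

    /* Without the check we would get here: on 32-bit the allocation
     * fails and the program aborts; on 64-bit it might even succeed.
     * Either way, no out-of-bounds access occurs. */
    char *data = abort_on_fail_malloc(pktlen);
    free(data);
    return 0;
}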
Simon Tatham 2021-04-09 18:19:44 +01:00
parent 165f630ae9
commit edadfa7093

sftp.c (12 lines changed)

@@ -38,7 +38,17 @@ struct sftp_packet *sftp_recv(void)
     if (!sftp_recvdata(x, 4))
         return NULL;
 
-    pkt = sftp_recv_prepare(GET_32BIT_MSB_FIRST(x));
+    /* Impose _some_ upper bound on packet size. We never expect to
+     * receive more than 32K of data in response to an FXP_READ,
+     * because we decide how much data to ask for. FXP_READDIR and
+     * pathname-returning things like FXP_REALPATH don't have an
+     * explicit bound, so I suppose we just have to trust the server
+     * to be sensible. */
+    unsigned pktlen = GET_32BIT_MSB_FIRST(x);
+    if (pktlen > (1<<20))
+        return NULL;
+
+    pkt = sftp_recv_prepare(pktlen);
 
     if (!sftp_recvdata(pkt->data, pkt->length)) {
         sftp_pkt_free(pkt);