What does proxy_send_timeout really do in Nginx?
Solution 1:
TCP is a so-called "stream" transport protocol. It is designed so that the party reading data from a TCP connection does not have to care about the sizes of "segments" or even packets. In practice this means that a peer invoking some traditional "read" operation to obtain data sent by the remote peer (e.g. read(2) on Linux) won't necessarily read, in a single call, exactly as much data as the remote end handed to a single "write" operation. The TCP/IP implementation will make IP packets of appropriate size from whatever was passed to each "write", and the implementation on the other end will reassemble the data from those packets; but it won't necessarily hand the data to the reading application with the same boundaries!
Example: A has 50 KB of data to send for every external system event, but its sending buffer only holds 16 KB, so it sends the data in chunks: first 16 KB, then another 16 KB, then another 16 KB, and finally 2 KB. The TCP/IP implementation might buffer these 50 KB internally (e.g. in a 128 KB kernel buffer) and only then send them over the network, which has its own conditions. Part of this data, fragmented in a way the sending application isn't even aware of, arrives at the other end first, due to network conditions, and is reassembled by the TCP/IP implementation there and placed into a kernel buffer again. The kernel wakes up the process that wants to read the data, and that process reads everything that has arrived so far -- say, 30 KB. The receiver must decide whether it expects more, and how to determine how much more to expect -- the format of the "message" is not something TCP/IP is concerned with.
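The loss of write boundaries, and the application-level framing a receiver needs to compensate for it, can be sketched with a pair of connected stream sockets (a minimal illustration of the stream semantics described above, not how Nginx itself is implemented):

```python
import socket
import struct
import threading

# A connected pair of stream sockets; same byte-stream semantics as TCP.
a, b = socket.socketpair()

payload = b"x" * 50_000  # the 50 KB from the example above

def sender(sock, data):
    # Length-prefix the message, then write it in 16 KB chunks.
    sock.sendall(struct.pack("!I", len(data)))
    for off in range(0, len(data), 16_384):
        sock.sendall(data[off:off + 16_384])
    sock.close()

t = threading.Thread(target=sender, args=(a, payload))
t.start()

def read_exactly(sock, n):
    # TCP preserves no message boundaries: each recv() may return ANY
    # amount of data > 0, so the application must loop until it has
    # collected as many bytes as its own framing says to expect.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

# Receiver: read the 4-byte length prefix, then exactly that many bytes.
(length,) = struct.unpack("!I", read_exactly(b, 4))
message = read_exactly(b, length)
t.join()
print(len(message))  # 50000 -- reassembled regardless of how it was chunked
```

The length prefix here is one common framing choice; delimiters (as in HTTP's header/body split) are another. Either way, the framing belongs to the application protocol, not to TCP.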
This means Nginx can't know in advance how much of a client's request it will obtain with any single read call it makes on, for instance, a Linux-based system.
The documentation of proxy_send_timeout does hint at what it's for, though:
Sets a timeout for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request. If the proxied server does not receive anything within this time, the connection is closed.
The thing is, since Nginx proxies the request -- meaning the request does not originate with it -- it must wait for the "downstream" client (the remote end that sent the request to Nginx, which the latter, in its role as a "proxy", now expects to forward upstream) to transmit request data before it can forward (write) that data over the upstream connection.
The way I understand it, if nothing is received from downstream during the timeout period, then the proxied server won't be receiving anything either -- and the connection is then closed.
Put another way, if the downstream does not send anything within the period of time indicated by proxy_send_timeout, Nginx will close its connection to the upstream.
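As an illustration, the directive is set per location alongside proxy_pass (the upstream name and the value here are made up for the example; the documented default is 60s):

```nginx
location /api/ {
    proxy_pass http://backend;   # hypothetical upstream
    # Close the upstream connection if, between two successive writes
    # to it, more than 30s pass with nothing to forward:
    proxy_send_timeout 30s;
}
```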
For instance, consider a Web browser that sends a request to Nginx. Nginx reads the first piece of the request at time A. Assuming it will proxy the request to some upstream, it opens a connection to that upstream and transmits (writes) what it has received from the browser over the upstream connection socket. It then waits for more pieces of the request to be read from the browser -- and if the next piece does not arrive within some timeout X relative to time A, it closes the connection to the upstream.
Bear in mind that this does not necessarily mean Nginx will close the connection to the Web browser -- it will certainly return some HTTP error status code for the request, but the lifetime of the browser connection is governed by a different set of conditions; proxy_send_timeout only concerns Nginx's connections to the upstream.