Do HTTP reverse proxies typically enable HTTP Keep-Alive on the client side of the proxied connection and not on the server side?

edit: My answer only covers the original, unedited question, which was whether this sort of thing is typical of load balancers/reverse proxies. I'm not sure whether nginx/product X has support for this; 99.9% of my reverse proxying experience is with HAProxy.

Correct. HTTP Keep-Alive on the client side, but not on the server side.

Why?

If you break down a few details you can quickly see why this is a benefit. For this example, let's pretend we're loading a page www.example.com and that page includes 3 images, img[1-3].jpg.

Browser loading a page, without Keep-Alive

  1. Client establishes a TCP connection to www.example.com on port 80
  2. Client does an HTTP GET request for "/"
  3. Server sends the HTML content of the URI "/" (which includes HTML tags referencing the 3 images)
  4. Server closes the TCP connection
  5. Client establishes a TCP connection to www.example.com on port 80
  6. Client does an HTTP GET request for "/img1.jpg"
  7. Server sends the image
  8. Server closes the TCP connection
  9. Client establishes a TCP connection to www.example.com on port 80
  10. Client does an HTTP GET request for "/img2.jpg"
  11. Server sends the image
  12. Server closes the TCP connection
  13. Client establishes a TCP connection to www.example.com on port 80
  14. Client does an HTTP GET request for "/img3.jpg"
  15. Server sends the image
  16. Server closes the TCP connection

Notice that there are 4 separate TCP sessions established and then closed.
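As a sketch, the flow above can be reproduced with Python's standard library: a throwaway local HTTP/1.0 server (HTTP/1.0 closes the connection after each response by default) and a client that must open a fresh connection for every request. The server, port, and paths here are invented for the demonstration.

```python
import threading
import http.client
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.0"  # HTTP/1.0: server closes after each response

    def do_GET(self):
        body = b"<html>page</html>" if self.path == "/" else b"fake-image-bytes"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Without keep-alive, each request needs its own TCP connection (and handshake):
for path in ["/", "/img1.jpg", "/img2.jpg", "/img3.jpg"]:
    conn = http.client.HTTPConnection("127.0.0.1", port)  # new TCP connection
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()
    print(path, resp.status, resp.will_close)  # will_close: server will hang up
    conn.close()

server.shutdown()
```

The `will_close` flag on each response confirms the server intends to close the connection, forcing the client back to step 1 for the next request.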

Browser loading a page, with Keep-Alive

HTTP Keep-Alive allows for a single TCP connection to serve multiple HTTP requests, one after the other.

  1. Client establishes a TCP connection to www.example.com on port 80
  2. Client does an HTTP GET request for "/", and also asks the server to make this an HTTP Keep-Alive session.
  3. Server sends the HTML content of the URI "/" (which includes HTML tags referencing the 3 images)
  4. Server does not close the TCP connection
  5. Client does an HTTP GET request for "/img1.jpg"
  6. Server sends the image
  7. Client does an HTTP GET request for "/img2.jpg"
  8. Server sends the image
  9. Client does an HTTP GET request for "/img3.jpg"
  10. Server sends the image
  11. Server closes TCP connection if no more HTTP requests are received within its HTTP Keep-Alive timeout period

Notice that with Keep-Alive, only 1 TCP connection is established and eventually closed.
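The keep-alive flow can be sketched the same way: an HTTP/1.1 server (where keep-alive is the default) and a single client connection carrying all four requests. Recording the client's source port on the server side shows that only one TCP connection is ever used. As before, the server and paths are made up for the example.

```python
import threading
import http.client
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

client_ports = []  # source port of the TCP connection behind each request

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1: keep-alive is the default

    def do_GET(self):
        client_ports.append(self.client_address[1])
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection, four requests, one handshake:
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
for path in ["/", "/img1.jpg", "/img2.jpg", "/img3.jpg"]:
    conn.request("GET", path)
    conn.getresponse().read()  # must drain each response before reusing

conn.close()
server.shutdown()

print(set(client_ports))  # a single source port: all 4 requests shared one TCP connection
```

Four requests arrive from the same client source port, i.e. over the same TCP connection, so only one 3-way handshake was paid.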

Why's Keep-Alive better?

To answer this you must understand what it takes to establish a TCP connection between a client and a server. This is called the TCP 3-way handshake.

  1. Client sends a SYN(chronise) packet
  2. Server sends back a SYN(chronise) ACK(nowledgement), SYN-ACK
  3. Client sends an ACK(nowledgement) packet
  4. TCP connection is now considered active by both client and server

Networks have latency, so each step in the 3-way handshake takes a certain amount of time. Let's say there's 30ms between the client and server; the back-and-forth sending of IP packets required to establish the TCP connection means that it takes 3 x 30ms = 90ms to establish a TCP connection.

This may not sound like much, but our original example required 4 separate TCP connections, so that's 360ms. What if the latency between the client and server is 100ms instead of 30ms? Then our 4 connections take 1200ms to establish.

Even worse, a typical web page may require far more than just 3 images in order to load: there may be multiple CSS, JavaScript, image, or other files that the client needs to request. If the page loads 30 other files and the client-server latency is 100ms, how long do we spend establishing TCP connections?

  1. To establish 1 TCP connection takes 3 x latency, i.e. 3 x 100ms = 300ms.
  2. We must do this 31 times, once for the page, and another 30 times for each other file referenced by the page. 31 x 300ms = 9.3 seconds.

9.3 seconds spent establishing TCP connections to load a webpage which references 30 other files. And that doesn't even count the time spent sending HTTP requests and receiving responses.

With HTTP Keep-Alive, we need only establish 1 TCP connection, which takes 300ms.
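The arithmetic above can be captured in a few lines, using this answer's simplified model of three one-way trips (SYN, SYN-ACK, ACK) per connection:

```python
def handshake_cost_ms(latency_ms, connections):
    # Simplified model: 3 one-way trips per TCP handshake
    return 3 * latency_ms * connections

print(handshake_cost_ms(30, 4))    # original example: 4 connections at 30ms
print(handshake_cost_ms(100, 4))   # same page at 100ms latency
print(handshake_cost_ms(100, 31))  # page + 30 referenced files, no keep-alive
print(handshake_cost_ms(100, 1))   # same page with keep-alive: 1 connection
```

This is deliberately a back-of-the-envelope model; real browsers also parallelize connections and modern stacks have other optimizations, but the proportionality (handshake cost scales with connection count times latency) holds.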

If HTTP Keep-Alive is so great, why not use it on the server side as well?

HTTP reverse proxies (like HAProxy) are typically deployed very close to the backend servers they are proxying for. In most cases the latency between the reverse proxy and its backend server/s will be under 1ms, so establishing a TCP connection there is far cheaper than establishing one from a remote client.

That's only half the reason, though. An HTTP server allocates a certain amount of memory for each client connection. With Keep-Alive, it will keep the connection alive, and by extension it'll keep a certain amount of memory in use on the server, until the Keep-Alive timeout is reached, which may be up to 15s, depending on server configuration.

So if we consider the effects of using Keep-Alive on the server side of an HTTP reverse proxy: we increase memory use on the backend, but because the latency between the proxy and the server is so low, we get no real benefit from avoiding TCP's 3-way handshake. It's typically better to just disable Keep-Alive between the proxy and the web server in this scenario.

Disclaimer: yes, this explanation doesn't take into account the fact that browsers typically establish multiple HTTP connections to a server in parallel. However, there is a limit to how many parallel connections a browser will make to the same host, and typically this is still small enough to make keep-alive desirable.


Nginx supports keep-alive on both sides.

  • client side: http://nginx.org/r/keepalive_timeout
  • backend side: http://nginx.org/r/keepalive
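For reference, a minimal nginx sketch combining both directives linked above (the upstream name and addresses are placeholders). Note that backend keep-alive in nginx also requires `proxy_http_version 1.1` and clearing the `Connection` header, per the documentation of the `keepalive` directive:

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;               # idle keep-alive connections to cache per worker
}

server {
    listen 80;
    keepalive_timeout 65s;      # client-side keep-alive timeout

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;           # required for backend keep-alive
        proxy_set_header Connection "";   # don't forward "Connection: close"
    }
}
```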