Does nginx as reverse proxy use buffers for every client request?

We have a few requests whose HTTP headers exceed the default accepted size in our Tomcat application, i.e., maxHttpHeaderSize = 8192 (https://tomcat.apache.org/tomcat-8.0-doc/config/http.html).

nginx also has a default limit for the buffers that can be allocated for large client request headers, i.e., large_client_header_buffers size = 8K (http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers).

However, without increasing large_client_header_buffers on the nginx side, such requests still go through if only the maxHttpHeaderSize parameter in Tomcat is increased.
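
For reference, the Tomcat-side change is just the connector attribute from the documentation linked above; the connector below is a sketch (the port, protocol, and the 16384 value are placeholders, not our actual config):

<!-- server.xml: raise the header limit on the HTTP connector.
     maxHttpHeaderSize defaults to 8192 bytes; 16384 is only an example. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxHttpHeaderSize="16384" />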

As per the nginx documentation, "Buffers are allocated only on demand." I do not understand this. Are buffers not allocated for every request?


nginx request header buffers consist of two parts:

client_header_buffer_size
large_client_header_buffers

The documentation for client_header_buffer_size explains the nginx strategy:

Sets buffer size for reading client request header. For most requests, a buffer of 1K bytes is enough. However, if a request includes long cookies, or comes from a WAP client, it may not fit into 1K. If a request line or a request header field does not fit into this buffer then larger buffers, configured by the large_client_header_buffers directive, are allocated.

For large_client_header_buffers, the documentation states the following:

Sets the maximum number and size of buffers used for reading large client request header. A request line cannot exceed the size of one buffer, or the 414 (Request-URI Too Large) error is returned to the client. A request header field cannot exceed the size of one buffer as well, or the 400 (Bad Request) error is returned to the client. Buffers are allocated only on demand. By default, the buffer size is equal to 8K bytes. If after the end of request processing a connection is transitioned into the keep-alive state, these buffers are released.

So, by default, nginx allocates 1 kB of memory for request headers. If the request headers do not fit into this space, nginx allocates up to N additional buffers of size X to store the rest. N and X are specified in the large_client_header_buffers directive.
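
As a sketch, here is how N and X map onto the directives; these are the default values, annotated, not tuning advice (both directives belong in the http or server context):

client_header_buffer_size 1k;        # base buffer: header parsing starts here
large_client_header_buffers 4 8k;    # overflow: up to N = 4 buffers of X = 8 kB each,
                                     # allocated only on demand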

For example, if there are 4 kB of request headers, nginx will use the 1 kB base allocation and then allocate one 8 kB block for the rest.

With the default values, the total space for headers is 1 kB + 4 * 8 kB = 33 kB. However, because each request header line has to fit completely into a single buffer, the buffers cannot always be fully used, so the actual capacity for headers is somewhat less.
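
To see the per-line limit in practice, here is a minimal sketch (assuming an nginx with default settings listening on localhost:80; the header name X-Test is made up):

import http.client

# One ~9 kB header line exceeds a single 8 kB large buffer, so nginx
# should answer 400 even though the total header size is well below 33 kB.
conn = http.client.HTTPConnection("localhost", 80)
conn.request("GET", "/", headers={"X-Test": "a" * 9000})
print(conn.getresponse().status)  # expected: 400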

I don't know how the Tomcat header limits work, but I assume its strategy is different from the nginx strategy, so the values are not directly comparable.


The nginx default values are:

client_header_buffer_size 1k;
large_client_header_buffers 4 8k;

which means that, per request, nginx starts with the 1 kB base buffer and can allocate up to 4 additional buffers of 8 kB each. It seems that your bottleneck is Tomcat, whose maxHttpHeaderSize defaults to 8 kB (8192 bytes): requests with headers larger than 8 kB but still within nginx's limits will pass through your nginx, but they will get stuck at Tomcat if its parameter is left unchanged.
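
If larger headers have to be accepted end to end, both sides need to be raised together. A minimal sketch, assuming headers up to ~16 kB (the value is illustrative, not a recommendation):

# nginx.conf (http or server context)
large_client_header_buffers 4 16k;    # each header line may now be up to 16 kB

and set maxHttpHeaderSize="16384" on the Tomcat connector to match, as in the server.xml sketch in the question.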