Bad Gateway 502 error with Apache mod_proxy and Tomcat

Just to add some specific settings, I had a similar setup (with Apache 2.0.63 reverse proxying onto Tomcat 5.0.27).

For certain URLs the Tomcat server could take perhaps 20 minutes to return a page.

I ended up modifying the following settings in the Apache configuration file to prevent its proxy operation from timing out (with a large over-spill factor in case Tomcat took even longer to return a page):

Timeout 5400
ProxyTimeout 5400
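
For context, here's a sketch of how those directives might sit in a full reverse-proxy configuration. The host name, port, and path below are hypothetical - adjust to your own setup:

# Allow very long-running Tomcat requests to complete
# (90 minutes, with head-room over the observed 20 minutes)
Timeout 5400
ProxyTimeout 5400

<VirtualHost *:80>
    ServerName example.com
    # "tomcat-host" and "/app" are placeholders for your backend and context path
    ProxyPass /app http://tomcat-host:8080/app
    ProxyPassReverse /app http://tomcat-host:8080/app
</VirtualHost>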

Some background

ProxyTimeout alone wasn't enough. Looking at the documentation for Timeout, my guess (and I'm not sure about this) is that while Apache is waiting for a response from Tomcat, no traffic is flowing between Apache and the browser (or whatever HTTP client is in use) - and so Apache closes down the connection to the browser.

I found that if I left the Timeout setting at its default (300 seconds) and the proxied request to Tomcat took longer than 300 seconds to get a response, the browser would display a "502 Proxy Error" page. I believe this message is generated by Apache, in the knowledge that it's acting as a reverse proxy, before it closes down the connection to the browser (this is my current understanding - it may be flawed).

The proxy error page says:

Proxy Error

The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET.

Reason: Error reading from remote server

...which suggests that the ProxyTimeout setting is too short, while investigation shows that Apache's Timeout setting (the timeout between Apache and the client) also influences this.


So, answering my own question here. We ultimately determined that we were seeing 502 and 503 errors in the load balancer due to Tomcat threads timing out. In the short term we increased the timeout. In the longer term, we fixed the app problems that were causing the timeouts in the first place. Why Tomcat timeouts were being perceived as 502 and 503 errors at the load balancer is still a bit of a mystery.
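
For anyone hitting something similar, here's a sketch of the kind of change involved in "increasing the timeout" on the Tomcat side, assuming the timeout in question is the HTTP connector's connectionTimeout (the values below are illustrative; connectionTimeout is in milliseconds):

<!-- server.xml: raise the connector timeout; values here are illustrative -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="60000"
           maxThreads="200" />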


You can use proxy-initial-not-pooled

See http://httpd.apache.org/docs/2.2/mod/mod_proxy_http.html :

If this variable is set no pooled connection will be reused if the client connection is an initial connection. This avoids the "proxy: error reading status line from remote server" error message caused by the race condition that the backend server closed the pooled connection after the connection check by the proxy and before data sent by the proxy reached the backend. It has to be kept in mind that setting this variable downgrades performance, especially with HTTP/1.0 clients.

We had this problem, too. We fixed it by adding

SetEnv proxy-nokeepalive 1
SetEnv proxy-initial-not-pooled 1

and turning keepAlive off on all servers.
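
In case it helps, a sketch of what that combination might look like, assuming a stock Apache-in-front-of-Tomcat setup (on the Tomcat side, setting maxKeepAliveRequests to 1 is what effectively disables keep-alive on the connector):

# httpd.conf: disable keep-alive between Apache and its own clients,
# and avoid reusing pooled backend connections for initial requests
KeepAlive Off
SetEnv proxy-nokeepalive 1
SetEnv proxy-initial-not-pooled 1

<!-- server.xml: maxKeepAliveRequests="1" disables keep-alive on the Tomcat connector -->
<Connector port="8080" protocol="HTTP/1.1" maxKeepAliveRequests="1" />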

mod_proxy_http is fine in most scenarios, but we are running it under heavy load and we still got some timeout problems we do not understand.

But see if the settings above fit your needs.


Sample from apache conf:

# Default value is 300 seconds (5 minutes)
Timeout 600
ProxyRequests off
ProxyPass /app balancer://MyApp stickysession=JSESSIONID lbmethod=bytraffic nofailover=On
ProxyPassReverse /app balancer://MyApp
ProxyTimeout 600
<Proxy balancer://MyApp>
    BalancerMember http://node1:8080/ route=node1 retry=1 max=25 timeout=600
    .........
</Proxy>

I'm guessing you're using mod_proxy_http (or proxy balancer).

Look in your Tomcat logs (localhost.log or catalina.log). I suspect you're seeing an exception in your web stack bubbling up and closing the socket that the Tomcat worker is connected to.
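
A quick way to check, assuming the log names mentioned above (exact names and locations vary by Tomcat version and logging setup):

# run from the Tomcat logs directory; "exception" is just an example pattern
grep -i "exception" localhost.log catalina.log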