Load balancer to handle server errors silently

I'm looking for an HTTP load balancer that will handle server errors silently. What I want is for every single request to be load balanced so that it succeeds, in the worst case with a small extra delay.

  • If a web node returns an HTTP 500 server error, the load balancer should retry the request on another web node. If the second node also returns a 500 error, it should do the same with the last node (assume I have 3 nodes). If the last node returns a 500 error, that error is shown to the end user.

  • If a server node times out (takes more than 1 or 2 seconds to answer), the request should be routed to another server, so that the client still receives a good answer within a couple of seconds.


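Independent of any particular load balancer, the failover behavior described above can be sketched like this (a minimal illustration; the backend callables and the timeout value are placeholders, not a real LB API):

```python
def balance(request, backends, timeout=2.0):
    """Try each backend in turn; fail over on a 500 or a timeout.

    `backends` is a list of callables taking (request, timeout) and
    returning (status_code, body). If every backend fails, the last
    error is returned to the client as-is.
    """
    status, body = 500, ""
    for backend in backends:
        try:
            status, body = backend(request, timeout)
        except TimeoutError:
            status, body = 504, "upstream timed out"
            continue  # node too slow: try the next one
        if status != 500:
            return status, body  # good answer: stop retrying
    # all backends failed: expose the last error to the user
    return status, body

# Stub backends: the first always fails, the second succeeds.
failing = lambda req, t: (500, "boom")
healthy = lambda req, t: (200, "hello")
print(balance("GET /", [failing, healthy]))  # (200, 'hello')
```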
Solution 1:

You can use nginx with its HttpProxyModule (it's a pretty standard module and is usually compiled into nginx) to implement such a load balancer.

Nginx is lightweight, fast, and has a lot of functionality (you can even embed Lua code in it).

An example config for your use case would be:

upstream backend {
    server 10.0.0.1;
    server 10.0.0.2;
    server 10.0.0.3;
}
server {
   listen      80;
   server_name _;

   location / {
        proxy_pass  http://backend;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header        X-Real-IP       $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The secret sauce is the proxy_next_upstream directive, which determines in which cases a request will be passed on to the next server. The possible values are:

  • error — an error occurred while connecting to the server, sending the request to it, or reading its response;
  • timeout — a timeout occurred while connecting to the server, transferring the request, or reading the response;
  • invalid_header — the server returned an empty or invalid response;
  • http_500 — the server returned a response with code 500;
  • http_502 — the server returned a response with code 502;
  • http_503 — the server returned a response with code 503;
  • http_504 — the server returned a response with code 504;
  • http_404 — the server returned a response with code 404;
  • off — disables passing the request to the next server.
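To meet the roughly 2-second budget from the question, you would also bound how long nginx waits on a single backend before moving to the next one. A sketch of the relevant directives inside the same location block (the exact values are assumptions to tune for your setup):

```nginx
location / {
    proxy_pass http://backend;
    proxy_next_upstream error timeout http_500 http_502 http_503 http_504;

    # Give up on an unresponsive backend quickly so that the retry
    # on the next node still fits in the overall time budget.
    proxy_connect_timeout 1s;
    proxy_read_timeout 2s;
    proxy_send_timeout 2s;
}
```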

Solution 2:

This behavior can also be accomplished with Apache, in two ways.

First way: using failonstatus

The failonstatus directive belongs to the module mod_proxy.

For example, I used the configuration below in a production environment:

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy "balancer://mycluster">
    BalancerMember "https://bod_node3.wavin.com:8443" route=1 connectiontimeout=5 keepalive=On retry=1200
    BalancerMember "https://bod.wavin.com:9443" route=2 connectiontimeout=5 keepalive=On  retry=1200
    ProxySet stickysession=ROUTEID
    ProxySet lbmethod=bytraffic
    ProxySet failonstatus=500,503,502
</Proxy>

The second way, and in my opinion the best option, is to use the module mod_proxy_hcheck: https://httpd.apache.org/docs/2.4/mod/mod_proxy_hcheck.html

Currently, I am using this module to detect backend issues:

LoadModule proxy_hcheck_module modules/mod_proxy_hcheck.so
LoadModule watchdog_module modules/mod_watchdog.so
...
ProxyPass "/" "balancer://mycluster/" stickysession=JSESSIONID|jsessionid
<Proxy "balancer://mycluster">
    BalancerMember "https://bod_node3.wavin.com:8443" route=node3 connectiontimeout=5 keepalive=On retry=1200 hcmethod=GET hcuri=/BOE/CMC/
    BalancerMember "https://bod.wavin.com:9443" route=node4 connectiontimeout=5 keepalive=On retry=1200 hcmethod=GET hcuri=/BOE/CMC/
    ProxySet lbmethod=bytraffic
</Proxy>
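mod_proxy_hcheck also lets you control how often the probe runs and how many probes flip a worker between healthy and unhealthy, via the hcinterval, hcfails, and hcpasses parameters on each BalancerMember. A sketch for one member (the values are illustrative, not a recommendation):

```apache
<Proxy "balancer://mycluster">
    # Probe this member every 5 seconds; one failed probe marks it
    # unhealthy, two successful probes bring it back into rotation.
    BalancerMember "https://bod_node3.wavin.com:8443" route=node3 hcmethod=GET hcuri=/BOE/CMC/ hcinterval=5 hcfails=1 hcpasses=2
    ProxySet lbmethod=bytraffic
</Proxy>
```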

Solution 3:

I'm guessing you want to serve HTTP?

Nginx provides a lot of functionality, including everything you are looking for: http://wiki.nginx.org

Check especially the upstream and proxy settings; there you can implement all of your requirements: http://wiki.nginx.org/HttpUpstreamModule http://wiki.nginx.org/HttpProxyModule