Retry-After HTTP response header - does it affect anything?

If I want to politely refuse service on a web site due to temporary overload, the HTTP 503 Service Unavailable response seems appropriate. The spec mentions sending a Retry-After header with the 503.

Is there any point? Does Retry-After affect anything? Do browsers pay any attention to it?


The current state of the Retry-After header

The implementation of the Retry-After header in clients and servers has changed a bit in the years since this question was originally posted, so I thought I'd provide an updated answer.

First off, RFC 2616, section 14.37 Retry-After states:

The Retry-After response-header field can be used with a 503 (Service Unavailable) response to indicate how long the service is expected to be unavailable to the requesting client.

...

Two examples of its use are

  Retry-After: Fri, 31 Dec 1999 23:59:59 GMT
  Retry-After: 120

In the latter example, the delay is 2 minutes.
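
A conforming client therefore has to handle both an HTTP-date and a delta-seconds value. Here is a minimal Ruby sketch of that parsing (retry_after_seconds is a hypothetical helper of my own, not part of any library):

require 'time'

# Convert a Retry-After value, which may be delta-seconds or an
# HTTP-date, into a number of seconds to wait.
def retry_after_seconds(value)
  if value =~ /\A\d+\z/
    value.to_i                                # "120" => 120
  else
    [Time.httpdate(value) - Time.now, 0].max  # past dates => no wait
  end
end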

Support in client and server software

The following are code repository commit messages, announcements, and documentation regarding the Retry-After header in various software.

Chrome/Chromium

A code commit on Nov 22, 2012 with the log message: Added detection timeouts and usage of Retry-After HTTP header.

Mozilla/Firefox

A code commit on Mar 27, 2012 with the log message: Implement Handling of 5xxs, X-Weave-Backoff, Retry-After. Additionally, there are three other mentions of the Retry-After header in their Mercurial repository.

A bug was initially submitted on Jan 6, 2004 with the title Retry-After sent with HTTP 503 response is ignored.

Googlebot

A Google Webmaster Central Blog article about dealing with site downtime mentions that the Retry-After header may be used to determine when to recrawl the URL.

Bingbot/Msnbot

I could not find any official document on Retry-After support. However, there were a few mentions in random forums about using this header in a 503 response to throttle Microsoft's bots.

Nginx

The documentation for the add_header directive states:

Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, or 307.

Therefore, to add the Retry-After header to a 503 response, depending on your nginx version:

  • 1.7.4 and earlier, use a third-party module, such as Headers More.

  • 1.7.5 and later, append the always parameter to the add_header directive, as sketched after this list.
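
For example, a minimal sketch for 1.7.5 and later (the /503.html page and the 120-second value are my own assumptions, not anything the nginx docs prescribe):

error_page 503 /503.html;

location = /503.html {
    # "always" makes add_header apply to non-2xx responses such as this 503.
    add_header Retry-After 120 always;
    internal;
}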

Apache

Unlike Nginx, Apache's Header directive documentation gives no indication that it cannot set a Retry-After header on a 503 response. Regarding non-2xx responses, however, the docs state:

adding a header to a locally generated non-success (non-2xx) response, such as a redirect, in which case only the table corresponding to always is used in the ultimate response.

Here is an SO answer that sets a Retry-After header with the always condition for 503 responses, as the docs advise.

An AskApache article provides other configuration examples of how to instruct search engines to come back using a 503 response with a Retry-After header.
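
Putting that together, a minimal maintenance-window sketch (assuming mod_headers and mod_rewrite are enabled; the 120-second value is arbitrary):

<IfModule mod_headers.c>
    # "always" is required so the header survives on a locally
    # generated non-2xx response such as this 503.
    Header always set Retry-After "120"
</IfModule>

# Answer every request with a 503 while the site is down.
RewriteEngine On
RewriteRule ^ - [R=503,L]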

Client testing

I wrote a Ruby server that simply returns a 503 response with a Retry-After header set to 10 seconds and a body containing a random number.

require 'sinatra'

get '/' do
  # Advise clients to retry in 10 seconds.
  headers 'Content-Type' => 'text/plain', 'Retry-After' => '10'
  status 503
  # A random body makes it obvious whether a retry actually happened.
  body rand(1000).to_s
end
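
Assuming the file is saved as server.rb, the setup can be spot-checked from a second terminal while the server runs (Sinatra listens on port 4567 by default):

ruby server.rb                    # terminal 1
curl -i http://localhost:4567/    # terminal 2

The -i flag makes curl print the status line and response headers, so the 503 status and the Retry-After: 10 header are visible directly.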

I accessed it on:

  • OpenBSD 5.8 using Chromium 44, Firefox-ESR 38, and Seamonkey 2.33,
  • Mac OS X 10.7.5 using Chrome 47 and Safari 6.1,
  • Windows 10 using Chrome 48, Firefox 41, and Edge 25.

I was expecting these browsers to automatically refresh the URL after 10 seconds and display a new random number. However, none of the browsers retried, even after several minutes. I tried shorter and longer Retry-After periods as well, with the same results. The server access log confirmed that no retry was ever made from any of these browsers.
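
For contrast, here is roughly the behavior I was expecting, sketched as a hypothetical Ruby client run against the test server above (it only handles the delta-seconds form, which is all that server sends):

require 'net/http'

uri = URI('http://localhost:4567/')

3.times do
  res = Net::HTTP.get_response(uri)
  # Anything other than a 503 carrying Retry-After counts as success.
  unless res.code == '503' && res['Retry-After']
    puts res.body
    break
  end
  # Honor the advisory delay before retrying.
  sleep res['Retry-After'].to_i
end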

Also, a "soft" refresh before the Retry-After period refetched the URL immediately. So the Retry-After header cannot be used to throttle "refresh-happy" users. I mention this because I saw in some forum that this header could be used to throttle impatient users from hammering your site.

As a side note, it seems logical that a "soft" refresh should do nothing before the timeout expires, while a "hard" (cache-bypass) refresh would ignore any timeout and refetch the URL immediately.

Conclusion

Support for the Retry-After header still seems a bit sketchy on both clients and servers. Nonetheless, it is a good idea to set a retry timeout for 503 responses if it is not difficult to configure.

Even if Googlebot is the only client supporting the header and actually retrying after the timeout period, it may keep your pages from being de-indexed -- as opposed to a 404, 500, 502, or 504 response.


As far as I'm aware, no browser pays attention to a Retry-After header. Proxies and caches might, but I wouldn't count on it.

Apparently, some browsers now include some level of support for Retry-After (though support is still iffy at best). I'm not entirely convinced of the benefit of honoring it in a browser; generally, it's considered a bad idea to cache failures. But if you know when you'll be accepting requests again, telling the client can't hurt. (If you come back up sooner than expected, though, any program that actually honors the header will assume -- and report -- that the site's still down.)

The most obvious benefit is that Googlebot (and possibly other spiders) will pay attention to the header if it's there, which can keep it from de-indexing the page for a while.

With all that said...if it's trivial to add, and you can come up with a reasonably accurate estimate of when the service will be available, go for it. I wouldn't recommend going out of your way to do it, though. It's only advisory anyway, and putting the wrong time in there could cause more problems than not including the header at all.


I see this as a chicken-and-egg problem: no browsers currently implement Retry-After because no websites bother to send it. In my opinion, go ahead and send it as a service to your users. If their choice of web browser doesn't implement it, then that is their browser failing to give them useful information. You did!

When looking to implement standards that have multiple, competing implementations, I always try to adhere to the standard itself rather than to the individual implementations (unless I am specifically trying to emulate one, such as disguising cURL's headers to look like a web browser's). Otherwise we end up with de facto standards, which, if you remember the days of IE dominance, you don't want!