How to optimize Time To First Byte (TTFB)?
I have an nginx + Apache web server hosting several sites, but the Time To First Byte (TTFB) is very long: 0.5-2 seconds for every request, for both static and dynamic content.
How can I optimize the TTFB?
Solution 1:
In general, time to first byte is dependent on:
- DNS Lookup
  - Definition: find the IP address of the domain.
  - Improve: use more numerous/distributed/responsive DNS servers.
- Connection time
  - Definition: open a socket to the server and negotiate the connection.
  - Improve: the typical value should be around the 'ping' time - a round trip is usually necessary.
- Waiting
  - Definition: the initial processing required before the first byte can be sent.
  - Improve: this is where your improvement should be - it will be most significant for dynamic content.
An additional consideration comes from:
- Processing (after first byte)
  - Definition: the sum of waiting + the complete transfer of the content.
  - Improve: if the transfer time is significantly longer than what would be expected to download the quantity of data received, further processing is occurring and may be optimized (e.g. the page is flushing content as it becomes available).
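If curl is available on a client machine, its --write-out timers give a rough per-request view of these phases (the URL below is a placeholder):

```
# Rough per-phase timing of a single request; replace the URL with one of your sites
curl -o /dev/null -s -w 'dns lookup: %{time_namelookup}s\nconnect: %{time_connect}s\nfirst byte: %{time_starttransfer}s\ntotal: %{time_total}s\n' http://example.com/
```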
Given that I don't know how you performed your test (or much about your setup, or the content you are serving), the following are rather general suggestions:
Firstly, try using Firebug on a remote (client-side) machine and something such as ab (ApacheBench) on your server. This will help to break the TTFB number down into its component values, so that you can focus on fixing them one at a time - or narrow down the problem. Running the test from different locations also helps to point out problems with connection time and DNS lookups that may not show up when the test is run directly from your server. Additionally, run a ping from the remote machine to your server to determine how much of the time is simply attributable to the round-trip time.
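For example (the hostname and request counts below are placeholders), ping from the client and run ab on the server itself:

```
# From the remote/client machine: measure the round-trip time to the server
ping -c 10 example.com

# On the server: a baseline benchmark - 100 requests, 10 concurrent
ab -n 100 -c 10 http://example.com/
```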
Secondly, keep as little traffic as possible flowing through nginx as a reverse proxy: a request that has to pass through nginx before reaching Apache can only be slower than one that reaches Apache directly. Try to serve static files directly from nginx (i.e. bypass Apache entirely), and cache some of the content that is received from Apache (i.e. proxy_cache). Nginx should be able to serve static files extremely quickly, especially if the test is running from the local machine. If this isn't the case, look into your configuration and the resources being used by your server.
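As a rough sketch (the file path, the port-8080 Apache backend, and the cache zone name below are assumptions - adapt them to your own layout), the relevant configuration might look something like this:

```
# Write a vhost that serves static files from nginx and caches Apache responses,
# then test and reload the configuration. All names and paths here are examples.
cat > /etc/nginx/conf.d/example.conf <<'EOF'
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=apachecache:10m max_size=256m inactive=60m;

server {
    listen 80;
    server_name example.com;

    # Static content served directly by nginx, bypassing Apache entirely
    location ~* \.(css|js|png|jpe?g|gif|ico)$ {
        root /var/www/example.com;
        expires 30d;
    }

    # Everything else is proxied to Apache, with short-lived caching
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_cache apachecache;
        proxy_cache_valid 200 5m;
    }
}
EOF
nginx -t && nginx -s reload
```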
Thirdly, try to determine where your performance is suffering - is it Apache or nginx? Compare the ab results for requests through nginx with requests made directly to Apache (just change the port and specify the Host header) to determine whether the reverse proxying is a significant bottleneck.
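Assuming Apache listens on port 8080 behind nginx on port 80 (adjust the ports and hostname to your setup), the comparison could look like this:

```
# Through nginx (port 80)
ab -n 100 -c 10 http://example.com/

# Directly to Apache (port 8080), keeping the same virtual host
ab -n 100 -c 10 -H "Host: example.com" http://127.0.0.1:8080/
```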
You have mentioned that the problem applies equally to static and dynamic content - but the next step would be to profile your scripts, and look at slow query logs to determine if there are any significant bottlenecks.
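If the dynamic sites use MySQL/MariaDB, for instance, the slow query log can be switched on at runtime without a restart (the one-second threshold is just an example):

```
# Log queries that take longer than 1 second (MySQL/MariaDB; values are examples)
mysql -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"
# Show where the slow query log is being written
mysql -e "SHOW VARIABLES LIKE 'slow_query_log_file';"
```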
Solution 2:
Some ideas:
- Upgrade your RAM (for buffers/cache).
- Use proxy_cache in nginx.
- Use an SSD or a RAM disk (see the tmpfs sketch after this list).
- Use a Content Delivery Network.
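For the RAM disk idea, one option is a tmpfs mount for the nginx proxy cache directory (the path and size below are assumptions):

```
# Mount a 256 MB tmpfs over the nginx cache directory (path and size are examples)
mount -t tmpfs -o size=256m tmpfs /var/cache/nginx
# Make the mount persistent across reboots
echo 'tmpfs /var/cache/nginx tmpfs size=256m 0 0' >> /etc/fstab
```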
If you want a more detailed answer, you will need to tell us more about your setup.