HTTP response time profiling
Solution 1:
Wow! How are you measuring load times? As far as I know, nginx only reports request response times ($request_time), which is something completely different.
I've not had a good look for a few months, but last time I checked there was very little available for analysing response times. PastMon looks promising. And there are commercial tools like Client Vantage (rather expensive).
I ended up writing my own - it's not that hard to create a simple awk script that reports all hits over a threshold - but remember that you'll need to go back and check how the URL behaves the rest of the time. e.g.
# looking for URLs matching 'example.com/interesting'
# with URL in $6 and $request_time in $8
$6 ~ /example\.com\/interesting/ {
    if ($8 > 0.3) {
        n[$6] += 1;                   # no. of slow hits by URL
        t[$6] += $8;                  # sum of times by URL
        s[$6] += $8 * $8;             # sum of squares of times by URL
        if (m[$6] < $8) m[$6] = $8;   # max time for URL
    }
}
END {
    print "url, n, avg, stddev, max";
    for (x in n) {
        print x ", " n[x] ", " t[x]/n[x] ", " sqrt((s[x] - t[x]*t[x]/n[x]) / (n[x]-1)) ", " m[x];
    }
}
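As a quick sanity check, the matching and arithmetic can be exercised against a couple of fabricated log lines. The sample entries and field positions below are assumptions for illustration only - they just mirror the layout the script expects (URL in $6, time in $8):

```shell
# Two fabricated slow hits and one non-matching URL; the script's core
# match-and-accumulate logic is inlined here for a one-shot run.
result=$(printf '%s\n' \
  'c1 - - [t] GET example.com/interesting HTTP 0.5' \
  'c2 - - [t] GET example.com/interesting HTTP 0.7' \
  'c3 - - [t] GET example.com/other HTTP 0.9' \
| awk '$6 ~ /example\.com\/interesting/ && $8 > 0.3 {
    n[$6] += 1; t[$6] += $8;          # count and sum of slow hits
    if (m[$6] < $8) m[$6] = $8;       # running maximum
  }
  END {
    for (x in n) printf "%s n=%d avg=%.2f max=%.2f\n", x, n[x], t[x]/n[x], m[x];
  }')
echo "$result"
```

Only the matched URL is reported (two hits, average 0.60, max 0.70); the 0.9s hit on the other URL is ignored, which is exactly why you then need to go back and look at the URL's behaviour overall.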
If you are measuring the response times on the proxy, then you're also measuring the time taken to deliver the response across the network - i.e. your application may be behaving consistently while the spikes are introduced by changes on the internet / client side. If you want to see what your application is really doing, then you need to look at your webserver logs.
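If your webserver logs don't already record timings, nginx can be told to include them. A log_format along these lines (the variables are standard nginx; the format name `timed` and the log path are arbitrary choices here) captures both the total time and the upstream's share, which is what lets you separate network effects from application behaviour:

```nginx
log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent '
                 'rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/access.log timed;
```

Comparing rt against urt per request shows how much time was spent outside your application.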