How to measure req/sec by analyzing apache logs

I want to measure the results of a stress test on a production environment.

How to measure req/sec by analyzing apache logs?

Apache 2.2:

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %D" combined

Can I do it with the %t and %D fields?


In real time you could use mod_status. You could also count the lines in your access.log over a given period and work out the rate from that, with something like this:

#!/bin/bash
LOGFILE=/var/log/apache2/access.log
STATFILE=/var/tmp/apachestats
START=$(wc -l "$LOGFILE" | awk '{print $1}')
PERIOD=10
PRECISION=2
sleep "$PERIOD"
while true
do
    HITSPERSECOND=0
    HITS=$(wc -l "$LOGFILE" | awk '{print $1}')
    NEWHITS=$(( HITS - START ))
    if [[ "$NEWHITS" -gt 0 ]]   # -gt for numeric comparison; > would compare as strings
    then
        START=$HITS
        HITSPERSECOND=$(echo -e "scale=$PRECISION\n$NEWHITS / $PERIOD" | bc -l )
    fi
    echo "$(date) rate was $HITSPERSECOND" >>"$STATFILE"
    sleep "$PERIOD"
done
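For the mod_status option mentioned above, a minimal configuration sketch (Apache 2.2 syntax; the Allow address is a placeholder for your own network):

```apache
# expose the scoreboard page at /server-status, restricted to localhost
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
# ExtendedStatus adds per-request detail to the status page
ExtendedStatus On
```

The status page then reports an average requests/sec figure (averaged over the server's uptime), and appending `?auto` to the URL gives a machine-readable version you can poll from a script.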

This great article helped me a lot...

http://www.inmotionhosting.com/support/website/server-usage/view-level-of-traffic-with-apache-access-log

I created a set of prepared commands that I use to analyze the Apache access log:

requests per hour
cat access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":00"}' | sort -n | uniq -c

requests per hour for a date
grep "23/Jan" access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":00"}' | sort -n | uniq -c

requests per hour for an IP
grep "XX.XX.XX.XX" access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":00"}' | sort -n | uniq -c

requests per minute:
cat access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c

requests per minute for date:
grep "02/Nov/2017" access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c

requests per minute for url:
grep "[url]" access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c

requests per minute for an IP
grep "XX.XX.XX.XX" access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c
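In the same style you can go down to one-second resolution, which gets you closest to an actual req/sec figure. The sample log lines below are made up just so the pipeline has something to chew on; point the command at your real access.log instead:

```shell
# hypothetical sample entries in combined log format, for demonstration only
cat > access.log <<'EOF'
10.0.0.1 - - [02/Nov/2017:16:05:01 +0000] "GET / HTTP/1.1" 200 512
10.0.0.1 - - [02/Nov/2017:16:05:01 +0000] "GET /a HTTP/1.1" 200 512
10.0.0.2 - - [02/Nov/2017:16:05:02 +0000] "GET /b HTTP/1.1" 200 512
EOF

# requests per second, busiest seconds first
cut -d[ -f2 access.log | cut -d] -f1 | awk '{print $1}' | sort | uniq -c | sort -rn | head
```

The first column of the output is the hit count for that second; the top line is your observed peak req/sec.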

Hope it helps anyone who's looking for it...


How about using a tool like AWStats?

To do it manually, you can count the log entries (requests) and divide by the number of seconds between the first and last request. That gives you the average req/sec.
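A rough sketch of that calculation in shell, assuming GNU date and the %t timestamp format shown in the question. The log lines here are made up for illustration; substitute your real access.log:

```shell
# hypothetical sample log so the sketch runs standalone
LOG=access.log
cat > "$LOG" <<'EOF'
10.0.0.1 - - [23/Jan/2018:10:00:00 +0000] "GET / HTTP/1.1" 200 512
10.0.0.1 - - [23/Jan/2018:10:00:04 +0000] "GET /a HTTP/1.1" 200 512
10.0.0.2 - - [23/Jan/2018:10:00:10 +0000] "GET /b HTTP/1.1" 200 512
EOF

# pull the %t timestamp out of the first and last lines
first=$(head -1 "$LOG" | cut -d[ -f2 | cut -d' ' -f1)
last=$(tail -1 "$LOG" | cut -d[ -f2 | cut -d' ' -f1)

# "23/Jan/2018:10:00:00" -> "23 Jan 2018 10:00:00" -> epoch seconds (GNU date)
to_epoch() { date -d "$(echo "$1" | sed 's#/# #g; s#:# #')" +%s; }

reqs=$(wc -l < "$LOG")
secs=$(( $(to_epoch "$last") - $(to_epoch "$first") ))
[ "$secs" -gt 0 ] || secs=1   # avoid division by zero on very short logs
echo "$reqs requests over $secs seconds: $(echo "scale=2; $reqs / $secs" | bc) req/sec"
```

Note this only averages over the whole file, so it will smooth out any bursts; for peak rates you still need a per-second breakdown.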


If you want to run relatively short performance tests, you can keep an eye on requests or bytes per second in real time with apachetop. It's like the top command on Linux and Unix, but gives a view of your Apache server. It works by tailing the access log file.

I don't know if it's accurate enough for your purposes, but it definitely gives you some nice ballpark figures.