How to achieve 500K requests per second on my webserver? [closed]
I recently gifted myself a new dedicated server and I am trying to squeeze maximum performance out of it, for fun and learning.
I am trying to achieve maximum possible requests per second this server can handle and aiming for 500K requests/sec as mentioned here - http://lowlatencyweb.wordpress.com/2012/03/20/500000-requestssec-modern-http-servers-are-fast/
Server Details
Intel® Xeon® E3-1270 4 Cores (8 HT) x 3.4 GHz
RAM 24 GB DDR3 ECC
Hard-disk space 2,000 GB (2 x 2,000 SATA) RAID Software RAID 1
LAN 100 Mbit/s
OS Centos 6.3 64 bit
Nginx
I am able to reach only 35K requests/sec for a static .txt file. I am running the benchmark on the same machine, since I am aware of NIC limits and network overhead.
ab -n100000 -c200 http://localhost/test.txt
Update - 165K requests/sec
I tried another benchmarking tool called wrk and it gave me 165K requests/sec. So cool!
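The exact wrk invocation isn't shown above; a typical run against the same file might look like this (thread and connection counts are illustrative, not taken from the original post):

```shell
# 8 threads (one per HT core), 200 concurrent connections, 30-second run.
# wrk uses HTTP keep-alive by default, which is a large part of why it
# outperforms a plain ab run.
wrk -t8 -c200 -d30s http://localhost/test.txt
```
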
Update 2 - 250K requests/sec
nginx.conf
#######################################################################
#
# This is the main Nginx configuration file.
#
# More information about the configuration options is available on
# * the English wiki - http://wiki.nginx.org/Main
# * the Russian documentation - http://sysoev.ru/nginx/
#
#######################################################################
#----------------------------------------------------------------------
# Main Module - directives that cover basic functionality
#
# http://wiki.nginx.org/NginxHttpMainModule
#
#----------------------------------------------------------------------
user nginx;
worker_processes 8;
worker_rlimit_nofile 262144;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
#----------------------------------------------------------------------
# Events Module
#
# http://wiki.nginx.org/NginxHttpEventsModule
#
#----------------------------------------------------------------------
events {
worker_connections 16384;
multi_accept on;
use epoll;
}
#----------------------------------------------------------------------
# HTTP Core Module
#
# http://wiki.nginx.org/NginxHttpCoreModule
#
#----------------------------------------------------------------------
http {
include /etc/nginx/mime.types;
index index.php index.html index.htm;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
client_max_body_size 24m;
client_body_buffer_size 128k;
#keepalive_timeout 0;
keepalive_timeout 65;
open_file_cache max=1000;
open_file_cache_min_uses 10;
open_file_cache_errors on;
gzip on;
gzip_static on;
gzip_comp_level 3;
gzip_disable "MSIE [1-6]\.";
gzip_http_version 1.1;
gzip_vary on;
gzip_proxied any;
gzip_types text/plain text/css text/xml text/javascript text/x-component text/cache-manifest application/json application/javascript application/x-javascript application/xml application/rss+xml application/xml+rss application/xhtml+xml application/atom+xml application/wlwmanifest+xml application/x-font-ttf image/svg+xml image/x-icon font/opentype application/vnd.ms-fontobject;
gzip_min_length 1000;
fastcgi_cache_path /tmp levels=1:2
keys_zone=NAME:10m
inactive=5m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
listen 80;
server_name _;
root /var/www/html;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
try_files $uri $uri/ /index.php?$args;
}
error_page 404 /404.html;
location = /404.html {
root /var/www/error;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/www/error;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
# checks to see if the visitor is logged in, a commenter,
# or some other user who should bypass cache
set $nocache "";
if ($http_cookie ~ (comment_author_.*|wordpress_logged_in.*|wp-postpass_.*)) {
set $nocache "Y";
}
# bypass cache if logged in.
# Be sure that this is above all other fastcgi_cache directives
fastcgi_no_cache $nocache;
fastcgi_cache_bypass $nocache;
fastcgi_cache NAME;
fastcgi_cache_valid 200 302 10m;
fastcgi_cache_valid 301 1h;
fastcgi_cache_valid any 1m;
fastcgi_cache_min_uses 10;
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_buffers 256 16k;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
# Deny access to any files with a .php extension in the uploads directory
location ~* ^/wp-content/uploads/.*\.php$ {
deny all;
access_log off;
log_not_found off;
}
location ~* \.(jpg|jpeg|gif|png|flv|mp3|mpg|mpeg|js|css|ico)$ {
expires max;
log_not_found off;
}
}
}
First of all, you should use a different benchmarking tool. At those rates you're actually benchmarking ab, not nginx.
Arpit, if you assume that the absolutely smallest likely web response, even for a static text file, is one Ethernet packet (~1,500 bytes), then 500,000 of them works out to around 750,000,000 bytes per second, or roughly 6 gigabits per second. So unless your server has easily offloaded 10 GbE NICs (and it doesn't; the one you've got is a hundred times slower), and you have set up the drivers and kernel to almost completely flood one of those links, and the load balancers, firewalls, routers and onward connections can all keep up at that rate, you'll never hit that kind of performance over the network, even with single-packet responses, which are unlikely. So ultimately 35K sounds not far off your limit.
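The back-of-envelope math above can be checked with plain shell arithmetic:

```shell
# Bandwidth needed for 500K responses/sec at one Ethernet frame
# (~1,500 bytes) per response.
rps=500000
bytes_per_resp=1500
bits_per_sec=$(( rps * bytes_per_resp * 8 ))
echo "Required: $(( bits_per_sec / 1000000 )) Mbit/s"   # 6000 Mbit/s = 6 Gbit/s

# Ceiling imposed by a 100 Mbit/s NIC under the same assumption:
nic_bits=100000000
echo "100Mbit NIC max: $(( nic_bits / (bytes_per_resp * 8) )) responses/sec"  # ~8333
```

That ~8,333 responses/sec ceiling is why benchmarking over the wire is pointless here and the test has to run on localhost.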
Let's identify the bottleneck. Since you're on the same machine, we can assume it's either CPU or disk activity. For the one text file, it shouldn't be disk activity, but at 35k connections, you may be generating 35MB of logging every second as well.
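To tell CPU from disk, watch the machine while the benchmark runs. These are standard tools (iostat requires the sysstat package on CentOS), not anything specific to this setup:

```shell
# Per-device utilisation and MB/s written -- high %util on the log
# device points at disk-bound logging.
iostat -xm 1

# Or the coarser view: CPU split, run queue, and block I/O per second.
vmstat 1
```
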
The examples you're citing don't run access logging, only error logging. Your config, however, has much more going on, the logging in particular:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
Start by disabling that logging, then figure out where you're getting hung up next. Also consider that running the test client on the same machine has a notable impact on the server daemon itself. Hyperthreading can also turn out to be harmful sometimes, so explore whether your load works better with it on or off.
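A sketch of the logging change against your posted config (goes in the http block, replacing the access_log line there):

```nginx
# Disable access logging entirely while benchmarking; re-enable afterwards.
access_log off;

# Alternative: keep the log but buffer writes to cut per-request disk syscalls.
# access_log /var/log/nginx/access.log main buffer=64k flush=5s;
```

The buffered variant is the compromise if you still want the data: nginx writes the log in 64k chunks instead of one write() per request.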
If you are just after the numbers (e.g. there's no real use case behind this test), make ab use the HTTP keep-alive feature, i.e. execute a number of requests over an already-open TCP connection.
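With ab that is a single flag; request and concurrency counts below are copied from your original run:

```shell
# -k enables HTTP KeepAlive, so connections are reused across requests
# instead of paying the TCP setup/teardown cost every time.
ab -k -n100000 -c200 http://localhost/test.txt
```

This alone typically multiplies ab's reported requests/sec, since connection churn, not nginx, dominates the non-keepalive run.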