Slow download of big static files from nginx

I'm using Debian 7 x64 on VMware ESXi virtualization.

Max download speed per client is about 1 MB/s, and nginx doesn't use more than 50 Mbit/s in total. What could be causing such slow transfers?

Server:

Settings for eth1:
    Supported ports: [ TP ]
    Supported link modes:   1000baseT/Full
                            10000baseT/Full
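
It's also worth confirming the negotiated speed, not just the supported modes; a quick check, assuming the interface is eth1:

ethtool eth1 | grep -E 'Speed|Duplex'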

root@www:~# iostat
Linux 3.2.0-4-amd64 (www)       09.02.2015      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
       1,75    0,00    0,76    0,64    0,00   96,84

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             173,93      1736,11       219,06     354600      44744


root@www:~# free -m
             total       used       free     shared    buffers     cached
Mem:         12048       1047      11000          0        106        442
-/+ buffers/cache:        498      11549
Swap:          713          0        713
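
To rule out slow storage, a rough sequential-read benchmark can be run against one of the served files (the path is a placeholder; iflag=direct bypasses the page cache so cached reads don't skew the result):

dd if=/var/www/bigfile.iso of=/dev/null bs=1M iflag=direct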

nginx.conf

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
        worker_connections 3072;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 5;
        types_hash_max_size 2048;
        server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;
        gzip_disable "msie6";

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # nginx-naxsi config
        ##
        # Uncomment it if you installed nginx-naxsi
        ##

        #include /etc/nginx/naxsi_core.rules;

        ## Start: Size Limits & Buffer Overflows ##

        client_body_buffer_size 1k;
        client_header_buffer_size 1k;
        client_max_body_size 4M;
        large_client_header_buffers 2 1k;

        ## END: Size Limits & Buffer Overflows ##

        ## Start: Timeouts ##

        client_body_timeout   10;
        client_header_timeout 10;
        send_timeout          10;

        ## End: Timeouts ##

        ##
        # nginx-passenger config
        ##
        # Uncomment it if you installed nginx-passenger
        ##

        #passenger_root /usr;
        #passenger_ruby /usr/bin/ruby;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

/etc/sysctl.conf

# Increase system IP port limits to allow for more connections

net.ipv4.ip_local_port_range = 2000 65000


net.ipv4.tcp_window_scaling = 1


# number of packets to keep in backlog before the kernel starts dropping them
net.ipv4.tcp_max_syn_backlog = 3240000


# increase socket listen backlog
net.core.somaxconn = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000


# Increase TCP buffer sizes
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
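
These settings can be applied without a reboot and then spot-checked:

# reload /etc/sysctl.conf and verify one of the values took effect
sysctl -p
sysctl net.core.rmem_max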

UPDATE:

The debug log is completely empty; only when I manually cancel a download do I get the following message:

2015/02/09 20:05:32 [info] 4452#0: *2786 client prematurely closed connection while sending response to client, client: 83.11.xxx.xxx, server: xxx.com, request: "GET filename HTTP/1.1", host: "xxx.com"

curl output:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1309M  100 1309M    0     0   374M      0  0:00:03  0:00:03 --:--:--  382M
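
(Output like the above comes from a transfer test along these lines; the URL mirrors the one in the log entry and is a placeholder:)

curl -o /dev/null http://xxx.com/filename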

Solution 1:

An answer for anyone who ends up here via Google:

sendfile() is blocking and doesn't let nginx set read-ahead, so it's very inefficient when a file is only read once.

sendfile() also relies on the filesystem cache and was never designed for such large files.

What you want is to disable sendfile for large files and use directio instead (preferably with threads, so reads are non-blocking). With the settings below, any file under 16 MB will still be read using sendfile():

aio threads;               # offload blocking reads to a thread pool
directio 16M;              # files >= 16 MB are read with O_DIRECT, bypassing the page cache
output_buffers 2 1M;       # output buffers used when sendfile is bypassed

sendfile on;               # files < 16 MB still go through sendfile()
sendfile_max_chunk 512k;   # cap the data sent in a single sendfile() call

By using directio you read directly from the disk, skipping the page cache and many steps along the way.
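
For context, a minimal sketch of how these directives might sit together in a virtual host (server name and paths are placeholders):

server {
    listen 80;
    server_name xxx.com;

    location /downloads/ {
        root /var/www;             # assumed document root

        aio threads;               # non-blocking reads via thread pool
        directio 16M;              # O_DIRECT for files of 16 MB and up
        output_buffers 2 1M;

        sendfile on;               # small files keep using sendfile()
        sendfile_max_chunk 512k;
    }
}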

P.S. Note that to use aio threads, nginx must be compiled with thread support: https://www.nginx.com/blog/thread-pools-boost-performance-9x/
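
On such a build, a named thread pool can also be defined in the main context and referenced explicitly; a minimal sketch, assuming a pool called "downloads" (name and size are arbitrary):

# main context: define a pool of 16 worker threads (hypothetical sizing)
thread_pool downloads threads=16;

# http/server/location context: route aio reads to that pool
aio threads=downloads;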

Solution 2:

You probably need to change the sendfile_max_chunk value, as the documentation states:

Syntax:   sendfile_max_chunk size;
Default:  sendfile_max_chunk 0;
Context:  http, server, location

When set to a non-zero value, limits the amount of data that can be transferred in a single sendfile() call. Without the limit, one fast connection may seize the worker process entirely.

You may also want to adjust the buffer sizes if most of your traffic is "big" static files.
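
A minimal sketch of both tweaks in a location serving mostly large files (the sizes are assumptions to tune, not recommendations):

location /downloads/ {
    sendfile on;
    sendfile_max_chunk 512k;   # stop one fast connection from seizing a worker
    output_buffers 2 1m;       # fewer, larger buffers for big sequential responses
}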