Nginx worker processes in D state and high I/O utilization !!

shahzaib shahzaib shahzaib.cb at gmail.com
Tue Jul 16 04:59:30 UTC 2013


Hello,

      We're using nginx-1.2.8 to serve large static files for video
streaming. However, most of the nginx worker processes are stuck in the "D"
(uninterruptible sleep) state and HDD I/O utilization is at 99%.

[root@DNTX010 ~]# ps aux |grep nginx
root      3046  0.0  0.0  20272   688 ?        Ss   20:39   0:00 nginx: master process nginx
nginx     3047  3.2  0.9  94480 74808 ?        D    20:39   0:03 nginx: worker process
nginx     3048  1.4  0.3  52104 31388 ?        D    20:39   0:01 nginx: worker process
nginx     3049  0.2  0.1  33156 12156 ?        S    20:39   0:00 nginx: worker process
nginx     3050  0.1  0.1  29968  8844 ?        D    20:39   0:00 nginx: worker process
nginx     3051  0.2  0.1  30332 10076 ?        D    20:39   0:00 nginx: worker process
nginx     3052  2.7  0.8  91788 69504 ?        D    20:39   0:02 nginx: worker process
nginx     3053  0.3  0.0  25632  5384 ?        D    20:39   0:00 nginx: worker process
nginx     3054  0.2  0.1  36032 15852 ?        D    20:39   0:00 nginx: worker process
nginx     3055  0.4  0.2  37592 17396 ?        D    20:39   0:00 nginx: worker process
nginx     3056  0.2  0.1  32580 11028 ?        S    20:39   0:00 nginx: worker process
nginx     3057  0.3  0.2  39288 19116 ?        D    20:39   0:00 nginx: worker process
nginx     3058  0.3  0.2  41764 19744 ?        D    20:39   0:00 nginx: worker process
nginx     3059  0.3  0.1  31124 10480 ?        D    20:39   0:00 nginx: worker process
nginx     3060  1.0  0.3  52736 31776 ?        D    20:39   0:01 nginx: worker process
nginx     3061  1.1  0.3  51920 29956 ?        D    20:39   0:01 nginx: worker process
nginx     3062  1.6  0.4  58808 35548 ?        D    20:39   0:01 nginx: worker process


[root@DNTX010 ~]# iostat -x -d 3
Linux 2.6.32-358.6.2.el6.x86_64 (DNTX010.local)         07/16/2013      _x86_64_        (8 CPU)

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda              30.28   177.37  260.32    2.96 38169.26  1442.70   150.46     2.29    8.70   3.52  92.78

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               4.33     0.00  544.00    0.00 34376.00     0.00    63.19    43.83   75.25   1.84 100.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               9.00     6.33  547.67    0.67 34637.33    56.00    63.27    48.01   86.20   1.82 100.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.67  568.00    2.33 36024.00    29.33    63.21    54.98  101.10   1.75 100.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     4.33  560.33    1.33 35712.00    45.33    63.66    37.20   65.06   1.78 100.00


nginx.conf:

http {
    include       mime.types;
    default_type  application/octet-stream;
    client_body_buffer_size 128K;
    sendfile_max_chunk 128k;
    client_max_body_size 800m;
    client_header_buffer_size 256k;
    large_client_header_buffers 4 256k;
    output_buffers 1 512k;
    server_tokens off; #Conceals nginx version
    #access_log  logs/access.log  main;
    access_log off;
    error_log  logs/error.log  warn;   # error_log needs a log file path before the level
    sendfile        on;
    #aio on;
    #directio 512k;
    ignore_invalid_headers on;
    client_header_timeout  3m;
    client_body_timeout 3m;
    send_timeout     3m;
    keepalive_timeout  0;
    reset_timedout_connection on;
}

We've also tried enabling the aio directive, but nothing changed. Any help
would be highly appreciated.
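
For reference, the aio variant we tried looked roughly like the sketch below
(the sizes are illustrative, not necessarily our exact values). As far as we
understand, on Linux aio only takes effect together with directio, and then
only for files larger than the directio threshold, while smaller files keep
going through sendfile:

    http {
        sendfile        on;       # files smaller than the directio size still use sendfile
        aio             on;       # asynchronous reads; on Linux this needs directio
        directio        512k;     # larger files are read with O_DIRECT, bypassing the page cache
        output_buffers  1 512k;
        ...
    }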

Thanks
Shahzaib