All workers in 'D' state using sendfile

Host DL hostdl at
Sun Jun 9 14:54:49 UTC 2013

Hello Maxim,

Thanks for your response, and sorry that I am new to the mailing list and my
first message may not have been clear to you.

I've already read all the posts in this conversation, and all of the tuning
options have been tested.

I'm using 8x 2TB enterprise SATA drives in RAID10 plus 64G RAM on my box,
CentOS 5.9 x86_64 / kernel 2.6.18-348.6.1.el5


worker_priority -10;
worker_processes 64;
worker_rlimit_nofile 20000;

events {
    worker_connections  2048;
    use epoll;
    worker_aio_requests 128;
}

http {
    sendfile    off;
    tcp_nopush  on;    # note: tcp_nopush only takes effect when sendfile is on
    tcp_nodelay on;
    aio on;
    directio    2m;
    #directio_alignment 4k;
    output_buffers 1 1m;

    keepalive_timeout  15;
    # (rest of the http block trimmed)
}


During peak time, connections reach 14-15K in total, with more than 1 Gbit/s
of outgoing throughput.
Please note that the server was stable with about 12K connections at peak and
about 1-1.1 Gbit/s of throughput, but after adding another virtual host with
about 2-3K connections, the server seems unable to handle requests properly
at peak time.
I expected throughput to exceed the previous ~1.1 Gbit/s rate, but it
doesn't; it doesn't even reach 1 Gbit/s, even though the connection count
keeps growing.

During every peak the load average reaches the number of nginx workers (64
in my current config) and stays at that level until the end of the peak. All
the worker processes are in D state, and the interesting thing is that
memory is not fully used; it may peak at about 30-40G, with about 20-30%
I/O wait.
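For anyone else chasing the same symptom, a quick way to confirm that the
workers are blocked on disk rather than on anything nginx-specific (standard
procps/sysstat tools; nothing here is particular to my setup):

```shell
# List nginx workers in uninterruptible sleep (D state) together with
# the kernel function each one is blocked in (wchan).
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/ && $4 == "nginx"'

# Per-device await/%util, sampled every 5 seconds, 3 samples; %util near
# 100 with high await means the array is saturated by seeks, not bandwidth.
iostat -x 5 3
```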

Connections are not processed quickly; they sit in the connecting and
request-sent/waiting stages for a few seconds, and then the data transfer
starts at a very slow rate.

I've already tried both sendfile and AIO, and AIO seems to handle the
connections better.
I've also played with output_buffers, increasing both the number and the
size of the buffers, but with no noticeable effect; if anything, throughput
got lower.

The interesting thing is that at the same peak time, the read transfer rate
is more than 500 Mbit/s when copying from the RAID to /dev/null with
scp/rsync.
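That scp/rsync number measures a single sequential reader, though, while
thousands of clients generate concurrent random reads across the array. A
rough way to compare the two access patterns (the file path below is a
placeholder, and the second command assumes fio is installed):

```shell
# Single sequential reader: roughly what scp/rsync to /dev/null measures.
dd if=/path/to/large.file of=/dev/null bs=1M count=2048 iflag=direct

# Many concurrent random readers: closer to the peak-time access pattern.
fio --name=randread --filename=/path/to/large.file --rw=randread \
    --bs=1m --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=30 --time_based
```

On spinning SATA disks the random-read figure is usually a small fraction of
the sequential one, which would explain the gap between the copy test and
the peak-time throughput.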

Sorry for my long post, and for my English.
Any suggestion would be greatly appreciated.



On Sun, Jun 9, 2013 at 4:43 PM, Maxim Dounin <mdounin at> wrote:

> Hello!
> On Sun, Jun 09, 2013 at 06:17:10AM +0430, Host DL wrote:
> > I am facing the same exact issue as explained by Drew,
> >
> > is there any working solution to tune nginx for higher throughput?
> >
> > or how to deal with sleeping D state nginx processes ?
> See this reply for basic tuning suggestions:
> --
> Maxim Dounin
> _______________________________________________
> nginx mailing list
> nginx at
