Tuning workers and connections

Maxim Dounin mdounin at mdounin.ru
Thu Jul 2 13:56:00 MSD 2009


Hello!

On Thu, Jul 02, 2009 at 02:23:06AM +0100, Avleen Vig wrote:

> Hi folks, I have some questions about tuning nginx for best
> performance, on a site which handles ~1000 requests per second.
> 
> In our config, we have:
> 
>     worker_processes  16;
>     worker_rlimit_nofile 32768;
>     events {
>         worker_connections  8192;
>     }
> 
> This is on a server with 8 CPU cores and 8Gb RAM.
> My understanding is that this should allow nginx to establish up to
> 32k network connections, because that is the limit of
> worker_rlimit_nofile.

No.  With these settings you will be able to handle up to 8192 
connections in each worker, roughly 128k in total across 16 
workers, assuming your system can cope with it.

The worker_rlimit_nofile directive just raises the open files 
limit for nginx processes if the system allows it, nothing more.  
It's usually needed only if you started nginx with a low limit, 
then raised the system one and want nginx to raise its limit 
without restarting.

> We serve very few files off the disk, maybe a
> dozen CSS and JS files. Everything else is handed to backends using
> proxy_pass.
> So with one fd for the browser, and one fd to the upstream, we should
> be able to handle 16k concurrent connections.
> 
> However, what we're seeing is that around 5k connections we get a
> performance hit. Connections slow down, take longer to establish, etc.
> The load on the box is almost zero, nginx is clearly working very fast
> and efficiently, but I'm not sure why it slows down.
> Any thoughts?

There are several things to check:

1. Which event method is used?  Make sure you use kqueue on *BSD 
and epoll on Linux (see the sketch after this list).

2. What's in the error_log?  And in your system logs / memory 
stats / etc.?  You may be running out of some resource (file 
descriptors, network buffers, connection states in your 
firewall, ...).

3. Which states are the nginx workers in?  If they are disk 
bound, it's probably a good idea to check why and, e.g., try to 
tune proxy buffers / output buffers / sendfile_max_chunk (also 
shown in the sketch below).
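
If it helps, here is a rough sketch of the relevant directives 
for points 1 and 3 (the buffer sizes and addresses are 
placeholders, not tuned values; adjust them for your response 
sizes):

    events {
        # pick the right method for your OS: epoll on Linux, kqueue on *BSD
        use epoll;
        worker_connections  8192;
    }

    http {
        server {
            listen 80;

            location / {
                proxy_pass http://127.0.0.1:8080;  # your backend here

                # buffers for reading the upstream response; larger
                # buffers reduce disk buffering for big responses
                proxy_buffer_size  8k;
                proxy_buffers      32 8k;
            }
        }

        # limit how much one sendfile() call transfers so a single
        # fast client can't monopolize a worker
        sendfile_max_chunk  512k;
        output_buffers      2 64k;
    }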

> Also, I'm thinking about enabling the multi_accept option, but I
> couldn't find much documentation on how this works.
> It sounds like a high-performance tweak which we should be using if we
> get this many requests per second.
> Could it be that not using multi_accept is the problem?
> Nginx otherwise has to handle the incoming connection requests in
> serial and this is a bottleneck?

With multi_accept, nginx tries to accept all connections from the 
listen queue at once.  Without it, only one connection is 
accepted on each return from the event function.  The bad thing 
about multi_accept is that with a constant stream of incoming 
connections at a high rate, it may exhaust your 
worker_connections without leaving any chance to process the 
connections already accepted.

Note that this setting doesn't matter for kqueue (the kernel 
reports the number of unaccepted connections on return, and nginx 
accepts them all).  And it's always on for rtsig (not sure why, 
but likely due to the fact that rtsig is fragile).
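
If you want to experiment with multi_accept anyway, it's a 
one-line change in the events block (shown here only to 
illustrate the syntax, not as a recommendation):

    events {
        worker_connections  8192;
        # accept as many pending connections as possible per wakeup;
        # the default (off) accepts one connection per event loop return
        multi_accept  on;
    }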

Maxim Dounin
