Too many open files...

Dave Cheney dave at cheney.net
Fri Jan 16 00:40:58 MSK 2009


# raise the limit on open file descriptors for each worker process
worker_rlimit_nofile 8192;

events {
    worker_connections  2048;   # max simultaneous connections per worker
    use epoll;                  # efficient event method on Linux 2.6+
}

Each FastCGI connection uses two file descriptors: one for the client
connection, one for the upstream (FastCGI) connection.
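
As a back-of-the-envelope check (assuming every connection is a
proxied FastCGI request):

     2048 worker_connections x 2 fds each = 4096 fds per worker

so worker_rlimit_nofile 8192 leaves roughly 2x headroom for listen
sockets, log files, and so on.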

http://wiki.codemongers.com/NginxHttpEventsModule#worker_connections
http://wiki.codemongers.com/NginxHttpMainModule#worker_rlimit_nofile

As others have advised, make sure your ulimit settings at the OS level  
allow that many file descriptors.

man ulimit
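
For example, on Linux you can check and raise the limit for the shell
that starts nginx (exact details vary by distribution;
/etc/security/limits.conf is the usual place to make the change
permanent):

     ulimit -n          # show the current per-process fd limit
     ulimit -n 8192     # raise it for this shell and its children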

Cheers

Dave


On 14/01/2009, at 3:15 AM, Ilan Berkner wrote:

> This morning, our server (www.spellingcity.com) went down and I
> can't figure out why.  The nginx error log is filling up with this
> message:
>
>
> 2009/01/13 10:07:24 [alert] 26159#0: accept() failed (24: Too many open files) while accepting new connection on 74.200.197.210:80
>
> This is of course followed by an endless stream of:
>
> 2009/01/13 10:14:56 [error] 26159#0: *13007 upstream timed out (110: Connection timed out) while connecting to upstream, client: 131.109.51.3, server: www.spellingcity.com, request: "GET / HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.spellingcity.com"
>
>
> Help???
>
>
> Thanks
