Running out of file descriptors under load

Dave Cheney dave at
Tue Apr 1 01:41:19 MSD 2008

What has most likely happened is that your mongrels have jammed on  
something internal to your application, possibly database related.  
Once this happens, mongrel will continue to accept requests, but they  
will be queued behind the mutex that Rails places around itself. Each  
connection uses up two fds on nginx's side and one on the mongrel's  
side.
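If you want to watch this happen, a quick way to see per-process fd usage on Linux is to count entries under /proc (the process name "mongrel_rails" is the usual one for mongrel, but adjust it for your setup):

```shell
# Count open file descriptors for each mongrel process via /proc (Linux).
# "mongrel_rails" is the typical process name; substitute your own.
for pid in $(pgrep -f mongrel_rails); do
    echo "$pid: $(ls /proc/$pid/fd | wc -l) fds"
done
```

A count that climbs steadily while requests stop completing is a good sign the process is accepting connections it never services.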

The situation will correct itself once the blockage in your mongrels  
has cleared, but the real solution is to find the source of the  
blockage, not to increase the number of fds.
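One way to spot which mongrel is wedged is to look at the TCP connections piling up against each backend. This sketch assumes the mongrels listen on ports 8000-8002; substitute your own:

```shell
# Count TCP connections by state on the assumed mongrel ports 8000-8002.
# A pile-up of ESTABLISHED connections to one port suggests that mongrel
# is blocked while nginx keeps queueing requests at it.
ss -tn 2>/dev/null | awk '$4 ~ /:800[0-2]$/ {print $1}' | sort | uniq -c
```

Whichever port shows a disproportionate backlog points you at the mongrel to inspect (and, from there, at whatever it is stuck on, e.g. a slow database query).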



On 01/04/2008, at 8:25 AM, Dan Webb wrote:

> Hi All,
> I'm running a Rails site with nginx/mongrel on a fairly well specced  
> dedicated box w/ 4GB RAM and during a spike both the Mongrels and  
> nginx logs started to complain about running out of file  
> descriptors.  I've googled for this and not found anything  
> particularly useful so I wondered if anyone here has experienced  
> this, has any fixes or any tips on how to diagnose the problem.  The  
> few posts found via Google all recommend adjusting the global  
> descriptor limits (via ulimit) but it's already set high (about  
> 200000) and there are no per-user limits set.  I've not set  
> worker_rlimit_nofile in nginx but I'm not sure that's the problem, as  
> the mongrels were hitting the file descriptor limits as well.
> Any wisdom?
> Thanks,
> -- 
> Dan Webb
> aim: danwrong123
> skype: danwrong

