Memory Management ( > 25GB memory usage)

Jonathan Vanasco nginx at
Mon Jun 3 15:17:31 UTC 2013

On Jun 3, 2013, at 10:13 AM, Belly wrote:

>>> What is the best setting for my situation?
>> I would recommend using "fastcgi_max_temp_file_size 0;" if you 
>> want to disable disk buffering (see [1]), and configuring some 
>> reasonable number of reasonably sized fastcgi_buffers.  I would 
>> recommend starting tuning with something like 32 x 64k buffers.
>> [1]
> I read about fastcgi_max_temp_file_size, but I'm a bit wary of it.
> fastcgi_max_temp_file_size 0; states that data will be transferred
> synchronously. What does that mean exactly? Is it faster/better than disk
> buffering? Nginx is built in an asynchronous way. What happens if a worker
> does a synchronous job inside an asynchronous one? Will it block the
> event loop?

It's always been my understanding that in this context, "synchronously" means that nginx proxies the data from php/fcgi to the client in real time.

This sounds like a typical problem of application load balancing.  

The disk buffering / temp files allow nginx to immediately "slurp" the entire response from the backend process, and then serve it to the downstream client.  This has the advantage of letting you immediately re-use the fcgi process for dynamic content – slow or hung connections downstream won't tie up your pool of fcgi/apache processes.
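A sketch of that fully-buffered setup, using the 32 x 64k buffers suggested earlier (the socket path and temp path are placeholders, not from the original post; fastcgi_max_temp_file_size 1024m is nginx's default, shown explicitly here):

```nginx
location ~ \.php$ {
    include       fastcgi_params;
    fastcgi_pass  unix:/var/run/php-fpm.sock;   # hypothetical backend socket

    # in-memory buffers nginx fills from the backend first...
    fastcgi_buffer_size  64k;
    fastcgi_buffers      32 64k;

    # ...then spill-over to disk, so the whole response is slurped
    # and the fcgi process is freed immediately
    fastcgi_max_temp_file_size  1024m;
    fastcgi_temp_path           /var/cache/nginx/fastcgi_temp;
}
```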

Restated in terms of blocking: the temp files allow the blocking to happen within nginx instead of php (nginx can handle 10k connections; php is limited to the number of processes).  By removing the temp files, the blocking will happen within php instead.

My advice would be to use URL partitioning to segment this type of behavior.  I would only allow specific URLs to run with no temp files, and I would proxy them back to a different pool of fcgi (or apache) servers running with a tweaked config.  This would keep the blocking activity from the routes serving large files from affecting the "global" pool of php processes.
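A rough sketch of that partitioning (the upstream names, socket paths, and the /downloads/ route are made up for illustration):

```nginx
upstream php_general   { server unix:/var/run/php-general.sock; }
upstream php_streaming { server unix:/var/run/php-streaming.sock; }

server {
    # normal dynamic routes: fully buffered, fcgi workers freed quickly
    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_pass  php_general;
    }

    # large-response routes: no temp files, proxied in real time,
    # so any blocking stays inside this dedicated pool
    location /downloads/ {
        include                     fastcgi_params;
        fastcgi_pass                php_streaming;
        fastcgi_max_temp_file_size  0;
        fastcgi_buffers             32 64k;
    }
}
```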

I would also look into periodic reloads of nginx, to see if that frees things up.  If so, that might be a simpler/more elegant solution.

I encountered problems like this about 10 years ago with mod_perl under apache.  The aggressive code optimizations and memory/process management were tailored to making the application work very well – but did not play nice with the rest of the box.  The fix was to keep a low number of max_requests, and move to a "vanilla + mod_perl apache" system.  Years later, nginx became the vanilla apache.

Similar issues happen to people in the python and ruby communities as well – more expensive or intensive routes are often sectioned off and dispatched to a different pool of servers, so their workload doesn't affect the rest of the requests.

