Memory Management ( > 25GB memory usage)

Maxim Dounin mdounin at mdounin.ru
Mon Jun 3 15:13:20 UTC 2013


Hello!

On Mon, Jun 03, 2013 at 10:13:03AM -0400, Belly wrote:

> Thanks Maxim for your answer!
> 
> Maxim Dounin Wrote:
> -------------------------------------------------------
> > Hello!
> > 
> > On Mon, Jun 03, 2013 at 08:57:21AM -0400, Belly wrote:
> > 
> > > Hello nginx!
> > > 
> > > I have one worker process, which uses over 25GB of memory (and
> > > keeps using more).
> > > My configuration is... let's say special:
> > > 
> > > So there is nginx, which proxies all requests to the PHP backend,
> > > and the PHP backend sends a large response back to nginx. I set
> > > fastcgi_buffers enormously large to avoid nginx creating temporary
> > > files on my disk - which would result in high CPU load.
> > > 
> > > Here is my configuration: (reduced to the problem)
> > > 
> > > worker_processes 1;
> > > worker_rlimit_nofile 80000;
> > > worker_priority -20;
> > > 
> > > events {
> > >         worker_connections 10240;
> > >         multi_accept on;
> > > }
> > > # ... 
> > >         # fastcgi settings
> > >         fastcgi_buffers 20480 1k;
> > 
> > Just a side note: each buffer structure takes about 100 bytes of 
> > memory on 64-bit platforms, and using 1k buffers results in about 
> > 10% overhead just because of this.
> > 
> 
> Very interesting! - Didn't know that... Thanks!
> 
> > >         fastcgi_connect_timeout 30;
> > >         fastcgi_read_timeout 30;
> > >         fastcgi_send_timeout 30;
> > >         fastcgi_keep_conn on;
> > >         upstream php-backend {
> > >                 server 127.0.0.1:9000;
> > >                 keepalive 10000;
> > >         }
> > > 
> > > 
> > > As you can see, the buffers are extremely large, to avoid disk
> > > buffering. The problem is that nginx doesn't free the buffers. It
> > > just eats and eats. I know it's my fault and not nginx's fault.
> > > What am I doing wrong?
> > > 
> > > The response of my php backend could be from 1k to 300mb.
> > 
> > With your settings each connection can allocate up to 20M of 
> > buffers.  That is, about 1280 connections are enough to allocate 
> > 25G of memory.  So the basic question is - how many connections 
> > are open?
> > 
> 
> 1000 - 2000... got your point. 
> 
> > With the pessimistic assumption of 10k connections as per 
> > worker_connections, your configuration will result in more than 
> > 200G of memory used.
> > 
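Spelling out the arithmetic quoted above (a quick sketch in binary units; the ~100-bytes-per-buffer overhead figure is taken from the side note earlier in this thread and is approximate):

```python
# Worked version of the memory estimates quoted above.
buffers_per_conn = 20480                 # fastcgi_buffers count
buffer_size = 1024                       # 1k each
per_conn = buffers_per_conn * buffer_size        # data buffers per connection
overhead = buffers_per_conn * 100                # ~100 bytes bookkeeping each
conns_for_25g = 25 * 1024**3 // per_conn         # connections to reach 25G
worst_case_gib = 10240 * per_conn / 1024**3      # worker_connections maxed out

print(per_conn // 1024**2)            # 20    (MiB per connection)
print(round(overhead / per_conn, 2))  # 0.1   (~10% structure overhead)
print(conns_for_25g)                  # 1280
print(worst_case_gib)                 # 200.0 (GiB in the pessimistic case)
```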
> > > What is the best setting for my situation?
> > 
> > I would recommend using "fastcgi_max_temp_file_size 0;" if you 
> > want to disable disk buffering (see [1]), and configuring some 
> > reasonable number of reasonably sized fastcgi_buffers.  I would 
> > recommend starting tuning with something like 32 x 64k buffers.
> > 
> > [1] http://nginx.org/r/fastcgi_max_temp_file_size
> > 
> 
> I read about fastcgi_max_temp_file_size, but I'm a bit afraid of it.
> fastcgi_max_temp_file_size 0; states that data will be transferred
> synchronously. What does it mean exactly? Is it faster/better than disk
> buffering? Nginx is built in an asynchronous way. What happens if a
> worker does a synchronous job inside an asynchronous one? Will it block
> the event loop?

Docs at the linked document describe the effect as follows:

: Value of zero disables buffering of responses to temporary files.

This is what it actually does - it stops nginx from using disk 
buffering.  Instead, if the configured fastcgi_buffers aren't 
enough, nginx will wait for some buffers to be sent to the client 
before reading more data from the backend.  Note this means the 
backend will be busy sending the response for a longer time.
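Put together, the recommendation above could be sketched like this (the 32 x 64k figure is only a suggested starting point for tuning, not a measured optimum):

```nginx
# 32 x 64k = 2M of buffers per connection at most, so even the
# pessimistic 10240 connections top out around 20G instead of 200G.
fastcgi_buffers 32 64k;

# Never spill to temporary files on disk; once the in-memory buffers
# are full, nginx waits for the client to drain them, so the backend
# stays busy on that response for longer.
fastcgi_max_temp_file_size 0;
```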

-- 
Maxim Dounin
http://nginx.org/en/donation.html


