Memory consumption in case of huge number of configs
Maxim Dounin
mdounin at mdounin.ru
Tue Jun 19 18:22:13 UTC 2012
Hello!
On Tue, Jun 19, 2012 at 11:42:14AM -0400, dparshin wrote:
> Valentin V. Bartenev Wrote:
> -------------------------------------------------------
> > On Monday 18 June 2012 23:47:21 dparshin wrote:
> > > I noticed the following behavior of nginx recently; everything
> > > I describe here is related to the case when we have a huge
> > > number of configuration files (> 1000). Once nginx is started,
> > > it occupies more than 100MB of memory, and that memory is not
> > > freed on fork. That is expected behavior, but a weird thing
> > > happens on configuration reload: once a HUP signal is sent, the
> > > master process doubles its occupied memory, and if the reload
> > > is repeated, memory consumption stays the same. So it looks
> > > like memory is not reused during the reload; instead a new pool
> > > is created, which leads to wasted memory.
> > >
> >
> > Of course it creates a new pool. Nginx must continue to work and
> > handle requests, even if it fails to load the new configuration.
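For illustration, here is a minimal self-contained sketch of that
pattern. The toy pool_t, pool_create(), pool_destroy() and
parse_config() below are illustrative stand-ins for nginx's
ngx_pool_t, ngx_create_pool(), ngx_destroy_pool() and its
configuration parser; this is not the actual ngx_init_cycle() code:

    /* toy arena standing in for ngx_pool_t; not nginx code */
    #include <stdlib.h>

    typedef struct {
        char   *mem;
        size_t  size;
        size_t  used;
    } pool_t;

    static pool_t *pool_create(size_t size)
    {
        pool_t *p = malloc(sizeof(pool_t));

        if (p == NULL || (p->mem = malloc(size)) == NULL) {
            free(p);
            return NULL;
        }
        p->size = size;
        p->used = 0;
        return p;
    }

    static void pool_destroy(pool_t *p)
    {
        if (p == NULL) {
            return;
        }
        free(p->mem);
        free(p);
    }

    /* hypothetical placeholder: parse the config, allocating from p */
    static int parse_config(pool_t *p)
    {
        (void) p;
        return 0;                    /* 0 = success, -1 = broken config */
    }

    static pool_t *active_pool;      /* pool of the running config */

    /* the reload pattern: build the new config in its own pool first,
     * so a broken config never disturbs the one that is serving */
    static int reload(void)
    {
        pool_t *new_pool = pool_create(16 * 1024);

        if (new_pool == NULL) {
            return -1;               /* keep running on the old pool */
        }
        if (parse_config(new_pool) != 0) {
            pool_destroy(new_pool);  /* drop the failed attempt */
            return -1;               /* old config keeps serving */
        }
        pool_destroy(active_pool);   /* success: now free the old one */
        active_pool = new_pool;
        return 0;
    }

    int main(void)
    {
        reload();                    /* initial load */
        reload();                    /* SIGHUP: old pool freed only
                                        after the new one succeeds */
        pool_destroy(active_pool);
        return 0;
    }

The point is the ordering: the old pool is destroyed only after the
new configuration has parsed successfully, so for a moment both full
configurations are necessarily resident at once.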
>
> Sounds reasonable, but the unused pool (after the reload has
> successfully finished) is not destroyed,
It is destroyed, but you don't see that because your system
allocator doesn't return the freed memory to the system.
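This is easy to observe outside of nginx. Here is a Linux-specific
sketch (it assumes /proc/self/statm and glibc's malloc_trim() are
available): after the frees, RSS typically stays near its peak,
because glibc keeps the freed pages on its free lists instead of
returning them to the kernel.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <malloc.h>                 /* malloc_trim() is glibc-specific */

    /* current resident set size in KB, read from /proc/self/statm */
    static long rss_kb(void)
    {
        long  pages = 0;
        FILE *f = fopen("/proc/self/statm", "r");

        if (f != NULL) {
            /* second field of statm is resident pages */
            if (fscanf(f, "%*ld %ld", &pages) != 1) {
                pages = 0;
            }
            fclose(f);
        }
        return pages * (sysconf(_SC_PAGESIZE) / 1024);
    }

    int main(void)
    {
        enum { N = 100000 };
        static char *blocks[N];
        int i;

        for (i = 0; i < N; i++) {
            blocks[i] = malloc(128);
            if (blocks[i] == NULL) {
                return 1;
            }
            blocks[i][0] = 1;           /* touch, so the page counts */
        }
        printf("after malloc: %ld KB\n", rss_kb());

        /* free all but the last block: the memory goes back to the
         * allocator's free lists, but the heap top cannot shrink */
        for (i = 0; i < N - 1; i++) {
            free(blocks[i]);
        }
        printf("after free:   %ld KB\n", rss_kb());  /* typically still high */

        malloc_trim(0);     /* explicitly ask glibc to return what it can */
        printf("after trim:   %ld KB\n", rss_kb());
        free(blocks[N - 1]);
        return 0;
    }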
> and after fork() the amount of unused memory is multiplied by the
> number of workers.
And this isn't true even for genuinely leaked memory, as fork()
uses copy-on-write. See http://en.wikipedia.org/wiki/Copy_on_write.
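A minimal POSIX sketch of what that means in practice; the 64 MB
buffer here is just a stand-in for a large parsed configuration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        size_t  size = 64 * 1024 * 1024;       /* 64 MB "configuration" */
        char   *buf = malloc(size);
        pid_t   pid;

        if (buf == NULL) {
            return 1;
        }
        memset(buf, 'A', size);                /* touch every page */

        pid = fork();                          /* no 64 MB copy happens here */

        if (pid == 0) {
            buf[0] = 'B';                      /* only now is this one page
                                                  physically duplicated */
            printf("child:  buf[0] = %c\n", buf[0]);
            _exit(0);
        }

        waitpid(pid, NULL, 0);
        printf("parent: buf[0] = %c\n", buf[0]);   /* still 'A' */
        free(buf);
        return 0;
    }

The fork() itself is cheap: parent and child share the same physical
pages, and the kernel duplicates a page only when one side writes to
it, so memory that merely sits unused in the master process is not
duplicated per worker.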
> I mean that we have two pools in every worker and in the master
> process: the pool with the active configuration and an unused pool
> kept for reload purposes.
This is not true; see above.
Maxim Dounin