Possible memory leak?

Anoop Alias anoopalias01 at gmail.com
Wed Mar 13 05:53:01 UTC 2019


An nginx restart can take the web server offline for 30 seconds or more,
depending on the number of server{} blocks and the size of the
configuration. It may be fine for a setup with only a few vhosts, though.




On Wed, Mar 13, 2019 at 11:14 AM Peter Booth via nginx <nginx at nginx.org>
wrote:

> Perhaps I’m naive or just lucky, but I have used nginx on many contracts
> and permanent jobs for over ten years and have never attempted to reload
> configurations. I have always stopped and then restarted nginx instances one at
> a time. Am I not recognizing a constraint that affects other people?
>
> Curious,
>
> Peter
>
> Sent from my iPhone
>
> > On Mar 12, 2019, at 9:57 PM, Maxim Dounin <mdounin at mdounin.ru> wrote:
> >
> > Hello!
> >
> >> On Tue, Mar 12, 2019 at 02:09:06PM -0400, wkbrad wrote:
> >>
> >> First of all, thanks so much for your insights into this and being
> >> patient with me.  :)  I'm just trying to understand the issue and what
> >> can be done about it.
> >>
> >> Can you explain to me what you mean by this?
> >>> you can configure system allocator to use mmap()
> >>
> >> I'm not a C programmer, so correct me if I'm wrong, but doesn't the Nginx
> >> code determine which memory allocator it uses?
> >
> > Normally C programs use the malloc() / free() functions provided by
> > the system libc library to allocate memory.  While it is possible for
> > an application to provide its own implementation of these
> > functions, this is rarely done in practice.
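> >
> > For illustration only, a rough sketch (not nginx code, and not a
> > complete replacement): if the program itself defines malloc() and
> > free(), those definitions take precedence over the libc ones.  Here
> > they are backed directly by mmap()/munmap(); a real replacement
> > would also have to provide calloc(), realloc() and friends:
> >
> >     #include <stddef.h>
> >     #include <sys/mman.h>
> >
> >     #define HEADER 16   /* keeps the returned pointer 16-byte aligned */
> >
> >     void *malloc(size_t size)
> >     {
> >         size_t  total = size + HEADER;
> >         void   *p;
> >
> >         p = mmap(NULL, total, PROT_READ | PROT_WRITE,
> >                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> >         if (p == MAP_FAILED)
> >             return NULL;
> >
> >         *(size_t *) p = total;        /* remember the mapping size */
> >         return (char *) p + HEADER;
> >     }
> >
> >     void free(void *ptr)
> >     {
> >         char  *base;
> >
> >         if (ptr == NULL)
> >             return;
> >
> >         base = (char *) ptr - HEADER;
> >         munmap(base, *(size_t *) base);
> >     }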
> >
> >> If not, can you point me to an article that describes how to do that, as I
> >> would like to test it?
> >
> > For details on how to control the system allocator on Linux, please
> > refer to the mallopt(3) manpage, notably the
> > MALLOC_MMAP_THRESHOLD_ environment variable.  A web version is
> > available here:
> >
> > http://man7.org/linux/man-pages/man3/mallopt.3.html
> >
> > Please refer to the M_MMAP_THRESHOLD description in the same man
> > page for details on what it does and various implications.
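> >
> > For example, a quick sketch (not from the man page; the 8k/16k sizes
> > are arbitrary) of lowering the threshold from inside a program with
> > mallopt(), so that anything at or above it is served by mmap() and
> > handed back to the kernel on free():
> >
> >     #include <malloc.h>
> >     #include <stdlib.h>
> >
> >     int main(void)
> >     {
> >         void  *p;
> >
> >         /* allocations of 8k and larger now come from mmap() */
> >         mallopt(M_MMAP_THRESHOLD, 8 * 1024);
> >
> >         p = malloc(16 * 1024);    /* above the threshold: mmap()ed  */
> >         free(p);                  /* munmap()ed, returned to the OS */
> >
> >         return 0;
> >     }
> >
> > Setting MALLOC_MMAP_THRESHOLD_ in the environment has the same effect
> > without touching or recompiling the program.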
> >
> > Using a value less than NGX_CYCLE_POOL_SIZE (16k by default)
> > should help to move all configuration-related allocations into
> > mmap(), so these can be freed independently.  Alternatively,
> > recompiling nginx with NGX_CYCLE_POOL_SIZE set to a value larger
> > than 128k (the default mmap() threshold) should have a similar
> > effect.
> >
> > Note though that there may be other limiting factors,
> > such as MALLOC_MMAP_MAX_, which limits the maximum number of mmap()
> > allocations to 65536 by default.
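> >
> > One way to watch the two knobs interact (again just a sketch; the
> > block count is arbitrary): allocate a batch of 16k "cycle pool"-sized
> > blocks and compare the allocator's own statistics before and after
> > freeing them, once with the default threshold and once with
> > MALLOC_MMAP_THRESHOLD_ set below 16384 in the environment:
> >
> >     #include <malloc.h>
> >     #include <stdlib.h>
> >
> >     #define NBLOCKS  1000
> >     #define POOL     (16 * 1024)   /* NGX_CYCLE_POOL_SIZE default */
> >
> >     int main(void)
> >     {
> >         void  *blocks[NBLOCKS];
> >         int    i;
> >
> >         for (i = 0; i < NBLOCKS; i++)
> >             blocks[i] = malloc(POOL);
> >
> >         malloc_stats();        /* note the mmap()ed regions here ...  */
> >
> >         for (i = 0; i < NBLOCKS; i++)
> >             free(blocks[i]);
> >
> >         malloc_stats();        /* ... and what is left after free()   */
> >
> >         return 0;
> >     }
> >
> > With the lower threshold the blocks are mmap()ed and really disappear
> > on free(); with more than MALLOC_MMAP_MAX_ such blocks in flight the
> > allocator falls back to the heap again.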
> >
> > You can also play with different allocators by using the
> > LD_PRELOAD environment variable, see for example jemalloc's wiki
> > here:
> >
> > https://github.com/jemalloc/jemalloc/wiki/Getting-Started
> >
> >> Also, you seem to be saying that Nginx IS attempting to free the memory
> >> but is not able to due to the way the OS is allocating memory or refusing
> >> to release the memory.  I've tested this in several Linux distros,
> >> kernels, and Nginx versions and I see the same behavior in all of them.
> >> Do you know of an OS or specific distro where Nginx can release the old
> >> memory allocations correctly?  I would like to test that too.  :)
> >
> > Any Linux distro can be tuned so that freed memory is returned to
> > the system, see above.  And, for example, on FreeBSD, which uses
> > jemalloc as the system allocator, unused memory is properly returned
> > to the system out of the box (though it can still be seen in the
> > virtual address space occupied by the process, since the allocator
> > uses madvise() to mark the memory as unused instead of unmapping
> > the mapping).
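> >
> > Roughly that mechanism, sketched with plain madvise() (an
> > illustration, not jemalloc code; the 16M size is arbitrary): the
> > pages are handed back to the kernel, so RSS drops, but the mapping
> > itself stays in the process's virtual address space:
> >
> >     #include <string.h>
> >     #include <sys/mman.h>
> >
> >     int main(void)
> >     {
> >         size_t  len = 16 * 1024 * 1024;
> >         char   *p;
> >
> >         p = mmap(NULL, len, PROT_READ | PROT_WRITE,
> >                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> >         if (p == MAP_FAILED)
> >             return 1;
> >
> >         memset(p, 'x', len);             /* touch the pages: RSS grows */
> >         madvise(p, len, MADV_DONTNEED);  /* pages released: RSS drops, */
> >                                          /* virtual size stays at 16M  */
> >         return 0;
> >     }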
> >
> > --
> > Maxim Dounin
> > http://mdounin.ru/
>



-- 
Anoop P Alias

