Memory use flares up sharply, how to troubleshoot?

gthb nginx-forum at
Mon Jul 21 18:18:08 UTC 2014

> How do you track "nginx memory"?

What I was tracking was memory use per process name as reported by New Relic
nrsysmond, which I'm pretty sure is RSS from ps output, summed over all
nginx processes.
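For reference, a rough equivalent of that number from the command line (assuming nrsysmond is indeed summing per-process RSS from ps, as I believe):

```shell
# Sum resident set size (RSS) over all processes named "nginx", in kB.
# -o rss= prints the RSS column with no header line.
ps -C nginx -o rss= | awk '{sum += $1} END {print sum " kB"}'
```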

> From what you describe I suspect that disk buffering occurs (see ...),
> and the number you are looking at includes the size of files on disk.

I wish : ) because that's what I want to happen for these large responses.
But that's definitely not it, because we see a spike of swap when this
occurs, with most other processes on the machine being paged out ... and in
the worst spikes swap has filled up and an OOM kill has occurred, which
conveniently records in syslog the RSS for an nginx process being killed:

Jul 21 03:54:16 ren2 kernel: [3929562.712779] uwsgi invoked oom-killer:
gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Jul 21 03:54:16 ren2 kernel: [3929562.737340] Out of memory: Kill process
5248 (nginx) score 328 or sacrifice child
Jul 21 03:54:16 ren2 kernel: [3929562.737352] Killed process 5248 (nginx)
total-vm:3662860kB, anon-rss:3383776kB, file-rss:16kB

So that catches nginx holding a 3.2 GB resident set, which matches what New
Relic reported at about the same time.
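In case it's useful to anyone hitting the same thing, the anon-rss figure can be pulled straight out of the OOM-killer records (the log path below is the Debian/Ubuntu default; adjust for your distro):

```shell
# Find OOM kills of nginx in syslog and convert anon-rss from kB to GiB.
grep 'Killed process' /var/log/syslog \
  | grep nginx \
  | grep -o 'anon-rss:[0-9]*' \
  | awk -F: '{printf "%.2f GiB\n", $2 / 1048576}'
```

Run against the log line above, that prints 3.23 GiB, which is where the 3.2 GB figure comes from.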

> The debugging log includes full information about all memory
> allocations, see ...

Thank you. I haven't been able to reproduce this outside of production (or
even in production) so I might have to leave debug logging enabled in
production and hope to catch this next time it happens. Am I right to assume
that enabling debug is going to weigh quite heavily on production usage, and
eat up disk very fast? (Traffic peaks at around 100 req/sec and 2 MB/sec.)
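If the overhead of full debug logging turns out to be too much, one option I'm considering is limiting debug output to selected client addresses with the debug_connection directive, so the global log level stays normal (this requires an nginx built with --with-debug; check with `nginx -V 2>&1 | grep -o with-debug`). Paths and the address range below are just examples:

```nginx
# Keep the global log at a normal severity...
error_log /var/log/nginx/error.log notice;

events {
    # ...and emit debug-level output only for these clients,
    # which bounds both the CPU overhead and the log growth.
    debug_connection 192.168.1.0/24;
}
```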



