Memory use flares up sharply, how to troubleshoot?
mdounin at mdounin.ru
Mon Jul 21 16:38:25 UTC 2014
On Mon, Jul 21, 2014 at 11:15:00AM -0400, gthb wrote:
> Several times recently, we have seen our production nginx memory usage flare
> up a hundred-fold, from its normal ~42 MB to 3-4 GB, for 20 minutes to an
> hour or so, and then recover. There is not a spike in number of connections,
> just memory use, so whatever causes this, it does not seem to be an increase
> in concurrency.
> The obvious thing to suspect for this is our app's newest change, which
> involves streaming responses proxied from an upstream (via uwsgi_pass);
> these responses can get very large and run for many minutes, pumping
> hundreds of megabytes each. But I expect nginx memory use for these requests
> to be bounded by uwsgi_buffers (shouldn't it be?) -- and indeed I cannot
> reproduce the problem by making such requests, even copying the exact ones
> that are being made when the memory spike occurs. In my tests, the responses
> get buffered as they should be, and delivered normally, without memory
> growth.
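For reference, the per-request in-memory footprint of a uwsgi-proxied response is set by the buffer directives; a sketch of the relevant settings (socket path and sizes are illustrative, not from the original post):

```nginx
location /stream {
    uwsgi_pass unix:/run/app.sock;   # hypothetical upstream socket
    uwsgi_buffer_size 8k;            # buffer for the first part of the response (headers)
    uwsgi_buffers 8 8k;              # per-request in-memory buffers: 8 buffers of 8k each
    uwsgi_busy_buffers_size 16k;     # buffers that may be busy sending to the client
}
```

With these values, each request holds at most 8 * 8k in nginx memory; anything beyond that spills to temp files on disk unless disk buffering is limited.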
How do you track "nginx memory"?
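One way to see actual resident memory of the nginx processes, as opposed to a number that may include disk-buffered data (a sketch; assumes a Linux `ps` and that the processes are named `nginx`):

```shell
# Sum the resident set size (RSS, in KB) of all nginx processes.
# This excludes temp files on disk, which live under the configured
# uwsgi_temp_path, not in process memory.
ps -o rss= -C nginx | awk '{ total += $1 } END { print total+0 " KB" }'
```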
From what you describe I suspect that disk buffering occurs (see
http://nginx.org/r/uwsgi_max_temp_file_size), and the number you
are looking at includes the size of files on disk.
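Disk buffering of upstream responses can be capped or disabled per location; a hedged sketch (values and paths are illustrative):

```nginx
# Inside the location that does uwsgi_pass:
uwsgi_max_temp_file_size 0;        # 0 disables buffering to temp files entirely
# uwsgi_max_temp_file_size 1024m;  # the default per-request cap, if buffering is kept
uwsgi_temp_path /var/cache/nginx/uwsgi_temp;  # where temp files land (path illustrative)
```

Disabling temp files means that once the in-memory buffers fill, nginx stops reading from the upstream until the client drains them, so a slow client slows the upstream instead of consuming disk.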
> So, what is a good way to investigate what causes all this memory to be
> suddenly allocated? Is there a way of introspecting/logging nginx memory
> allocation edge cases like this? (Is there documentation on this which I
> didn't find?)
The debugging log includes full information about all memory
allocations, see http://nginx.org/en/docs/debugging_log.html.
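Enabling the debug log requires an nginx binary built with debugging support; a minimal sketch (log path and client address are illustrative):

```nginx
# nginx must be compiled with: ./configure --with-debug
error_log /var/log/nginx/debug.log debug;  # logs memory allocations, among much else

events {
    # Optionally restrict debug logging to selected clients to keep the log small:
    debug_connection 192.0.2.10;
}
```

Since the debug log is extremely verbose, limiting it with debug_connection is usually preferable on a production box.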