nginx core dump explodes
Maxim Dounin
mdounin at mdounin.ru
Tue Jun 21 13:24:59 UTC 2016
Hello!
On Tue, Jun 21, 2016 at 02:53:48AM -0400, martinproinity wrote:
> Thanks, setting the value to 600G made it possible to get a dump. But it
> took ages and the system became quite unstable.
>
> What can cause the dump to become that large? There is almost no traffic
> (<10Mbps) on this server with 32G memory.
You haven't said how large the resulting dump is, but in general
a dump reflects the memory used by the process. Something like
"top" or "ps" should give you a good idea of how large a dump to
expect.
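For example, on Linux something like the following (a sketch;
adjust the process selector for your setup) shows the resident
and virtual sizes of the nginx processes:

    # RSS/VSZ are reported in KiB; VSZ gives a rough upper
    # bound on the size of a core dump from that process
    ps -C nginx -o pid,rss,vsz,comm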
The most obvious reason for a process to use lots of memory is
very big shared memory zones, e.g., the keys_zone in
proxy_cache_path.
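For illustration, a configuration like this (hypothetical path
and sizes, not taken from your config) maps a large shared zone
into every worker process, and that zone ends up in any core
dump those workers produce:

    # the 10g keys_zone is shared memory mapped into all
    # workers and is written out as part of each core dump
    proxy_cache_path /var/cache/nginx levels=1:2
                     keys_zone=big_cache:10g
                     max_size=500g inactive=7d;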
Also, given the fact that you are debugging a socket leak (in a
custom module, I guess?), processes can be large due to the
leaks accumulated.
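If you are relying on nginx itself to abort when the leak is
detected, the usual combination is something like the following
(the directory and the limit here are examples only; the 600g
limit matches the value you mentioned):

    # dump cores into a dedicated directory, allow them to be
    # large, and abort at debug points such as the "open socket
    # ... left in connection" check on worker exit
    working_directory /var/tmp/cores;
    worker_rlimit_core 600g;
    debug_points abort;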
--
Maxim Dounin
http://nginx.org/