Large CRL file crashing nginx on reload
Maxim Dounin
mdounin at mdounin.ru
Fri Jul 27 00:13:00 UTC 2018
Hello!
On Thu, Jul 26, 2018 at 04:16:11PM -0400, Shaun Tarves wrote:
> We are trying to use nginx to support the DoD PKI infrastructure, which
> includes many DoD and contractor CRLs. The combined CRL file is over 350MB
> in size, which seems to crash nginx during a reload (at least on Red Hat
> 6). Our cert/key/crl set up is valid and working, and when only including a
> subset of the CRL files we have, reloads work fine.
>
> When we concatenate all the CRLs we need to support, the config reload
> request causes worker threads to become defunct and messages in the error
> log indicate the following:
>
> 2018/07/26 16:05:25 [alert] 30624#30624: fork() failed while spawning
> "worker process" (12: Cannot allocate memory)
This error suggests you've run out of memory.
> 2018/07/26 16:05:25 [alert] 30624#30624: sendmsg() failed (9: Bad file
> descriptor)
>
> 2018/07/26 16:08:42 [alert] 30624#30624: worker process 1611 exited on
> signal 9
And this one suggests the nginx worker was killed with signal 9,
likely by the OOM killer. That is, again, you've run out of
memory.
> Is there any way we can get nginx to support such a large volume
> of CRLs?
It looks like your problem is that you don't have enough memory for
your configuration. The most trivial solution would be to add more
memory. Another possible solution would be to carefully inspect
the configuration and, if possible, reduce the amount of memory
required. In particular, when using such big CRLs it is
important to specify them only in the configuration contexts where
they are needed, as each SSL context with a CRL configured will
load its own copy of the CRL.
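
As a rough sketch (server names and paths here are made up for
illustration), that means putting ssl_crl only in the one server
block that actually verifies client certificates, rather than at
the http{} level where every virtual server would inherit it and
load its own ~350MB copy:

    server {
        listen 443 ssl;
        server_name secure.example.com;              # hypothetical
        ssl_certificate        /etc/nginx/ssl/server.crt;
        ssl_certificate_key    /etc/nginx/ssl/server.key;
        ssl_client_certificate /etc/nginx/ssl/dod-ca-bundle.pem;
        ssl_verify_client      on;
        ssl_crl /etc/nginx/ssl/combined.crl;   # only in this context
    }

    server {
        listen 443 ssl;
        server_name public.example.com;   # no client verification,
        ssl_certificate     /etc/nginx/ssl/public.crt;   # so no ssl_crl
        ssl_certificate_key /etc/nginx/ssl/public.key;
    }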
--
Maxim Dounin
http://mdounin.ru/