Use primes for hashtable size

Mathew Heard mat999 at
Fri Jun 2 11:50:18 UTC 2017

We are loading around 10,000 - 15,000 server_names per server. We also have
a fair number of SSL certificates and at least one big geo map, which
probably contribute as well.

At around 2,000 - 3,000 we hit our first issues with server_name and had to
raise the hash table max, which brought the loading speed back up (though it
has slowly regressed as we have grown).

Honestly I don't consider it unacceptable, but if this patch comes with no
runtime performance penalty, why wouldn't we want it? There are advantages
to faster startup (incident recovery, quicker reconfiguration, etc.)

On Fri, Jun 2, 2017 at 9:46 PM, Maxim Dounin <mdounin at> wrote:

> Hello!
> On Fri, Jun 02, 2017 at 10:56:31AM +1000, Mathew Heard wrote:
> > If this actually yields a decrease in start time while not introducing
> > other effects we would use it. Our start time of a couple minutes is
> > annoying at times.
> Do you have any details of what contributes to the start time in
> your case?
> In general, nginx trades start time for faster operation once
> started, and trying to build a minimal hash is an example of this
> practice.  If it results in unacceptable start times we certainly
> should optimize it, though I don't think I've seen such cases in
> my practice.
> Most time-consuming things during start I've seen so far are:
> - multiple DNS names in the configuration and slow system
>   resolver;
> - multiple SSL certificates;
> - very large geo{} maps.
> --
> Maxim Dounin
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at

More information about the nginx-devel mailing list