Use primes for hashtable size

Maxim Dounin mdounin at
Thu Jun 1 17:39:51 UTC 2017


On Thu, Jun 01, 2017 at 04:54:50PM +0500, Andrew Borodin wrote:

> Hi, Maxim!
> 2017-05-30 18:01 GMT+05:00 Maxim Dounin <mdounin at>:
> >
> > The maximum size of hash table as specified by the hinit->max_size
> > field is indeed maximum size, and not the size of the hash table.
> > The following code in ngx_hash_init() will try hard to find
> > an optimal hash size for a given set of values within the
> > maximum size specified, and will test all the prime numbers as
> > well.
> >
> > I see no reasons to additionally limit the maximum size to a prime
> > number.  If you think there are some, please be more specific.
> >
> You are right. I've modified the patch to check primes first, then proceed
> to the "hard work". Also I've hacked together some perf proof of the improvement.
> This test creates a hash table of 5000 semi-random strings (not very random,
> just bytes permuted).
> On my Ubuntu VM hash creation takes 92-96ms without the patch; with the patch it's
> strictly 0. The "hard work" search tries about 2k sizes before success; the primes
> search hits on the second try.
> The docs say a few words about startup speed and I wanted to apply primes
> somewhere, so here we go.

Thanks, though the suggested change will certainly modify nginx's 
current (documented) approach of searching for the minimum possible 
hash size.

It might be a better solution for large hashes though, as that 
case is currently optimized by using a larger start size:

    if (hinit->max_size > 10000 && nelts && hinit->max_size / nelts < 100) {
        start = hinit->max_size - 1000;

Not sure it is needed at all though, as I don't remember any 
"words about startup speed" in the documentation, and startup 
speed of hashes wasn't a practical problem as far as I remember.

Maxim Dounin
