consistent hashing for upstreams

Alexandr Gomoliako zzz at zzz.org.ua
Mon Mar 19 22:08:33 UTC 2012


> The only potential issues I foresee are:

>    1) performance, as this perl will be called for 1000+ requests per
> second, and there are going to be potentially many upstream blocks.
> Maybe Digest::MurmurHash would help with performance instead of MD5
> (it's supposedly 3x faster in Perl than Digest::MD5 while using far less
> state). A native hash ring implementation in C would obviously be far
> more performant.

A couple of microseconds per request isn't something to worry about here.
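
For reference, here's a minimal ring sketch along those lines (not
anyone's production code: the server names are made up, and it assumes
Digest::MurmurHash's murmur_hash(), which returns a 32-bit integer;
Digest::MD5 would work the same way, just slower):

    use strict;
    use warnings;
    use Digest::MurmurHash qw(murmur_hash);

    my $POINTS = 160;   # virtual points per server; more points, smoother spread

    # Hash each server name $POINTS times onto the ring keyspace.
    sub build_ring {
        my (@servers) = @_;
        my %ring;
        for my $server (@servers) {
            for my $i (0 .. $POINTS - 1) {
                $ring{ murmur_hash("$server-$i") } = $server;
            }
        }
        return (\%ring, [ sort { $a <=> $b } keys %ring ]);
    }

    # Walk clockwise to the first point at or after the key's hash.
    sub pick_server {
        my ($ring, $sorted, $key) = @_;
        my $h = murmur_hash($key);
        for my $point (@$sorted) {
            return $ring->{$point} if $point >= $h;
        }
        return $ring->{ $sorted->[0] };   # wrap around past the last point
    }

    my ($ring, $sorted) = build_ring(qw(backend1 backend2 backend3));
    print pick_server($ring, $sorted, "/some/uri"), "\n";

A binary search over the sorted points would be the obvious next step
once the point count grows, but even the linear scan is cheap next to
an actual proxied request.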

>    2) a single backup server is problematic, but that can be fixed by
> adding more backups to the upstream blocks I think, or using an error
> location that hashes again to find a new upstream. Not sure if a server
> being down would cause it to fail inside all upstream blocks it appears
> in, though, which might mean some very slow responses when a server goes
> offline.

But at least the backup approach is simple.
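
Something like this per hash bucket (hostnames are placeholders):

    upstream bucket_0 {
        server backend1.example.com;
        server backend2.example.com backup;   # used only once backend1 is marked failed
        server backend3.example.com backup;
    }

With proxy_next_upstream at its defaults nginx retries the backups
within the same request, so a dead primary shows up as added
connect-timeout latency rather than an outright error, which is
exactly the slow-response case you describe.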

>   3) the Perl module is still marked as experimental, which scares me

Don't be scared, it's not really experimental. Build it against a
relatively modern perl and you'll be fine.
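
For what it's worth, the usual way to wire it up is a variable set
from perl plus a proxy_pass on that variable (Ring.pm and Ring::pick
below are placeholder names):

    # http{} level
    perl_modules  perl/lib;
    perl_require  Ring.pm;

    # Ring::pick gets the request object and returns an upstream name,
    # e.g. by calling something like pick_server() from the sketch above.
    perl_set  $backend  Ring::pick;

    server {
        location / {
            proxy_pass http://$backend;
        }
    }

Since proxy_pass gets a variable, nginx resolves the value against the
defined upstream{} blocks at request time, so the perl handler only has
to return the right block name.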


