What are the limitations of ip_hash?
Mark Swanson
mark at ScheduleWorld.com
Wed Dec 31 10:58:30 MSK 2008
Dave Cheney wrote:
> Hi Mark,
>
> Can you explain 'not working'. From my experience with ip_hash given a
> large enough set of incoming IP addresses, requests would be distributed
> equally across both machines. It's not so much a hash as a modulo of the
> incoming IP address.
1. IP 1.1.1.1 -> ip_hash -> server A
2. IP 2.2.2.2 -> ip_hash -> server B
3. IP 1.1.1.1 -> ip_hash -> server A
So far, so good: 1.1.1.1 always goes to server A, just as the docs
state. Until ...
4. IP 1.1.1.1 -> ip_hash -> server B
This should never happen if I understand ip_hash correctly, yet it
seems to happen about 10% of the time (not useful to anyone needing
sticky sessions).
Btw, modulo? If you have the source handy, would you mind posting the
code for this? I was really hoping the class C address would be used as
a key into a hash table, with the previously used server ID as the
value. I was also hoping that hash table would be preserved after
sending a HUP signal to nginx.
Cheers.
> Cheers
>
> Dave
>
> On Tue, 30 Dec 2008 22:15:21 -0500, Mark Swanson <mark at ScheduleWorld.com>
> wrote:
>> upstream tomcatcluster {
>> ip_hash;
>> server test1:8080 max_fails=0;
>> server test2:8080 max_fails=0;
>> }
>>
>> It seems that the size of ip_hash is too small and it can't hold enough
>> keys - or perhaps nginx clears out keys after a period of time?
>>
>> I'm curious what the limitations of ip_hash are because it's not working
>> 100% of the time.
>>
>> Thank you.
>
>