Fair Proxy Balancer
Rob Mitzel
rob at rascal.ca
Wed Feb 6 03:50:45 MSK 2008
Hey, sorry for the late response here, but I thought I should mention that we're
using the fair proxy balancer on a Rails site that averages over 100 million
hits/month. We've been using it for about a month now, and we love it.
Also, to Alexander: our programmer especially wanted me to thank you for
coming up with that mongrel process title patch; that thing is awesome! :)
-Rob.
-----Original Message-----
From: owner-nginx at sysoev.ru [mailto:owner-nginx at sysoev.ru] On Behalf Of
David Pratt
Sent: Wednesday, January 30, 2008 7:27 PM
To: nginx at sysoev.ru
Subject: Re: Fair Proxy Balancer
Hi. It has been a while since the introduction of the fair proxy balancer.
How stable is it for production use? I was looking at potentially using
haproxy or LVS, but I'm hoping the new balancer is stable enough, since I
don't want to unnecessarily complicate things with even more layers of
software in the stack. Can anyone using it in production comment?
Regards,
David
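
[For reference, switching an existing upstream block over to the module being
discussed is a one-line change. This is a sketch only; the backend addresses
and pool name are made up, and it assumes nginx was built with the
nginx-upstream-fair module:]

```nginx
# Illustrative upstream pool of Mongrel backends (addresses are made up).
upstream mongrels {
    fair;                            # use least-busy selection instead of plain round-robin
    server 127.0.0.1:8000 weight=2;  # optional per-backend weight
    server 127.0.0.1:8001;
}

server {
    listen 80;
    location / {
        proxy_pass http://mongrels;
    }
}
```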
Grzegorz Nosek wrote:
> Hi,
>
> 2007/11/23, Alexander Staubo <alex at purefiction.net>:
>>> One question, how is the busy state determined? In case
>>> of zeo each backend client can take some defined number of requests
>>> in parallel, how is such a case handled?
>
> Should work out of the box, distributing the load equally. You may
> wish to specify weight for each backend but if all are equal, this
> should have no effect.
>
>> I have not studied the sources, but I expect it will pick the upstream
>> with the fewest number of current pending requests; among upstreams
>> with the same number of concurrent requests, the one picked is
>> probably arbitrary.
>
> The scheduling logic looks like this:
> - The backends are selected _mostly_ round-robin (i.e. if you get 1
> req/hour, they'll be serviced by successive backends)
> - Idle (no requests currently serviced) backends have absolute
> priority (an idle backend will always be chosen if available)
> - Otherwise, the scheduler walks around the list of backends
> (remembering where it finished last time) until the scheduler score
> stops increasing. The highest scored backend is chosen (note: not all
> backends are probed, or at least not always).
> - The scheduler score is calculated roughly as follows (yes, it could
> be cleaned up a little bit):
>
> score = (1 - nreq) * 1000 + last_active_delta;
> if (score < 0) {
>     score /= current_weight;
> } else {
>     score *= current_weight;
> }
>
> nreq is the number of currently processed requests
> last_active_delta is time since last request start _or_ stop (serviced
> by this backend), in milliseconds
> current_weight is a counter decreasing from the backend's weight to 1
> with every serviced request
>
> It has a few properties which (I hope) make it good:
> - penalizing busy backends, with something like a pessimistic
> estimate of request time
> - rewarding backends which have been servicing a request for a long
> time (statistically they should finish earlier)
> - rewarding backends with higher weight more or less proportionally.
>
> Please give the module a try and report any issues you might find.
>
> Best regards,
> Grzegorz Nosek
>