Fair Proxy Balancer

David Pratt fairwinds at eastlink.ca
Thu Jan 31 06:27:04 MSK 2008


Hi. It has been a while since the introduction of the fair proxy balancer.
How stable is it for production use? I was looking at potentially using
haproxy or LVS, but I am hoping the new balancer is stable enough, since I
don't want to unnecessarily complicate things with even more layers of
software in the stack. Is anyone using it in production who can comment?

Regards,
David


Grzegorz Nosek wrote:
> Hi,
> 
> 2007/11/23, Alexander Staubo <alex at purefiction.net>:
>>> One question: how is the busy state determined? In the case
>>> of ZEO, each backend client can handle a defined number of requests
>>> in parallel; how is such a case handled?
> 
> It should work out of the box, distributing the load equally. You may
> wish to specify a weight for each backend, but if all weights are equal,
> this has no effect.
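> 
> For illustration, a minimal configuration sketch (the upstream name,
> addresses and weights below are placeholders; the balancer is enabled
> with the module's fair directive inside the upstream block):
> 
> upstream backends {
>     fair;                            # use the fair balancer for this group
> 
>     # optional per-backend weights; leaving them all equal changes nothing
>     server 10.0.0.1:8080 weight=2;
>     server 10.0.0.2:8080;
> }
> 
> server {
>     listen 80;
> 
>     location / {
>         proxy_pass http://backends;
>     }
> }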
> 
>> I have not studied the sources, but I expect it will pick the upstream
>> with the fewest pending requests; among upstreams with the same number
>> of concurrent requests, the one picked is probably arbitrary.
> 
> The scheduling logic looks like this:
>  - The backends are selected _mostly_ round-robin (i.e. if you get 1
> req/hour, successive requests will be serviced by successive backends).
>  - Idle backends (those currently servicing no requests) have absolute
> priority: an idle backend will always be chosen if one is available.
>  - Otherwise, the scheduler walks around the list of backends
> (remembering where it finished last time) until the scheduler score
> stops increasing. The highest-scored backend is chosen (note: not all
> backends are probed, or at least not always).
>  - The scheduler score is calculated roughly as follows (yes, it could
> be cleaned up a little bit):
> 
> score = (1 - nreq) * 1000 + last_active_delta;
> if (score < 0) {
>   score /= current_weight;
> } else {
>   score *= current_weight;
> }
> 
> nreq is the number of requests the backend is currently processing.
> last_active_delta is the time, in milliseconds, since the last request
> serviced by this backend started _or_ finished.
> current_weight is a counter that decreases from the backend's configured
> weight down to 1 with every serviced request.
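> 
> To make that concrete, here is a small standalone C sketch of the
> selection logic described above. The names (backend_t, score_backend,
> pick_backend, now_ms) and the flat array are illustrative only; the real
> module works on nginx's upstream peer structures, so treat this as a
> sketch of the algorithm rather than the module's actual code.
> 
> #include <stddef.h>
> #include <stdint.h>
> 
> typedef struct {
>     int     nreq;            /* requests currently being processed          */
>     int     current_weight;  /* decays from the configured weight down to 1 */
>     int64_t last_active;     /* ms timestamp of the last request start/stop */
> } backend_t;
> 
> static int64_t score_backend(const backend_t *b, int64_t now_ms)
> {
>     /* time since this backend last started or finished a request */
>     int64_t last_active_delta = now_ms - b->last_active;
> 
>     /* base drops by 1000 per in-flight request; time since last activity adds back */
>     int64_t score = (1 - b->nreq) * 1000 + last_active_delta;
> 
>     /* weight softens negative scores and boosts positive ones */
>     if (score < 0)
>         score /= b->current_weight;
>     else
>         score *= b->current_weight;
> 
>     return score;
> }
> 
> /* Walk the ring of backends starting after the previously chosen one
>  * (prev), stopping as soon as the score stops increasing; an idle
>  * backend (nreq == 0) is taken immediately.  Assumes n >= 1. */
> static size_t pick_backend(const backend_t *b, size_t n, size_t prev,
>                            int64_t now_ms)
> {
>     size_t  best = (prev + 1) % n;
>     int64_t best_score = score_backend(&b[best], now_ms);
> 
>     if (b[best].nreq == 0)
>         return best;
> 
>     for (size_t i = 2; i <= n; i++) {
>         size_t  idx = (prev + i) % n;
>         int64_t s;
> 
>         if (b[idx].nreq == 0)
>             return idx;              /* idle backends win outright */
> 
>         s = score_backend(&b[idx], now_ms);
>         if (s <= best_score)
>             break;                   /* the score stopped increasing */
> 
>         best = idx;
>         best_score = s;
>     }
> 
>     return best;
> }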
> 
> It has a few properties which (I hope) make it good:
>  - penalizing busy backends, with something like a pessimistic
> estimate of request time
>  - rewarding backends which have been servicing a request for a long
> time (statistically they should finish earlier)
>  - rewarding backends with higher weight more or less proportionally.
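> 
> A quick worked example with illustrative numbers: a backend servicing one
> request that started 5000 ms ago scores (1 - 1) * 1000 + 5000 = 5000,
> while an equally weighted backend that just started its only request
> scores roughly 0, so the longer-running one is preferred. A backend with
> two fresh in-flight requests scores (1 - 2) * 1000 + 0 = -1000; with a
> weight of 3 that is softened to about -333, so higher-weight backends are
> penalized less when busy.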
> 
> Please give the module a try and report any issues you might find.
> 
> Best regards,
>  Grzegorz Nosek
> 




