Fair Proxy Balancer

David Pratt fairwinds at eastlink.ca
Wed Feb 6 21:44:31 MSK 2008


Hi. Both haproxy and LVS certainly have more involved setups. haproxy 
1.3 has more balancing algorithms than 1.2, and I have seen patches 
that add least-connection balancing to 1.2 as well. LVS is what I 
believe to be 'the' mainstream balancer, but it needs to be compiled 
into the Linux kernel, so it is not as portable or simple as 
incorporating the fair proxy balancer. I'm interested in Rob's 
experience in determining the number of servers. Many thanks, Grzegorz.
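
For anyone following along, incorporating the balancer really is just a 
couple of lines of nginx config. A minimal sketch (the ports are 
placeholders, and the module has to be compiled in with --add-module):

    upstream backends {
        # upstream_fair's directive
        fair;
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backends;
        }
    }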

Regards,
David

Grzegorz Nosek wrote:
> Hi,
> 
> On Wed, Feb 06, 2008 at 10:07:08AM -0400, David Pratt wrote:
>> Hi Rob. This is encouraging news, and I am working on a setup to 
>> incorporate this into my process. I would really like to hear whether 
>> there has been any attempt to evaluate the fair proxy balancer against 
>> other balancing schemes. From the standpoint of server resources it is 
>> attractive, and much simpler to set up than haproxy or LVS. I realize 
>> speed is subject to all sorts of additional parameters, but a 
>> comparison of the balancer with others would be quite interesting.
> 
> (disclaimer: I wrote upstream_fair, I'm biased).
> 
> No, I haven't compared it against haproxy or LVS. However, haproxy is
> a TCP forwarder, which makes it uncomfortable
> at times. For example, even if your backends are down, connections to
> haproxy will succeed and the only thing haproxy can do is to reset your
> new connection (even though nginx has already happily sent the request).
> This is a bit different than a failed backend, which returns a system
> error (connection refused) or times out. Besides, AFAIK haproxy does not
> offer least-connection balancing.
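> 
> To make that concrete, here is a rough sketch of the nginx side (the
> directives are stock nginx, the addresses are placeholders). When
> nginx talks to the backends directly, a dead backend fails fast and
> nginx can move on to the next peer; behind haproxy the connect always
> succeeds, so this never triggers at connect time:
> 
>     upstream backends {
>         fair;
>         server 127.0.0.1:8000;
>         server 127.0.0.1:8001;
>     }
> 
>     location / {
>         proxy_pass http://backends;
>         # retry the next peer on connection errors and timeouts
>         proxy_next_upstream error timeout;
>     }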
> 
> On LVS I cannot comment (I haven't used it), but it has a wider choice
> of balancing algorithms, including weighted least-connection. If you
> have the resources to set it up (it looks a bit hairy to me), it should
> perform very well.
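> 
> Going by the ipvsadm man page, a weighted least-connection setup is a
> handful of commands. A sketch with placeholder addresses, assuming
> IPVS support is already in the kernel:
> 
>     # virtual service on the VIP, weighted least-connection scheduler
>     ipvsadm -A -t 192.0.2.10:80 -s wlc
>     # two real servers behind it, NAT forwarding, different weights
>     ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.1:80 -m -w 2
>     ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.2:80 -m -w 1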
> 
>> Rob, can you elaborate a bit more on your mongrels situation? I do 
>> not use Ruby but have a similar situation with other types of backend 
>> servers. In the current scenario, the last server will always get 
>> fewer hits. Are you setting some sort of threshold to determine how 
>> many mongrels to run (or just starting up mongrels until the last is 
>> getting no hits)? Many thanks.
>>
> 
> Hmm, let me use your message to reply to Rob too :)
> 
>>> First, actually thank YOU for coming up with the balancer.  It's made my
>>> life much easier.  And please, keep the round-robin behaviour as-is!  I
>>> mean, it's a great way to tell if you're running too many mongrels and/or
>>> too many nginx connections.
> 
> Unfortunately, pure WLC (weighted least-connection) behaviour causes
> problems for mongrel, as it apparently doesn't like to be slammed too
> hard (it looks like it leaks memory, but that's just a guess).
> 
> In the newest snapshot I added (or rather fixed) the round-robin part.
> I'll make it configurable, but the default will probably be round-robin
> from now on. But yes, it is handy.
> 
> Best regards,
>  Grzegorz Nosek