Fair Proxy Balancer
grzegorz.nosek at gmail.com
Wed Feb 6 18:41:19 MSK 2008
On Wed, Feb 06, 2008 at 10:07:08AM -0400, David Pratt wrote:
> Hi Rob. This is encouraging news and I am working on a setup to
> incorporate this into my process. I would really like to hear if there
> has been any attempt to evaluate the fair proxy balancer in relation to
> other balancing schemes. From the standpoint of server resources, it is
> attractive and much simpler than the haproxy or lvm for setup. I realize
> speed is subject to all sorts of additional parameters but a comparison
> of the balancer with others would be quite interesting.
(disclaimer: I wrote upstream_fair, I'm biased).
No, I haven't compared it with haproxy or LVS (I assume that's what you
meant). However, haproxy is a TCP forwarder, which makes it awkward at
times. For example, even if all your backends are down, connections to
haproxy will still succeed, and the only thing haproxy can do then is
reset your new connection (even though nginx has already happily sent
the request). This is quite different from a failed backend, which
returns a system error (connection refused) or times out. Besides,
AFAIK haproxy does not offer least-connection balancing.
As for LVS, I cannot comment (I haven't used it), but it has a wider
choice of balancing algorithms, including weighted least-connection. If
you have the resources to set it up (it looks a bit hairy to me), it
should perform well.
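For the curious, an LVS weighted least-connection setup would look
roughly like this with ipvsadm (an untested sketch on my part -- the VIP
and backend addresses are made up):

```shell
# Create a virtual TCP service on the VIP, scheduled with
# weighted least-connection (wlc)
ipvsadm -A -t 192.0.2.10:80 -s wlc

# Add two real servers behind it (NAT mode), with different weights
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.1:80 -m -w 2
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.2:80 -m -w 1
```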
> Rob, can you elaborate a bit more on your mongrels situation. I do not
> use ruby but have a similar situation other types of backend servers. In
> the current scenario, the last server will always get less hits. Are you
> setting some sort of threshold to determine how many mongrels to run (or
> just starting up mongrels until the last is getting no hits). Many thanks.
Hmm, let me use your message to reply to Rob too :)
> >First, actually thank YOU for coming up with the balancer. It's made my
> >life much easier. And please, keep the round-robin behaviour as-is! I
> >mean, it's a great way to tell if you're running too many mongrels and/or
> >too many nginx connections.
Unfortunately, pure WLC behaviour causes problems for Mongrel, as it
apparently doesn't like to be slammed too hard (it looks like it leaks
memory, but that's just a guess).
In the newest snapshot I added (or rather fixed) the round-robin part.
I'll make it configurable, but the default will probably be round-robin
from now on. But yes, it is handy.
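For anyone who wants to try the module, it's enabled with a single
directive inside the upstream block (a minimal sketch -- the Mongrel
ports here are just examples):

```nginx
upstream mongrels {
    fair;                    # upstream_fair's least-connection balancing
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

server {
    listen 80;
    location / {
        proxy_pass http://mongrels;
    }
}
```

Without the `fair` directive, nginx falls back to its built-in weighted
round-robin.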