Fair Proxy Balancer
fairwinds at eastlink.ca
Tue May 6 22:34:31 MSD 2008
This is an interesting use case. Further to this, I have also been
looking at triggers that would factor into a decision to create,
terminate, or terminate and create (replace) instances. Latency is
certainly one of these. Latency tracking in the fair balancer is quite
interesting, and it would be great to have a hook that allowed the
tracked latency to be captured by your monitoring server.
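As a rough illustration of what such a trigger could look like, here is a minimal sketch that keeps an exponentially weighted moving average (EWMA) of per-backend response latency and flags instances for replacement when they drift well above the fleet mean. The class name, smoothing factor, and threshold are all hypothetical; nothing here is part of the fair balancer itself.

```python
# Hypothetical sketch: EWMA latency tracking per backend, with a simple
# "replace this instance" trigger. All names and thresholds are made up.

class LatencyTracker:
    def __init__(self, alpha=0.2):
        self.alpha = alpha    # EWMA smoothing factor (higher = more reactive)
        self.ewma_ms = {}     # backend name -> smoothed latency in ms

    def record(self, backend, latency_ms):
        # First sample seeds the average; later samples are blended in.
        prev = self.ewma_ms.get(backend, latency_ms)
        self.ewma_ms[backend] = (1 - self.alpha) * prev + self.alpha * latency_ms

    def replace_candidates(self, factor=2.0):
        """Backends whose smoothed latency exceeds `factor` x the fleet mean."""
        if not self.ewma_ms:
            return []
        mean = sum(self.ewma_ms.values()) / len(self.ewma_ms)
        return [b for b, ms in self.ewma_ms.items() if ms > factor * mean]
```

A monitoring hook exposing the per-upstream averages would let an external process make the create/terminate/replace decision without touching the balancer.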
Rt Ibmer wrote:
> Thank you. These are excellent points. In my case all upstream servers share the same responsibility for the types of requests that are served. I guess I am looking at 'fair' more as a way to auto-tune the weighting based on the relative performance of each upstream.
> I am hosting within the Amazon EC2 network. Because of fluctuations in their virtualized environment and underlying systems, it is very possible to have some backends performing poorly compared to others.
> For instance, imagine a scenario where I have 3 virtualized servers running on EC2 as my upstream boxes. These three servers may actually be (and most likely are) on different physical hosts. Now assume one of the EC2 hosts has a problem that affects the performance of all the virtualized servers it is hosting (perhaps it is networking related, or perhaps it affects the speed of the machine).
> Now my upstream server on that troubled box will be running at a much lower level of performance than my other upstreams, and this will show up on the bottom line as a much higher average total response time in ms (the time it takes to connect to the upstream and receive its full response) compared to the others.
> So in my case, I would like to use 'fair' almost as a way to maximize site performance based on the health of the systems. Under heavy load I think 'fair' would likely do this, as requests to the slower box would get backed up and that backlog would be reflected in the weighting. But under light load probably not: 'fair' would still route requests to a server that may take 500ms longer to reply, simply because there is no backlog.
> Anyway I realize that you did not write 'fair' to solve this but just wanted to provide you with this feedback in case it spurs some ideas for how to expand it to cover this usage scenario. Thank you for this opportunity to provide the feedback and for your great contributions to the nginx project!
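The light-load scenario described above can be sketched numerically. Assuming (hypothetically) that a backlog-based policy with empty queues degenerates to round-robin, a toy comparison against a latency-weighted choice looks like this; the backend names and millisecond figures are made up for illustration:

```python
import itertools

# Hypothetical light-load comparison: with empty queues, a backlog-based
# policy keeps sending every Nth request to the slow box, while weighting
# by measured response time routes around it.

backends = [
    {"name": "a", "avg_ms": 40},
    {"name": "b", "avg_ms": 45},
    {"name": "c", "avg_ms": 540},   # upstream on a troubled EC2 host
]

def round_robin(n):
    """Backlog-based choice with empty queues: effectively round-robin."""
    cycle = itertools.cycle(backends)
    return [next(cycle) for _ in range(n)]

def latency_weighted(n):
    """Always prefer the backend with the lowest measured response time."""
    return [min(backends, key=lambda b: b["avg_ms"]) for _ in range(n)]

def mean_latency(picks):
    return sum(b["avg_ms"] for b in picks) / len(picks)

print(mean_latency(round_robin(9)))       # (3*40 + 3*45 + 3*540) / 9, about 208.3
print(mean_latency(latency_weighted(9)))  # 40.0
```

In this toy model the round-robin fallback pays roughly a 5x average latency penalty, which is the gap a latency-aware tie-break would close.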