Fair Proxy Balancer
Grzegorz Nosek
grzegorz.nosek at gmail.com
Tue May 6 20:01:05 MSD 2008
On Tue, May 06, 2008 at 08:12:24AM -0700, Rt Ibmer wrote:
> Thanks. Perhaps my needs are unique, but from my perspective what matters is how fast my upstreams are completing requests. So for instance, if nginx is able to make a connection to upstream x and get a completed response back from it in 5ms, yet upstreams y and z take about 10ms, then I would want it to weight upstream x to get more requests. That seems to be the bottom line for what should matter. Unfortunately I am not grasping how your fair module's weighting works if not based on the overall time to serve a request. In my case I will never be running at full capacity. If there is an internal network issue that affects the connection/transmit time from nginx to a particular upstream, then I'd like that upstream to be penalized and get less traffic until the condition improves and it "proves itself" equally fast again. Hope this makes sense...?
Hi,
The original use case for upstream_fair is slightly different. Consider
a site where some URLs take more time to serve (e.g. file uploads, big
reports etc.). The default balancer knows nothing about this, so it may
schedule requests to a busy backend even though idle ones are
available. upstream_fair always chooses an idle backend if possible.
This behaviour extends to the case where all backends are servicing
several requests at once -- upstream_fair schedules to the backend with
the fewest outstanding requests (essentially weighted least-connections
round-robin).
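
As a minimal sketch (the backend addresses are made up), enabling the
module is just the fair directive inside an upstream block:

    upstream backends {
        fair;                    # use upstream_fair scheduling
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {
        location / {
            proxy_pass http://backends;
        }
    }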
Tracking latency is definitely interesting, but I see several obstacles
to implementing it in upstream_fair.
1. There's no load balancer hook to call when a response _starts_. This
means that large responses would get penalised because they (naturally)
take longer.
2. Not every request takes the same amount of time. If a backend gets a
"hard" request, it would again get penalised, although any other backend
would take just as much time to service it. You could try to work around
this by smoothing the latency estimate (an exponentially weighted moving
average is particularly nice for this; see the sketch after this list)
but it's never going to be truly representative.
3. Do you really feel that the network latency differences will be
consistent enough to automatically tune the algorithm? I'd guess they'll
drown in e.g. database access times, though that obviously depends on
the application.
4. Tracking latency implies keeping persistent per-request state,
instead of per-backend state, and managing the memory for it will be
non-trivial (high fragmentation risk etc.).
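
(For the record, the smoothing from point 2 boils down to a single
update step per response; alpha here is a hypothetical tunable constant
in (0, 1), not anything upstream_fair currently exposes:

    latency_est = alpha * latest_sample + (1 - alpha) * latency_est

The smaller the alpha, the heavier the smoothing, and the slower the
estimate reacts to genuine changes in backend speed.)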
If you _know_ that certain backends are faster, you may always set their
weights higher.
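
For example (hypothetical host names; upstream_fair honours the
standard weight parameter, so x would get roughly twice the traffic of
y or z):

    upstream backends {
        fair;
        server x.example.com weight=2;
        server y.example.com weight=1;
        server z.example.com weight=1;
    }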
Overall, yes, I'd say your needs are unique, but feel free to send a
patch :)
Best regards,
Grzegorz Nosek