upstream load balancing.
Maxim Dounin
mdounin at mdounin.ru
Thu Apr 4 14:14:56 UTC 2013
Hello!
On Thu, Apr 04, 2013 at 09:18:53AM +1300, Steve Holdoway wrote:
> Folks,
>
> I'm sharing processing load across 3 remote servers, and am having a
> terrible time getting it balanced.
>
> Here's the config
>
> upstream backend {
>     server 192.168.162.218:9000 fail_timeout=30 max_fails=3 weight=1; # Engine 1
>     server 192.168.175.5:9000   fail_timeout=30 max_fails=3 weight=1; # Engine 2
>     server 192.168.175.213:9000 fail_timeout=30 max_fails=3 weight=1; # Engine 3
> }
>
>
> When the server gets busy, all load seems to be put onto the final
> entry, which is seeing load averages in the 70's, whereas the first 2
> are below 5.
>
> This is causing serious performance issues. How on earth can we force a
> more even loading?
The above configuration should result in an equal number of
requests being sent to each of the backends. That is not
necessarily the same as equal load in terms of load averages,
especially if the servers themselves are not equal. You may use
the "weight=" parameter to tune the request distribution more
precisely.
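For example, if Engine 3 is roughly half as powerful as the other
two (the weights below are purely illustrative, not taken from
your setup), something like this would send it proportionally
fewer requests:

upstream backend {
    server 192.168.162.218:9000 fail_timeout=30 max_fails=3 weight=2; # Engine 1
    server 192.168.175.5:9000   fail_timeout=30 max_fails=3 weight=2; # Engine 2
    server 192.168.175.213:9000 fail_timeout=30 max_fails=3 weight=1; # Engine 3
}

With these weights nginx sends two requests to each of the first
two servers for every one request sent to the third.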
Alternatively, you may consider using the least_conn balancing
algorithm for automatic balancing based on the number of
currently active connections to the configured upstream servers.
See http://nginx.org/r/least_conn for details.
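A minimal sketch of your upstream block using least_conn
(available in nginx 1.3.1+ and 1.2.2+):

upstream backend {
    least_conn;
    server 192.168.162.218:9000 fail_timeout=30 max_fails=3; # Engine 1
    server 192.168.175.5:9000   fail_timeout=30 max_fails=3; # Engine 2
    server 192.168.175.213:9000 fail_timeout=30 max_fails=3; # Engine 3
}

New requests then go to the server with the fewest active
connections, which tends to compensate for backends that respond
more slowly.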
--
Maxim Dounin
http://nginx.org/en/donation.html