Viability of nginx instead of hardware load balancer?
David Murphy
david at icewatermedia.com
Thu Sep 17 18:09:00 MSD 2009
Well, if you are not planning on hosting it yourself, some hosting providers
will set you up with the networking needed to go HA and some will not, so you
should get a list of candidates and ask them directly.

For example, for a fee The Planet will install cross connects to the load
balancers and the web nodes, and if you're in a cabinet they'll let you have
your own switch so you can set up port failover, also known as floating IPs
(NIC bonding).
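As a rough illustration only (not from the original thread): on a Debian-style
box with the ifenslave package, an active-backup bond for NIC failover might
look something like the sketch below. Interface names and addresses are
hypothetical, and the exact stanza option names can vary with the
Debian/ifenslave version.

    # /etc/network/interfaces -- hypothetical active-backup bonding sketch
    auto bond0
    iface bond0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bond-slaves eth0 eth1       # physical NICs enslaved to bond0
        bond-mode active-backup     # one NIC active, the other takes over on link failure
        bond-miimon 100             # link monitoring interval in ms
        bond-primary eth0           # preferred active NIC

This covers failover of the NICs on a single host; a floating service IP that
moves between hosts is handled separately (e.g. by Wackamole/Spread, as
discussed below).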
-----Original Message-----
From: owner-nginx at sysoev.ru [mailto:owner-nginx at sysoev.ru] On Behalf Of John
Moore
Sent: Thursday, September 17, 2009 5:50 AM
To: nginx at sysoev.ru
Subject: Re: Viability of nginx instead of hardware load balancer?
Barry Abrahamson wrote:
>
> On Sep 15, 2009, at 9:41 AM, John Moore wrote:
>
>> I'm working on a project where it's critical to minimize the
>> possibility of a single point of failure, and where there will be
>> quite high traffic. Currently in another version of the system we're
>> using nginx as a remote proxy server for Tomcat, but the current plan
>> is to use a hardware load balancer in front of a Tomcat cluster (or a
>> cluster of nginx+Tomcat instances). I'm wondering, though, given the
>> extraordinary performance and reliability of nginx, whether we might
>> be able to omit the hardware load-balancer and use instead a couple
>> of dedicated minimal nginx servers with failover between them. If
>> anyone has gone down this path and has some good ideas and/or useful
>> experience, I'd be keen to hear from them.
>
> We are using nginx as a reverse proxy (load balancer) serving tens of
> thousands of requests per second across various large sites
> (WordPress.com, Gravatar.com, etc). We deploy our nginx reverse
> proxies in active-active pairs using Wackamole and Spread to control
> the floating IPs for high availability. Our busiest load balancers
> (req/sec) are serving about 7000 req/sec and the most traffic per
> machine is in the 600Mbit/sec range. We could push each machine more;
> they aren't maxed out, but we like to leave some room for growth, DoS
> attacks, hardware/network failures, etc. The bottleneck for us seems
> to be that the large number of software interrupts on the network
> interfaces causes the boxes to become CPU bound at some point. I am
> not sure how to reduce this; it seems like a necessary evil of running
> something like this in user space. I have wanted to try FreeBSD 7 to
> see if it performs better in this area, but haven't had a chance yet
> (we are running Debian Lenny mostly).
>
> We are using "cheap" commodity hardware.
>
> 2 x Quad-core AMD or Intel CPUs
> 2-4GB of RAM
> Single SATA drive
> 2 x 1000Mbit NICs
>
> Since it is so easy to deploy more servers, it's super easy to scale,
> and this configuration has been ultra-reliable for us. Most of the
> failures we have had are from human error.
>
> Hope this helps,
>
>
It certainly does, thanks! Could I trouble you to explain a little more
about your use of Wackamole and Spread? I've not used either of them before.
Also, is there any reason why a hosting company would have problems with
such a setup? (I.e., this won't be running on our own hardware on our
premises, but we do have full control of the Linux servers.)
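For illustration, a minimal nginx.conf fragment for this kind of setup might
look something like the following. The backend addresses, ports and names are
hypothetical; both nginx boxes would carry the same configuration, with
Wackamole/Spread deciding which one currently holds the floating IP that
clients connect to.

    # Hypothetical sketch: nginx as the load balancer in front of a Tomcat cluster
    http {
        upstream tomcat_cluster {
            server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;   # Tomcat node 1
            server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;   # Tomcat node 2
            server 10.0.0.13:8080 backup;                         # spare, used only if the others fail
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_pass http://tomcat_cluster;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }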