Viability of nginx instead of hardware load balancer?
Barry Abrahamson
barry at automattic.com
Thu Sep 17 09:05:23 MSD 2009
On Sep 15, 2009, at 9:41 AM, John Moore wrote:
> I'm working on a project where it's critical to minimize the
> possibility of a single point of failure, and where there will be
> quite high traffic. Currently in another version of the system we're
> using nginx as a remote proxy server for Tomcat, but the current
> plan is to use a hardware load balancer in front of a Tomcat cluster
> (or a cluster of nginx+Tomcat instances). I'm wondering, though,
> given the extraordinary performance and reliability of nginx,
> whether we might be able to omit the hardware load-balancer and use
> instead a couple of dedicated minimal nginx servers with failover
> between them. If anyone has gone down this path and has some good
> ideas and/or useful experience, I'd be keen to hear from them.
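For reference, a minimal sketch of the nginx-in-front-of-Tomcat setup being asked about might look like the following (addresses, ports, and failover thresholds here are illustrative assumptions, not anything from the thread):

```nginx
# Sketch: nginx as a software load balancer in front of a Tomcat cluster.
upstream tomcat_cluster {
    server 10.0.0.11:8080 max_fails=3 fail_timeout=10s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=10s;
    server 10.0.0.13:8080 backup;   # only used when the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With max_fails/fail_timeout nginx takes a failed backend out of rotation automatically; pairing two such nginx boxes with a floating IP removes the proxy itself as a single point of failure.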
We are using nginx as a reverse proxy (load balancer) serving tens of
thousands of requests per second across various large sites
(WordPress.com, Gravatar.com, etc). We deploy our nginx reverse
proxies in active-active pairs using Wackamole and Spread to control
the floating IPs for high availability. Our busiest load balancers
(req/sec) are serving about 7000 req/sec and the most traffic per
machine is in the 600Mbit/sec range. We could push each machine more,
they aren't maxed out, but we like to leave some room for growth, DoS
attacks, hardware/network failures, etc. The bottleneck for us seems
to be that the large number of software interrupts on the network
interfaces causes the boxes to become CPU bound at some point. I am
not sure how to reduce this; it seems like a necessary evil of running
something like this in user space. I have wanted to try FreeBSD 7 to
see if it performs better in this area, but haven't had a chance yet
(we are running Debian Lenny mostly).
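One quick way to see this softirq pressure on Linux (just a sketch, not tooling from the thread) is to look at the softirq column of the aggregate "cpu" line in /proc/stat:

```shell
# Rough check of how much CPU time has gone to software interrupts
# since boot. Linux-only: parses the first "cpu" line of /proc/stat,
# where field 8 is the softirq jiffy counter.
awk '/^cpu / {
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    printf "softirq: %.2f%% of CPU time since boot\n", 100 * $8 / total
}' /proc/stat
```

For a live, per-CPU breakdown, `mpstat -P ALL 1` (from the sysstat package) shows the %soft column, and /proc/interrupts shows which NIC queues the interrupts land on.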
We are using "cheap" commodity hardware.
2 x Quad-core AMD or Intel CPUs
2-4GB of RAM
Single SATA drive
2 x 1000Mbit NICs
Since it is so easy to deploy more servers, scaling out is simple,
and this configuration has been ultra-reliable for us. Most of the
failures we have had are from human error.
Hope this helps,
Barry
--
Barry Abrahamson | Systems Wrangler | Automattic
Blog: http://barry.wordpress.com