Viability of nginx instead of hardware load balancer?

David Murphy david at icewatermedia.com
Wed Sep 16 01:24:31 MSD 2009


Here is our current setup:

2 ESX hosts with 4 quad-core Xeon 55xx-series chips

1 SAN with dual controllers and power supplies for each controller, routed
to separate switches, which interconnect to the ESX hosts

 

 

In the LB VMs we use OCFS2 to create a shared session folder that the
Apache 2 / PHP 5.2 backends read from.

 

 

The LBs only do load balancing, not PHP serving (I found Apache with a PHP
module worked better for that).
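
For illustration, a proxy-only LB block in nginx looks roughly like this
(a minimal sketch; the upstream addresses are placeholders, not our actual
config):

    http {
        upstream apache_php_backends {
            server 10.0.0.11:80;   # Apache 2 / PHP 5.2 backend
            server 10.0.0.12:80;   # Apache 2 / PHP 5.2 backend
        }

        server {
            listen 80;

            location / {
                proxy_pass       http://apache_php_backends;
                proxy_set_header Host       $host;
                proxy_set_header X-Real-IP  $remote_addr;
            }
        }
    }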

 

Also, in our LB configuration, if a request to an upstream fails it is
retried via a rewrite rule, allowing the bad upstream to be removed and the
pool to reprocess the request.

Thus an end user never sees a proxy error page.
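
A rough sketch of how that failover behaves, expressed with nginx's
built-in retry directives (we actually use a rewrite rule, which I haven't
reproduced here; addresses are placeholders):

    # inside the http{} / server{} blocks
    upstream apache_php_backends {
        # After 3 failed attempts within 30s the server is taken out of rotation.
        server 10.0.0.11:80 max_fails=3 fail_timeout=30s;
        server 10.0.0.12:80 max_fails=3 fail_timeout=30s;
    }

    location / {
        proxy_pass http://apache_php_backends;
        # Retry the request on the next upstream instead of returning an error page.
        proxy_next_upstream error timeout http_500 http_502 http_503;
    }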

 

Does your kernel use PAE? If not, you need to move to a PAE or 64-bit
kernel to really utilize your RAM.

 

I would love to go to a true nginx environment, if only nginx could run
PHP as a module instead of having to spawn separate PHP processes.
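
(For anyone curious, the spawned-PHP route in nginx looks roughly like
this: a sketch assuming php-cgi FastCGI processes are already listening on
127.0.0.1:9000, e.g. started with spawn-fcgi, and a placeholder docroot:)

    # inside the server{} block
    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  SCRIPT_FILENAME  /var/www/html$fastcgi_script_name;
    }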

 

 

Hope this helps some.

 

BTW, we currently use JeOS 8.04 for this task, but I'm moving to CentOS for
better support from Dell and VMware (CentOS is on the approved distro list
for both of them).

 

David

 

From: owner-nginx at sysoev.ru [mailto:owner-nginx at sysoev.ru] On Behalf Of Ilan
Berkner
Sent: Tuesday, September 15, 2009 3:42 PM
To: nginx at sysoev.ru
Subject: Re: Viability of nginx instead of hardware load balancer?

 

We are currently running Nginx as a front-end LB to a single PHP app server.
When we need a bit more horsepower, I can divert traffic to the LB itself
to process PHP requests with a minor change in the config.
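
(A rough sketch of one way that kind of diversion can be done; placeholder
addresses, not the actual config:)

    upstream php_pool {
        server 10.0.0.21:80;        # the dedicated PHP app server
        # Uncomment to also process PHP requests on the LB box itself:
        # server 127.0.0.1:8080;
    }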

We are serving close to 2 million page views a day and growing.  The Nginx
machine is a dual-core, dual-CPU AMD system, 32-bit, with 8 GB of RAM.  We
are also using it for delivery of static assets (everything except PHP).
CPU utilization on the box at peak is below 3%, with nginx handling 3000+
connections at any given time.

I also configured the PHP app server as an Nginx server, and it serves as a
backup in case the primary nginx server fails.

I'm very happy with the configuration.  Adding another PHP app server will
be a breeze, although session management is going to be difficult.

On Tue, Sep 15, 2009 at 4:16 PM, Gena Makhomed <gmm at csdoc.com> wrote:

On Tuesday, September 15, 2009 at 18:19:38, David Murphy wrote:

DM> Not sure if this is possible (as I haven't tried it),
DM> but what about building nginx on Damn Small Linux and having
DM> a boot CD, ramdisk, or even boot flash? You could literally take
DM> something like a PowerEdge 1425 or so and have a kicking minimalistic
DM> hardware LB running nginx.

DSL is a desktop OS: a Linux distro for i486 with a 2.4.x kernel,
optimized for minimal RAM usage and old computers.
No Linux 2.6.x kernel means no "epoll" at all.

Therefore DSL is totally useless as the base OS for a high-traffic load
balancer.
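
(For reference, epoll is the event method nginx uses on 2.6.x kernels; a
minimal sketch of making it explicit in nginx.conf:)

    events {
        worker_connections 1024;
        use epoll;    # requires a Linux 2.6.x kernel
    }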

DM> Technically, if you were so inclined, you could even write DSL and nginx
DM> to a PROM chip so it's 100% automated. I'm betting that if nginx does
DM> everything you need, it would be a lot cheaper than the normal hardware
DM> route, with the same if not better stability.

The question was not about the cheapest "solution", but about a "high traffic LB".

DM> Personally, what I would do (assuming you have ESX) is run 2 VMs, both
DM> running nginx on dedicated NICs. Then on your switching, set up
DM> active/active failover to those NICs (and have the VMs on separate ESX
DM> hosts).

DM> You would then have a fully redundant LB system, so if nginx on one node
DM> crashes, the failover would route all traffic to the other LB.

If, for example, the mainboard of the ESX server hosting these VMs fails,
both VMs go down. So this is not "a fully redundant LB system".

The hardware of an ESX server is a "single point of failure".

--
Best regards,
 Gena




