Viability of nginx instead of hardware load balancer?
david at icewatermedia.com
Tue Sep 22 00:00:33 MSD 2009
Once again, Gena, you missed my point by a landslide. WRT/DSL were to show
proof it could be done even on very low-end hardware; you could always get
better hardware. Furthermore, your failover logic is flawed: if done properly,
you can prevent failure pages from ever reaching the end user. However, the
point was not what the best solution in a given case would be, but the overall
viability. I was simply showing a very basic approach; improvements could be
made. Your thought process of expecting a fully planned-out system is like
asking Alexander Graham Bell how a cell phone tower would work, instead of
whether it could be possible in the future to have phones without wires, if
such a technology could be made.
However, I digress; this topic is dead at this point. The user has been shown
several different ways in which nginx could be used instead of a hardware
platform. It's not our job to plan and test it for his needs. Theory over
practice and all that.
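For what it's worth, here is a minimal sketch of the kind of setup I mean, using stock nginx directives (the addresses and ports below are invented for illustration): failed backends are retried transparently instead of surfacing an error page, and a hot spare only takes traffic when the active servers are down.

```nginx
# Minimal sketch (addresses invented): two active backends plus a hot spare.
upstream app_pool {
    server 10.0.0.11:8080 max_fails=2 fail_timeout=10s;
    server 10.0.0.12:8080 max_fails=2 fail_timeout=10s;
    server 10.0.0.13:8080 backup;   # only receives traffic when actives are down
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pool;
        # retry the next upstream instead of showing the client an error page
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

Redundancy of the LB box itself is a separate problem; the usual approach is to pair two such boxes with a shared virtual IP (e.g. VRRP via keepalived), which this fragment does not show.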
From: owner-nginx at sysoev.ru [mailto:owner-nginx at sysoev.ru] On Behalf Of Gena
Sent: Monday, September 21, 2009 5:50 AM
To: David Murphy
Subject: Re: Viability of nginx instead of hardware load balancer?
On Thursday, September 17, 2009 at 17:05:28, David Murphy wrote:
DM> If your load balancer is not doing anything but being a load
DM> balancer, really you only need high-quality network devices, a
DM> minimalistic kernel (to prevent security holes), and a RAM-based OS or fast
The WRT-54G has a very limited amount of memory - 8 MB, 16 MB, or 32 MB.
This is not proper hardware for a high-traffic HTTP load balancer; you can
test it independently if you don't believe my humble opinion.
DM> I would agree you need better hardware if you are doing more, but as
DM> a pure LB, hardware requirements are not as strict as you are implying.
Even a pure HTTP LB needs quite a lot of RAM for TCP buffers and connection state.
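Gena's RAM point can be put in rough numbers. A back-of-envelope sketch (the per-connection figures below are assumptions chosen for illustration, not measurements of nginx or the WRT-54G):

```python
# Back-of-envelope: how many concurrent proxied connections fit in WRT-54G RAM?
# Assumed figures (illustrative only): ~16 KiB of kernel socket buffers per
# side of the proxied connection, plus ~8 KiB of proxy state per connection.
ram_bytes = 32 * 1024 * 1024          # best-case WRT-54G: 32 MB
per_conn = (16 + 16 + 8) * 1024       # client side + upstream side + state
usable = ram_bytes // 2               # leave half for kernel, OS, nginx itself
print(usable // per_conn)             # roughly 400 concurrent connections
```

Even under these generous assumptions, a few hundred concurrent connections is nowhere near "high traffic", which is the substance of the objection.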
DM> Furthermore you need to get past this concept of "failures" because
DM> if you configured things properly you would have a hot spare device
DM> to prevent any such lag. I find that buying a single piece of
DM> hardware vs building out a redundant infrastructure a) costs more
DM> money and b) actually has a higher chance of failure due to a
DM> Single Point of Failure
Lags/overloads - because of the low performance of very cheap hardware (slow
CPU, not enough RAM).
Failures - because of the low reliability of very cheap hardware (obsolete components).
DM> Also, you have the ability via networking to run a truly balanced
DM> share of the load, so you could have three LBs each getting
DM> 1/3 of all requests and hitting the same backends. Then if one
DM> drops off, the switch just downs the port and 1/2 goes to each of the
DM> remaining LB nodes.
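The even-share idea quoted above can be sketched with a toy simulation (node names and client IPs are made up; real deployments do this in the switch/router, e.g. via ECMP, not in Python). Hashing each client across the live nodes gives each of three nodes about 1/3 of clients, and about 1/2 each once one node is removed:

```python
import hashlib

def pick_node(client_ip: str, nodes: list) -> str:
    """Map a client to one of the live LB nodes by hashing its IP."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

def share(nodes: list, n_clients: int = 30000) -> dict:
    """Fraction of simulated clients landing on each node."""
    counts = {node: 0 for node in nodes}
    for i in range(n_clients):
        counts[pick_node(f"10.1.{i // 256}.{i % 256}", nodes)] += 1
    return {node: c / n_clients for node, c in counts.items()}

lbs = ["lb1", "lb2", "lb3"]                    # three software LBs
print(share(lbs))                              # each near 1/3
print(share([n for n in lbs if n != "lb2"]))   # lb2 down: each near 1/2
```

Note that simple modulo hashing remaps most clients when the node count changes, which illustrates Gena's follow-up point: redistribution is not free for the connections in flight.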
In the case of a persistent failure - yes, the failed node drops all active TCP
connections at that moment and goes down.
So even a persistent failure doesn't have zero cost - it generates a temporary
denial of service for all clients of those connections.
But, for example, in the case of broken memory chips, the failed LB does not go
down; it continues to "work", generating broken IP packets or causing reboots/kernel panics.
DM> Your belief you must buy hardware is just a waste of capital
DM> investment, when you can build it yourself for much cheaper with
DM> the same or better hardware than buying something from a vendor.
I believe that the WRT-54G is not appropriate hardware for a load balancer, and
that DSL (2.4 kernel) / DSL-N (development version) is not an appropriate base
OS for a load balancer, even when using the high-quality nginx server.
I believe what needs to be minimized is not the cost of some piece of hardware,
but the TCO of the solution, provided that all QoS requirements are satisfied
and scaling of every part of the system is possible.