HAProxy, NGINX and Rails anyone?
Willy Tarreau
w at 1wt.eu
Fri Aug 1 20:00:49 MSD 2008
On Fri, Aug 01, 2008 at 11:44:19AM -0400, Brian Gupta wrote:
> Understood. I was wondering if they had haproxy properly configured. Frankly,
> haproxy's docs are really hard to digest for newcomers, and are probably the
> biggest impediment to growing the user base.
Yes, I know.
> It's one giant text file so you
> basically have to know what you are looking for already. (It's documentation
> geared toward advanced haproxy users and haproxy developers.)
That's why I want to quickly get rid of the old files and keep only the
reference manual and the architecture manual, since the latter provides
immediate solutions to existing problems, though it is a bit outdated
now (it contains no reference to content switching).
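For people who have never used it, content switching in haproxy 1.3
looks roughly like this; the names, addresses and timeouts below are
only an illustration, not a recommended setup:

    defaults
        mode http
        contimeout  5000
        clitimeout 30000
        srvtimeout 30000

    frontend http-in
        bind :80
        # send requests for static objects to a dedicated farm
        acl url_static path_beg /static /images /css
        use_backend static-farm if url_static
        default_backend app-farm

    backend static-farm
        balance roundrobin
        server stat1 192.168.0.10:80 check

    backend app-farm
        balance roundrobin
        server app1 192.168.0.20:80 check
        server app2 192.168.0.21:80 check

That is the part which is still missing from the architecture manual
today.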
> When things
> slow down a bit, I plan to help out with the Wiki efforts to address newbie
> documentation.
That's much appreciated :-)
> > Now, that said, I know about people who have a handful of machines
> > all saturating gig pipes all the day with moderate CPU usage. And
> > judging by the workload (large transfers essentially, no problem
> > with running one process per CPU core), it should be possible to
> > use a 10 Gig NIC in one single machine. But this case is extreme,
> > as most people serve smaller files. From my first series of tests,
> > you need to serve files larger than 50kB on average to go beyond
> > 5 Gbps (that was about 13000 hits/s). And you needed an average of
> > 500kB to reach 9 Gbps.
>
>
> I thought you had posted that you saturated a 10G NIC somewhere? (Maybe I
> misunderstood/misremembered something.)
Yes I did, but it was in my lab. It proves the component by itself
can withstand this level of load, but it does not mean that everyone
has a similar workload (e.g. many "large" files). Sites serving many
movies or ISO mirrors match this workload perfectly. Standard sites
do not match it as well (though 3-5 Gbps should be a reasonable goal
with a good 10Gig NIC and a properly tuned TCP stack).
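To put the numbers above in perspective, the bit rate is simply the
hit rate times the average object size, so roughly (the 13000 hits/s
figure comes from my tests, the 2250 is just 9 Gbps divided by 500kB):

    13000 hits/s x  50 kB x 8 bits/byte ~= 5.2 Gbit/s
     2250 hits/s x 500 kB x 8 bits/byte ~= 9.0 Gbit/s

With small objects you run out of hits/s (and of CPU) long before you
run out of bandwidth, which is why the average object size matters so
much more than the NIC itself.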
> > You should also be aware that reaching these loads on one machine
> > requires a lot of system tweaking. You can run out of memory in
> > seconds, and if the system runs low on network buffers, you observe
> > terrible performance. It's always a trade-off between raw performance
> > and reliability.
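To be a bit more concrete about the "system tweaking" part: on Linux
it mostly means adjusting the TCP memory limits so that tens of
thousands of concurrent sessions fit in RAM. Something along these
lines, where every value is purely illustrative and has to be sized
to the machine's memory and the expected session count:

    # /etc/sysctl.conf -- illustrative values only
    net.ipv4.tcp_rmem = 4096 16384 262144    # per-socket receive buffers (min/default/max)
    net.ipv4.tcp_wmem = 4096 16384 262144    # per-socket send buffers
    net.core.rmem_max = 262144
    net.core.wmem_max = 262144
    net.ipv4.tcp_mem  = 786432 1048576 1572864   # global TCP memory, in pages
    net.core.somaxconn = 4096                # accept queue depth
    net.core.netdev_max_backlog = 4096       # per-NIC input queue
    fs.file-max = 200000                     # enough file descriptors

Capping the per-socket buffers is what keeps memory usage bounded, but
it also caps per-session throughput, hence the trade-off with raw
performance mentioned above.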
>
>
> I got ya... I still say it would be interesting to see if we could improve
> those Wordpress testing numbers using haproxy. (Even with real world traffic
> it should be possible to beat 1.2Gbit/sec)
It depends on a lot of factors:
  - if WordPress is OK to contribute regular benchmarks on real-world
    traffic, it would be a major help for many projects, but I don't
    see what their motivation would be.
  - if they are 100% satisfied with nginx, why would they switch to
    anything else? #1 rule in network architecture: if it isn't
    broken, don't fix it.
  - if they miss something haproxy offers, it may be a valid reason
    to run benchmarks iteratively.
  - if neither haproxy nor nginx fills 100% of their needs, it would
    be good for both projects to be aware of it, and to estimate the
    cost and relevance of implementing the missing features.
But you obviously can't blame them for not reconsidering their choice
once they have made it, if they are satisfied :-)
> > These days, I would say that 10Gig is becoming affordable, but a lot
> > of work is required to get the best out of it, while reaching gig
> > speeds is almost a child's game.
>
>
> Key word, "almost". ;)
Of course :-)
Cheers,
Willy