Benchmarking Nginx on a prehistoric PIII 500

Cliff Wells cliff at
Fri Mar 14 15:19:37 MSK 2008

On Fri, 2008-03-14 at 05:59 +0100, Denis Arh wrote:

> All testing servers had the same conditions, and adding the
> benchmarking tool to the whole mess did only create more realistic
> conditions (there is usually a few other services running on the VPS)

A "few other services" rarely use 1000 connections and half the
available CPU.  Not a realistic environment at all. 

You are also failing to account for the fact that the load the client
places on the hardware and OS will differ for each HTTP server being
tested.  Think about it: the performance of the client is directly
tied to the performance of the server.  A faster server makes the
client work faster, and the faster the client runs, the more CPU it
hogs.  The net result is that the fastest server software is likely
to take the biggest performance hit from having its resources consumed
by the client.  This shows up as a damping effect, pulling all the
results closer together than they really are.
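To make the damping effect concrete, here is a toy model (all numbers
made up for illustration): on a single shared CPU, every request also
pays the client's per-request CPU cost, which shrinks the measured gap
between a fast and a slow server.

```python
# Toy model of the damping effect: when the benchmark client shares
# the CPU with the server, the client's per-request CPU cost is added
# to every server's per-request cost, squeezing the results together.
# All costs below are hypothetical, chosen only to illustrate the math.

def throughput(server_cost, client_cost=0.0):
    """Requests/sec on a single 1-CPU box: 1 / (CPU seconds per request)."""
    return 1.0 / (server_cost + client_cost)

fast_server = 0.0005   # hypothetical: 0.5 ms of server CPU per request
slow_server = 0.0020   # hypothetical: 2.0 ms of server CPU per request
client      = 0.0010   # hypothetical: 1.0 ms of client CPU per request

# Client on a separate machine: the true 4x difference shows up.
true_ratio = throughput(fast_server) / throughput(slow_server)

# Client sharing the box: each request also pays the client's CPU cost,
# which hurts the fast server proportionally more.
measured_ratio = (throughput(fast_server, client)
                  / throughput(slow_server, client))

print(f"true ratio:     {true_ratio:.1f}x")      # 4.0x
print(f"measured ratio: {measured_ratio:.1f}x")  # 2.0x
```

The faster the server, the larger the fraction of its apparent cost
that is really the client's, so co-located benchmarking systematically
understates the difference.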

> Yes, I could do the tests again, removing all disturbing factors
> (another machine for the benchmarking tool, dedicated box for both of
> them), but this would only create a perfect conditions for all of the
> tested software and I'm positive that this would not do anything else
> than raise the final numbers by same factor for all.

Your certainty only assures me you haven't thought much about
benchmarking =)  

It isn't about creating "perfect conditions" (you can't) so much as
removing glaring flaws.  
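One obvious flaw to remove is running the load generator on the box
under test.  A minimal sketch of a two-box setup (host names, port,
and URL are placeholders, and `ab` is just one choice of client):

```shell
# Run the benchmark client from a separate machine so it cannot steal
# CPU from the server being measured.  "client-box" and "server-box"
# are hypothetical host names.
ssh client-box 'ab -n 10000 -c 100 http://server-box:8080/'
```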

> And I think I did a much better job here than most of the developers
> of the tested software (by the number of different servers used,
> specifying what hardware I've used and finally providing the
> config files...)

Being detailed about a flawed test does not make it any less flawed.  

Your approach is a bit like testing the relative speed of a Jeep and a
Ferrari in the snow.  Just because the conditions are the same for both
vehicles doesn't really tell you anything about the actual relative
performance of each. 

Not trying to be negative, just trying to avoid having one more badly
misleading benchmark published (yes, I know that all benchmarks are
misleading to some degree, but we can at least try).

