Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS serving static file

ddutra nginx-forum at nginx.us
Fri Oct 4 13:43:05 UTC 2013


Hello Maxim,
Thanks again for your considerations and help.

My first siege tests against the EC2 m1.small production server were done
from a Dell T410 with 4 CPUs x 2.4 GHz (Xeon E5620). It was after your
considerations about 127.0.0.1 that I ran siege from the same server that
is running nginx (production).

The Debian machine I am using for the tests has 4 vCPUs and runs nothing
else. Other virtual machines run on this host, but nothing too heavy. So I
am sieging from a server that has way more power than the one running
nginx, and I am requesting a 44.2 kB static HTML file on the production
server.
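
(A quick client-side sanity check of the payload size, assuming the server
sends a Content-Length header:)

    curl -sI 'http://177.71.188.137/test.html' | grep -i content-length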

Let's run the tests again. This time I'll keep an eye on siege's CPU usage
and overall server load using htop and the VMware vSphere client.

siege -c40 -b -t120s -i 'http://177.71.188.137/test.html'   (against
production)

Transactions:                    2010 hits
Availability:                 100.00 %
Elapsed time:                 119.95 secs
Data transferred:              28.12 MB
Response time:                  2.36 secs
Transaction rate:              16.76 trans/sec
Throughput:                     0.23 MB/sec
Concurrency:                   39.59
Successful transactions:        2010
Failed transactions:               0
Longest transaction:            5.81
Shortest transaction:           0.01

Siege CPU usage was around 1-2% during the entire 120s.
On the other hand, the EC2 m1.small (production nginx) was at 100% the
entire time, all of it nginx.

Again, with more concurrent users:
siege -c80 -b -t120s -i 'http://177.71.188.137/test.html'

Lifting the server siege...      done.
Transactions:                    2029 hits
Availability:                 100.00 %
Elapsed time:                 119.65 secs
Data transferred:              28.41 MB
Response time:                  4.60 secs
Transaction rate:              16.96 trans/sec
Throughput:                     0.24 MB/sec
Concurrency:                   78.00
Successful transactions:        2029
Failed transactions:               0
Longest transaction:            9.63
Shortest transaction:           0.19

Can't get past 17 trans/sec per CPU (2029 hits / 119.65 secs ≈ 17).

This time siege's CPU usage on my Dell server was around 2-3% the entire
time (htop). The vSphere graphs don't even show a change from idle.

So I think we can rule out siege being CPU-limited.
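
(To back that up with numbers instead of eyeballing htop, something like
this could sample per-process CPU every second during a run; pidstat comes
with the sysstat package:)

    pidstat -u -p "$(pgrep -d, siege)" 1    # siege on the load generator
    pidstat -u -p "$(pgrep -d, nginx)" 1    # nginx workers on the target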


----------------

Now for another test: running siege and nginx on the same machine, with
exactly the same nginx.conf as the production server, changing only one
thing: worker_processes from 1 to 4, because the m1.small (AWS EC2) has
only 1 vCPU while my VMware dev server has 4. The diff is sketched below.
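
The only change at the top of nginx.conf, with the surrounding events block
shown for context (assumed defaults, not my exact values):

    worker_processes 4;    # production m1.small runs 1 (single vCPU)

    events {
        worker_connections 1024;    # assumed default, unchanged between the configs
    }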

siege -c40 -b -t120s -i 'http://127.0.0.1/test.html'

Results: siege used about 1% CPU, and all 4 vCPUs jumped between 30% and
90% usage, I would say 50% on average.

I don't see a lot of improvement either; results below:

Transactions:                   13935 hits
Availability:                 100.00 %
Elapsed time:                 119.25 secs
Data transferred:             195.14 MB
Response time:                  0.34 secs
Transaction rate:             116.86 trans/sec
Throughput:                     1.64 MB/sec
Concurrency:                   39.85
Successful transactions:       13935
Failed transactions:               0
Longest transaction:            1.06
Shortest transaction:           0.02


siege -c50 -b -t240s -i 'http://127.0.0.1/test.html'
Transactions:                   27790 hits
Availability:                 100.00 %
Elapsed time:                 239.93 secs
Data transferred:             389.16 MB
Response time:                  0.43 secs
Transaction rate:             115.83 trans/sec
Throughput:                     1.62 MB/sec
Concurrency:                   49.95
Successful transactions:       27790
Failed transactions:               0
Longest transaction:            1.78
Shortest transaction:           0.01


I believe the machine I just ran this test on is more powerful than our
notebooks. Average CPU during the tests was 75%, with 99% of it consumed by
nginx. So it can only be something in the nginx config file.

Here is my nginx.conf
http://ddutra.s3.amazonaws.com/nginx/nginx.conf

And here is the virtual host file from which I am fetching this test.html
page; it is the default virtual host, the same one I use for status consoles etc.
http://ddutra.s3.amazonaws.com/nginx/default
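
In case those links go stale, a minimal default vhost for serving a static
test.html would look along these lines (a sketch with assumed paths, not
the exact file):

    server {
        listen 80 default_server;
        server_name _;
        root /var/www;    # assumed document root

        location / {
            # test.html is plain static content; no FastCGI or cache involved
            try_files $uri $uri/ =404;
        }
    }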


If you could please take a look. There is a huge difference between your
results and mine, and I am sure I am doing something wrong here.

Best regards.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243412,243429#msg-243429


