Nginx Fastcgi_cache performance - Disk cached VS tmpfs cached VS serving static file
Maxim Dounin
mdounin at mdounin.ru
Fri Oct 4 12:05:09 UTC 2013
Hello!
On Thu, Oct 03, 2013 at 03:00:51PM -0400, ddutra wrote:
> Maxim Dounin Wrote:
> -------------------------------------------------------
>
> > The 15 requests per second for a static file looks utterly slow,
> > and first of all you may want to find out what's a limiting factor
> > in this case. This will likely help to answer the question "why
> > the difference".
> >
> > From what was previously reported here - communication with EC2
> > via external ip address may be very slow, and using 127.0.0.1
> > instead used to help.
>
> Alright, so you are saying my static html serving stats are bad, which means
> the gap between serving static html from disk and serving a cached version
> (fastcgi_cache) from tmpfs is even bigger?
Yes. The numbers are _very_ low. In a virtual machine on my notebook,
the numbers from siege with a 151-byte static file look like this:
$ siege -c 40 -b -t120s http://127.0.0.1:8080/index.html
...
Lifting the server siege... done.
Transactions: 200685 hits
Availability: 100.00 %
Elapsed time: 119.82 secs
Data transferred: 28.90 MB
Response time: 0.02 secs
Transaction rate: 1674.88 trans/sec
Throughput: 0.24 MB/sec
Concurrency: 39.64
Successful transactions: 200685
Failed transactions: 0
Longest transaction: 0.08
Shortest transaction: 0.01
...
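(For reference only, and not the actual test config: serving a file like
this takes nothing more than a minimal server block along these lines;
the root path here is illustrative, only the listen address matches the
command above.)

# inside the http{} block of nginx.conf
server {
    listen 127.0.0.1:8080;
    # directory holding the small index.html used for the test
    root /var/www/test;
    location / {
        # plain static file serving, no fastcgi involved
    }
}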
That rate is still very low. Switching off verbose output in the siege
config (verbose logging is enabled by default; the relevant setting is
shown a bit further below) results in:
$ siege -c 40 -b -t120s http://127.0.0.1:8080/index.html
** SIEGE 2.70
** Preparing 40 concurrent users for battle.
The server is now under siege...
Lifting the server siege... done.
Transactions: 523592 hits
Availability: 100.00 %
Elapsed time: 119.73 secs
Data transferred: 75.40 MB
Response time: 0.01 secs
Transaction rate: 4373.23 trans/sec
Throughput: 0.63 MB/sec
Concurrency: 39.80
Successful transactions: 523592
Failed transactions: 0
Longest transaction: 0.02
Shortest transaction: 0.01
That is, an almost 3x speedup. This suggests the limiting factor in the
first test is siege itself. And top suggests the test is CPU bound
(0% idle), with nginx using about 4% of the CPU and about 60% accounted
to siege threads. The rest is unaccounted for, likely due to the number
of threads siege uses.
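For reference, the change is a single line in siege's configuration file
(the exact path depends on the install; ~/.siegerc here is an assumption):

# ~/.siegerc -- verbose output is enabled by default and is surprisingly
# expensive; turning it off gave the roughly 3x speedup shown above
verbose = false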
With http_load, the results look like this:
$ echo http://127.0.0.1:8080/index.html > z
$ http_load -parallel 40 -seconds 120 z
696950 fetches, 19 max parallel, 1.05239e+08 bytes, in 120 seconds
151 mean bytes/connection
5807.91 fetches/sec, 876995 bytes/sec
msecs/connect: 0.070619 mean, 7.608 max, 0 min
msecs/first-response: 0.807419 mean, 14.526 max, 0 min
HTTP response codes:
code 200 -- 696950
That is, the siege results certainly could be better. The test is again
CPU bound, with nginx using about 40% of the CPU and http_load using
about 60%.
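If the load generator has to share the box with nginx, it may at least
help to pin them to different cores. Just a sketch - the core numbers and
the nginx path are arbitrary, and this is not how the tests above were run:

# keep nginx and the load generator on separate cores so the tool's
# CPU usage does not eat into nginx's
taskset -c 0 /usr/sbin/nginx          # or set worker_cpu_affinity in nginx.conf
taskset -c 1 http_load -parallel 40 -seconds 120 z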
From my previous experience, siege requires multiple dedicated servers
to run on because it is so CPU-hungry.
[...]
> Please let me know what you think.
The numbers are still very low, but the difference between the public IP
and 127.0.0.1 seems minor. The limiting factor is something else.
> It's my first nginx experience. So far it
> is performing way better than my old setup, but I would like to get the most
> out of it.
First of all, I would recommend making sure that you are benchmarking
nginx, not your benchmarking tool.
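One cheap way to check that (purely illustrative, not something from the
tests above) is to point the tool at a response that involves no file I/O
at all and see whether the numbers move much; if they don't, the tool
itself is the ceiling:

# illustrative sanity-check location, added to the server block being
# tested: an in-memory response with no disk access, to estimate the
# load generator's own limit
location = /tool-ceiling {
    return 200 "ok";
}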
--
Maxim Dounin
http://nginx.org/en/donation.html