nginx serving static files - caching?
Ryan Malayter
malayter at gmail.com
Fri Nov 12 17:35:17 MSK 2010
On Fri, Nov 12, 2010 at 3:24 AM, Edho P Arief <edhoprima at gmail.com> wrote:
> On Fri, Nov 12, 2010 at 4:14 PM, revirii <nginx-forum at nginx.us> wrote:
>> Let's say we have 16 GB Ram available for the server. We could simply
>> avoid using some caching mechanism and read the files from disk. For my
>> scenario: is it possible to establish some reasonable caching? No
>> decision regarding the to-be-used technology is made so far. An
>> associate favours varnish (with some webserver as backend) and some
>> least recently used strategy, i'd give nginx+memcached a try. Any
>> recommendations?
>>
>
> how about you... let your operating system handle it?
>
Agreed. There's no reason to try to re-implement what the OS already
does with the file-system cache; you'll probably do it poorly. If
you're using FreeBSD, simply using nginx by itself is probably your
best bet, as you can enable asynchronous IO. Varnish and nginx's
proxy_cache are great at caching, but there's no reason for that tier
unless you're caching dynamically generated assets (or have a very
slow back-end such as Amazon S3).
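A minimal sketch of what that looks like on FreeBSD: nginx serving static
files directly, with the aio directive enabled and the OS buffer cache
doing all the caching. Paths and numbers here are placeholders, not a
recommendation; on FreeBSD the kernel aio module must be loaded, and in
nginx of this vintage aio and plain sendfile are mutually exclusive:

```nginx
# Sketch only: static files on FreeBSD, caching left to the OS.
worker_processes  2;

events {
    worker_connections  1024;
}

http {
    server {
        listen  80;
        root    /var/www/static;   # placeholder path

        location / {
            aio       on;    # FreeBSD async IO (kernel aio(4) must be loaded)
            sendfile  off;   # plain sendfile cannot be combined with aio here
        }
    }
}
```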
If you're using Linux, AIO doesn't work unless you're using direct IO
(bypassing filesystem cache), so you will probably want to use a large
number of nginx workers to keep throughput up even though disk IO is
blocking. With 130,000,000 requests per month, that's 50 requests per
second average. I assume that you will see peaks several times that
number, so you will want to tune the number of worker processes based
on the average request duration, IO load, etc. It all depends on your
cache hit ratio and the patterns of your clients. We use 10 nginx
workers on a dual-core machine to serve about 150M requests per month,
but our content set is only about 80 GB and the cache hit ratio is
very high.
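For the Linux case, a hedged sketch of the worker-heavy approach: many
worker processes so a worker blocked on a disk read doesn't stall other
clients, sendfile to serve straight out of the page cache, and
open_file_cache to cache file descriptors and metadata (not file
contents). The numbers are illustrative, not tuned for your workload:

```nginx
# Illustrative Linux tuning: extra workers absorb blocking disk IO.
worker_processes  10;          # several per core when disk reads block

events {
    worker_connections  1024;
}

http {
    sendfile         on;       # kernel copies straight from the page cache
    tcp_nopush       on;
    open_file_cache  max=10000 inactive=60s;  # caches fds/metadata only

    server {
        listen  80;
        root    /var/www/static;   # placeholder path
    }
}
```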
--
RPM