nginx cache mounted on tmpfs getting full

Peter Booth peter_booth at me.com
Mon Feb 20 05:36:03 UTC 2017


Just to be pedantic: it's counterintuitive, but in general tmpfs is not faster than local storage for the use case of caching static content for web servers.
Sounds weird? Here's why:

tmpfs is a file-system view of all of the system's virtual memory - that is, both physical memory and swap space.

If you use local storage for your cache store, then every time a file is requested for the first time it is read from local storage and written to the page cache. Subsequent requests are served from the page cache. The OS manages memory allocation so as to maximize the size of the page cache (subject to config settings such as swappiness, min_free_kbytes, and the watermark settings for kswapd). The fact that you are using a cache suggests that you have expensive back-end queries that your cache sits in front of, so the cost of reading a cached file from disk << the cost of recreating the dynamic content. With real-world web systems, the probability distribution of requests for different resources is never uniform; there are clusters of popular resources.
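To illustrate that non-uniformity, here is a toy sketch with made-up Zipf-like popularity weights (illustrative numbers, not real traffic data):

```python
# Sketch: with Zipf-like request popularity, a tiny fraction of items
# receives most of the traffic - exactly the working set the OS page
# cache will keep resident. All numbers here are illustrative.
import random
from collections import Counter

random.seed(42)
n_items = 10_000                                  # distinct cacheable resources
# Zipf-like popularity: item i gets weight 1/(i+1)
weights = [1.0 / (i + 1) for i in range(n_items)]
requests = random.choices(range(n_items), weights=weights, k=100_000)

counts = Counter(requests)
hot_hits = sum(c for _, c in counts.most_common(100))  # hottest 1% of items
share = hot_hits / len(requests)
print(f"hottest 1% of items served {share:.0%} of requests")
```

With weights like these, roughly half of all requests land on the hottest 1% of items - a working set small enough to live entirely in RAM.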

Any popular requests that get served from your disk cache will in fact be served from the page cache (memory), so there is no reason to interfere with the OS. In general it's likely that you have less physical memory than disk space, so using tmpfs for your nginx cache could mean that you're serving cached files from swap space.

Allowing the OS to do its job (with the page cache) means that you will already get tmpfs-like latencies for the popular resources - which is what you want in order to maximize performance across your entire site. This is another example of why, with issues of web performance, it's usually better to test theories than to rely on logical reasoning.
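As a sketch of this approach (path, zone name, and sizes below are illustrative, not taken from this thread): put the cache on local disk and let the kernel keep the hot files in memory:

```nginx
# Illustrative only - tune path and sizes for your own system.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:200m
                 max_size=500g inactive=3d use_temp_path=off;
```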

Peter Booth 


> On 16 Jan 2017, at 10:34 PM, steve <steve at greengecko.co.nz> wrote:
> 
> Hi,
> 
> 
> On 01/17/2017 02:37 PM, Peter Booth wrote:
>> I'm curious, why are you using tmpfs for your cache store? With fast local storage being so cheap, why don't you devote a few TB to your cache?
>> 
>> When I look at the TechEmpower benchmarks I see that OpenResty (an nginx build that comes with lots of Lua value-add) can serve 440,000 JSON responses per second with 3ms latency. That's on five-year-old E7-4850 Westmere hardware at 2.0GHz, with 10G NICs. The minimum latency to get a packet from nginx through the kernel stack and onto the wire is about 4µs for a NIC of that vintage, dropping to 2µs with OpenOnload (Solarflare's kernel bypass).
>> 
>> As Igor Ippolitov suggests, your cache already has room for 1.6M items - that's a huge amount. What kind of hit rate are you seeing for your cache?
>> 
>> One way to manage cache size is to cache only popular items - if you set proxy_cache_min_uses 4, then only objects that are requested at least four times will be cached, which will increase your hit rate and reduce the space needed for the cache.
>> 
>> Peter
>> 
>> Sent from my iPhone
>> 
>>> On Jan 16, 2017, at 4:41 AM, Igor A. Ippolitov <iippolitov at nginx.com> wrote:
>>> 
>>> Hello,
>>> 
>>> Your cache has 200m of space for keys. That's around 1.6M items, isn't it?
>>> How many files do you have in your cache? May we have a look at
>>> `df -i` and `du -s /cache/123` output, please?
>>> 
>>>> On 06.01.2017 08:40, omkar_jadhav_20 wrote:
>>>> Hi,
>>>> 
>>>> I am using nginx as a webserver, version nginx/1.10.2. For faster
>>>> access we have mounted the nginx caches of different applications in RAM. But
>>>> even after allowing a generous size buffer, the cache keeps getting
>>>> filled; below are a few details for your reference:
>>>> the maximum size given in the nginx conf file is 500G, while at mount time we gave
>>>> 600G of space, i.e. a 100G buffer. But it still fills up to 100%.
>>>> 
>>>> fstab entries:
>>>> tmpfs    /cache/123    tmpfs    defaults,size=600G    0 0
>>>> tmpfs    /cache/456    tmpfs    defaults,size=60G     0 0
>>>> tmpfs    /cache/789    tmpfs    defaults,size=110G    0 0
>>>> 
>>>> cache getting filled, df output:
>>>> 
>>>> tmpfs    tmpfs     60G   17G   44G   28%  /cache/456
>>>> tmpfs    tmpfs    110G  323M  110G    1%  /cache/789
>>>> tmpfs    tmpfs    600G  600G     0  100%  /cache/123
>>>> 
>>>> nginx conf details :
>>>> 
>>>> proxy_cache_path /cache/123 keys_zone=a123:200m levels=1:2 max_size=500g inactive=3d;
>>>> 
>>>> server {
>>>>     listen 80;
>>>>     server_name dvr.catchup.com;
>>>>     location ~ .*.m3u8 {
>>>>         access_log /var/log/nginx/access_123.log access;
>>>>         proxy_cache off;
>>>>         root /xyz/123;
>>>>         if (!-e $request_filename) {
>>>>             # origin url will be used if content is not available on DS
>>>>             proxy_pass http://10.10.10.1X;
>>>>         }
>>>>     }
>>>>     location / {
>>>>         access_log /var/log/nginx/access_123.log access;
>>>>         proxy_cache_valid 3d;
>>>>         proxy_cache a123;
>>>>         root /xyz/123;
>>>>         if (!-e $request_filename) {
>>>>             # origin url will be used if content is not available on server
>>>>             proxy_pass http://10.10.10.1X;
>>>>         }
>>>>         proxy_cache_key $proxy_host$uri;
>>>>     }
>>>> }
>>>> 
>>>> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271842#msg-271842
>>>> 
>>>> _______________________________________________
>>>> nginx mailing list
>>>> nginx at nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>> 
> 
> So that's a total of 770GB of memory allocated to caches. Do you have that much spare on your server? Linux will allocate *up to* the specified amount *as long as it's spare*. It would be worth checking that your server really has that much memory spare before blaming nginx.
> 
> You can further improve performance and safety by mounting them with:
> 
> nodev,noexec,nosuid,noatime,async,size=xxxM,mode=0755,uid=xx,gid=xx
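> For example, a corresponding /etc/fstab line might look like this (the size, uid, and gid here are placeholders, not values from this thread):
> 
> ```
> tmpfs  /cache/123  tmpfs  nodev,noexec,nosuid,noatime,async,size=600G,mode=0755,uid=nginx,gid=nginx  0 0
> ```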
> 
> To answer this poster... memory is even faster!
> 
> Steve
> 
> -- 
> Steve Holdoway BSc(Hons) MIITP
> http://www.greengecko.co.nz
> Linkedin: http://www.linkedin.com/in/steveholdoway
> Skype: sholdowa
> 


