<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class=""><br class=""></div><div class="">Just to be pedantic: it’s counterintuitive but, in general, tmpfs <b class="">is not faster than local storage</b> for the use case of caching static content for web servers. </div><div class="">Sounds weird? Here’s why:</div><div class=""><br class=""></div><div class="">tmpfs is a file system view of all of the system’s virtual memory - that is, both physical memory and swap space.</div><div class=""><br class=""></div><div class="">If you use local storage for your cache store, then every time a file is requested for the first time it gets read from local storage and written to the page cache. Subsequent requests are served from the page cache. The OS manages memory allocation so as to maximize the size of the page cache (subject to the config settings swappiness, min_free_kbytes, and the watermark settings for kswapd). The fact that you are using a cache suggests that you have expensive back-end queries that your cache sits in front of, so the cost of reading a cached file from disk is much smaller than the cost of recreating dynamic content. With real-world web systems the probability distribution of requests for different resources is never uniform; there are clusters of popular resources. </div><div class=""><br class=""></div><div class="">Any popular requests that get served from your disk cache will in fact be served from the page cache (memory), so there is no reason to interfere with the OS. 
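A quick way to watch this in action - a minimal sketch, assuming a standard Linux /proc layout (the host and its values are hypothetical):

```shell
# How much RAM the kernel is currently using as page cache, plus the
# reclaim tunables mentioned above (standard Linux /proc paths assumed).
grep -E '^(Cached|SwapCached):' /proc/meminfo
cat /proc/sys/vm/swappiness       # how eagerly anonymous memory is swapped out vs. cache dropped
cat /proc/sys/vm/min_free_kbytes  # free-memory reserve that kswapd's watermarks defend
```

On a busy cache server you'd expect Cached to be most of your free RAM, which is exactly the "OS doing its job" behaviour described above.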
In general, it’s likely that you have less physical memory than disk space, so using tmpfs for your nginx cache could mean that you’re serving cached files from swap space.</div><div class=""><br class=""></div><div class="">Allowing the OS to do its job (with the page cache) means that you will already get tmpfs-like latencies for the popular resources - which is what you want in order to maximize performance across your entire site. This is another example of why, with questions of web performance, it’s usually better to test theories than to rely on logical reasoning.</div><div class=""><br class=""></div><div class="">Peter Booth </div><div class=""><br class=""></div><br class=""><div><blockquote type="cite" class=""><div class="">On 16 Jan 2017, at 10:34 PM, steve <<a href="mailto:steve@greengecko.co.nz" class="">steve@greengecko.co.nz</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class="">Hi,<br class=""><br class=""><br class="">On 01/17/2017 02:37 PM, Peter Booth wrote:<br class=""><blockquote type="cite" class="">I'm curious, why are you using tmpfs for your cache store? With fast local storage being so cheap, why don't you devote a few TB to your cache?<br class=""><br class="">When I look at the TechEmpower benchmarks I see that OpenResty (an nginx build that comes with lots of Lua value-add) can serve 440,000 JSON responses per sec with 3ms latency. That's on five-year-old E7-4850 Westmere hardware at 2.0GHz, with 10G NICs. The min latency to get a packet from nginx through the kernel stack and onto the wire is about 4us for a NIC of that vintage, dropping to 2us with OpenOnload (Solarflare's kernel bypass).<br class=""><br class="">As Ippolitov suggests, your cache already has room for 1.6M items - that's a huge amount. 
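That 1.6M figure follows from the rule of thumb in the nginx proxy_cache_path documentation that one megabyte of keys_zone holds roughly 8,000 keys; a quick sanity check:

```shell
# keys_zone=a123:200m, at ~8,000 keys per megabyte (nginx docs' estimate):
echo $((200 * 8000))   # 1600000 -> the ~1.6M items mentioned above
```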
What kind of hit rate are you seeing for your cache?<br class=""><br class="">One way to manage cache size is to cache only popular items - if you set proxy_cache_min_uses 4, then only objects that are requested at least four times will be cached, which will increase your hit rate and reduce the space needed for the cache.<br class=""><br class="">Peter<br class=""><br class="">Sent from my iPhone<br class=""><br class=""><blockquote type="cite" class="">On Jan 16, 2017, at 4:41 AM, Igor A. Ippolitov <<a href="mailto:iippolitov@nginx.com" class="">iippolitov@nginx.com</a>> wrote:<br class=""><br class="">Hello,<br class=""><br class="">Your cache has 200m of space for keys. This is around 1.6M items, isn't it?<br class="">How many files do you have in your cache? May we have a look at<br class="">`df -i` and `du -s /cache/123` output, please?<br class=""><br class=""><blockquote type="cite" class="">On 06.01.2017 08:40, omkar_jadhav_20 wrote:<br class="">Hi,<br class=""><br class="">I am using nginx as a webserver, version nginx/1.10.2. For faster access we have mounted the nginx cache of different applications in RAM. But even after allowing a generous buffer, the cache is getting filled now and then; below are a few details for your reference:<br class="">The maximum size given in the nginx conf file is 500G, while mounting we have given 600G of space, i.e. 
100G of buffer. But still it is getting filled to 100%.<br class=""><br class="">fstab entries:<br class="">tmpfs /cache/123 tmpfs defaults,size=600G<br class=""> 0 0<br class="">tmpfs /cache/456 tmpfs defaults,size=60G<br class=""> 0 0<br class="">tmpfs /cache/789 tmpfs defaults,size=110G<br class=""> 0 0<br class=""><br class="">cache getting filled, df output:<br class=""><br class="">tmpfs tmpfs 60G 17G 44G 28%<br class="">/cache/456<br class="">tmpfs tmpfs 110G 323M 110G 1%<br class="">/cache/789<br class="">tmpfs tmpfs 600G 600G 0 100%<br class="">/cache/123<br class=""><br class="">nginx conf details:<br class=""><br class="">proxy_cache_path /cache/123 keys_zone=a123:200m levels=1:2 max_size=500g<br class="">inactive=3d;<br class=""><br class="">server{<br class="">listen 80;<br class="">server_name <a href="http://dvr.catchup.com" class="">dvr.catchup.com</a>;<br class="">location ~.*.m3u8 {<br class="">access_log /var/log/nginx/access_123.log access;<br class="">proxy_cache off;<br class="">root /xyz/123;<br class="">if (!-e $request_filename) {<br class="">#origin url will be used if content is not available on DS<br class=""> proxy_pass <a href="http://10.10.10.1X" class="">http://10.10.10.1X</a>;<br class="">}<br class="">}<br class="">location / {<br class="">access_log /var/log/nginx/access_123.log access;<br class="">proxy_cache_valid 3d;<br class="">proxy_cache a123;<br class="">root /xyz/123;<br class="">if (!-e $request_filename) {<br class="">#origin url will be used if content is not available on server<br class=""> proxy_pass <a href="http://10.10.10.1X" class="">http://10.10.10.1X</a>;<br class="">}<br class="">proxy_cache_key $proxy_host$uri;<br class="">}<br class="">}<br class=""><br class="">Posted at Nginx Forum: <a href="https://forum.nginx.org/read.php?2,271842,271842#msg-271842" class="">https://forum.nginx.org/read.php?2,271842,271842#msg-271842</a><br class=""><br class="">_______________________________________________<br 
class="">nginx mailing list<br class=""><a href="mailto:nginx@nginx.org" class="">nginx@nginx.org</a><br class="">http://mailman.nginx.org/mailman/listinfo/nginx<br class=""></blockquote><br class=""></blockquote></blockquote><br class="">So that's a total of 1TB of memory allocated to caches. Do you have that much spare on your server? Linux will allocate *up to* the specified amount *as long as it's spare*. It would be worth looking at your server to ensure that 1TB of memory is spare before blaming nginx.<br class=""><br class="">You can further improve performance and safety by mounting them with<br class=""><br class="">nodev,noexec,nosuid,noatime,async,size=xxxM,mode=0755,uid=xx,gid=xx<br class=""><br class="">To answer this poster... memory is even faster!<br class=""><br class="">Steve<br class=""><br class="">-- <br class="">Steve Holdoway BSc(Hons) MIITP<br class=""><a href="http://www.greengecko.co.nz" class="">http://www.greengecko.co.nz</a><br class="">Linkedin: http://www.linkedin.com/in/steveholdoway<br class="">Skype: sholdowa</div></div></blockquote></div><br class=""></body></html>