<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">Quintin,<div class=""><br class=""></div><div class="">Are most of your requests for dynamic or static content?</div><div class="">Are the requests clustered such that there are a lot of requests for a few (between 5 and 200, say) URLs?</div><div class="">If three different people make the same request, do they get personalized or identical content returned?</div><div class="">How long are the cached resources valid for?</div><div class=""><br class=""></div><div class="">I have seen layered caches deliver enormous benefit, both in performance and in ensuring availability, which is usually</div><div class="">synonymous with “protecting the backend.” That protection was most useful when, for example,</div><div class=""> I was working on a site that would get mentioned in a TV show at a known time of day every week.</div><div class="">nginx proxy_cache was invaluable in helping the site stay up and responsive when hit with enormous spikes of requests.</div><div class=""><br class=""></div><div class="">This is nuanced, subtle stuff though.</div><div class=""><br class=""></div><div class="">Is your site something that you can disclose publicly?</div><div class=""><br class=""></div><div class=""><br class=""></div><div class="">Peter</div><div class=""><br class=""></div><div class=""><br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On 12 Sep 2018, at 7:23 PM, Quintin Par <<a href="mailto:quintinpar@gmail.com" class="">quintinpar@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class=""><div class="gmail_default" style="font-family: arial, helvetica, sans-serif;"><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: Calibri, sans-serif;" class="">Hi Lucas,</div><p class="MsoNormal"
style="margin:0in 0in 0.0001pt;font-size:12pt;font-family:Calibri,sans-serif"> </p><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: Calibri, sans-serif;" class="">The cache is pretty big and I want to limit unnecessary requests if I can. Cloudflare is in front of my machines and I pay for load balancing, firewall, and Argo, among others. So there is a cost per request.</div><p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:12pt;font-family:Calibri,sans-serif"> </p><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: Calibri, sans-serif;" class="">Admittedly I have a not-so-complex cache architecture, i.e. all cache machines sit in front of the origin, and it has worked so far. This is also because I am not that great a programmer/admin :-)</div><p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:12pt;font-family:Calibri,sans-serif"> </p><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: Calibri, sans-serif;" class="">My optimization is not primarily around hits to the origin, but rather bandwidth and the number of requests.</div><p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:12pt;font-family:Calibri,sans-serif"> </p></div><br clear="all" class=""><div class=""><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature">- Quintin</div></div><br class=""></div><br class=""><div class="gmail_quote"><div dir="ltr" class="">On Wed, Sep 12, 2018 at 1:06 PM Lucas Rolff <<a href="mailto:lucas@lucasrolff.com" class="">lucas@lucasrolff.com</a>> wrote:<br class=""></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Can I ask why you need to start with a warm cache directly? 
Sure, it will lower the number of requests to the origin, but you could implement a secondary caching layer using nginx: say you have your primary cache in 10 locations, spread across 3 continents (US, EU, Asia); you could then add a second layer consisting of a smaller number of locations (1 instance per continent). This way new servers warm up faster when you add them, and they won't really affect your origin server.<br class="">
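A minimal sketch of what an edge node in such a two-tier setup could look like. Everything here is a hypothetical illustration, not a drop-in config: the cache path, zone name, timings, example.com, and the midtier.eu.example.internal hostname (the second-layer cache that in turn proxies the origin) are all made up.

```nginx
# Edge node: cache locally; on a miss, go to the continent's
# mid-tier cache instead of straight to the origin.
proxy_cache_path /var/cache/nginx/edge levels=1:2
                 keys_zone=edge:50m max_size=10g inactive=7d;

server {
    listen 80;
    server_name example.com;              # assumed site name

    location / {
        proxy_cache edge;
        proxy_cache_valid 200 301 1h;     # adjust to your content's lifetime
        proxy_cache_use_stale error timeout updating;  # serve stale if tier 2 is slow
        proxy_cache_lock on;              # collapse concurrent misses into one upstream fetch
        proxy_pass http://midtier.eu.example.internal; # second-layer cache, not the origin
    }
}
```

The mid-tier nodes would carry an almost identical block, with `proxy_pass` pointing at the origin; only that small set of machines ever touches it.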
<br class="">
It's also a lot cleaner, because you're able to use proxy_cache, which (in my opinion) is really what you should use when you're building caching proxies.<br class="">
<br class="">
Generally I'd just slowly warm up new servers prior to putting them into production: get a list of the top X files accessed, and loop over them, pulling each one in as a normal HTTP request.<br class="">
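That warm-up loop can be sketched as a small shell helper. Assumptions: the access log is in nginx's combined format (so the request path is field 7), and `cache-new.example.com` / `example.com` are placeholders for the new cache node and the site.

```shell
# top_urls: read an nginx access log (combined format) on stdin
# and print the N most-requested paths, most popular first.
top_urls() {
  awk '{print $7}' | sort | uniq -c | sort -rn | head -n "$1" | awk '{print $2}'
}

# Usage sketch: pull the top 200 URLs through the new cache node
# before it goes into rotation (hostnames are placeholders):
#
#   top_urls 200 < /var/log/nginx/access.log | while read -r path; do
#     curl -s -o /dev/null -H "Host: example.com" \
#       "http://cache-new.example.com${path}"
#   done
```

Because each fetch is a plain HTTP request, the new node fills its cache exactly as it would under real traffic, with no cross-machine syncing involved.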
<br class="">
There are plenty of decent solutions (some more complex than others), but there should really never be a reason to sync your cache across machines, even for new servers.<br class="">
<br class="">
_______________________________________________<br class="">
nginx mailing list<br class="">
<a href="mailto:nginx@nginx.org" target="_blank" class="">nginx@nginx.org</a><br class="">
<a href="http://mailman.nginx.org/mailman/listinfo/nginx" rel="noreferrer" target="_blank" class="">http://mailman.nginx.org/mailman/listinfo/nginx</a></blockquote></div>
</div></blockquote></div><br class=""></div></body></html>