Avoiding Nginx restart when rsyncing cache across machines

Peter Booth peter_booth at me.com
Wed Sep 12 23:33:37 UTC 2018


Quintin,

Are most of your requests for dynamic or static content?
Are the requests clustered such that there are a lot of requests for a few (between 5 and 200, say) URLs?
If three different people make the same request, do they get personalized or identical content returned?
How long are the cached resources valid for?

I have seen layered caches deliver enormous benefit, both in performance and in ensuring availability - which is usually
synonymous with “protecting the backend.” That protection was most useful when, for example,
I was working on a site that would get mentioned in a TV show at a known time of day every week.
nginx proxy_cache was invaluable in helping the site stay up and responsive when hit with enormous spikes of requests.
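As a rough illustration of that kind of spike protection, a proxy_cache setup might look something like this - zone names, paths, and the upstream are all hypothetical, and you would tune sizes and validity to your own traffic:

```nginx
# Illustrative sketch only - names, paths, and sizes are made up.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=spike_cache:50m
                 max_size=5g inactive=60m use_temp_path=off;

upstream backend {
    server 127.0.0.1:8080;   # hypothetical origin
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_cache spike_cache;
        proxy_cache_valid 200 10m;

        # Collapse concurrent misses for the same URL into a single
        # upstream request, so a spike does not stampede the origin.
        proxy_cache_lock on;

        # Keep serving stale content if the backend is slow or down.
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
    }
}
```

proxy_cache_lock and proxy_cache_use_stale are what do most of the “protecting the backend” work here: on a hot URL the origin sees one request, not thousands.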

This is nuanced, subtle stuff though.

Is your site something that you can disclose publicly?


Peter



> On 12 Sep 2018, at 7:23 PM, Quintin Par <quintinpar at gmail.com> wrote:
> 
> Hi Lucas,
>  
> The cache is pretty big and I want to limit unnecessary requests if I can. Cloudflare is in front of my machines and I pay for load balancing, firewall, Argo among others. So there is a cost per request.
>  
> Admittedly I have a not-so-complex cache architecture, i.e. all cache machines sit in front of the origin, and it has worked so far. This is also because I am not that great a programmer/admin :-)
>  
> My optimization is not primarily around hits to the origin, but rather bandwidth and number of requests.
>  
> 
> - Quintin
> 
> 
> On Wed, Sep 12, 2018 at 1:06 PM Lucas Rolff <lucas at lucasrolff.com> wrote:
> Can I ask, why do you need to start with a warm cache directly? Sure, it will lower the requests to the origin, but you could implement a secondary caching layer if you wanted to (using nginx). So you'd have your primary cache in, let's say, 10 locations spread across 3 continents (US, EU, Asia), and then a second layer consisting of a smaller number of locations (one instance per continent). This way you'll warm up faster when you add new servers, and it won't really affect your origin server.
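One way to wire up the second layer described here, sketched as nginx config (all hostnames are made up): each edge node proxies its cache misses to a per-continent mid-tier, which caches in turn and proxies to the origin.

```nginx
# Edge node (one of many): on a miss, ask the mid-tier, not the origin.
# Hostnames and zone names here are hypothetical.
proxy_cache_path /var/cache/nginx/edge keys_zone=edge:50m max_size=10g inactive=24h;

server {
    listen 80;

    location / {
        proxy_cache edge;
        proxy_cache_valid 200 1h;
        proxy_pass http://eu-midtier.example.com;   # second caching layer
    }
}

# A mid-tier node (one per continent) is configured the same way,
# except its proxy_pass points at the origin, e.g.:
#     proxy_pass http://origin.example.com;
```

A new edge node then misses against a mostly-warm mid-tier rather than the origin, which is what makes adding servers cheap.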
> 
> It's also a lot cleaner, because you're able to use proxy_cache, which (in my opinion) is really what you should use when you're building caching proxies.
> 
> Generally I'd just slowly warm up new servers prior to putting them into production: get a list of the top X files accessed, and loop over them, pulling each in as a normal HTTP request.
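The warm-up loop described in the quote above could be as simple as the following sketch. The hostnames and URL list are placeholders - in practice you would feed in the top URLs from your access logs. It only prints the curl commands, so you can inspect them and pipe the output to sh once you trust it:

```shell
#!/bin/sh
# Hypothetical warm-up sketch: replay the top-N URLs against a new cache node.
# NEW_NODE, SITE_HOST, and the URL list are placeholders.

NEW_NODE="cache-new.example.com"
SITE_HOST="www.example.com"

# Print one curl command per URL; pipe the output to sh to actually run them.
warm_list() {
    for url in "$@"; do
        echo "curl -s -o /dev/null -H 'Host: $SITE_HOST' http://$NEW_NODE$url"
    done
}

# In practice, build the list from logs, e.g.:
#   awk '{print $7}' access.log | sort | uniq -c | sort -rn | head -100
warm_list /index.html /static/app.js /static/logo.png
```

Sending the Host header while targeting the new node by address lets you fill its cache before it receives any live traffic.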
> 
> There's plenty of decent solutions (some more complex than others), but there should really never be a reason to have to sync your cache across machines - even for new servers.
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

