Avoiding Nginx restart when rsyncing cache across machines

Quintin Par quintinpar at gmail.com
Thu Sep 13 06:14:25 UTC 2018


Hi Peter,



Here are my stats for this week: https://imgur.com/a/JloZ37h . The Bypass
is only because I was experimenting with some cache warmer scripts. This is
primarily a static website.
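The warmer itself is nothing fancy; it is roughly this kind of loop (the
host and the top-urls.txt file below are placeholders, not my real setup):

```shell
#!/bin/sh
# Minimal cache-warmer sketch. HOST and top-urls.txt are placeholders,
# not the real host or URL list.
HOST="https://example.com"

# Build the full request URL for a path (kept as a separate function so
# the plumbing is easy to check on its own).
warm_url() {
    printf '%s%s' "$HOST" "$1"
}

# Fetch each listed path and discard the body; the point is only to
# make nginx populate its proxy cache.
if [ -f top-urls.txt ]; then
    while IFS= read -r path; do
        curl -s -o /dev/null "$(warm_url "$path")"
    done < top-urls.txt
fi
```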

Here’s my URL hit distribution: https://imgur.com/a/DRJUjPc

If three people make the same request, they get identical content; there is
no personalization. Pages are cached for 200 days, and inactive in
proxy_cache_path is set to 60 days.
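Concretely, the caching bits look roughly like this (paths, zone name and
sizes are placeholders rather than my exact production values):

```nginx
# Sketch of the relevant settings; paths, zone name and sizes are
# placeholders, not exact production values.

# Keys kept in a 100m shared-memory zone; entries untouched for 60 days
# are evicted regardless of how long they would otherwise stay valid.
proxy_cache_path /var/cache/nginx/static levels=1:2
                 keys_zone=static_cache:100m max_size=20g
                 inactive=60d use_temp_path=off;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://origin.example.com;
        proxy_cache static_cache;

        # Cache successful responses for 200 days.
        proxy_cache_valid 200 301 302 200d;

        # Keep serving stale content if the origin is down or slow.
        proxy_cache_use_stale error timeout updating;

        add_header X-Cache-Status $upstream_cache_status;
    }
}
```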



This is embarrassing, but my CDN is primarily $5 DigitalOcean machines
across the web with this Nginx cache setup. Server response time averages
0.29 seconds; prior to my ghetto CDNing it was 0.98 seconds. I am pretty
proud that I have survived several Slashdot effects on the $5 machines
serving cached content, peaking at 2,500 requests/second, without any
issues.



Since this is working well, I don’t want to do any layered caching, unless
there is a compelling reason.
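(For anyone following along, the layered idea discussed in the thread below
amounts to pointing each edge cache at a regional parent instead of the
origin; a rough sketch, all names and sizes are placeholders:)

```nginx
# Hypothetical two-tier layout: an edge node caches and, on a miss, asks
# a regional parent cache, which in turn asks the origin.

# --- on each edge node ---
proxy_cache_path /var/cache/nginx/edge keys_zone=edge:50m
                 inactive=60d max_size=10g;

server {
    listen 80;
    location / {
        proxy_cache edge;
        proxy_cache_valid 200 200d;
        # A miss here goes to the regional parent, not the origin.
        proxy_pass http://parent.eu.example.com;
    }
}

# --- on the regional parent ---
proxy_cache_path /var/cache/nginx/parent keys_zone=parent:100m
                 inactive=60d max_size=50g;

server {
    listen 80;
    location / {
        proxy_cache parent;
        proxy_cache_valid 200 200d;
        # Only parent misses ever reach the origin.
        proxy_pass http://origin.example.com;
    }
}
```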

- Quintin


On Wed, Sep 12, 2018 at 4:32 PM Peter Booth via nginx <nginx at nginx.org>
wrote:

> Quintin,
>
> Are most of your requests for dynamic or static content?
> Are the requests clustered such that there are a lot of requests for a few
> (between 5 and 200, say) URLs?
> If three different people make the same request, do they get personalized or
> identical content returned?
> How long are the cached resources valid for?
>
> I have seen layered caches deliver enormous benefit, both in terms of
> performance and in ensuring availability, which is usually
> synonymous with “protecting the backend.” That protection was most useful
> when, for example,
> I was working on a site that would get mentioned in a TV show at a known
> time of day every week.
> nginx proxy_cache was invaluable in helping the site stay up and
> responsive when hit with enormous spikes of requests.
>
> This is nuanced, subtle stuff though.
>
> Is your site something that you can disclose publicly?
>
>
> Peter
>
>
>
> On 12 Sep 2018, at 7:23 PM, Quintin Par <quintinpar at gmail.com> wrote:
>
> Hi Lucas,
>
>
> The cache is pretty big and I want to limit unnecessary requests if I can.
> Cloudflare is in front of my machines and I pay for load balancing,
> firewall, Argo among others. So there is a cost per request.
>
>
> Admittedly I have a not so complex cache architecture. i.e. all cache
> machines in front of the origin and it has worked so far. This is also
> because I am not that great a programmer/admin :-)
>
>
> My optimization is not primarily around hits to the origin, but rather
> bandwidth and number of requests.
>
>
>
> - Quintin
>
>
> On Wed, Sep 12, 2018 at 1:06 PM Lucas Rolff <lucas at lucasrolff.com> wrote:
>
>> Can I ask, why do you need to start with a warm cache directly? Sure, it
>> will lower the requests to the origin, but you could implement a secondary
>> caching layer if you wanted to (using nginx). You’d have your primary
>> cache in, say, 10 locations spread across 3 continents (US, EU, Asia),
>> and then a second layer consisting of a smaller number of locations
>> (1 instance in each continent). This way you’ll warm up faster when you
>> add new servers, and it won’t really affect your origin server.
>>
>> It’s also a lot cleaner, because you’re able to use proxy_cache, which
>> is really what (in my opinion) you should use when you’re building caching
>> proxies.
>>
>> Generally I’d just slowly warm up new servers prior to putting them into
>> production: get a list of the top X files accessed, and loop over them to
>> pull them in as normal HTTP requests.
>>
>> There are plenty of decent solutions (some more complex than others), but
>> there should really never be a reason to have to sync your cache across
>> machines, even for new servers.
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>

