NGINX stale-while-revalidate cluster

Joan Tomàs i Buliart joan.tomas at
Sat Jul 8 13:00:40 UTC 2017

Thanks Owen!

We considered all the options in those two documents but, in our 
environment, where stale-while-revalidate is important, each of 
them has at least one of these drawbacks: either it adds a layer in the 
fast path to the content, or it can't guarantee that one request for 
stale content will force the invalidation of all the copies of that object.

That is why we are looking for a "background" alternative to 
update the content.
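For context, nginx 1.11.10 and later can serve stale content while refreshing it in the background on a single instance. A minimal sketch (the cache path and upstream name are placeholders, not from the thread):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache app_cache;
        proxy_pass http://app_backend;  # placeholder upstream name

        # Serve stale content while a background subrequest refreshes
        # it (proxy_cache_background_update requires nginx 1.11.10+),
        # instead of making the client wait for revalidation.
        proxy_cache_use_stale updating error timeout;
        proxy_cache_background_update on;

        # Collapse concurrent misses for the same key into a single
        # upstream request.
        proxy_cache_lock on;
    }
}
```

Note that this only refreshes the local copy on the instance that received the request, which is exactly the cluster-wide invalidation gap described above.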

Many thanks in any case,


On 07/07/17 16:04, Owen Garrett wrote:
> There are a couple of options described here that you could consider 
> if you want to share your cache between NGINX instances:
> describes 
> a sharded cache approach, where you load-balance by URI across the 
> NGINX cache servers.  You can combine your front-end load balancers 
> and back-end caches onto one tier to reduce your footprint if you wish
> describes 
> an alternative HA (shared) approach that replicates the cache so that 
> there’s no increased load on the origin server if one cache server fails.
> It’s not possible to share a cache across instances by using a shared 
> filesystem (e.g. nfs).
> ---
> owen at <mailto:owen at>
> Skype: owen.garrett
> Cell: +44 7764 344779
>> On 7 Jul 2017, at 14:39, Peter Booth <peter_booth at 
>> <mailto:peter_booth at>> wrote:
>> You could do that, but it would be bad. Nginx's great performance is 
>> based on serving files from a local disk and the behavior of the Linux 
>> page cache. If you serve from a shared (NFS) filesystem then every 
>> request is slower. You shouldn't slow down the common case just to 
>> increase the cache hit rate.
>> Sent from my iPhone
>> On Jul 7, 2017, at 9:24 AM, Frank Dias <frank.dias at 
>> <mailto:frank.dias at>> wrote:
>>> Have you thought about using a shared file system for the cache? 
>>> That way all the nginx instances are looking at the same cached content.
>>> On Jul 7, 2017 5:30 AM, Joan Tomàs i Buliart <joan.tomas at 
>>> <mailto:joan.tomas at>> wrote:
>>>     Hi Lucas
>>>     On 07/07/17 12:12, Lucas Rolff wrote:
>>>     > Instead of doing round-robin load balancing, why not do
>>>     > URI-based load balancing? Then you ensure your cached file
>>>     > is only present on a single machine behind the load balancer.
>>>     Yes, we considered this option, but it forces us to deploy and
>>>     maintain another layer (LB+NG+AppServer). All cloud providers
>>>     have round-robin load balancers out of the box, but none
>>>     provides a URI-based load balancer. Moreover, in our scenario,
>>>     our webserver layer is quite dynamic due to scaling up/down.
>>>     Best,
>>>     Joan
>>>     _______________________________________________
>>>     nginx mailing list
>>>     nginx at <mailto:nginx at>
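For reference, the URI-sharded approach discussed in the thread can be sketched roughly as follows (host names are placeholders). The `consistent` parameter of the `hash` directive enables ketama consistent hashing, so most cached keys stay mapped to the same node when cache servers are added or removed, which matters for a tier that scales up and down:

```nginx
# Front-end tier: route by URI so each object is cached on exactly
# one cache node behind the load balancer.
upstream cache_tier {               # placeholder upstream name
    hash $request_uri consistent;   # consistent hashing on the URI
    server cache1.example.com;      # placeholder cache nodes
    server cache2.example.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://cache_tier;
    }
}
```

This is a sketch of the sharding idea only; it does not by itself solve the cluster-wide invalidation problem raised earlier in the thread.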
