Mark stale cache content as "invalid" on non-cacheable responses

Maxim Dounin mdounin at mdounin.ru
Mon Nov 23 01:17:10 UTC 2015


Hello!

On Wed, Nov 18, 2015 at 10:47:18PM +0000, Mindaugas Rasiukevicius wrote:

> Maxim Dounin <mdounin at mdounin.ru> wrote:
> > > 
> > >    A cache MUST NOT generate a stale response if it is prohibited by an
> > >    explicit in-protocol directive (e.g., by a "no-store" or "no-cache"
> > >    cache directive, a "must-revalidate" cache-response-directive, or an
> > >    applicable "s-maxage" or "proxy-revalidate" cache-response-directive;
> > >    see Section 5.2.2).
> > 
> > The response stored in cache doesn't have "no-cache" nor any other 
> > directives in it, and this "MUST NOT" certainly doesn't apply to 
> > it.
> > 
> > In the scenario I've described, the response in cache is a correct 
> > (but stale) response, as returned by an upstream server when it 
> > was up and running normally.
> > 
> > In the scenario you've described, the response in cache is a 
> > "temporary cacheable error", and it doesn't have any directives 
> > attached to it either.
> 
> It does not, but the *subsequent* request does and the cache should obey
> the most recent one.  I am not sure why you are focusing on the
> per-request narrative when the cache is inherently about state.  The
> Cache-Control header is a way to control that state.  Again, the RFC
> seems to be fairly clear: the "cache MUST NOT reuse a stored response"
> (note the word "reuse") unless, as described in the last bullet point
> of Section 4, the stored response is either:
> 
>       *  fresh (see Section 4.2), or
>       *  allowed to be served stale (see Section 4.2.4), or
>       *  successfully validated (see Section 4.3).
> 
> The race conditions we are talking about happen *after* the upstream
> server advertises 'no-cache', therefore the second point is no longer
> satisfied (and, of course, neither are the other two).  And further:
> 
>    When more than one suitable response is stored, a cache MUST use the
>    most recent response (as determined by the Date header field).  It
>    can also forward the request with "Cache-Control: max-age=0" or
>    "Cache-Control: no-cache" to disambiguate which response to use.

Nothing here suggests that a cache may not use a previously stored 
response just because some other response was later received with 
"Cache-Control: no-cache" (and thus was not stored by the cache).

In any case, the RFC more or less completely rules out returning 
stale content.  And this is also not something nginx does by 
default.  For nginx to return stale content, you have to explicitly 
configure it to do so.

> > Use case 1, a cache with possible errors:
> > 
> > A high traffic resource, which normally can be cached for a long 
> > time, but takes a long time to generate.  A response is stored in 
> > the cache, and "proxy_cache_use_stale updating" is used to prevent 
> > multiple clients from updating the cache at the same time.  If at 
> > some point a request to update the cache fails / times out, a 
> > "degraded" version is returned with caching disabled (this can be 
> > an error, or normal response without some data).  The response 
> > previously stored in the cache is preserved and will be returned 
> > to other clients while we'll try to update the cache again.
> 
> The cache can still legitimately serve the content while an update is in
> progress, and if the cache itself experienced a timeout while fetching
> from the upstream server (because the state is still "unknown" for the
> cache).  However, if the upstream server sent a response with 'no-cache',
> then as undesirable as it sounds, I think the correct thing to do is to
> invalidate the existing stale object.  Simply because the cache cannot
> know whether it is a temporary error or a deliberate change of state into
> an error.  The invalidation is inefficient, but it ensures correctness.
> 
> I agree it is a real problem, though.  It seems that the 'stale-if-error'
> proposed in RFC 5861 you mentioned was suggested exactly for this purpose.
> On the other hand, if you do not have the control over the upstream server
> and it responds with a cacheable error (say max-age=3 as in the previous
> example), then that will also nuke your stale cache object.

The most trivial behaviour to ensure correctness is to not use stale 
content at all.  And this is what's done by default.
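
A minimal configuration sketch of this use case might look like this 
(the "backend" upstream, zone name, path and times are made up):

    proxy_cache_path /var/cache/nginx keys_zone=app:10m inactive=7d;

    server {
        location / {
            proxy_pass          http://backend;
            proxy_cache         app;
            proxy_cache_valid   200 1h;

            # while one request is refreshing an expired entry, other
            # clients keep getting the previously stored (stale) response
            proxy_cache_use_stale updating;
        }
    }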

> > Use case 2, a cache with non-cacheable private responses:
> > 
> > A resource has two versions: one is "general" and can/should be 
> > cached (e.g., "guest user" version of a page), and another one 
> > is private and should not be cached by nginx ("logged in user" 
> > version).  The "proxy_cache_bypass" directive is used to determine 
> > if a cached version can be returned, or a request to an upstream 
> > server is needed.  "Logged in" responses are returned with disabled 
> > caching, while "guest user" responses are cacheable.
> 
> Fair point, but in this case the cache invalidation logic should take
> into account the proxy_cache_bypass condition.  My patch simply did not
> address that.

It simply breaks this use case and the previous one.  And that's 
why the patch is rejected.
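
In configuration terms this use case is roughly the following (the 
"app" zone is the one from the sketch above; the cookie name is just 
an example of a bypass condition):

    location / {
        proxy_pass            http://backend;
        proxy_cache           app;
        proxy_cache_valid     200 10m;

        # logged-in users skip the cache lookup...
        proxy_cache_bypass    $cookie_sessionid;
        # ...and their private responses are not stored
        proxy_no_cache        $cookie_sessionid;
    }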

> > > Okay, so what solution do you propose?
> > 
> > As I already wrote in the very first reply, I'm not even sure a 
> > good solution exists.  Maybe some timeouts like the ones proposed by 
> > RFC 5861 will work (though this will limit various "use stale" cases 
> > considerably with low timeouts, and won't help much with high 
> > ones).  Or maybe we can introduce some counters/heuristics to 
> > detect cacheable->uncacheable transitions.  Maybe just enforcing 
> > "inactive" time on such resources regardless of actual requests 
> > will work (but unlikely, as an upstream server can be down for a 
> > considerable time in some cases).
> > 
> 
> I would say the right way would be to invalidate the object on 'no-cache',
> but provide an nginx option equivalent to 'stale-if-error' logic (or even
> better -- a generic directive to override Cache-Control value for a given
> HTTP code range).  I understand that breaking the existing configs would
> be undesirable.  How about introducing the inverted logic option, e.g.
> extending proxy_cache_use_stale with 'invalidate-on-nocache'?

As I previously said, I don't see a good solution.
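
For reference, the RFC 5861 extensions mentioned above are plain 
Cache-Control response directives, e.g. (the values are arbitrary):

    Cache-Control: max-age=600, stale-while-revalidate=30, stale-if-error=86400

That is, the response may be served stale for up to 30 seconds while 
it is being refreshed, and for up to a day if refreshing it fails.  
Neither directive is currently implemented by nginx.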

-- 
Maxim Dounin
http://nginx.org/
