[PATCH] Cache: provide naive feedback loop for uncacheable URLs
mdounin at mdounin.ru
Wed Apr 29 12:56:20 UTC 2015
On Tue, Apr 28, 2015 at 03:03:10PM -0700, Piotr Sikora wrote:
> Hey Maxim,
> > I'm not really sure this is a right change.
> It is, without this change proxy_cache_lock basically serializes
> access to the upstream.
It is designed to do so, and does exactly what it's expected to do.

I understand perfectly what you are trying to do and why. That
is, you are trying to teach proxy_cache_lock to be smart and to
switch itself off automatically once it sees an uncacheable
response - because it's not practical to configure proxy_cache_lock
on a per-resource basis when proxying arbitrary client sites.
But as I said, I'm not sure it's the right change. Maybe it is, or
maybe it should be an optional behaviour, or maybe we need some
other way to address this problem.
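For reference, the behaviour being discussed is what proxy_cache_lock does today: only one request per cache key is passed to the upstream, and other requests for the same key wait for the cache entry to appear. A minimal configuration sketch (the zone name, path, and upstream are made up for illustration):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=one:10m;

server {
    location / {
        proxy_pass http://backend;
        proxy_cache one;

        # Only one request per cache key goes to the upstream at a time;
        # concurrent requests for the same key wait, up to
        # proxy_cache_lock_timeout, for the cache entry to be populated.
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;
    }
}
```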
> > In particular, because "uncacheable" is a property of a response, not URLs.
> That's nitpicking, but you're right, I should have used "cache key",
> not "URL", but that only applies to the commit message and not the code.
> Regarding the code, the "uncacheable" flag is only set for a
> particular cache key after nginx receives uncacheable response for it
> and it's only used to skip the cache lock on subsequent requests for
> the same cache key, not to determine cacheability of the response.
You can't set "uncacheable" flag for a cache key, as you never
know if a particular response will be cacheable or not. E.g., for
some client a backend may return an uncacheable response because
this client is somewhat special - e.g., has some debug cookie set
or uses some form of conditional requests. And another client
will get a cacheable response for the same key. So an obvious
edge case with your code is:
- a special client gets an uncacheable response (e.g., something
like "we don't like you, here is a captcha"), the "uncacheable"
flag is set;
- N (or, rather, M, as N isn't big enough) normal clients are
passed to the upstream server, and all of them load cacheable responses.
Of course this is an edge case, but it is to be considered.
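The edge case above can be sketched as a toy, single-threaded model of the patch's feedback loop. Everything here (class and method names, the "special" client) is illustrative, not nginx code; it only shows how one uncacheable response for a key makes subsequent concurrent misses bypass the lock:

```python
# Toy model of a cache with proxy_cache_lock plus the proposed
# per-key "uncacheable" feedback flag. Not nginx internals.

class Cache:
    def __init__(self):
        self.store = {}           # cache key -> cached response
        self.uncacheable = set()  # keys flagged after an uncacheable response
        self.upstream_hits = 0

    def upstream(self, key, client):
        self.upstream_hits += 1
        # A "special" client (debug cookie, captcha, ...) gets an
        # uncacheable response; everyone else gets a cacheable one.
        if client == "special":
            return "captcha", False
        return "page", True

    def request(self, key, client):
        resp, cacheable = self.upstream(key, client)
        if cacheable:
            self.store[key] = resp
        else:
            self.uncacheable.add(key)
        return resp

    def concurrent_misses(self, key, clients):
        """A batch of clients that all miss the cache at the same time."""
        if key in self.uncacheable:
            # With the patch: the flag makes every request skip
            # proxy_cache_lock, so all of them hit the upstream in parallel.
            for c in clients:
                self.request(key, c)
        else:
            # Without the flag: proxy_cache_lock lets one request through;
            # the rest are served once the cache entry is populated.
            self.request(key, clients[0])

cache = Cache()
cache.concurrent_misses("/page", ["special"])         # sets the flag
cache.concurrent_misses("/page", ["c1", "c2", "c3"])  # all bypass the lock
print(cache.upstream_hits)  # 4 upstream requests instead of 2
```

With the flag set by the one special client, the three normal clients all go upstream even though their responses are cacheable; with plain proxy_cache_lock the same sequence would have cost only two upstream requests.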