Ryan Malayter malayter at
Mon Apr 25 22:59:27 MSD 2011

On Sun, Apr 24, 2011 at 9:22 PM, Richard Kearsley
<Richard.Kearsley at> wrote:
> Thanks, I already set "proxy_cache_use_stale updating"
> But what if the file is completely new..? there is no stale file to serve

Ah, I see... the first request for a new file would put it into the
updating state, but there would be no stale version to serve.
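For reference, the directive being discussed looks something like this (a minimal sketch; the cache path, zone name, and upstream are placeholders, not taken from the original thread):

```nginx
# Serve a stale cached copy while a single request refreshes it.
# "updating" only helps once a (possibly expired) copy exists.
proxy_cache_path /var/cache/nginx keys_zone=my_zone:10m;

server {
    location / {
        proxy_pass http://backend;
        proxy_cache my_zone;
        proxy_cache_use_stale updating error timeout;
    }
}
```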

I'm not sure what happens in that case, but I would suspect that all
of those requests go to the back end. It should be a very
short-lived condition, though: in the majority of cases we're
talking milliseconds, unless the file is really large or the
back end is really slow.

Say you had a 50 MB video file and your back end was something like
Amazon S3... I could see many, many requests for that file arriving
while the first one was still being fetched. I'm not sure what the
best thing to do in that case would be.

The options, I think, would be:
1) return a 404 (or some temporary error code) until the cache is
primed (that doesn't seem like good default behavior)
2) block all other requests until the first is finished (also seems
problematic, especially if the first request is taking forever)
3) pass all requests to the back end until there is a valid cache entry

I suspect nginx chooses option #3. Are you saying that you want to do
#2? Or something else entirely?
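To make option #2 concrete, here is a rough Python sketch of request
coalescing (all names are hypothetical; this is not nginx or Varnish
code): concurrent misses for the same key block on a per-key lock, so
only the first request reaches the back end, and the rest get the
freshly cached value.

```python
import threading

class CoalescingCache:
    """Option #2 in miniature: coalesce concurrent misses for a key."""

    def __init__(self, fetch):
        self._fetch = fetch          # back-end fetch function
        self._cache = {}
        self._locks = {}             # one in-flight lock per key
        self._guard = threading.Lock()

    def get(self, key):
        with self._guard:
            if key in self._cache:
                return self._cache[key]
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                   # first caller fetches; the rest block here
            with self._guard:        # double-check: a waiter sees the filled cache
                if key in self._cache:
                    return self._cache[key]
            value = self._fetch(key)
            with self._guard:
                self._cache[key] = value
                self._locks.pop(key, None)
            return value
```

The obvious downside is the one raised above: if the first request
takes forever (a 50 MB file from S3), every other client for that key
is stuck behind it instead of being passed through.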

Varnish seems to do #2 by default:


More information about the nginx mailing list