proxy_cache and X-Accel-Redirect

Francis Daly francis at
Wed Jun 28 12:51:41 UTC 2017

On Tue, Jun 27, 2017 at 11:12:04AM -0400, deivid__ wrote:

Hi there,

this is partly in response to this mail, and partly in response to
parallel responses in the thread.

First, some background:

The nginx proxy_pass directive is documented at
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass

The other directives on that page are really only useful when proxy_pass is used.

The nginx uwsgi_pass directive is documented at
http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html#uwsgi_pass

The other directives on that page are really only useful when uwsgi_pass is used.

You will be happier if you keep the distinction very clear in your head.

> The /data root was an example, in my case it's /v/

Also in general: if the first part of your example is not the same as
the second part of your example, then everyone who is not you must guess
at what you might have meant.

You are more likely to get a better response quicker, if you avoid the
need for guesswork.

> What I want to do is:
> - get a request like /v/c85320d9ddb90c13f4a215f1f0a87b531ab33310
> - proxy that to my back-end which tells nginx to serve a certain file
> (X-Accel-redirect).
> - I want to cache this file as the first access is expensive. (I want to
> cache *the file*, other requests can end up pointing to the same file,
> that's what I want to speed up).

This seems to be the key: what you want is not related to proxy_cache
or uwsgi_pass or X-Accel-Redirect; it is related to nginx serving a
local file.

You want nginx to serve a local file from a slow, NFS-mounted
filesystem, and to cache it on another, faster, local filesystem.

nginx does not do that directly -- caching a local file would in general
just make an unnecessary disk copy; and the kernel is much better placed
to cache file contents in RAM.

The easy option would be for you to copy the files that you care about
from the slow filesystem to the faster filesystem, and just tell nginx
to serve from the faster one. There are probably reasons why you do not
do that.

The nginx-config option is for you to configure a separate server{}
which will serve content from the slow filesystem; then, in your
normally-used server{}, you proxy_pass to that separate server. At the
place where you proxy_pass, you also either proxy_store or proxy_cache,
so that a copy of the file contents ends up on your faster filesystem,
from where it can be served directly the next time a matching request
comes in.
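A minimal sketch of that two-server layout, using proxy_store; the
port, paths, and cache directory here are assumptions for
illustration, not your actual config:

```nginx
# Internal server that reads from the slow NFS mount.
# The port and /mnt/nfs path are assumptions for this sketch.
server {
    listen 127.0.0.1:8001;
    root /mnt/nfs/slow;
}

# Normally-used server: try the fast local copy first; on a miss,
# proxy to the internal server and store the response body locally.
server {
    listen 80;

    location /v/ {
        root /var/cache/fast;
        try_files $uri @slow;
    }

    location @slow {
        proxy_pass http://127.0.0.1:8001;
        proxy_store /var/cache/fast$uri;
        proxy_store_access user:rw group:r;
    }
}
```

With proxy_store, the stored file is a plain copy under the fast root,
so the try_files on the next matching request hits it directly and the
proxy is never involved.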

proxy_store and proxy_cache are different; they do different things
and have different costs and benefits. They also need different
configuration. One may be more suitable than the other, for the thing
that you are trying to do.
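For comparison, the proxy_cache version of the same idea might look
like the sketch below; the zone name, sizes, and validity times are
assumptions you would tune yourself:

```nginx
# proxy_cache variant: nginx manages the cache for you (size limit,
# expiry, eviction) instead of keeping plain mirrored files.
# Zone name, sizes, and paths are assumptions for this sketch.
proxy_cache_path /var/cache/fast levels=1:2 keys_zone=filecache:10m
                 max_size=10g inactive=7d;

server {
    listen 80;

    location /v/ {
        proxy_pass http://127.0.0.1:8001;
        proxy_cache filecache;
        proxy_cache_key $uri;
        proxy_cache_valid 200 7d;
    }
}
```

Roughly: proxy_store keeps a permanent, unmanaged mirror of the files;
proxy_cache keeps a managed cache that nginx will expire and trim for
you. Which is more suitable depends on how much of the slow filesystem
you can afford to duplicate on the fast one.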

> You are right that I'm not using proxy_pass; as my back-end is served by
> uwsgi I'm using uwsgi_pass.

If you want to do any kind of caching of the response from a uwsgi_pass
request, you will want to use uwsgi_cache.
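If you did go that way, it would look something like this sketch;
uwsgi_cache caches whatever response comes back from the uwsgi
backend, and the socket path and zone name here are assumptions:

```nginx
# Sketch: caching the uwsgi backend's response with uwsgi_cache.
# Note that this caches what the backend itself sends back, not the
# file that any X-Accel-Redirect header points at.
uwsgi_cache_path /var/cache/uwsgi keys_zone=appcache:10m;

server {
    listen 80;

    location /v/ {
        include uwsgi_params;
        uwsgi_pass unix:/run/app.sock;
        uwsgi_cache appcache;
        uwsgi_cache_key $uri;
        uwsgi_cache_valid 200 10m;
    }
}
```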

You probably do *not* want to do that caching in this case.

Francis Daly        francis at
