.htaccess issues
Max
nginxyz at mail.ru
Tue Feb 14 07:48:07 UTC 2012
14 February 2012, 01:44, from António P. P. Almeida <appa at perusio.net>:
> On 13 Fev 2012 21h13 WET, nginxyz at mail.ru wrote:
>
> >
> > 13 February 2012, 21:58, from António P. P. Almeida <appa at perusio.net>:
> >> On 12 Fev 2012 15h32 WET, guilherme.e at gmail.com wrote:
> >>
> >> You can use the auth_request module for that then.
> >>
> >> http://mdounin.ru/hg/ngx_http_auth_request_module
> >>
> >> I've replicated the mercurial repo on github:
> >>
> >> https://github.com/perusio/nginx-auth-request-module
> >>
> >> It involves setting up a location that proxy_pass(es) to the Apache
> >> upstream and returns 403 if not allowed to access.
> >
> > Maxim's auth_request module is great, but AFAIK, it doesn't
> > support caching, which makes it unsuited to the OP's
> > situation because the OP wants to cache large files from
> > the backend server(s).
> >
> > The access_by_lua solution I proposed, on the other hand,
> > does make it possible to cache the content, and if one should
> > want, even the IP-based authorization information in a
> > separate cache zone.
>
> AFAIK the authorization occurs well before any content is served. What
> does that have to do with caching?
>
> Using access_by_lua with a subrequest like you suggested is, AFAICT,
> equivalent to using auth_request.
>
> IIRC the OP wanted first to check if a given client could access a
> certain file. If it can, then it gets the content from the cache or
> whatever he decides.
Have you ever actually used the auth_request module? Or have you at
least read the part of the auth_request module README file where Maxim
wrote:
"Note: it is not currently possible to use proxy_cache/proxy_store (and
fastcgi_cache/fastcgi_store) for requests initiated by auth request
module."
Let's take the example from Maxim's README file:
    location /private/ {
        auth_request /auth;
        ...
    }

    location = /auth {
        proxy_pass ...
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
Let's say you configure caching in the /private/ location block, and
the cache is empty. The first matching request would get passed on to
the backend server, which would send back the latest requested file,
if the request was allowed. The frontend server would then store the
file in the cache and send it back to the client, as expected.
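The setup just described might look roughly like this (a sketch; the
upstream and cache zone names are my assumptions, and the zone itself
would have to be defined with proxy_cache_path in the http block):

```nginx
location /private/ {
    auth_request /auth;
    proxy_pass http://backend;    # assumed upstream name
    proxy_cache private_cache;    # assumed zone from proxy_cache_path
    proxy_cache_valid 200 60m;
}
```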
The next matching request would again be passed on to the backend
server, which would again send back the latest requested file,
if the request was allowed, but this time the frontend server would
send back to the client NOT the LATEST file, but the OLD file from
the CACHE. The old file would remain in the cache, from where it would
keep getting sent back to clients until it expired, while each new allowed
request would cause the latest requested file to be retrieved from
the backend server and then DISCARDED.
Turning the proxy_no_cache directive on would prevent anything from
being stored in the cache, as expected.
Turning the proxy_cache_bypass directive on would cause the cache to be
bypassed, and the latest requested file to be both sent back to the
client and stored in the cache each time (as long as proxy_no_cache
wasn't turned on), but either way you'd end up retrieving the file
from the backend server on every request, which defeats the purpose of
caching.
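Enabling both behaviors unconditionally would amount to something like
this (a sketch; the directives take a non-empty, non-zero parameter to
switch on, and the upstream and zone names are my assumptions):

```nginx
location /private/ {
    auth_request /auth;
    proxy_pass http://backend;    # assumed upstream name
    proxy_cache private_cache;    # assumed zone
    proxy_no_cache 1;             # never store responses in the cache
    proxy_cache_bypass 1;         # never answer from the cache
}
```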
However, forbidden response codes from the backend server are always
correctly sent back to clients, and are never cached.
Now, let's say you've given up on caching in the /private/ location
block and decided to configure caching in the /auth location block.
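That would amount to something like this (a sketch; the upstream and
cache zone names are my assumptions):

```nginx
location = /auth {
    proxy_pass http://backend;            # assumed upstream name
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    proxy_cache auth_cache;               # assumed zone
    proxy_cache_valid 200 10m;
}
```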
Again, the cache is empty. Here the first matching request passed
on from the /private/ location block would be sent on to the backend
server, which would send back the latest requested file, if the request
was allowed. The frontend server would then store this file in the cache,
but instead of sending it back to the client, it would just TERMINATE
the connection (444-style)!
The next matching request would again get passed on to the backend
server, which would again send back the latest requested file, if the
request was allowed, and in this case, the frontend server would
send back the latest requested file to the client, but ONLY if there
was an EXISTING cache entry for the request cache key! If there
was NO cache entry for the request cache key, then the requested file
would get retrieved from the backend server and stored in the cache,
if the request was allowed, but NOTHING would be sent back to the client
and the connection would be TERMINATED 444-style.
Once a file got stored in the cache, it would REMAIN in the CACHE
until it expired, while each new allowed request would cause the
latest requested file to be retrieved from the backend, sent back
to the client and DISCARDED without replacing the old file in the cache.
If there was no cache entry for a request cache key and the
proxy_no_cache directive was turned on, then each and every request
would cause the requested file to be retrieved from the backend server
and discarded, while the connection would ALWAYS be TERMINATED
444-style.
Turning the proxy_cache_bypass directive on would cause the cache to be
bypassed, and the latest requested file to be retrieved from the
backend server and stored in the cache, but nothing would be sent
back to the client, and the connection would again be terminated
444-style.
So, as you can surely see by now, using caching with the auth_request
module not only defeats the purpose of caching, but also breaks the
expected behavior in serious and surprising ways.
The access_by_lua solution I proposed, on the other hand, can safely
be used with caching.
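A minimal sketch of that approach (it requires nginx built with the
ngx_lua module; the location names, upstream, and cache zone here are
my assumptions, not a definitive configuration):

```nginx
# Assumes a "proxy_cache_path ... keys_zone=files_cache:10m;"
# line in the http block.

location /files/ {
    access_by_lua '
        -- authorize via a subrequest to the internal /auth location;
        -- anything but 200 denies access before the cache is touched
        local res = ngx.location.capture("/auth")
        if res.status ~= ngx.HTTP_OK then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    ';
    proxy_pass http://backend;    # assumed upstream name
    proxy_cache files_cache;
    proxy_cache_valid 200 60m;
}

location = /auth {
    internal;
    proxy_pass http://backend/auth;    # assumed auth endpoint
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

Because the authorization subrequest happens in the access phase, the
content location can cache freely without the side effects described
above.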
Maxim, feel free to add this explanation to the auth_request module
README file.
Max