.htaccess issues
Max
nginxyz at mail.ru
Mon Feb 13 01:08:04 UTC 2012
12 February 2012, 19:37, from Guilherme <guilherme.e at gmail.com>:
> On Fri, Feb 10, 2012 at 6:08 PM, Max <nginxyz at mail.ru> wrote:
>
> >
> > 10 February 2012, 23:40, from Guilherme <guilherme.e at gmail.com>:
> > > This would fix the problem, but I don't know which directories have a
> > > .htaccess file with allow/deny.
> > >
> > > Example:
> > >
> > > Scenario: nginx (cache/proxy) + back-end apache
> > >
> > > root at srv1 [~]# ls -a /home/domain/public_html/restrictedimages/
> > > ./ ../ .htaccess image.jpg
> > > root at srv1 [~]# cat /home/domain/public_html/restrictedimages/.htaccess
> > > allow from x.x.x.x
> > > deny from all
> > >
> > > On the first access (source IP: x.x.x.x) to
> > > http://domain.com/restrictedimages/image.jpg, nginx proxies the request
> > > to apache and caches the response. The problem comes with subsequent
> > > requests from IP addresses other than x.x.x.x: nginx delivers the object
> > > from the cache even if the IP address is not authorized, because nginx
> > > doesn't understand .htaccess.
> > >
> > > I would like to bypass the cache in these cases, maybe using
> > > proxy_cache_bypass, but I don't know how. Any idea?
> >
> > You could use this:
> >
> > proxy_cache_key $scheme$remote_addr$host$server_port$request_uri;
> >
> > This would make the originating IP address ($remote_addr) part of
> > the cache key, so different clients would get the correct responses
> > from the cache just as if they were accessing the backend directly;
> > there's no need to bypass the cache at all.
> >
> > Max
> >
>
> Max, good idea, but for the other requests, whose responses I do want to
> cache, the cache size will grow too fast, because the same object will be
> cached many times since the IP address is part of the cache key (one cache
> entry per IP).

I suggest you recompile your nginx with the Lua module included:
http://wiki.nginx.org/HttpLuaModule

Then you could use something like this:

proxy_cache_key $scheme$host$server_port$uri;

location / {
    access_by_lua '
        local res = ngx.location.capture("/test_access" .. ngx.var.request_uri)
        if res.status == ngx.HTTP_OK then
            return
        end
        if res.status == ngx.HTTP_FORBIDDEN then
            ngx.exit(res.status)
        end
        ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    ';

    proxy_set_header Host $host:$proxy_port;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://backend/;
}

location /test_access/ {
    internal;
    proxy_method HEAD;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_cache_bypass "Always bypass the cache!";
    proxy_no_cache "Never store the response in the cache!";
    proxy_pass http://backend/;
}
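
Note that the configuration above assumes a proxy cache zone has already
been defined and enabled elsewhere in your configuration; as a minimal
sketch (the path, zone name and sizes are placeholders, adjust them to
your setup):

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m;

# inside the server block that contains the location blocks above:
proxy_cache my_cache;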

The access_by_lua block initiates a local non-blocking subrequest
for "/test_access$request_uri", which is handled by the /test_access/
location block as follows: the request method is set to HEAD instead
of the original GET or POST method in order to find out whether the
original request would be allowed or denied, without the overhead of
having to transfer any files.
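
As a side note, the HEAD method could also be forced on the subrequest
from the Lua side through the capture options table; this is just an
alternative sketch, not part of the configuration above:

local res = ngx.location.capture("/test_access" .. ngx.var.request_uri,
                                 { method = ngx.HTTP_HEAD })
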
The X-Forwarded-For header is also reset to the originating IP address.
Any X-Forwarded-For headers set by clients are removed and replaced,
so the backend server can rely on this header for IP-based access
control. The Apache mod_remoteip module can be configured to
make sure Apache always uses the originating IP address from the
X-Forwarded-For header:
http://httpd.apache.org/docs/trunk/mod/mod_remoteip.html
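
A minimal mod_remoteip sketch (the module path and the 10.0.0.1 address
are placeholders for your actual module location and the address your
nginx proxy connects from):

LoadModule remoteip_module modules/mod_remoteip.so

RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 10.0.0.1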

The next two directives make sure that the cache is always bypassed
and that no HEAD request responses are cached, because you want to be
sure you have the latest access control information. The original
request URI is then passed on to the backend (note the trailing slash
in the proxy_pass directive), and the response is captured in the res
variable inside the access_by_lua block. If the subrequest completed
with the HTTP OK status code, access is allowed, so after returning
from the access_by_lua block the Host and X-Forwarded-For headers are
set and the original request is processed: first the cache is checked,
and if there is no matching entry, the request is passed on to the
backend server and the response is cached under a key that allows a
single copy of each file to be stored in the cache.

If the subrequest completed with the HTTP FORBIDDEN status code, the
access_by_lua block exits and terminates further processing with that
status code; any other error terminates the request with HTTP INTERNAL
SERVER ERROR.

There you go: thanks to the speed and non-blocking nature of Lua, you
now have a solution with minimal overhead that lets you take full
advantage of both caching and IP-based access control.

Max