limit_req_zone limit by location/proxy
Maxim Dounin
mdounin at mdounin.ru
Wed Nov 13 11:27:13 UTC 2013
Hello!
On Tue, Nov 12, 2013 at 09:24:57PM -0600, Justin Deltener wrote:
> For the life of me I can't seem to get my configuration correct to limit
> requests. I'm running nginx 1.5.1 and have it serving up static content and
> pushing all non-existent requests to the apache2 proxy backend for serving
> up. I don't want to limit any requests to static content but do want to
> limit requests to the proxy. It seems no matter what I put in my
> configuration, I continue to see entries in the error log for IP addresses
> which are not breaking the rate limit.
>
> 2013/11/12 20:55:28 [warn] 10568#0: *1640292 delaying request, excess:
> 0.412, by zone "proxyzone" client ABCD
>
> I've tried using a map in the top level like so
>
> limit_req_zone $limit_proxy_hits zone=proxyzone:10m rate=4r/s;
>
> map $request_filename $limit_proxy_hits
> {
> default "";
> # only limit filename requests ending in a slash, since something.php
> # should not be limited
> ~/$ $binary_remote_addr;
> }
>
> yet when I look at the logs, IP ABCD has been delayed for a URL ending in
> slash, BUT when I look at all proxy requests for that IP, it is clearly not
> going over the limit. It really seems that no matter what, the
> limit_req_zone still counts static content against the limit, or something
> else equally confusing.
>
> I've also attempted
>
> limit_req_zone $limit_proxy_hits zone=proxyzone:10m rate=4r/s;
>
> and then using $limit_proxy_hits inside the server/location
>
> server
> {
> set $limit_proxy_hits "";
>
> location /
> {
> set $limit_proxy_hits $binary_remote_addr;
> }
> }
>
> and while the syntax doesn't bomb, it seems to exhibit the exact same
> behavior as above.
>
> ASSERT:
>
> a) When I clearly drop 40 requests from an IP, it clearly lays the smack
> down on a ton of requests as it should
> b) I do a kill -HUP on the primary nginx process after each test
> c) I keep getting warnings on requests from IPs which are clearly not
> going over the proxy limit
> d) I have read about the leaky-bucket algorithm, and unless I'm totally
> missing something, a max of 4r/s should always allow traffic until we
> start to go OVER 4r/s, which isn't the case.
>
> The documentation doesn't offer any really deep insight into how this works
> and I could really use a helping hand. Thanks!
Just some arbitrary facts:
1. The config you've provided doesn't configure any limits, as it
doesn't contain a limit_req directive. See
http://nginx.org/r/limit_req for documentation.
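To actually enforce the limit, the zone has to be referenced by a
limit_req directive in the server/location that should be limited.
A minimal sketch reusing the zone name from your config (the burst
value and the backend address are placeholders, not recommendations):

    location / {
        limit_req zone=proxyzone burst=10;
        proxy_pass http://127.0.0.1:8080;  # your Apache backend, address assumed
    }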
2. The "delaying request" message means exactly this - nginx is
delaying requests since average speed of requests exceeds
configured request rate. It basically means that the "bucket"
isn't empty and a request have to wait some time till it will be
allowed to continue. This message shouldn't be confused with
"limiting requests" message, which is logged when requests are
rejected due to burst limit reached.
With the rate set to 4r/s, two requests less than 250ms apart are
enough to trigger the "delaying request" message, which can easily
happen since a pageview usually results in multiple requests (one
request to load the page itself, and several more to load included
resources like CSS, images and so on).
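For illustration, assuming the excess accounting works as described
above: at 4r/s the bucket drains at one request per 250ms, so if a
second request arrives about 147ms after the first, roughly
1 - 4 * 0.147 = 0.412 of a request is still left in the bucket,
which matches the "excess: 0.412" in the log line you quoted; just
two closely spaced requests are enough to produce that message.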
It might be a good idea to use "limit_req ... nodelay" to instruct
nginx not to delay requests at all unless the configured burst
limit is reached.
3. Doing a "kill -HUP" doesn't clear limit_req stats and mostly
useless between tests.
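(As far as I know, the shared memory zone survives a configuration
reload as long as its name and size are unchanged; if you want a
clean slate between tests, stop and start nginx instead of sending
HUP.)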
4. To differentiate between various resources, there is a
directive called "location", see http://nginx.org/r/location.
If you want to limit requests to some resources but not others, it's a
good idea to do so by using two distinct locations, e.g.:
    location / {
        limit_req zone=proxyzone burst=10 nodelay;
        proxy_pass http://...;
    }

    location /static/ {
        # static files, no limit_req here
    }
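Since you described the setup as "serve static files directly, proxy
everything that doesn't exist on disk", another way to express this is
a try_files fallback to a named location, so that only proxied requests
are counted against the limit. Just a sketch; the root, zone key, burst
value and backend address are assumptions you'll need to adjust:

    # at the http{} level
    limit_req_zone $binary_remote_addr zone=proxyzone:10m rate=4r/s;

    server {
        root /var/www/html;                    # assumed document root

        location / {
            # serve the file if it exists, otherwise hand off to Apache
            try_files $uri @apache;
        }

        location @apache {
            limit_req zone=proxyzone burst=10 nodelay;
            proxy_pass http://127.0.0.1:8080;  # backend address assumed
        }
    }

With this layout the map / $limit_proxy_hits trick isn't needed at all,
because requests for existing static files never reach the location
that carries the limit_req directive.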
5. The documentation is here:
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
--
Maxim Dounin
http://nginx.org/en/donation.html