bug in limit_req

Maxim Dounin mdounin at mdounin.ru
Mon Mar 5 16:00:09 UTC 2012


Hello!

On Mon, Mar 05, 2012 at 10:34:01AM -0500, double wrote:

> limit_req does not work with Nginx 1.1.16 - if used on different
> levels.
> `for a in `seq 300`; do wget -O /dev/null http://localhost:8081/; done`
> Expected result: 503, but the result is always "502 Bad Gateway".
> The config file is below.

[...]

> http {

[...]

>     limit_req_zone              $binary_remote_addr zone=everything_ip:32m rate=50r/s;
>     limit_req                   zone=everything_ip burst=50;
>     
>     limit_req_zone              $binary_remote_addr  zone=fastcgi_ip:32m rate=2r/s;
> 
>     server {

[...]

>         # fastcgi
>         location @backend {
>             limit_req           zone=fastcgi_ip burst=10;
>             fastcgi_pass        unix:/tmp/nginx.dispatch.sock;
>         }
> 
>         # static
>         location / {
>             try_files $uri @backend;
>             expires max;
>         }
>     }
> }

With such a configuration, the "limit_req zone=everything_ip 
burst=50;" specified at the http level is the one used to limit 
requests: it is inherited into "location /" and checked there 
before try_files.  The limit_req directive in the @backend 
location isn't checked, as limit_req limits have already been 
checked for the request.

Furthermore, as the directive doesn't have the "nodelay" 
argument, it isn't expected to overflow the burst (and return 
503) when only a single process is making requests sequentially 
(as in the sh snippet above): excess requests are delayed rather 
than rejected.
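To illustrate why the sequential wget loop can't trigger a 503, here is a minimal sketch (my own simplification in whole-request units, not the module's exact code, which works in millirequest units) of limit_req's leaky-bucket accounting:

```python
def limit_req_decision(state, now, rate, burst, nodelay=False):
    """Decide one request against a simplified limit_req leaky bucket.

    state: dict {'excess': float, 'last': float or None}
    Returns ('pass' | 'delay' | 'reject', delay_in_seconds).
    """
    if state['last'] is None:                 # first request in the zone passes
        state['last'] = now
        return ('pass', 0.0)
    # Leak the bucket by the elapsed time, then add the current request.
    excess = max(0.0, state['excess'] - rate * (now - state['last'])) + 1.0
    if excess > burst:                        # bucket overflow -> 503
        return ('reject', 0.0)                # state left untouched, as in nginx
    state['excess'], state['last'] = excess, now
    if excess > 0 and not nodelay:            # within burst: delayed, not rejected
        return ('delay', excess / rate)
    return ('pass', 0.0)

# A single sequential client (like the wget loop) waits out each delay
# before sending the next request, so excess never climbs past ~1 request
# and nothing is ever rejected:
state = {'excess': 0.0, 'last': None}
t, rejects = 0.0, 0
for _ in range(300):
    decision, delay = limit_req_decision(state, t, rate=50.0, burst=50)
    rejects += (decision == 'reject')
    t += delay + 0.001                        # next request after the delay (+1ms)
print(rejects)                                # 0
```

With "nodelay" the same bucket rejects immediately once more than burst + 1 requests arrive at once, which is what it would take to see 503s from a burst of concurrent clients.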

That is, there is no bug here.  There is probably room for 
improvement, though, as currently separate limiting of a location 
reached via try_files is only possible if there were no limits in 
the location containing try_files.
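For reference, a hedged sketch (reusing the zone names from the quoted config) of a configuration where the fastcgi_ip limit in @backend does get applied: keep limit_req out of the location running try_files, so the @backend limit is the first one checked:

```nginx
http {
    limit_req_zone  $binary_remote_addr  zone=everything_ip:32m  rate=50r/s;
    limit_req_zone  $binary_remote_addr  zone=fastcgi_ip:32m     rate=2r/s;

    server {
        # static: no limit_req here, so nothing is checked before try_files
        location / {
            try_files       $uri @backend;
            expires         max;
        }

        # fastcgi: this is now the first limit seen, so it applies
        location @backend {
            limit_req       zone=fastcgi_ip  burst=10;
            fastcgi_pass    unix:/tmp/nginx.dispatch.sock;
        }
    }
}
```

The trade-off, per the above, is that requests served directly by "location /" are then not limited by everything_ip at all.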

Maxim Dounin


