Understanding HttpLimitReqModule

Maxim Dounin mdounin at mdounin.ru
Fri Feb 25 16:18:48 MSK 2011


On Fri, Feb 25, 2011 at 11:04:34AM +0100, Adrian von Stechow wrote:

> Hi all:
> I'm trying to understand the HttpLimitReqModule; the wiki is a bit
> terse about the terminology.
> I'm trying to mimic Apache's mod_evasive module, specifically there is
> an annoying user that likes to request the same image once every
> second for hours at a time. I would like to log this and then use
> fail2ban to block the IP for a specific time. The problem is that the
> image in question is a legitimate request that shows up on every page
> of the site in question. What I had in mind:
> limit_req_zone  $binary_remote_addr  zone=one:1m   rate=50r/m;
> #offending user: 60r/m
>     server {
>         location = /path/to/image.jpg {
>             limit_req   zone=one  burst=???;
>             limit_req_log_level error;
>         }
>     }
> The problem is the low rate with which the offending requests are
> made. mod_evasive lets you set up a timespan in which a specific
> number of requests are made, while nginx checks "online" if a second
> request is made after 1/rate. In my case (1 offending request per
> second), legitimate users would be blocked if they load 2 pages in one
> second, which of course happens frequently.
> Any suggestions?

Set burst= high enough to accommodate occasional request 
bursts from legitimate users, and rate= somewhere between the 
typical rate for legitimate users (averaged over a relatively 
long time period) and the offending one.

If you just want to block an offending user who makes 60 requests 
per minute, while your typical users make about 1 request per 
minute but may occasionally produce something like 100 requests in 
a short time frame, you may set something like:

    limit_req_zone $binary_remote_addr zone=one:1m rate=10r/m;

    location ... {
        limit_req zone=one burst=100 nodelay;
    }

This will allow legitimate users to do 10r/m on average and up to 
100 requests in the very same second.  On the other hand, an 
offending user at 60 r/m will start getting 503s after about 2 
minutes: 120 requests, with only 20 allowed by the rate, will 
overflow the burst of 100.
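The arithmetic above can be checked with a small simulation of the 
leaky-bucket accounting.  This is a sketch, not nginx source: the 
function name and the "excess" counter are illustrative, but the 
idea matches how limit_req works (each request adds one to an 
excess counter that drains at the configured rate; a request that 
would push the excess past burst is rejected with 503).

```python
def first_rejection_time(rate_per_min, burst, interval_s):
    """Return the second at which a client sending one request every
    interval_s seconds first gets a 503, or None within an hour.

    Hypothetical model of nginx's leaky-bucket accounting, not its
    actual source code.
    """
    drain_per_s = rate_per_min / 60.0   # requests drained per second
    excess = 0.0
    t = 0.0
    while t < 3600:
        # drain since the previous request, then count this request
        excess = max(0.0, excess - drain_per_s * interval_s) + 1
        if excess > burst:              # would overflow the burst: 503
            return t
        t += interval_s
    return None

# rate=10r/m, burst=100, offender at 60 r/m (one request per second)
print(first_rejection_time(10, 100, 1.0))  # → 119.0 (first 503 after ~2 minutes)
```

The excess grows by 1 - 10/60 = 5/6 of a request per second, so it 
takes roughly 120 seconds to fill a burst of 100, matching the 
figure above.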

Please refer to http://en.wikipedia.org/wiki/Leaky_Bucket for 
algorithm details.

I also recommend using the "nodelay" flag unless you really want 
to control the rate at which requests are passed to backends, or 
something like that.
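For comparison, the two variants side by side (same hypothetical 
zone "one" as above):

    # without "nodelay": requests over the rate, up to burst, are
    # queued and released at 10r/m, so a burst of 100 loads slowly
    limit_req zone=one burst=100;

    # with "nodelay": requests up to burst are served immediately;
    # only requests beyond the burst get 503
    limit_req zone=one burst=100 nodelay;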

Maxim Dounin
