How to control the total requests in Nginx

tongshushan at migu.cn
Sun Dec 3 03:58:16 UTC 2017


Hi Francis,

Thanks for help.
I might have misunderstood some concepts; let me restate them here:
burst -- the bucket size;
rate -- the speed at which the water leaks out (not the speed at which requests arrive)

Is that right?



Tong
 
From: Francis Daly
Date: 2017-12-02 19:02
To: nginx
Subject: Re: Re: How to control the total requests in Nginx
On Fri, Dec 01, 2017 at 11:18:06AM +0800, tongshushan at migu.cn wrote:
 
Hi there,
 
Others have already given some details, so I'll try to put everything
together.
 
> limit_req_zone "all" zone=all:100m rate=2000r/s;
 
The size of the zone (100m, above) relates to the number of individual
key values that the zone can store -- if you have too many values for
the size, then things can break.
 
In your case, you want just one key; so you can have a much smaller
zone size.
 
Using 100m won't break things, but it will be wasteful.
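As a sketch (reusing your key and rate), a much smaller zone is enough -- the nginx docs say a one-megabyte zone can hold roughly 16 thousand states, and a single constant key needs only one:

```nginx
# One constant key ("all") means only one state is ever stored,
# so a small zone is plenty.
limit_req_zone "all" zone=all:1m rate=2000r/s;
```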
 
 
The way that nginx uses the "rate" value is not "start of second, allow
that number, block the rest until the start of the next second". It is
"turn that number into time-between-requests, and block the second
request if it is within that time of the first".
 
> limit_req zone=all burst=100 nodelay;
 
"block" can be "return error immediately", or can be "delay until the
right time", depending on what you configure. "nodelay" above means
"return error immediately".
 
Rather than strictly requiring a fixed time between requests always, it
can be useful to enforce an average rate; in this case, you configure
"burst" to allow that many requests as quickly as they arrive, before
delaying-or-erroring on the next ones. That is, to use different numbers:
 
  rate=1r/s  with  burst=10
 
would mean that it would accept 10 requests all at once, but would not
accept the 11th until 10s later (in order to bring the average rate down
to 1r/s).
 
Note: that is not exactly what happens -- for that, read the fine source
-- but it is hopefully a clear high-level description of the intent.
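To make that high-level description concrete, here is a toy model in plain Python -- not nginx's actual algorithm, and the class name and numbers are made up for illustration -- of "time-between-requests plus burst" accounting in nodelay mode:

```python
class RateLimiter:
    """Toy model of rate + burst accounting in nodelay mode.

    rate is in requests/second; burst is how many requests may run
    ahead of the steady rate before we start rejecting.
    """

    def __init__(self, rate, burst):
        self.interval = 1.0 / rate   # required time between requests
        self.burst = burst
        self.excess = 0.0            # how far ahead of schedule we are
        self.last = None             # time of the last accepted request

    def allow(self, now):
        if self.last is not None:
            # "Leak": credit the time that has passed since the last
            # accepted request.
            self.excess = max(0.0, self.excess - (now - self.last) / self.interval)
        if self.excess >= self.burst:
            return False             # would be a 503 in nodelay mode
        self.excess += 1
        self.last = now
        return True


lim = RateLimiter(rate=1.0, burst=10)
results = [lim.allow(t) for t in [0.0] * 12]   # 12 requests at the same instant
print(results.count(True))                     # -> 10: only "burst" many get through
print(lim.allow(10.0))                         # -> True: 10s later the bucket has drained
```

With rate=1r/s and burst=10, the 12 simultaneous requests yield 10 accepts and 2 rejects, and the next request is accepted again once 10 seconds have leaked away -- the behaviour described above.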
 
 
And one other thing is relevant here: nginx counts in milliseconds. So
I think that you are unlikely to get useful rate limiting once you
approach 1000r/s.
 
> but when testing, I used a tool to send requests at QPS 486.1 (not reaching 2000). I got many, many 503 errors; the error info is below:
> 
>  2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost"
> 
> Why excess: 101.000? I set it to 2000r/s?
 
If your tool sends all requests at once, nginx will handle "burst" before
trying to enforce your rate, and your "nodelay" means that nginx should
error immediately then.
 
If you remove "nodelay", then nginx should slow down processing without
sending the 503 errors.
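A sketch of that variant, using the same zone as your config -- with "nodelay" dropped, nginx delays excess requests to match the rate instead of returning 503:

```nginx
# Without "nodelay", requests beyond the rate are queued (up to
# "burst" of them) and released at the configured rate.
limit_req zone=all burst=100;
```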
 
If your tool sends one request every 0.5 ms, then nginx would have a
chance to process them all without exceeding the declared limit rate. (But
the server cannot rely on the client to behave, so the server has to be
told what to do when there is a flood of requests.)
 
 
 
As a way of learning how to limit requests into nginx, this is useful. As
a way of solving a specific problem that you have right now, it may or
may not be useful -- that depends on what the problem is.
 
Good luck with it,
 
f
-- 
Francis Daly        francis at daoine.org
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

