<div dir="ltr">Thank you very much for clearing this out. All I need to do is "limit_req_log_level warn;" and then I see limits as warn-logs and delaying as info, and hence I only view warn+ levels, it is omitted from the logfile completely.<div></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">---</div><div dir="ltr"><br></div><div dir="ltr">Med venlig hilsen / Best Regards</div><div dir="ltr">Stephan Ryer Møller</div><div dir="ltr"><span style="font-size:12.8px">Partner & CTO</span><br></div><div dir="ltr"><br></div><div dir="ltr">inMobile ApS</div><div dir="ltr">Axel Kiers Vej 18L</div><div dir="ltr">DK-8270 Højbjerg</div><div dir="ltr"><br></div><div dir="ltr">Dir. +45 82 82 66 92</div><div dir="ltr">E-mail: <a href="mailto:sr@inmobile.dk" target="_blank">sr@inmobile.dk</a></div><div dir="ltr"><br></div><div dir="ltr">Web: <a href="http://www.inmobile.dk" target="_blank">www.inmobile.dk</a> </div><div dir="ltr">Tel: +45 88 33 66 99</div></div></div></div></div></div></div></div></div></div></div>
<br><div class="gmail_quote">2017-11-20 14:01 GMT+01:00 Maxim Dounin <span dir="ltr"><<a href="mailto:mdounin@mdounin.ru" target="_blank">mdounin@mdounin.ru</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello!<br>
<span class=""><br>
On Mon, Nov 20, 2017 at 11:33:26AM +0100, Stephan Ryer wrote:<br>
<br>
> We are using nginx as a proxy server in front of our IIS servers.<br>
><br>
> We have a client who needs to call us up to 200 times per second. Due to<br>
> the round-trip time, 16 simultaneous connections are opened from the client<br>
> and each connection is used independently to send an HTTPS request, wait for<br>
> x ms and then send again.<br>
><br>
> I have been doing some tests and looked into the throttle logic in the<br>
> nginx-code. It seems that when setting request limit to 200/sec it is<br>
> actually interpreted as “minimum 5ms per call” in the code. If we receive 2<br>
> calls at the same time, the warning log will show an “excess”-message and<br>
> the call will be delayed to ensure a minimum of 5ms between the calls<br>
> (and if no burst is set, an error message is logged and an<br>
> error is returned to the client).<br>
><br>
> We have set burst to 20 meaning, that when our client only sends 1 request<br>
> at a time per connection, he will never get an error reply from nginx,<br>
> instead nginx just delays the call. I conclude that this is by design.<br>
<br>
</span>Yes, the code counts average request rate, and if it sees two<br>
requests with just 1ms between them, the average rate will be 1000<br>
requests per second. This is more than what is allowed, and hence<br>
nginx will either delay the second request (unless configured with<br>
"nodelay"), or will reject it if the configured burst size is<br>
reached.<br>
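For illustration, the average-rate accounting described above can be sketched in Python. This is a simplified model of the limit_req logic, not the actual nginx code; the class and method names are invented for the example, and nginx internally works in milliseconds with scaled integers rather than floats:<br>
<br>
```python
# Simplified sketch of average-rate ("leaky bucket") accounting.
# Excess drains at the configured rate between requests, and each
# new request adds one unit of excess.

class LeakyBucket:
    def __init__(self, rate, burst=0):
        self.rate = rate      # allowed requests per second
        self.burst = burst    # tolerated excess above the average rate
        self.excess = 0.0     # current excess, measured in requests
        self.last = None      # timestamp of the previous request

    def request(self, now):
        """Return 'ok', 'delay', or 'reject' for a request arriving at `now`."""
        if self.last is not None:
            # Drain excess accumulated since the previous request.
            self.excess = max(self.excess - self.rate * (now - self.last), 0.0)
        self.last = now
        self.excess += 1.0    # the current request adds one unit
        if self.excess > self.burst + 1:
            self.excess -= 1.0   # rejected requests are not accounted
            return 'reject'
        if self.excess > 1:
            return 'delay'       # delayed until the average rate is restored
        return 'ok'
```
<br>
With rate=200 (one request per 5ms) and burst=20, two requests arriving 1ms apart yield an excess of 0.8 on the second request, so it is delayed rather than rejected; with burst=0 the same second request is rejected, matching the behaviour described above.<br>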
<span class=""><br>
> The issue, however, is that a client using multiple connections naturally<br>
> often won't be able to time the calls between each connection. And even<br>
> though our burst has been set to 20, our log is spammed with warning messages<br>
> which I do not think should be a warning at all. There is a difference<br>
> between sending 2 calls at the same time and sending a total of 201<br>
> requests within a second, the latter being the only case I would expect to<br>
> be logged as a warning.<br>
<br>
</span>If you are not happy with log levels used, you can easily tune<br>
them using the limit_req_log_level directive. See<br>
<a href="http://nginx.org/r/limit_req_log_level" rel="noreferrer" target="_blank">http://nginx.org/r/limit_req_<wbr>log_level</a> for details.<br>
<br>
Note well that given the use case description, you probably don't<br>
need requests to be delayed at all, so consider using "limit_req<br>
.. nodelay;". It will avoid delaying logic altogether, thus<br>
allowing as many requests as burst permits.<br>
<span class=""><br>
> Instead of calculating the throttling by simply looking at the last call<br>
> time and calculate a minimum timespan between last call and current call, I<br>
> would like the logic to be that nginx keeps a counter of the number of<br>
> requests within the current second, and when the second expires and a new<br>
> second begins, the counter is reset.<br>
<br>
</span>This approach is not scalable. For example, it won't allow you to<br>
configure a limit of 1 request per minute. Moreover, it can<br>
easily allow more requests in a single second than configured -<br>
for example, a client can do 200 requests at 0.999 and additional<br>
200 requests at 1.000. According to your algorithm, this is<br>
allowed, yet it is 400 requests in just 2 milliseconds.<br>
<br>
The current implementation is much more robust, and it can be<br>
configured for various use cases. In particular, if you want to<br>
maintain a limit of 200 requests per second and want to tolerate<br>
cases when a client does all requests allowed within a second at<br>
the same time, consider:<br>
<br>
limit_req_zone $binary_remote_addr zone=one:10m rate=200r/s;<br>
limit_req zone=one burst=200 nodelay;<br>
<br>
This will switch off delays as already suggested above, and will<br>
allow burst of up to 200 requests - that is, a client is allowed<br>
to do all 200 requests when a second starts. (If you really want<br>
to allow the case with 400 requests in 2 milliseconds as described<br>
above, consider using burst=400.)<br>
<span class="HOEnZb"><font color="#888888"><br>
--<br>
Maxim Dounin<br>
<a href="http://mdounin.ru/" rel="noreferrer" target="_blank">http://mdounin.ru/</a><br>
______________________________<wbr>_________________<br>
nginx mailing list<br>
<a href="mailto:nginx@nginx.org">nginx@nginx.org</a><br>
<a href="http://mailman.nginx.org/mailman/listinfo/nginx" rel="noreferrer" target="_blank">http://mailman.nginx.org/<wbr>mailman/listinfo/nginx</a></font></span></blockquote></div><br></div>