[PATCH] Limit req: r/h and r/d support
Maxim Dounin
mdounin at mdounin.ru
Mon Feb 19 14:12:29 UTC 2018
Hello!
On Sat, Feb 17, 2018 at 10:07:27PM +0100, Bernhard Reutner-Fischer wrote:
> On 16 February 2018 at 17:52, Maxim Dounin <mdounin at mdounin.ru> wrote:
>
> >> Limit req: r/h and r/d support
>
> > Do you use it yourself, or your patch is based on the feature
> > requests in question? If yes, what is your use case?
>
> I have to limit certain use cases to a handful of requests per day (don't ask).
> The mentioned feature requests just show that this oddish corner case
> is a real knock-out criterion in the real world, so I mentioned them to
> emphasize that it wasn't just /me -- you're losing users because of
> this. Often this is a hard requirement. You don't seem to offer any
> sensible, off-the-shelf (or documented) way to handle "long" request
> limits anyway -- think shm -- at least none that I was able to find?
The limit_req module was created mostly to limit high rates - in
particular, to limit requests to costly services to fight DoS, and
to limit requests to password checking to fight brute-force
attempts. It was never meant to limit normal user activity, and
hence the data format chosen does not support rates of less than
1r/m. Yet it works effectively at the high rates the module was
created for.
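For context, the kind of high-rate setup the module targets looks
roughly like this (the zone name, key, and numbers are illustrative
examples, not recommendations):

```nginx
# One shared 10m zone keyed by client address, limited to 10 requests
# per second; 1r/m is the slowest rate the configuration syntax accepts.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location /login/ {
        # allow short bursts above the rate, reject the rest immediately
        limit_req zone=perip burst=20 nodelay;
    }
}
```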
While it is understood that some people might want to use longer
limits, that's not something the module tries to provide now. Also,
it might not actually be a good idea to provide such limits in the
module, as any nginx restart / upgrade will mean that the limits won't
work as expected for a significant time. If you want to limit
users to several requests per day, it might be a better idea to
keep these limits outside of nginx.
> > Style: please add a trailing dot.
>
> Sure, please excuse my sloppy reading of previous logs; I honestly
> failed to notice that. Added.
>
> > On 32-bit platforms this means that the maximum supported rate
> > would be about ~40kr/s. And I suspect this can hurt real setups.
>
> Agreed. Not sure about typical current HTTP server rates, admittedly.
> Getting esoteric, but current servers usually serve millions of
> requests; I'm not sure you'd usually limit those to, let's say,
> 100kr/s, but of course I see your point.
I doubt rates like 100kr/s are needed in practice (especially on
32-bit platforms), but I'm pretty sure that configurations with
something like this are real. In particular, googling "limit_req
rate site:nginx.org/pipermail/" shows examples like
"rate=999999r/s" pretty quickly.
[...]
> > Note that I have no good solution for 32-bit platforms and the
> > 40kr/s (20kr/s) limit.
>
> My notes read:
> disentangle rate. Should have rate (per unit) and unit (in seconds?
> minutes? calc!) where excess would have the same base as rate; a
> common timer conversion helper for rate/unit?
>
> This has much more impact, and hence is a much larger patch, but
> would certainly be way easier to grok than the current handling.
> Would you be willing to fix the existing rate mess^whandling?
The current handling is pretty clear and simple: all calculations are
done in requests per millisecond, and this requires just one value
in the various structures. We can tolerate switching to something
different, though it would be a good idea to keep the code logic
close to the current implementation.
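A minimal sketch of that milliseconds-based bookkeeping (the
identifiers and simplifications are mine, not the real nginx code;
shared-memory state, locking, and node expiry are omitted):

```c
#include <stdint.h>

/* Leaky-bucket accounting sketch in the spirit of limit_req: "rate" and
 * "excess" are kept in milli-requests, i.e. rate = r/s * 1000, so 10r/s
 * is stored as 10000.  On each request the bucket leaks rate * ms / 1000
 * milli-requests for the elapsed time and gains 1000 (one request).
 * Names here are illustrative, not the real nginx identifiers. */
typedef struct {
    uint64_t last_ms;   /* time of the last accepted request */
    int64_t  excess;    /* current bucket level, in milli-requests */
} bucket_t;

/* Returns the new excess on success, or -1 if the request exceeds the
 * configured burst (given in whole requests) and should be rejected. */
static int64_t
leaky_bucket(bucket_t *b, uint64_t now_ms, int64_t rate, int64_t burst)
{
    int64_t ms = (int64_t) (now_ms - b->last_ms);

    /* drain what leaked since the last request, then add this one */
    int64_t excess = b->excess - rate * ms / 1000 + 1000;

    if (excess < 0) {
        excess = 0;
    }

    if (excess > burst * 1000) {
        return -1;      /* over burst: reject */
    }

    b->last_ms = now_ms;
    b->excess = excess;

    return excess;
}
```

With this representation, 1r/m is 1000/60 = 16 milli-requests per
second after integer truncation, and anything slower truncates toward
zero, which is the "no rates below 1r/m" point above. The rate * ms
product is also where a 32-bit unsigned type becomes the limiting
factor at very high rates.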
--
Maxim Dounin
http://mdounin.ru/