limit-req and greedy UAs

B.R. reallfqq-nginx at yahoo.fr
Mon Sep 12 08:07:31 UTC 2016


You could also generate 304 responses for content you won't provide (cf.
return).
nginx is good at dealing with loads of requests, no problem on that side.
And since return generates an in-memory answer by default, you won't be
hammering your resources. If you are CPU- or RAM-limited because of those
requests, then I would suggest you evaluate the sizing of your server(s).
You might wish to separate logging for these requests from the standard
flow to improve readability, or deactivate it altogether if you consider
it adds little-to-no value.
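A minimal sketch of what I mean (the location and log path are only
placeholders, adjust them to your layout):

    # Answer from memory rather than touching the disk, and keep (or drop)
    # the log entries for these requests separately from normal traffic.
    location /not-provided/ {
        access_log /var/log/nginx/refused.log;   # or: access_log off;
        return 304;
    }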

My 2¢,
---
*B. R.*

On Sun, Sep 11, 2016 at 9:16 PM, <lists at lazygranch.com> wrote:

> https://www.nginx.com/blog/tuning-nginx/
>
> I have far more faith in this write-up regarding tuning than in the
> anti-DDoS one, though both have similarities.
>
> My interpretation is that a user's total bandwidth is the number of
> connections times the per-connection rate. But you can't limit the
> connections to one because (again, my interpretation) there can be
> multiple users behind one IP. Think of a university reading your website.
> Thus I am more comfortable limiting bandwidth than limiting the number of
> connections. The 512k rate limit is fine; I wouldn't go any higher.
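>
> For illustration, that per-connection cap is a one-liner (the location
> here is just an example):
>
>     location /downloads/ {
>         limit_rate 512k;   # cap each response at 512 kB/s per connection
>     }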
>
> I don't believe there is one answer here because it depends on how the
> user interacts with the website. I only serve static content. In fact, I
> only allow the verbs HEAD and GET, to limit the attack surface. A page
> of text and photos can itself be many things. Think of a photo gallery
> versus a forum page: the forum is mostly text sprinkled with avatar
> photos, while the gallery can be mostly images with just a line of text
> each.
>
> Basically you need to experiment. Even then, your setup may be better or
> worse than the typical user's. That said, if you limited the rate to 512
> kilobytes per second, most users could still achieve that rate.
>
> I just don't see evidence of download managers. I see plenty of wget,
> curl, and python. Those people get my 444 treatment. I use the map module
> as indicated in my other post to do this.
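>
> As a sketch, that mapping looks roughly like this (the patterns are only
> examples, not my full list; the map goes in the http block):
>
>     map $http_user_agent $badagent {
>         default    0;
>         ~*wget     1;
>         ~*curl     1;
>         ~*python   1;
>     }
>
>     server {
>         # 444 closes the connection without sending a response
>         if ($badagent) { return 444; }
>     }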
>
> What I haven't mentioned is filtering out machines. If you are really
> concerned about your system being overloaded, think about which search
> engines you want to support. Assuming you want Google, you need to set up
> your website so that Google knows you own it; then you can throttle its
> crawling back. Google is maybe 20% of my referrals.
>
> If you have a lot of photos, you can set up nginx to block hot-linking.
> This is significant because Google Images will hot-link everything you
> have. What you want is for Google itself to see your images, which it will
> present in reduced resolution, but to block the Google hot link. If someone
> really wants to see your image, Google supplies the referring page.
>
> http://nginx.org/en/docs/http/ngx_http_referer_module.html
>
> I make my own domain a valid referer, but maybe that is assumed. If you
> want to place a link to an image from your website in a forum, you need
> to make that forum valid as well.
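>
> A minimal sketch with the referer module (example.com stands in for your
> own domain, and the forum host is hypothetical):
>
>     location ~* \.(jpe?g|png|gif)$ {
>         valid_referers none blocked server_names *.example.com forum.example.net;
>         if ($invalid_referer) {
>             return 444;
>         }
>     }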
>
> Facebook will steal your images.
> http://badbots.vps.tips/info/facebookexternalhit-bot
>
> I would use the nginx map module since you will probably be blocking many
> bots.
>
> Finally, you may want to block "the cloud" using your firewall. Only block
> the web ports, since legitimate mail servers also run in the cloud. I block
> all of AWS, for example. My nginx.conf also flags certain requests, such as
> logging into WordPress, since I'm not using WordPress: clearly that IP is a
> hacker. I have plenty more signatures in the map. I have a script that
> pulls the IP addresses out of the access.log. I get maybe 20 addresses a
> day. I feed them to ip2location. Any address that resolves to a cloud, VPS,
> colo, or hosting company gets added to the firewall blocking list. I don't
> just block the single IP; I use the Hurricane Electric BGP tool to get the
> entire IP space to block. As a rule, I don't block schools, libraries, or
> ISPs. The idea here is to allow eyeballs but not machines.
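>
> The flagging itself can again be done with a map; a rough sketch (the
> paths and log name are just examples) that also makes the script's job
> easier by logging suspects separately:
>
>     map $request_uri $suspect {
>         default             0;
>         ~*^/wp-login\.php   1;
>         ~*^/xmlrpc\.php     1;
>     }
>
>     server {
>         access_log /var/log/nginx/access.log combined;
>         # separate log that the blocking script can scan for hostile IPs
>         access_log /var/log/nginx/suspect.log combined if=$suspect;
>     }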
>
> You can also use commercial blocking services if you trust them. (I
> don't.)
>
>
>   Original Message
> From: Grant
> Sent: Sunday, September 11, 2016 10:28 AM
> To: nginx at nginx.org
> Reply To: nginx at nginx.org
> Subject: Re: limit-req and greedy UAs
>
> > This page has all the secret sauce, including how to limit the number
> > of connections.
> >
> > https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
> >
> > I set up the firewall with a higher number as a "just in case."
>
>
> Should I basically duplicate my limit_req and limit_req_zone
> directives into limit_conn and limit_conn_zone? In what sort of
> situation would someone not do that?
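>
> For concreteness, I mean something like this (zone names and numbers are
> made up):
>
>     limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=2r/s;
>     limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
>
>     location / {
>         limit_req  zone=req_per_ip burst=10;
>         limit_conn conn_per_ip 10;
>     }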
>
> - Grant
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

