Slow read attack in HTTP/2
jsharan15 at gmail.com
Mon Aug 22 07:10:46 UTC 2016
The scenario I mentioned was tested and reported by Imperva, and Nginx
has said that this slow-read issue is solved.
But, as you say, does the problem still persist? We can prevent a single
client from requesting a large amount of resources, but what if the
attacker uses multiple machines to mount the attack?
Is there any check in nginx that examines the client's initial
flow-control window setting and closes the connection if the client
advertises an initial window smaller than 65,535 bytes (the default
initial flow-control window size specified in RFC 7540
<https://tools.ietf.org/html/rfc7540#section-5.2.1>)?
P.S. Please correct me if I misunderstood. Thank you for your responses :)
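To make the question concrete, here is a minimal sketch (plain Python, not nginx code; the function names and the choice of 65,535 as the rejection threshold are my assumptions) of both the slow-read dynamic and the check I am asking about:

```python
# Sketch: simulating HTTP/2 flow control to illustrate the slow-read
# dynamic and the proposed "reject tiny initial windows" check.
# All names here are illustrative; this is not nginx internals.

DEFAULT_INITIAL_WINDOW = 65_535  # default SETTINGS_INITIAL_WINDOW_SIZE in RFC 7540


def should_reject(client_initial_window: int,
                  minimum: int = DEFAULT_INITIAL_WINDOW) -> bool:
    """The check proposed above: close the connection if the client
    advertises an initial flow-control window below the default."""
    return client_initial_window < minimum


def bytes_sent_after(rounds: int, initial_window: int, increment: int) -> int:
    """How much response data the server can flush when the client starts
    with `initial_window` and sends one tiny WINDOW_UPDATE per round."""
    window = initial_window
    sent = 0
    for _ in range(rounds):
        sent += window      # server writes everything the window allows
        window = increment  # client tops the window back up, barely
    return sent


# A well-behaved client advertising the default window passes the check:
assert not should_reject(65_535)
# A slow-read attacker advertising a 1-byte window would be rejected:
assert should_reject(1)
# With a 1-byte window and 1-byte increments, 1000 round trips move ~1 KB:
print(bytes_sent_after(1000, 1, 1))  # prints 1000
```

The point of the simulation: because a write succeeds on every round, no write timeout fires, yet the transfer rate is bounded by the increment size per round trip.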
On Fri, Aug 19, 2016 at 7:21 PM, Валентин Бартенев <vbart at nginx.com> wrote:
> On Friday 19 August 2016 18:07:46 Sharan J wrote:
> > Hi,
> > Thanks for the response.
> > Would like to know what happens in the following scenario:
> > A client sets its initial flow-control window to be very small and
> > requests a large amount of data. Every time the window is exhausted,
> > the client increases it by a small increment (so send_timeout never
> > fires, because writes keep happening, just in very small amounts). In
> > this case, won't the connection stay open until the server has flushed
> > all the data to a client with a very small window?
> The same is true with HTTP/1.x, there's no difference.
> > If the client opens many such connections with many streams, each
> > requesting a very large response, won't that cause a DoS?
> You should configure other limits to prevent a client from requesting
> unlimited amounts of resources at the same time.
> wbr, Valentin V. Bartenev
> nginx mailing list
> nginx at nginx.org