[PATCH] HTTP/2: per-iteration stream handling limit

Maxim Dounin mdounin at mdounin.ru
Tue Oct 10 18:39:48 UTC 2023


Hello!

On Tue, Oct 10, 2023 at 05:57:58PM +0000, Thomas Ward wrote:

> Maxim:
> 
> If NGINX is unaffected, then F5 is distributing FUD.

Yes, they are.

> Refer to https://my.f5.com/manage/s/article/K000137106 and the 
> NGINX category, specifically NGINX OSS.  Right now, unless this 
> is updated by F5 (who owns NGINX now) there is conflicting 
> information here.  May I suggest you go internally to F5 and 
> have them actually *revise* their update, or further indicate 
> that "There are already mitigations in place"?

I'm not "internally to F5", I'm an independent developer.  This 
was discussed within the nginx security team, with other nginx 
developers, and the conclusion is that nginx is not affected.

I have no idea why F5 decided to include nginx as affected in 
their advisory, or who did this.  This is incorrect and should 
be fixed.  I've already pinged them, and hopefully they will 
correct it.

> Ultimately, though, I would assume that keepalive at 100 is 
> also still not enough to *fully* protect against this.  
> Therefore, at the core, "NGINX is still vulnerable to this CVE 
> despite mitigations already being in place for default 
> configuration values and options" is the interpretation, which 
> means yes, NGINX is in fact affected by this *in non-default 
> configurations*, and thus *is* still vulnerable to the CVE.  
> Mitigations don't mean "fixed" or "not affected" in the 
> strictest interpretation of language and what means what.

As is often the case with DoS vulnerabilities, you cannot be 
*fully* protected, since there is work to do anyway.  As long as 
nginx is configured to serve HTTP requests, it can be loaded with 
serving HTTP requests, and the maximum possible load depends on 
the configuration, including various built-in limits, such as 
keepalive_requests and http2_max_concurrent_streams, and DoS 
mitigations which need to be explicitly configured, such as 
limit_req and limit_conn.
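
For illustration, a minimal configuration sketch combining these 
built-in limits with explicitly configured mitigations might look 
as follows.  The zone names, sizes, and rates are arbitrary 
examples rather than recommendations, and the keepalive_requests 
and http2_max_concurrent_streams values shown are the defaults:

    http {
        # Built-in per-connection limits (defaults shown; lower
        # values drop abusive clients sooner).
        keepalive_requests           1000;
        http2_max_concurrent_streams 128;

        # Explicitly configured DoS mitigations: per-address
        # request rate and concurrent connection limits.
        limit_req_zone  $binary_remote_addr zone=req_perip:10m rate=10r/s;
        limit_conn_zone $binary_remote_addr zone=conn_perip:10m;

        server {
            listen 443 ssl http2;
            # ssl_certificate / ssl_certificate_key omitted for
            # brevity.

            limit_req  zone=req_perip burst=20;
            limit_conn conn_perip 10;
        }
    }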

As already said, in this case the work nginx is willing to do is 
no different from other workloads, such as handling normal HTTP/2 
requests or HTTP/1.x requests.  As such, nginx is not considered 
to be affected by this issue.

> 
> 
> Thomas
> 
> -----Original Message-----
> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of Maxim Dounin
> Sent: Tuesday, October 10, 2023 8:53 AM
> To: nginx-devel at nginx.org
> Subject: Re: [PATCH] HTTP/2: per-iteration stream handling limit
> 
> Hello!
> 
> On Tue, Oct 10, 2023 at 03:29:02PM +0300, Maxim Dounin wrote:
> 
> > # HG changeset patch
> > # User Maxim Dounin <mdounin at mdounin.ru>
> > # Date 1696940019 -10800
> > #      Tue Oct 10 15:13:39 2023 +0300
> > # Node ID cdda286c0f1b4b10f30d4eb6a63fefb9b8708ecc
> > # Parent  3db945fda515014d220151046d02f3960bcfca0a
> > HTTP/2: per-iteration stream handling limit.
> > 
> > To ensure that attempts to flood servers with many streams are 
> > detected early, a limit of no more than 2 * max_concurrent_streams 
> > new streams per event loop iteration was introduced.  This limit 
> > is applied even if max_concurrent_streams is not yet reached - for 
> > example, if the corresponding streams are handled synchronously or 
> > reset.
> > 
> > Further, refused streams are now limited to the maximum of 
> > max_concurrent_streams and 100, similarly to the initial 
> > priority_limit value, providing some tolerance to clients trying 
> > to open several streams at the connection start, yet low tolerance 
> > to flooding attempts.
> 
> To clarify:
> 
> There is FUD being spread that nginx is vulnerable to CVE-2023-44487.
> 
> We do not consider nginx to be affected by this issue.  In the default configuration, nginx is sufficiently protected by the limit of allowed requests per connection (see http://nginx.org/r/keepalive_requests for details), so an attacker will be required to reconnect very often, making the attack obvious and easy to stop at the network level.  And it is not possible to circumvent the max concurrent streams limit in nginx, as nginx only allows additional streams when previous streams are completely closed.
> 
> Further, additional protection can be implemented in nginx by using the "limit_req" directive, which limits the rate of requests and rejects excessive requests.
> 
> Overall, with the handling as implemented in nginx, the impact of streams being reset does not seem to be significantly different from the impact of other workloads with a large number of requests being sent by the client, such as handling of multiple HTTP/2 requests or HTTP/1.x pipelined requests.
> 
> Nevertheless, we've decided to implement some additional mitigations which will help nginx to detect such attacks and drop connections with misbehaving clients faster.  Hence the patch.
> 
> > 
> > diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c
> > --- a/src/http/v2/ngx_http_v2.c
> > +++ b/src/http/v2/ngx_http_v2.c
> > @@ -347,6 +347,7 @@ ngx_http_v2_read_handler(ngx_event_t *re
> >      ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http2 read handler");
> >  
> >      h2c->blocked = 1;
> > +    h2c->new_streams = 0;
> >  
> >      if (c->close) {
> >          c->close = 0;
> > @@ -1284,6 +1285,14 @@ ngx_http_v2_state_headers(ngx_http_v2_co
> >          goto rst_stream;
> >      }
> >  
> > +    if (h2c->new_streams++ >= 2 * h2scf->concurrent_streams) {
> > +        ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
> > +                      "client sent too many streams at once");
> > +
> > +        status = NGX_HTTP_V2_REFUSED_STREAM;
> > +        goto rst_stream;
> > +    }
> > +
> >      if (!h2c->settings_ack
> >          && !(h2c->state.flags & NGX_HTTP_V2_END_STREAM_FLAG)
> >          && h2scf->preread_size < NGX_HTTP_V2_DEFAULT_WINDOW)
> > @@ -1349,6 +1358,12 @@ ngx_http_v2_state_headers(ngx_http_v2_co
> >  
> >  rst_stream:
> >  
> > +    if (h2c->refused_streams++ > ngx_max(h2scf->concurrent_streams, 100)) {
> > +        ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
> > +                      "client sent too many refused streams");
> > +        return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_NO_ERROR);
> > +    }
> > +
> >      if (ngx_http_v2_send_rst_stream(h2c, h2c->state.sid, status) != NGX_OK) {
> >          return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_INTERNAL_ERROR);
> >      }
> > diff --git a/src/http/v2/ngx_http_v2.h b/src/http/v2/ngx_http_v2.h
> > --- a/src/http/v2/ngx_http_v2.h
> > +++ b/src/http/v2/ngx_http_v2.h
> > @@ -131,6 +131,8 @@ struct ngx_http_v2_connection_s {
> >      ngx_uint_t                       processing;
> >      ngx_uint_t                       frames;
> >      ngx_uint_t                       idle;
> > +    ngx_uint_t                       new_streams;
> > +    ngx_uint_t                       refused_streams;
> >      ngx_uint_t                       priority_limit;
> >  
> >      size_t                           send_window;
> 
> --
> Maxim Dounin
> http://mdounin.ru/

-- 
Maxim Dounin
http://mdounin.ru/

