NGINX and RFC7540 (http2) violation

Valentin V. Bartenev vbart at
Fri Dec 29 00:47:44 UTC 2017

On Thursday, 28 December 2017 22:16:29 MSK Lucas Rolff wrote:
> Hi guys,
> I was playing around with nginx and haproxy recently to decide whether to go for nginx or haproxy in a specific environment.
> One of the requirements was http2 support which both pieces of software support (with nginx having supported it for a lot longer than haproxy).
> However, one thing I saw, is that according to the http2 specification section ( ), HTTP2 does not use the Connection header field to indicate connection-specific headers in the protocol.
> If a client sends a Connection: keep-alive header, it effectively violates the specification, which surely should not happen; but in case a client actually does send the Connection header, the server MUST treat the messages containing it as malformed.
> I saw that this is not the case for nginx in any way, which causes it to not follow the actual specification.
> Can I ask why it was decided to implement it to simply “ignore” the fact that a client might violate the spec? And is there any plans to make nginx compliant with the current http2 specification?
> I’ve found that both Firefox and Safari violate this very specific section, and they do so because servers implementing the http2 specification allowed them to, effectively causing the specification not to be followed.
> Thanks in advance.
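
The rule being quoted can be sketched as a simple validation step. The following is an illustrative checker, not nginx code; the names and structure are hypothetical, and it assumes headers arrive as (name, value) pairs:

```python
# Connection-specific header fields that, per the quoted spec text,
# must not appear in an HTTP/2 message.  "te" is a special case:
# it is permitted only with the value "trailers".
CONNECTION_SPECIFIC = {
    "connection", "keep-alive", "proxy-connection",
    "transfer-encoding", "upgrade",
}

def is_malformed(headers):
    """Return True if a header block violates the connection-specific
    header rule, i.e. the message should be treated as malformed."""
    for name, value in headers:
        lname = name.lower()
        if lname in CONNECTION_SPECIFIC:
            return True
        if lname == "te" and value.lower() != "trailers":
            return True
    return False
```

Under a strict reading, is_malformed([("connection", "keep-alive")]) is True, so a fully compliant server would reject such a request outright; the discussion below is about why nginx deliberately does not do that.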

There is a theory in the specification and there is a practice in
the real world.

Strictly following every aspect of a specification is interesting
from an academic point of view, but nginx is made for the real world,
where it's used by hundreds of millions of websites and therefore has
to deal with many different clients and protocol implementations.


These implementations may contain various bugs, and some clients are
impossible to fix (some are unmaintainable hardware boxes or
Android phones that have already reached EOL, for example).

One of the most important aspects of a server at such scale is
interoperability.  It has to work reliably with any client in every
environment.  If it doesn't, website owners will lose audience, will
lose money, will blame us for that and eventually switch to something else.

For that purpose sometimes we even have to make ugly hacks in nginx.
See for example:

We immediately started receiving bug reports about this problem, and
it took about a year for the Chrome developers to release the fix.
And more time is needed for everybody to upgrade to a fixed version.

You can't force someone to buy a new phone, or to update or fix their
software, just for the sake of full compliance with a
protocol specification.  It doesn't work that way.

Moreover, some aspects of specifications are just impossible to
follow without security or performance risks, or without turning
your code into an unmaintainable mess.

A fully compliant implementation is usually something that doesn't
exist or simply isn't used.

In my personal opinion, the HTTP/2 specification (and the protocol
itself) is a bad example.  Some aspects of the protocol are pure
overengineering, some of them are ugly hacks, some of them are just
complexity without any benefit, and some of them are vectors of DoS attacks.

I also suggest reading an article written by the author of Varnish:

  wbr, Valentin V. Bartenev
