[PATCH] HTTP/2: make http2 server support http1

Haitao Lv i at lvht.net
Mon Mar 5 03:03:55 UTC 2018

> On Mar 4, 2018, at 21:00, Valentin V. Bartenev <vbart at nginx.com> wrote:
> On Sunday, 4 March 2018 05:53:36 MSK Haitao Lv wrote:
>> Hi, wbr,
>> Thanks for your review. I don't know why I can't receive your email.
>> Let me reply directly.
>>> It doesn't look like a useful feature. 
>>> Could you please explain the use cases?
>> The current implementation supports both HTTP/1 and HTTP/2 over the
>> same TLS listen port via the ALPN extension.
>> Nginx also supports HTTP/2 over a plain TCP port, which is
>> convenient for development and internal production environments.
>> However, the current implementation cannot serve HTTP/1.1 and HTTP/2
>> simultaneously on such a port. We have no choice but to listen on two
>> different ports. As a result, a single service has two ports, and each
>> client has to choose the appropriate one.
>> Besides, HTTP/2 is efficient, but HTTP/1 is simple. So I see no
>> chance that HTTP/2 will totally replace HTTP/1 in internal production
>> environments; internal APIs will be called over both HTTP/1 and HTTP/2.
>> So I think supporting HTTP/1 and HTTP/2 on the same plain TCP port would
>> simplify both production and development environments.
> HTTP/2 was designed to save connection time between a client and a web
> server, which can be costly with significant RTT and a full TLS handshake.
> It also helps to overcome the concurrency limitation in browsers, which
> are usually limited to 6 connections per host.
> All these problems usually don't exist in internal environments.
> HTTP/2 isn't efficient.  In fact, it has more overhead and may require
> more TCP packets to transfer the same amount of data.  Moreover, multiple
> TCP connections are usually faster than one, simply because together they
> have a bigger initial window and suffer less from packet loss.
> Please, don't be fooled by the aggressive marketing around HTTP/2.  Most
> such articles are written by people who barely understand how networks work.
> HTTP/2 is neither a better version nor the "next" version of the HTTP
> protocol (the name is misleading).  It's another protocol, designed to
> solve some specific cases while introducing many other problems.
> I don't recommend using HTTP/2 in cases other than TLS connections between
> browsers and web servers on the public network.  Even for that purpose,
> there are cases where HTTP/2 isn't recommended, e.g. unreliable networks
> with packet loss (such as mobile networks).
> You can easily find on YouTube a talk by Hooman Beheshti called
> "HTTP/2: what no one is telling you" with some interesting research.
> nginx supports HTTP/2 over plain TCP for a number of reasons:
> 1. for easy testing and debugging of the protocol implementation in nginx;
> 2. it required almost no additional code to implement;
> 3. there's a use case where TLS termination is done by a simple TCP proxy
>    and the traffic is then routed to nginx's HTTP/1 or HTTP/2 ports
>    according to ALPN.
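
The third use case can be sketched with nginx's stream module and
ssl_preread, which routes on the ALPN list offered in the TLS ClientHello
without terminating TLS itself (it needs a recent nginx with ALPN preread
support; the addresses and ports below are assumptions for illustration):

```nginx
stream {
    # Route by the ALPN protocol list offered in the TLS ClientHello.
    map $ssl_preread_alpn_protocols $backend {
        ~\bh2\b    127.0.0.1:8443;   # client offers HTTP/2
        default    127.0.0.1:8080;   # plain HTTP/1 fallback
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```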

My actual use case is simple. My company is going to introduce gRPC,
which depends on HTTP/2 and its trailer headers. We have a lot of legacy
PHP code that exposes APIs over HTTP/1, so we want to build a gRPC server
in PHP.

A simple solution is to send the gRPC payload over HTTP/1 and let nginx
translate it to HTTP/2. The grpc-status and grpc-message fields, which must
be carried in HTTP/2 trailer headers, could then be carried as normal
HTTP/1 headers.

This way, we can support both HTTP/1 APIs and unary-call gRPC.
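
To make the intended mapping concrete, here is a minimal sketch (not nginx
code; the helper name is hypothetical) of the classification a gateway would
need: which HTTP/1 response headers have to be re-emitted as HTTP/2 trailers
for a gRPC client.

```c
#include <string.h>

/* gRPC carries its status in HTTP/2 trailers (grpc-status,
 * grpc-message).  An HTTP/1 backend can only send them as ordinary
 * headers, so a gateway has to move exactly these fields into the
 * trailer section when re-emitting the response over HTTP/2. */
int is_grpc_trailer_field(const char *name)
{
    return strcmp(name, "grpc-status") == 0
        || strcmp(name, "grpc-message") == 0;
}
```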

>>> What if the received data is bigger than h2mcf->recv_buffer?
>> Yes, that is a problem. However, it can be fixed easily.
>> As we know, the HTTP/2 preface is "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n".
>> We could modify ngx_http_parse_request_line so that when it sees a PRI
>> method, it requires a single "*" as the URI. If we get anything other
>> than "*", we just return an invalid-request response. That way we can
>> never get a "PRI <too-long-uri> HTTP/2.0" request line, the buffer will
>> not be exhausted, and ngx_http_alloc_large_header_buffer will not be
>> called during the handshake. After ngx_http_parse_request_line, we are
>> guaranteed a PRI request, and the buffer size is client_header_buffer_size.
> As far as I can see, your reasoning is based on the assumption that
> client_header_buffer_size is always smaller than http2_recv_buffer_size.
> That simply isn't true, as both can easily be configured by users.

Let me explain in the new patch.
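
As a sketch of the idea (a hypothetical helper, not the actual patch code):
the HTTP/2 connection preface is a fixed 24-byte string, so it can be
recognized incrementally without any large buffer.

```c
#include <string.h>

#define H2_PREFACE     "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"
#define H2_PREFACE_LEN (sizeof(H2_PREFACE) - 1)   /* 24 bytes */

/* Returns 1 if buf starts with the full HTTP/2 preface, 0 if it can
 * never match (treat as an HTTP/1 request line), and -1 if more bytes
 * are needed to decide. */
int h2_preface_check(const char *buf, size_t len)
{
    size_t n = len < H2_PREFACE_LEN ? len : H2_PREFACE_LEN;

    if (memcmp(buf, H2_PREFACE, n) != 0) {
        return 0;                              /* mismatch: not HTTP/2 */
    }

    return len >= H2_PREFACE_LEN ? 1 : -1;     /* match, or need more data */
}
```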
>  wbr, Valentin V. Bartenev
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
