[PATCH] HTTP/2: make http2 server support http1

Haitao Lv i at lvht.net
Mon Mar 5 15:52:57 UTC 2018


Hello wbr,

Thank you for reviewing this patch.

> On Mar 5, 2018, at 23:06, Valentin V. Bartenev <vbart at nginx.com> wrote:
> 
> On Monday 05 March 2018 11:11:08 Haitao Lv wrote:
> [..]
>>> @@ -145,10 +152,6 @@ ngx_http_parse_request_line(ngx_http_request_t *r, ngx_buf_t *b)
>>>        case sw_start:
>>>            r->request_start = p;
>>> 
>>> -            if (ch == CR || ch == LF) {
>>> -                break;
>>> -            }
>>> -
>> 
>> I think Nginx should not allow any leading \r or \n. An HTTP client should never send these chars
>> before the request line. Supporting this feature makes the buffer management harder.
> 
> Support for this feature is required.  See the notes from Ruslan.
> Some clients send additional empty line between keepalive requests.

Yes, that was my mistake; I did not read the RFC carefully. Sorry again. In any case, removing these lines isn't actually needed for the patch.

> 
> [..]
>>> +#if (NGX_HTTP_V2)
>>> +        case sw_h2_preface:
>>> +
>>> +            if (ch == *n++) {
>> 
>> I introduce a new sw_h2_preface state. In this state, the parser checks the h2 preface char by char.
>> 
>> The parser checks at most the 20 chars of "* HTTP/2.0\r\n\r\nSM\r\n\r\n", so the buffer will not be filled up by it.
>> 
> 
> The recv() operation gets buffer size as an argument.
> 
> Thus, the buffer can be fully filled with all the data copied from
> a socket buffer.  HTTP/2 clients usually don't send preface alone.
> They can send other data with the preface. 

Yes, the buffer could be completely filled. An HTTP/2 client will send more frames after the h2 preface; this is
why I introduced ngx_http_v2_init_after_preface(ngx_event_t *rev, ngx_buf_t *buf).

All the remaining data in the buffer is copied into h2mcf->recv_buffer and then processed by the HTTP/2 ngx_http_v2_state_head handler.

So this does not break the HTTP/2 handshake.
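
Just to illustrate the idea, here is a small standalone sketch (this is not
the patch code; the names h2_preface_feed and matched are made up for the
example) of the two parts involved: matching the preface char by char, the
way the sw_h2_preface state does, and keeping whatever bytes follow it so
they can be handed to the frame parser instead of being dropped:

#include <stdio.h>
#include <sys/types.h>

/* The 24-byte HTTP/2 connection preface (RFC 7540, section 3.5). */
static const char preface[] = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n";

/*
 * Feed a chunk of received bytes into an incremental preface matcher.
 * *matched is how many preface bytes were already seen on earlier calls.
 * Returns -1 on mismatch; otherwise the number of bytes of this chunk
 * consumed by the preface.  Once *matched reaches sizeof(preface) - 1,
 * the bytes buf[consumed .. len) are leftover data that belongs to the
 * HTTP/2 frame parser and must not be dropped.
 */
static ssize_t
h2_preface_feed(const char *buf, size_t len, size_t *matched)
{
    size_t  i = 0;

    while (i < len && *matched < sizeof(preface) - 1) {
        if (buf[i] != preface[*matched]) {
            return -1;
        }

        i++;
        (*matched)++;
    }

    return (ssize_t) i;
}

int
main(void)
{
    /* preface plus 4 extra bytes, e.g. the start of a SETTINGS frame */
    const char  data[] = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n\x00\x00\x12\x04";
    size_t      matched = 0;
    ssize_t     used;

    used = h2_preface_feed(data, sizeof(data) - 1, &matched);

    if (used >= 0 && matched == sizeof(preface) - 1) {
        printf("preface ok, %zu leftover bytes\n",
               sizeof(data) - 1 - (size_t) used);
    }

    return 0;
}

The patch does the equivalent carry-over from r->header_in into
h2mcf->recv_buffer.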

However, if the data we received is longer than h2mcf->recv_buffer_size, the current patch will yield a
bad request response.

A bad case is when client_header_buffer_size (default 1k) is set larger than http2_recv_buffer_size
(default 256k): the header_in buffer may be completely filled with data that cannot all be copied into
h2mcf->recv_buffer, which makes the HTTP/2 handshake fail. A simple way to work around this is to set
http2_recv_buffer_size equal to or larger than client_header_buffer_size, but that is a hack.
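
Just as an illustration of that workaround (the 512k values below are made
up for the example; only the defaults mentioned above are real), raising
client_header_buffer_size would also require raising http2_recv_buffer_size
at least as much:

# if client_header_buffer_size is raised above http2_recv_buffer_size,
# the handshake hand-off described above can fail, so keep
# http2_recv_buffer_size at least as large
client_header_buffer_size  512k;
http2_recv_buffer_size     512k;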

> 
> [..]
>>> +#if (NGX_HTTP_V2)
>>> +static void
>>> +ngx_http_process_h2_preface(ngx_event_t *rev)
>>> +{
>>> +    size_t                      len;
>>> +    ngx_connection_t           *c;
>>> +    ngx_http_request_t         *r;
>>> +    ngx_http_connection_t      *hc;
>>> +    ngx_http_v2_main_conf_t    *h2mcf;
>>> +
>>> +    c = rev->data;
>>> +    r = c->data;
>>> +    hc = r->http_connection;
>>> +
>>> +    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
>>> +            "http process h2 preface");
>>> +
>>> +    r->header_in->pos = r->header_in->start + 24;
>>> +
>>> +    len = r->header_in->last - r->header_in->pos;
>>> +
>>> +    h2mcf = ngx_http_get_module_main_conf(hc->conf_ctx, ngx_http_v2_module);
>>> +
>>> +    if (len <= h2mcf->recv_buffer_size) {
>> 
>> We check the recv buffer size before calling ngx_http_v2_init_after_preface.
>> 
> [..]
> 
> See above.  It's a bad idea to decline request with a "client sent
> invalid h2 preface" message in error log just because client has
> sent more HTTP/2 frames right after the preface.

On this code path, a valid HTTP/2 handshake never generates an error log entry.

> 
> Overall, the patch looks like a hack and introduces too much
> complexity for this feature.  While I understand the reasoning,
> the proposed implementation cannot be accepted.

Could you clarify whether it is this feature that is not accepted, or only this particular patch?

If the feature itself is not wanted, I will drop this thread.

If it is only this patch that looks like a hack, would you offer some advice on how to write the
code more cleanly?

Thank you.

> 
> Protocol detection before starting processing HTTP/1 or HTTP/2
> could be a better way to go.
> 
>  wbr, Valentin V. Bartenev
> 
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel