[PATCH 2 of 4] Upstream: fixed usage of closed sockets with filter finalization

Sergey Kandaurov pluknet at nginx.com
Mon Jan 29 13:21:43 UTC 2024


> On 29 Jan 2024, at 10:43, Maxim Dounin <mdounin at mdounin.ru> wrote:
> 
> Hello!
> 
> On Fri, Jan 26, 2024 at 04:26:00PM +0400, Sergey Kandaurov wrote:
> 
>>> On 27 Nov 2023, at 06:50, Maxim Dounin <mdounin at mdounin.ru> wrote:
>>> 
>>> # HG changeset patch
>>> # User Maxim Dounin <mdounin at mdounin.ru>
>>> # Date 1701049758 -10800
>>> #      Mon Nov 27 04:49:18 2023 +0300
>>> # Node ID faf0b9defc76b8683af466f8a950c2c241382970
>>> # Parent  a5e39e9d1f4c84dcbe6a2f9e079372a3d63aef0b
>>> Upstream: fixed usage of closed sockets with filter finalization.
>>> 
>>> When filter finalization is triggered while working with an upstream server,
>>> and error_page redirects request processing to some simple handler,
>>> ngx_http_finalize_request() triggers request termination when the response
>>> is sent.  In particular, via the upstream cleanup handler, nginx will close
>>> the upstream connection and the corresponding socket.
>>> 
>>> Still, this can happen while ngx_event_pipe() is on the stack.  The code
>>> will set p->downstream_error due to NGX_ERROR returned from the output
>>> filter chain by filter finalization, but otherwise the error will be
>>> ignored until control returns to ngx_http_upstream_process_request().
>>> And the event pipe might try reading from the (already closed) socket,
>>> resulting in "readv() failed (9: Bad file descriptor) while reading
>>> upstream" errors (or even segfaults with SSL).
>>> 
>>> Such errors were seen with the following configuration:
>>> 
>>>   location /t2 {
>>>       proxy_pass http://127.0.0.1:8080/big;
>>> 
>>>       image_filter_buffer 10m;
>>>       image_filter   resize  150 100;
>>>       error_page     415   = /empty;
>>>   }
>>> 
>>>   location /empty {
>>>       return 204;
>>>   }
>>> 
>>>   location /big {
>>>       # big enough static file
>>>   }
>>> 
>>> The fix is to set p->upstream_error in ngx_http_upstream_finalize_request(),
>>> so the existing checks in ngx_event_pipe_read_upstream() will prevent
>>> further reading from the closed upstream connection.
>>> 
>>> Similarly, p->upstream_error is now checked when handling events at
>>> ngx_event_pipe() exit, as checking p->upstream->fd is not enough if
>>> keepalive upstream connections are being used and the connection was
>>> saved to the cache during request termination.
>>> 
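The v1 hunks are not quoted in this thread; presumably the change was
roughly along these lines (my sketch for context, not the actual patch):

    /* ngx_http_upstream_finalize_request(): after the upstream socket
     * is closed, tell the event pipe not to read from it any more */

    if (u->pipe) {
        u->pipe->upstream_error = 1;
    }

    /* ngx_event_pipe() exit: skip upstream read event handling once
     * upstream_error is set */

    if (!p->upstream_error
        && p->upstream->fd != (ngx_socket_t) -1)
    {
        /* handle the read event, timers, etc. (unchanged) */
    }
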
>> 
>> Setting p->upstream_error in ngx_http_upstream_finalize_request()
>> may look suspicious, because it is normally set on connection errors
>> such as an upstream timeout or a recv error, or, as a recently
>> introduced exception in the fastcgi module, also when a FastCGI record
>> ends prematurely, before all the expected content has been received.
>> But technically I think this is quite correct, because we no longer
>> want to receive further data, and also (as you mention in the commit
>> log) the flag is set in the same place where the upstream connection
>> socket is closed in ngx_http_upstream_finalize_request().
>> So I think it should be fine.
> 
> The biggest concern I personally see here is with the added 
> p->upstream_error check at ngx_event_pipe() exit.  If there is a 
> real upstream error, such as when the connection is reset by the 
> upstream server, and if we want the pipe to be active for some 
> time (for example, if we want it to continue writing to the 
> downstream connection), there will be no ngx_handle_read_event() 
> call.  For level-triggered event methods this means that the read 
> event for the upstream connection will be generated again and 
> again.
> 
> This shouldn't be a problem for existing ngx_event_pipe() uses 
> though, as p->upstream_error anyway triggers 
> ngx_http_upstream_finalize_request().
> 
> Still, we can consider introducing a separate flag, such as 
> p->upstream_closed, or clearing p->upstream, and checking these in 
> ngx_event_pipe() instead.  This would probably be a cleaner 
> solution.
> 
> Updated patch below:
> 
> # HG changeset patch
> # User Maxim Dounin <mdounin at mdounin.ru>
> # Date 1706510064 -10800
> #      Mon Jan 29 09:34:24 2024 +0300
> # Node ID 4a91a03dcd8df0652884ed6ebe9f7437ce82fd26
> # Parent  7b630f6487068f7cc9dd83762fb4ea39f2f340e9
> Upstream: fixed usage of closed sockets with filter finalization.
> 
> When filter finalization is triggered while working with an upstream server,
> and error_page redirects request processing to some simple handler,
> ngx_http_finalize_request() triggers request termination when the response
> is sent.  In particular, via the upstream cleanup handler, nginx will close
> the upstream connection and the corresponding socket.
> 
> Still, this can happen while ngx_event_pipe() is on the stack.  The code
> will set p->downstream_error due to NGX_ERROR returned from the output
> filter chain by filter finalization, but otherwise the error will be
> ignored until control returns to ngx_http_upstream_process_request().
> And the event pipe might try reading from the (already closed) socket,
> resulting in "readv() failed (9: Bad file descriptor) while reading
> upstream" errors (or even segfaults with SSL).
> 
> Such errors were seen with the following configuration:
> 
>    location /t2 {
>        proxy_pass http://127.0.0.1:8080/big;
> 
>        image_filter_buffer 10m;
>        image_filter   resize  150 100;
>        error_page     415   = /empty;
>    }
> 
>    location /empty {
>        return 204;
>    }
> 
>    location /big {
>        # big enough static file
>    }
> 
> The fix is to clear p->upstream in ngx_http_upstream_finalize_request(),
> and to ensure that p->upstream is checked in ngx_event_pipe_read_upstream()
> and when handling events at ngx_event_pipe() exit.
> 
> diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c
> --- a/src/event/ngx_event_pipe.c
> +++ b/src/event/ngx_event_pipe.c
> @@ -57,7 +57,9 @@ ngx_event_pipe(ngx_event_pipe_t *p, ngx_
>         do_write = 1;
>     }
> 
> -    if (p->upstream->fd != (ngx_socket_t) -1) {
> +    if (p->upstream
> +        && p->upstream->fd != (ngx_socket_t) -1)
> +    {
>         rev = p->upstream->read;
> 
>         flags = (rev->eof || rev->error) ? NGX_CLOSE_EVENT : 0;
> @@ -108,7 +110,9 @@ ngx_event_pipe_read_upstream(ngx_event_p
>     ngx_msec_t    delay;
>     ngx_chain_t  *chain, *cl, *ln;
> 
> -    if (p->upstream_eof || p->upstream_error || p->upstream_done) {
> +    if (p->upstream_eof || p->upstream_error || p->upstream_done
> +        || p->upstream == NULL)
> +    {
>         return NGX_OK;
>     }
> 
> diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
> --- a/src/http/ngx_http_upstream.c
> +++ b/src/http/ngx_http_upstream.c
> @@ -4561,6 +4561,10 @@ ngx_http_upstream_finalize_request(ngx_h
> 
>     u->peer.connection = NULL;
> 
> +    if (u->pipe) {
> +        u->pipe->upstream = NULL;
> +    }
> +
>     if (u->pipe && u->pipe->temp_file) {
>         ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
>                        "http upstream temp fd: %d",
> 

Indeed, this fix looks more isolated; I like it.
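
For the record, the keepalive subtlety from the v1 commit log (why checking
p->upstream->fd alone is not enough) is, as I read the code, roughly the
following; a simplified annotation, not actual source:

    /* ngx_http_upstream_finalize_request() during request termination */

    u->peer.free(&u->peer, u->peer.data, 0);
        /* with keepalive upstreams this is
         * ngx_http_upstream_free_keepalive_peer(), which may save the
         * connection to the cache and set pc->connection to NULL,
         * leaving the socket open */

    u->peer.connection = NULL;

    /* p->upstream still points to the saved connection, so its fd is
     * not -1, and the old check at ngx_event_pipe() exit would still
     * touch read events and timers of a connection the request no
     * longer owns; clearing u->pipe->upstream, as in the updated
     * patch, makes the pipe skip it entirely */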

-- 
Sergey Kandaurov

