[nginx] Upstream: fixed usage of closed sockets with filter finalization.
Sergey Kandaurov
pluknet at nginx.com
Tue Jan 30 15:08:26 UTC 2024
details: https://hg.nginx.org/nginx/rev/631ee3c6d38c
branches:
changeset: 9204:631ee3c6d38c
user: Maxim Dounin <mdounin at mdounin.ru>
date: Tue Jan 30 03:20:10 2024 +0300
description:
Upstream: fixed usage of closed sockets with filter finalization.
When filter finalization is triggered while working with an upstream server,
and error_page redirects request processing to some simple handler,
ngx_http_finalize_request() triggers request termination when the response
is sent. In particular, via the upstream cleanup handler, nginx will close
the upstream connection and the corresponding socket.
Still, this can happen with ngx_event_pipe() on the stack. While the code
will set p->downstream_error due to the NGX_ERROR returned from the output
filter chain by filter finalization, the error will otherwise be ignored
until control returns to ngx_http_upstream_process_request(). In the
meantime, the event pipe might try reading from the (already closed) socket,
resulting in "readv() failed (9: Bad file descriptor) while reading upstream"
errors (or even segfaults with SSL).
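The readv() error itself is simply what the kernel reports once a descriptor
has been closed. A minimal standalone sketch of that condition (plain C,
nothing nginx-specific; the socketpair merely stands in for the upstream
socket):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int
    main(void)
    {
        int           fds[2];
        char          buf[16];
        struct iovec  iov;

        /* a connected socket pair stands in for the upstream socket */
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == -1) {
            perror("socketpair");
            return 1;
        }

        /* request termination closes the socket ... */
        close(fds[0]);

        /* ... but a stale copy of the descriptor is still read from */
        iov.iov_base = buf;
        iov.iov_len = sizeof(buf);

        if (readv(fds[0], &iov, 1) == -1) {
            /* prints "readv: Bad file descriptor (9)" */
            printf("readv: %s (%d)\n", strerror(errno), errno);
        }

        close(fds[1]);

        return 0;
    }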
Such errors were seen with the following configuration:
    location /t2 {
        proxy_pass http://127.0.0.1:8080/big;

        image_filter_buffer 10m;
        image_filter resize 150 100;
        error_page 415 = /empty;
    }

    location /empty {
        return 204;
    }

    location /big {
        # big enough static file
    }
The fix is to clear p->upstream in ngx_http_upstream_finalize_request(),
and to ensure that p->upstream is checked in ngx_event_pipe_read_upstream()
and when handling events at ngx_event_pipe() exit.
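In standalone form, the guard pattern amounts to clearing the back reference
on finalization and checking it before use; a rough sketch with hypothetical
stand-in types (not the actual nginx structures):

    #include <stddef.h>
    #include <stdio.h>

    /* hypothetical stand-ins for ngx_connection_t and ngx_event_pipe_t */
    typedef struct {
        int  fd;
    } conn_t;

    typedef struct {
        conn_t  *upstream;
        int      upstream_done;
    } pipe_t;

    static void
    finalize_request(pipe_t *p)
    {
        /* the connection is closed here; dropping the pipe's reference
           keeps code that still has the pipe on stack away from the
           closed socket */
        p->upstream = NULL;
    }

    static void
    pipe_read_upstream(pipe_t *p)
    {
        if (p->upstream_done || p->upstream == NULL) {
            /* nothing to do: upstream already finished or gone */
            return;
        }

        printf("reading from fd %d\n", p->upstream->fd);
    }

    int
    main(void)
    {
        conn_t  c = { 3 };
        pipe_t  p = { &c, 0 };

        pipe_read_upstream(&p);     /* reads from fd 3 */

        finalize_request(&p);
        pipe_read_upstream(&p);     /* safely skipped */

        return 0;
    }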
diffstat:
 src/event/ngx_event_pipe.c   |  8 ++++++--
 src/http/ngx_http_upstream.c |  4 ++++
 2 files changed, 10 insertions(+), 2 deletions(-)
diffs (39 lines):
diff -r 0de20f43db25 -r 631ee3c6d38c src/event/ngx_event_pipe.c
--- a/src/event/ngx_event_pipe.c Tue Jan 30 03:20:05 2024 +0300
+++ b/src/event/ngx_event_pipe.c Tue Jan 30 03:20:10 2024 +0300
@@ -57,7 +57,9 @@ ngx_event_pipe(ngx_event_pipe_t *p, ngx_
         do_write = 1;
     }
 
-    if (p->upstream->fd != (ngx_socket_t) -1) {
+    if (p->upstream
+        && p->upstream->fd != (ngx_socket_t) -1)
+    {
         rev = p->upstream->read;
 
         flags = (rev->eof || rev->error) ? NGX_CLOSE_EVENT : 0;
@@ -108,7 +110,9 @@ ngx_event_pipe_read_upstream(ngx_event_p
     ngx_msec_t    delay;
     ngx_chain_t  *chain, *cl, *ln;
 
-    if (p->upstream_eof || p->upstream_error || p->upstream_done) {
+    if (p->upstream_eof || p->upstream_error || p->upstream_done
+        || p->upstream == NULL)
+    {
         return NGX_OK;
     }
 
diff -r 0de20f43db25 -r 631ee3c6d38c src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c Tue Jan 30 03:20:05 2024 +0300
+++ b/src/http/ngx_http_upstream.c Tue Jan 30 03:20:10 2024 +0300
@@ -4574,6 +4574,10 @@ ngx_http_upstream_finalize_request(ngx_h
 
     u->peer.connection = NULL;
 
+    if (u->pipe) {
+        u->pipe->upstream = NULL;
+    }
+
     if (u->pipe && u->pipe->temp_file) {
         ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                        "http upstream temp fd: %d",