Segfault when interpreting cached X-Accel-Redirect response
Jan Prachař
jan.prachar at gmail.com
Mon Feb 5 17:01:54 UTC 2024
Hello,
thank you for your responses.
On Sat, 2024-02-03 at 04:25 +0300, Maxim Dounin wrote:
> Hello!
>
> On Fri, Feb 02, 2024 at 01:47:51PM +0100, Jan Prachař wrote:
>
> > On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote:
> > > Hello,
> > >
> > > Also, I believe that the core of the problem is because of the
> > > ngx_http_finalize_request(r, NGX_DONE); call in the
> > > ngx_http_upstream_process_headers function. This call is needed when
> > > doing an internal redirect after the real upstream request (to close the
> > > upstream request), but when serving from the cache, there is no upstream
> > > request to close and this call causes ngx_http_set_lingering_close to be
> > > called from the ngx_http_finalize_connection with no active request on
> > > the connection, which leads to the segfault.
> >
> > Hello,
> >
> > I am Jiří's colleague, and so I have taken a closer look at the problem. Another
> > indication of the issue is the alert in the error log for non-keepalive connections,
> > stating "http request count is zero while closing request."
> >
> > Upon reviewing the nginx source code, I discovered that the function
> > ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke
> > ngx_http_finalize_request(). However, when there is nothing to clean up (u->cleanup ==
> > NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below.
> >
> > Best, Jan Prachař
> >
> > # User Jan Prachař <jan.prachar at gmail.com>
> > # Date 1706877176 -3600
> > # Fri Feb 02 13:32:56 2024 +0100
> > # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2
> > # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01
> > Upstream: Fix "request count is zero" when processing X-Accel-Redirect
> >
> > ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call
> > ngx_http_finalize_request().
> >
> > diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c
> > --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100
> > +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100
> > @@ -4340,6 +4340,11 @@
> >
> > if (u->cleanup == NULL) {
> > /* the request was already finalized */
> > +
> > + if (rc == NGX_DECLINED) {
> > + return;
> > + }
> > +
> > ngx_http_finalize_request(r, NGX_DONE);
> > return;
> > }
>
> I somewhat agree: the approach suggested by Jiří certainly looks
> incorrect. The ngx_http_upstream_cache_send() function, which
> calls ngx_http_upstream_process_headers() with r->cached set, can
> be used in two contexts: before the cleanup handler is installed
> (i.e., when sending a cached response during upstream request
> initialization) and after it is installed (i.e., when sending a
> stale cached response on upstream errors). In the latter case
> skipping finalization would mean a socket leak.
>
> Still, checking for NGX_DECLINED explicitly also looks wrong, for
> a number of reasons.
>
> First, the specific code path isn't just for "nothing to clean
> up", it's for the very specific case when the request was already
> finalized due to filter finalization, see 5994:5abf5af257a7. This
> code path is not expected to be triggered when the cleanup handler
> isn't installed yet - before the cleanup handler is installed,
> upstream code is expected to call ngx_http_finalize_request()
> directly instead. And it would be semantically wrong to check for
> NGX_DECLINED: if it's here, it means something has already gone wrong.
>
> I think the generic issue here is that
> ngx_http_upstream_process_headers(), which is normally used for
> upstream responses and calls ngx_http_upstream_finalize_request(),
> is also used for cached responses. Still, it assumes it is used
> for an upstream response, and calls
> ngx_http_upstream_finalize_request().
>
> As can be seen from the rest of the
> ngx_http_upstream_process_headers() code, apart from the issue
> with X-Accel-Redirect, it can also call
> ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR)
> when hh->copy_handler() or ngx_http_upstream_copy_header_line()
> fails. This will similarly end up in
> ngx_http_finalize_request(NGX_DONE) since there is no u->cleanup,
> leading to a request hang. And it would be certainly wrong to
> check for NGX_HTTP_INTERNAL_SERVER_ERROR similarly to NGX_DECLINED
> in your patch, because it can theoretically happen after filter
> finalization.
>
> A proper solution would probably require re-thinking
> ngx_http_upstream_process_headers() interface.
>
> Some preliminary code below: it disables X-Accel-Redirect
> processing altogether if ngx_http_upstream_process_headers() is
> called when returning a cached response (this essentially means
> that "proxy_ignore_headers X-Accel-Expires" is preserved in the
> cache file, which seems to be the right thing to do as we don't
> save responses with X-Accel-Redirect to cache unless it is
> ignored), and returns NGX_ERROR in other places to trigger
> appropriate error handling instead of calling
> ngx_http_upstream_finalize_request() directly (this no longer
> tries to return 500 Internal Server Error response though, as
> doing so might be unsafe after copying some of the cached headers
> to the response).
>
> Please take a look if it works for you.
The provided patch works as expected, with no observed issues.
Considering that the effect of proxy_ignore_headers for the caching headers is
already preserved with the cached file, it seems reasonable to extend the same
behavior to X-Accel-Redirect.
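To illustrate the scenario (a hypothetical configuration, not from this thread;
the zone name, paths and ports are made up): a response carrying
X-Accel-Redirect only ends up in the cache when the header is ignored at
caching time, e.g.

    # Hypothetical setup: the response with X-Accel-Redirect is cached only
    # because the header is ignored here; with the patch, a later cache hit
    # keeps ignoring the stored header instead of trying to interpret it.
    proxy_cache_path /var/cache/nginx keys_zone=xar_demo:10m;

    server {
        listen 8080;

        location / {
            proxy_pass http://127.0.0.1:9000;
            proxy_cache xar_demo;
            proxy_cache_valid 200 10m;
            proxy_ignore_headers X-Accel-Redirect;
        }
    }

If the configuration is later changed so that X-Accel-Redirect is no longer
ignored, the already cached entry still contains the header, and interpreting
it on a cache hit is exactly the case that used to crash.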
From my perspective, the updated code in ngx_http_upstream_process_headers() is
a bit confusing. The function can return NGX_DONE, but this return code is
handled in only one of the places where ngx_http_upstream_process_headers() is
called. If I may suggest, splitting the function might be helpful: redirect
processing would only happen for direct upstream responses, while the rest of
the header processing would always be performed (i.e., also for cached
responses).
Additionally, I believe the special handling of NGX_DECLINED in
ngx_http_upstream_finalize_request() can be removed. The updated patch is
provided below.
diff --git a/nginx/src/http/ngx_http_upstream.c b/nginx/src/http/ngx_http_upstream.c
index f5db65338..13c25721d 100644
--- a/nginx/src/http/ngx_http_upstream.c
+++ b/nginx/src/http/ngx_http_upstream.c
@@ -53,6 +53,8 @@ static ngx_int_t ngx_http_upstream_test_next(ngx_http_request_t *r,
static ngx_int_t ngx_http_upstream_intercept_errors(ngx_http_request_t *r,
ngx_http_upstream_t *u);
static ngx_int_t ngx_http_upstream_test_connect(ngx_connection_t *c);
+static ngx_int_t ngx_http_upstream_process_redirect(ngx_http_request_t *r,
+ ngx_http_upstream_t *u);
static ngx_int_t ngx_http_upstream_process_headers(ngx_http_request_t *r,
ngx_http_upstream_t *u);
static ngx_int_t ngx_http_upstream_process_trailers(ngx_http_request_t *r,
@@ -588,10 +590,6 @@ ngx_http_upstream_init_request(ngx_http_request_t *r)
if (rc == NGX_OK) {
rc = ngx_http_upstream_cache_send(r, u);
- if (rc == NGX_DONE) {
- return;
- }
-
if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) {
rc = NGX_DECLINED;
r->cached = 0;
@@ -1088,7 +1086,7 @@ ngx_http_upstream_cache_send(ngx_http_request_t *r, ngx_http_upstream_t *u)
if (rc == NGX_OK) {
if (ngx_http_upstream_process_headers(r, u) != NGX_OK) {
- return NGX_DONE;
+ return NGX_ERROR;
}
return ngx_http_cache_send(r);
@@ -2516,7 +2514,19 @@ ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u)
}
}
+ rc = ngx_http_upstream_process_redirect(r, u);
+
+ if (rc == NGX_DONE) {
+ return;
+ }
+
+ if (rc == NGX_ERROR) {
+ ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
+ return;
+ }
+
if (ngx_http_upstream_process_headers(r, u) != NGX_OK) {
+ ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
return;
}
@@ -2576,10 +2586,6 @@ ngx_http_upstream_test_next(ngx_http_request_t *r, ngx_http_upstream_t *u)
u->cache_status = NGX_HTTP_CACHE_STALE;
rc = ngx_http_upstream_cache_send(r, u);
- if (rc == NGX_DONE) {
- return NGX_OK;
- }
-
if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) {
rc = NGX_HTTP_INTERNAL_SERVER_ERROR;
}
@@ -2621,10 +2627,6 @@ ngx_http_upstream_test_next(ngx_http_request_t *r, ngx_http_upstream_t *u)
u->cache_status = NGX_HTTP_CACHE_REVALIDATED;
rc = ngx_http_upstream_cache_send(r, u);
- if (rc == NGX_DONE) {
- return NGX_OK;
- }
-
if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) {
rc = NGX_HTTP_INTERNAL_SERVER_ERROR;
}
@@ -2811,7 +2813,7 @@ ngx_http_upstream_test_connect(ngx_connection_t *c)
static ngx_int_t
-ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u)
+ngx_http_upstream_process_redirect(ngx_http_request_t *r, ngx_http_upstream_t *u)
{
ngx_str_t uri, args;
ngx_uint_t i, flags;
@@ -2822,15 +2824,9 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u)
umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module);
- if (u->headers_in.no_cache || u->headers_in.expired) {
- u->cacheable = 0;
- }
-
if (u->headers_in.x_accel_redirect
&& !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT))
{
- ngx_http_upstream_finalize_request(r, u, NGX_DECLINED);
-
part = &u->headers_in.headers.part;
h = part->elts;
@@ -2855,13 +2851,15 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u)
if (hh && hh->redirect) {
if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) {
- ngx_http_finalize_request(r,
- NGX_HTTP_INTERNAL_SERVER_ERROR);
- return NGX_DONE;
+ return NGX_ERROR;
}
}
}
+ r->count++;
+
+ ngx_http_upstream_finalize_request(r, u, NGX_DONE);
+
uri = u->headers_in.x_accel_redirect->value;
if (uri.data[0] == '@') {
@@ -2888,6 +2886,25 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u)
return NGX_DONE;
}
+ return NGX_OK;
+}
+
+
+static ngx_int_t
+ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u)
+{
+ ngx_uint_t i;
+ ngx_list_part_t *part;
+ ngx_table_elt_t *h;
+ ngx_http_upstream_header_t *hh;
+ ngx_http_upstream_main_conf_t *umcf;
+
+ umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module);
+
+ if (u->headers_in.no_cache || u->headers_in.expired) {
+ u->cacheable = 0;
+ }
+
part = &u->headers_in.headers.part;
h = part->elts;
@@ -2918,18 +2935,14 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u)
if (hh) {
if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) {
- ngx_http_upstream_finalize_request(r, u,
- NGX_HTTP_INTERNAL_SERVER_ERROR);
- return NGX_DONE;
+ return NGX_ERROR;
}
continue;
}
if (ngx_http_upstream_copy_header_line(r, &h[i], 0) != NGX_OK) {
- ngx_http_upstream_finalize_request(r, u,
- NGX_HTTP_INTERNAL_SERVER_ERROR);
- return NGX_DONE;
+ return NGX_ERROR;
}
}
@@ -4429,10 +4442,6 @@ ngx_http_upstream_next(ngx_http_request_t *r, ngx_http_upstream_t *u,
u->cache_status = NGX_HTTP_CACHE_STALE;
rc = ngx_http_upstream_cache_send(r, u);
- if (rc == NGX_DONE) {
- return;
- }
-
if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) {
rc = NGX_HTTP_INTERNAL_SERVER_ERROR;
}
@@ -4604,10 +4613,6 @@ ngx_http_upstream_finalize_request(ngx_http_request_t *r,
r->read_event_handler = ngx_http_block_reading;
- if (rc == NGX_DECLINED) {
- return;
- }
-
r->connection->log->action = "sending to client";
if (!u->header_sent