From xeioex at nginx.com Thu Feb 1 04:08:27 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 01 Feb 2024 04:08:27 +0000 Subject: [njs] HTTP: fixed stub_status statistic when js_periodic is enabled. Message-ID: details: https://hg.nginx.org/njs/rev/673d78618fc9 branches: changeset: 2279:673d78618fc9 user: Dmitry Volyntsev date: Wed Jan 31 17:06:58 2024 -0800 description: HTTP: fixed stub_status statistic when js_periodic is enabled. Previously, when js_periodic is enabled the Reading statistic was growing each time the js_periodic handler was called. The issue was introduced in f1bd0b1db065 (0.8.1). This fixes #692 issue on Github. diffstat: nginx/ngx_http_js_module.c | 16 +++++----------- 1 files changed, 5 insertions(+), 11 deletions(-) diffs (39 lines): diff -r fca50ba4db9d -r 673d78618fc9 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Mon Jan 29 17:16:08 2024 -0800 +++ b/nginx/ngx_http_js_module.c Wed Jan 31 17:06:58 2024 -0800 @@ -4323,30 +4323,24 @@ ngx_http_js_periodic_finalize(ngx_http_r static void ngx_http_js_periodic_destroy(ngx_http_request_t *r, ngx_js_periodic_t *periodic) { - ngx_connection_t *c; - ngx_http_cleanup_t *cln; + ngx_connection_t *c; c = r->connection; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, - "http js periodic destroy: \"%V\"", - &periodic->method); + "http js periodic destroy: \"%V\"", &periodic->method); periodic->connection = NULL; - for (cln = r->cleanup; cln; cln = cln->next) { - if (cln->handler) { - cln->handler(cln->data); - } - } + r->logged = 1; + + ngx_http_free_request(r, NGX_OK); ngx_free_connection(c); c->fd = (ngx_socket_t) -1; c->pool = NULL; c->destroyed = 1; - - ngx_destroy_pool(r->pool); } From jiri.setnicka at cdn77.com Fri Feb 2 11:48:19 2024 From: jiri.setnicka at cdn77.com (=?UTF-8?B?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 2 Feb 2024 12:48:19 +0100 Subject: Segfault when interpreting cached X-Accel-Redirect response Message-ID: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> Hello, I came across a situation where using X-Accel-Redirect could lead to a segfault (tested on the latest version, hg changeset 9207:73eb75bee30f, but I believe this has been there for a long time). It occurs only if the response with the X-Accel-Redirect header is first cached (when using proxy_ignore_headers to not interpret this header) and this cached response is later interpreted (after removing X-Accel-Redirect from the proxy_ignore_headers directive). I prepared a simple testcase to reproduce it: use the attached nginx.conf file and run this curl command (the first request caches the response, the second one tries to interpret it):    curl localhost:1080 localhost:1081 Also, I believe that the core of the problem is because of the ngx_http_finalize_request(r, NGX_DONE); call in the ngx_http_upstream_process_headers function. This call is needed when doing an internal redirect after the real upstream request (to close the upstream request), but when serving from the cache, there is no upstream request to close and this call causes ngx_http_set_lingering_close to be called from the ngx_http_finalize_connection with no active request on the connection yielding to the segfault. See the attached patch. Tested on our side and the fix works; it also passes the nginx-tests, but I am not entirely sure that I am not forgetting some cases. Thank you for your review.
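For completeness, the reproduction spelled out step by step looks roughly like this (the nginx.conf path is illustrative; both server blocks in the attached configuration share the same cache zone and the default cache key, so the second request is served from the entry cached by the first one):

    # start nginx with the attached configuration (it uses "daemon off;")
    nginx -c /path/to/nginx.conf &

    # 1st request: the :1080 server ignores X-Accel-Redirect, so the upstream
    # response is stored in the cache together with the header
    curl -v localhost:1080/

    # 2nd request: the :1081 server does not ignore the header, serves the
    # cached entry and tries to interpret X-Accel-Redirect -> the segfault
    curl -v localhost:1081/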
Sincerely Jiří Setnička -------------- next part -------------- daemon off; events { worker_connections 768; } http { proxy_cache_path /usr/local/nginx/cache levels=1:2 keys_zone=cache:64m; proxy_cache cache; proxy_buffering on; proxy_cache_valid 200 1d; server { listen 1080; location / { proxy_ignore_headers X-Accel-Redirect; proxy_pass http://localhost:1082; } } server { listen 1081; location / { proxy_pass http://localhost:1082; } } server { listen 1082; location / { add_header X-Accel-Redirect "/redirected"; return 200 "ORIG"; } location /redirected { return 200 "REDIRECTED"; } } } -------------- next part -------------- A non-text attachment was scrubbed... Name: x-accel-redirect-fix.patch Type: text/x-patch Size: 1841 bytes Desc: not available URL: From jan.prachar at gmail.com Fri Feb 2 12:47:51 2024 From: jan.prachar at gmail.com (Jan =?UTF-8?Q?Pracha=C5=99?=) Date: Fri, 02 Feb 2024 13:47:51 +0100 Subject: Segfault when interpreting cached X-Accel-Redirect response In-Reply-To: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> References: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> Message-ID: <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote: > Hello, > > Also, I believe that the core of the problem is because of the > ngx_http_finalize_request(r, NGX_DONE); call in the > ngx_http_upstream_process_headers function. This call is needed when > doing an internal redirect after the real upstream request (to close the > upstream request), but when serving from the cache, there is no upstream > request to close and this call causes ngx_http_set_lingering_close to be > called from the ngx_http_finalize_connection with no active request on > the connection yielding to the segfault. Hello, I am Jiří's colleague, and so I have taken a closer look at the problem. Another indication of the issue is the alert in the error log for non-keepalive connections, stating "http request count is zero while closing request." Upon reviewing the nginx source code, I discovered that the function ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke ngx_http_finalize_request(). However, when there is nothing to clean up (u->cleanup == NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below. Best, Jan Prachař # User Jan Prachař # Date 1706877176 -3600 # Fri Feb 02 13:32:56 2024 +0100 # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2 # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01 Upstream: Fix "request count is zero" when procesing X-Accel-Redirect ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call ngx_http_finalize_request(). 
diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100 +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100 @@ -4340,6 +4340,11 @@ if (u->cleanup == NULL) { /* the request was already finalized */ + + if (rc == NGX_DECLINED) { + return; + } + ngx_http_finalize_request(r, NGX_DONE); return; } From arut at nginx.com Fri Feb 2 17:26:31 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 2 Feb 2024 21:26:31 +0400 Subject: Segfault when interpreting cached X-Accel-Redirect response In-Reply-To: <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> References: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> Message-ID: <20240202172631.heaerehqchbzzw73@N00W24XTQX> Hello, On Fri, Feb 02, 2024 at 01:47:51PM +0100, Jan Prachař wrote: > On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote: > > Hello, > > > > Also, I believe that the core of the problem is because of the > > ngx_http_finalize_request(r, NGX_DONE); call in the > > ngx_http_upstream_process_headers function. This call is needed when > > doing an internal redirect after the real upstream request (to close the > > upstream request), but when serving from the cache, there is no upstream > > request to close and this call causes ngx_http_set_lingering_close to be > > called from the ngx_http_finalize_connection with no active request on > > the connection yielding to the segfault. > > Hello, > > I am Jiří's colleague, and so I have taken a closer look at the problem. Another > indication of the issue is the alert in the error log for non-keepalive connections, > stating "http request count is zero while closing request." > > Upon reviewing the nginx source code, I discovered that the function > ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke > ngx_http_finalize_request(). However, when there is nothing to clean up (u->cleanup == > NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below. Thanks for reporting this. You're right, ngx_http_upstream_finalize_request(NGX_DECLINED) should not decrement the request count, but in your case it does. > Best, Jan Prachař > > # User Jan Prachař > # Date 1706877176 -3600 > # Fri Feb 02 13:32:56 2024 +0100 > # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2 > # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01 > Upstream: Fix "request count is zero" when procesing X-Accel-Redirect Nitpicking: the message after prefix should start with lower case and end with a period. In case of a keepalive connection the problem is use-after-free. For non-keepalive the message you mentioned above may be the only manifestation of the problem, however the message itself may be missing depending on random factors and the behavior of the system allocator. I suggest the following commit log: Upstream: fixed r->count underflow while using X-Accel-Redirect. A response with X-Accel-Redirect could be cached by using proxy_ignore_headers directive. If this directive is then removed, this response could be served from the cache and interpreted as a redirection. While doing that, ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) is called with u->cleanup == NULL, which leads to an extra ngx_http_finalize_request() call which decreases r->main->count. Note that the code path for existing u->cleanup does not call ngx_http_finalize_request() for NGX_DECLINED. 
For non-keepalive connections the problem manifested itself with the "http request count is zero" alert. For keepalive connections it resulted in use-after-free memory access. > ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call > ngx_http_finalize_request(). > > diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100 > +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100 > @@ -4340,6 +4340,11 @@ > > if (u->cleanup == NULL) { > /* the request was already finalized */ > + > + if (rc == NGX_DECLINED) { > + return; > + } > + > ngx_http_finalize_request(r, NGX_DONE); > return; > } > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel The patch seems ok, but needs to be tested. -- Roman Arutyunyan From mdounin at mdounin.ru Sat Feb 3 01:25:07 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 3 Feb 2024 04:25:07 +0300 Subject: Segfault when interpreting cached X-Accel-Redirect response In-Reply-To: <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> References: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> Message-ID: Hello! On Fri, Feb 02, 2024 at 01:47:51PM +0100, Jan Prachař wrote: > On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote: > > Hello, > > > > Also, I believe that the core of the problem is because of the > > ngx_http_finalize_request(r, NGX_DONE); call in the > > ngx_http_upstream_process_headers function. This call is needed when > > doing an internal redirect after the real upstream request (to close the > > upstream request), but when serving from the cache, there is no upstream > > request to close and this call causes ngx_http_set_lingering_close to be > > called from the ngx_http_finalize_connection with no active request on > > the connection yielding to the segfault. > > Hello, > > I am Jiří's colleague, and so I have taken a closer look at the problem. Another > indication of the issue is the alert in the error log for non-keepalive connections, > stating "http request count is zero while closing request." > > Upon reviewing the nginx source code, I discovered that the function > ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke > ngx_http_finalize_request(). However, when there is nothing to clean up (u->cleanup == > NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below. > > Best, Jan Prachař > > # User Jan Prachař > # Date 1706877176 -3600 > # Fri Feb 02 13:32:56 2024 +0100 > # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2 > # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01 > Upstream: Fix "request count is zero" when procesing X-Accel-Redirect > > ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call > ngx_http_finalize_request(). > > diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100 > +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100 > @@ -4340,6 +4340,11 @@ > > if (u->cleanup == NULL) { > /* the request was already finalized */ > + > + if (rc == NGX_DECLINED) { > + return; > + } > + > ngx_http_finalize_request(r, NGX_DONE); > return; > } I somewhat agree: the approach suggested by Jiří certainly looks incorrect. 
The ngx_http_upstream_cache_send() function, which calls ngx_http_upstream_process_headers() with r->cached set, can be used in two contexts: before the cleanup handler is installed (i.e., when sending a cached response during upstream request initialization) and after it is installed (i.e., when sending a stale cached response on upstream errors). In the latter case skipping finalization would mean a socket leak. Still, checking for NGX_DECLINED explicitly also looks wrong, for a number of reasons. First, the specific code path isn't just for "nothing to clean up", it's for the very specific case when the request was already finalized due to filter finalization, see 5994:5abf5af257a7. This code path is not expected to be triggered when the cleanup handler isn't installed yet - before the cleanup handler is installed, upstream code is expected to call ngx_http_finalize_request() directly instead. And it would be semantically wrong to check for NGX_DECLINED: if it's here, it means something already gone wrong. I think the generic issue here is that ngx_http_upstream_process_headers(), which is normally used for upstream responses and calls ngx_http_upstream_finalize_request(), is also used for cached responses. Still, it assumes it is used for an upstream response, and calls ngx_http_upstream_finalize_request(). As can be seen from the rest of the ngx_http_upstream_process_headers() code, apart from the issue with X-Accel-Redirect, it can also call ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) when hh->copy_handler() or ngx_http_upstream_copy_header_line() fails. This will similarly end up in ngx_http_finalize_request(NGX_DONE) since there is no u->cleanup, leading to a request hang. And it would be certainly wrong to check for NGX_HTTP_INTERNAL_SERVER_ERROR similarly to NGX_DECLINED in your patch, because it can theoretically happen after filter finalization. Proper solution would probably require re-thinking ngx_http_upstream_process_headers() interface. Some preliminary code below: it disables X-Accel-Redirect processing altogether if ngx_http_upstream_process_headers() is called when returning a cached response (this essentially means that "proxy_ignore_headers X-Accel-Expires" is preserved in the cache file, which seems to be the right thing to do as we don't save responses with X-Accel-Redirect to cache unless it is ignored), and returns NGX_ERROR in other places to trigger appropriate error handling instead of calling ngx_http_upstream_finalize_request() directly (this no longer tries to return 500 Internal Server Error response though, as doing so might be unsafe after copying some of the cached headers to the response). Please take a look if it works for you. 
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -588,10 +588,6 @@ ngx_http_upstream_init_request(ngx_http_ if (rc == NGX_OK) { rc = ngx_http_upstream_cache_send(r, u); - if (rc == NGX_DONE) { - return; - } - if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { rc = NGX_DECLINED; r->cached = 0; @@ -1088,7 +1084,7 @@ ngx_http_upstream_cache_send(ngx_http_re if (rc == NGX_OK) { if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { - return NGX_DONE; + return NGX_ERROR; } return ngx_http_cache_send(r); @@ -2516,7 +2512,14 @@ ngx_http_upstream_process_header(ngx_htt } } - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { + rc = ngx_http_upstream_process_headers(r, u); + + if (rc == NGX_DONE) { + return; + } + + if (rc == NGX_ERROR) { + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2576,10 +2579,6 @@ ngx_http_upstream_test_next(ngx_http_req u->cache_status = NGX_HTTP_CACHE_STALE; rc = ngx_http_upstream_cache_send(r, u); - if (rc == NGX_DONE) { - return NGX_OK; - } - if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { rc = NGX_HTTP_INTERNAL_SERVER_ERROR; } @@ -2621,10 +2620,6 @@ ngx_http_upstream_test_next(ngx_http_req u->cache_status = NGX_HTTP_CACHE_REVALIDATED; rc = ngx_http_upstream_cache_send(r, u); - if (rc == NGX_DONE) { - return NGX_OK; - } - if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { rc = NGX_HTTP_INTERNAL_SERVER_ERROR; } @@ -2827,7 +2822,8 @@ ngx_http_upstream_process_headers(ngx_ht } if (u->headers_in.x_accel_redirect - && !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT)) + && !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT) + && !r->cached) { ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); @@ -2918,18 +2914,14 @@ ngx_http_upstream_process_headers(ngx_ht if (hh) { if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } continue; } if (ngx_http_upstream_copy_header_line(r, &h[i], 0) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } } @@ -4442,10 +4434,6 @@ ngx_http_upstream_next(ngx_http_request_ u->cache_status = NGX_HTTP_CACHE_STALE; rc = ngx_http_upstream_cache_send(r, u); - if (rc == NGX_DONE) { - return; - } - if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { rc = NGX_HTTP_INTERNAL_SERVER_ERROR; } -- Maxim Dounin http://mdounin.ru/ From jan.prachar at gmail.com Mon Feb 5 17:01:54 2024 From: jan.prachar at gmail.com (Jan =?UTF-8?Q?Pracha=C5=99?=) Date: Mon, 05 Feb 2024 18:01:54 +0100 Subject: Segfault when interpreting cached X-Accel-Redirect response In-Reply-To: References: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> Message-ID: <8b0858db7e3e2bb82e814d57850d752162edd0c8.camel@gmail.com> Hello, thank you for your responses. On Sat, 2024-02-03 at 04:25 +0300, Maxim Dounin wrote: > Hello! > > On Fri, Feb 02, 2024 at 01:47:51PM +0100, Jan Prachař wrote: > > > On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote: > > > Hello, > > > > > > Also, I believe that the core of the problem is because of the > > > ngx_http_finalize_request(r, NGX_DONE); call in the > > > ngx_http_upstream_process_headers function. 
This call is needed when > > > doing an internal redirect after the real upstream request (to close the > > > upstream request), but when serving from the cache, there is no upstream > > > request to close and this call causes ngx_http_set_lingering_close to be > > > called from the ngx_http_finalize_connection with no active request on > > > the connection yielding to the segfault. > > > > Hello, > > > > I am Jiří's colleague, and so I have taken a closer look at the problem. Another > > indication of the issue is the alert in the error log for non-keepalive connections, > > stating "http request count is zero while closing request." > > > > Upon reviewing the nginx source code, I discovered that the function > > ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke > > ngx_http_finalize_request(). However, when there is nothing to clean up (u->cleanup == > > NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below. > > > > Best, Jan Prachař > > > > # User Jan Prachař > > # Date 1706877176 -3600 > > # Fri Feb 02 13:32:56 2024 +0100 > > # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2 > > # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01 > > Upstream: Fix "request count is zero" when procesing X-Accel-Redirect > > > > ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call > > ngx_http_finalize_request(). > > > > diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c > > --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100 > > +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100 > > @@ -4340,6 +4340,11 @@ > > > > if (u->cleanup == NULL) { > > /* the request was already finalized */ > > + > > + if (rc == NGX_DECLINED) { > > + return; > > + } > > + > > ngx_http_finalize_request(r, NGX_DONE); > > return; > > } > > I somewhat agree: the approach suggested by Jiří certainly looks > incorrect. The ngx_http_upstream_cache_send() function, which > calls ngx_http_upstream_process_headers() with r->cached set, can > be used in two contexts: before the cleanup handler is installed > (i.e., when sending a cached response during upstream request > initialization) and after it is installed (i.e., when sending a > stale cached response on upstream errors). In the latter case > skipping finalization would mean a socket leak. > > Still, checking for NGX_DECLINED explicitly also looks wrong, for > a number of reasons. > > First, the specific code path isn't just for "nothing to clean > up", it's for the very specific case when the request was already > finalized due to filter finalization, see 5994:5abf5af257a7. This > code path is not expected to be triggered when the cleanup handler > isn't installed yet - before the cleanup handler is installed, > upstream code is expected to call ngx_http_finalize_request() > directly instead. And it would be semantically wrong to check for > NGX_DECLINED: if it's here, it means something already gone wrong. > > I think the generic issue here is that > ngx_http_upstream_process_headers(), which is normally used for > upstream responses and calls ngx_http_upstream_finalize_request(), > is also used for cached responses. Still, it assumes it is used > for an upstream response, and calls > ngx_http_upstream_finalize_request(). 
> > As can be seen from the rest of the > ngx_http_upstream_process_headers() code, apart from the issue > with X-Accel-Redirect, it can also call > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) > when hh->copy_handler() or ngx_http_upstream_copy_header_line() > fails. This will similarly end up in > ngx_http_finalize_request(NGX_DONE) since there is no u->cleanup, > leading to a request hang. And it would be certainly wrong to > check for NGX_HTTP_INTERNAL_SERVER_ERROR similarly to NGX_DECLINED > in your patch, because it can theoretically happen after filter > finalization. > > Proper solution would probably require re-thinking > ngx_http_upstream_process_headers() interface. > > Some preliminary code below: it disables X-Accel-Redirect > processing altogether if ngx_http_upstream_process_headers() is > called when returning a cached response (this essentially means > that "proxy_ignore_headers X-Accel-Expires" is preserved in the > cache file, which seems to be the right thing to do as we don't > save responses with X-Accel-Redirect to cache unless it is > ignored), and returns NGX_ERROR in other places to trigger > appropriate error handling instead of calling > ngx_http_upstream_finalize_request() directly (this no longer > tries to return 500 Internal Server Error response though, as > doing so might be unsafe after copying some of the cached headers > to the response). > > Please take a look if it works for you. The provided patch works as expected, with no observed issues. Considering that proxy_ignore_headers for caching headers is preserved with the cached file, it seems reasonable to extend the same behavior to X-Accel-Redirect. >From my perspective, the updated code in ngx_http_upstream_process_headers() is a bit confusing. The function can return NGX_DONE, but this return code is only handled in one place where ngx_http_upstream_process_headers() is called. If I may suggest, splitting the function might be helpful – redirect processing would only occur for direct upstream responses, while the rest of the header processing would be called always (i.e., also for cached responses). Additionally, I believe the special handling of NGX_DECLINED in ngx_http_upstream_finalize_request() can be removed. The updated patch is provided below. 
diff --git a/nginx/src/http/ngx_http_upstream.c b/nginx/src/http/ngx_http_upstream.c index f5db65338..13c25721d 100644 --- a/nginx/src/http/ngx_http_upstream.c +++ b/nginx/src/http/ngx_http_upstream.c @@ -53,6 +53,8 @@ static ngx_int_t ngx_http_upstream_test_next(ngx_http_request_t *r, static ngx_int_t ngx_http_upstream_intercept_errors(ngx_http_request_t *r, ngx_http_upstream_t *u); static ngx_int_t ngx_http_upstream_test_connect(ngx_connection_t *c); +static ngx_int_t ngx_http_upstream_process_redirect(ngx_http_request_t *r, + ngx_http_upstream_t *u); static ngx_int_t ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u); static ngx_int_t ngx_http_upstream_process_trailers(ngx_http_request_t *r, @@ -588,10 +590,6 @@ ngx_http_upstream_init_request(ngx_http_request_t *r) if (rc == NGX_OK) { rc = ngx_http_upstream_cache_send(r, u); - if (rc == NGX_DONE) { - return; - } - if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { rc = NGX_DECLINED; r->cached = 0; @@ -1088,7 +1086,7 @@ ngx_http_upstream_cache_send(ngx_http_request_t *r, ngx_http_upstream_t *u) if (rc == NGX_OK) { if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { - return NGX_DONE; + return NGX_ERROR; } return ngx_http_cache_send(r); @@ -2516,7 +2514,19 @@ ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u) } } + rc = ngx_http_upstream_process_redirect(r, u); + + if (rc == NGX_DONE) { + return; + } + + if (rc == NGX_ERROR) { + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); + return; + } + if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2576,10 +2586,6 @@ ngx_http_upstream_test_next(ngx_http_request_t *r, ngx_http_upstream_t *u) u->cache_status = NGX_HTTP_CACHE_STALE; rc = ngx_http_upstream_cache_send(r, u); - if (rc == NGX_DONE) { - return NGX_OK; - } - if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { rc = NGX_HTTP_INTERNAL_SERVER_ERROR; } @@ -2621,10 +2627,6 @@ ngx_http_upstream_test_next(ngx_http_request_t *r, ngx_http_upstream_t *u) u->cache_status = NGX_HTTP_CACHE_REVALIDATED; rc = ngx_http_upstream_cache_send(r, u); - if (rc == NGX_DONE) { - return NGX_OK; - } - if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { rc = NGX_HTTP_INTERNAL_SERVER_ERROR; } @@ -2811,7 +2813,7 @@ ngx_http_upstream_test_connect(ngx_connection_t *c) static ngx_int_t -ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u) +ngx_http_upstream_process_redirect(ngx_http_request_t *r, ngx_http_upstream_t *u) { ngx_str_t uri, args; ngx_uint_t i, flags; @@ -2822,15 +2824,9 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u) umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module); - if (u->headers_in.no_cache || u->headers_in.expired) { - u->cacheable = 0; - } - if (u->headers_in.x_accel_redirect && !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT)) { - ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); - part = &u->headers_in.headers.part; h = part->elts; @@ -2855,13 +2851,15 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u) if (hh && hh->redirect) { if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) { - ngx_http_finalize_request(r, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } } } + r->count++; + + ngx_http_upstream_finalize_request(r, u, NGX_DONE); + uri = u->headers_in.x_accel_redirect->value; if (uri.data[0] == '@') { @@ -2888,6 +2886,25 @@ 
ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u) return NGX_DONE; } + return NGX_OK; +} + + +static ngx_int_t +ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u) +{ + ngx_uint_t i; + ngx_list_part_t *part; + ngx_table_elt_t *h; + ngx_http_upstream_header_t *hh; + ngx_http_upstream_main_conf_t *umcf; + + umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module); + + if (u->headers_in.no_cache || u->headers_in.expired) { + u->cacheable = 0; + } + part = &u->headers_in.headers.part; h = part->elts; @@ -2918,18 +2935,14 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u) if (hh) { if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } continue; } if (ngx_http_upstream_copy_header_line(r, &h[i], 0) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } } @@ -4429,10 +4442,6 @@ ngx_http_upstream_next(ngx_http_request_t *r, ngx_http_upstream_t *u, u->cache_status = NGX_HTTP_CACHE_STALE; rc = ngx_http_upstream_cache_send(r, u); - if (rc == NGX_DONE) { - return; - } - if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { rc = NGX_HTTP_INTERNAL_SERVER_ERROR; } @@ -4604,10 +4613,6 @@ ngx_http_upstream_finalize_request(ngx_http_request_t *r, r->read_event_handler = ngx_http_block_reading; - if (rc == NGX_DECLINED) { - return; - } - r->connection->log->action = "sending to client"; if (!u->header_sent From mdounin at mdounin.ru Mon Feb 5 21:46:41 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Feb 2024 00:46:41 +0300 Subject: Segfault when interpreting cached X-Accel-Redirect response In-Reply-To: <8b0858db7e3e2bb82e814d57850d752162edd0c8.camel@gmail.com> References: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> <8b0858db7e3e2bb82e814d57850d752162edd0c8.camel@gmail.com> Message-ID: Hello! On Mon, Feb 05, 2024 at 06:01:54PM +0100, Jan Prachař wrote: > Hello, > > thank you for your responses. > > On Sat, 2024-02-03 at 04:25 +0300, Maxim Dounin wrote: > > Hello! > > > > On Fri, Feb 02, 2024 at 01:47:51PM +0100, Jan Prachař wrote: > > > > > On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote: > > > > Hello, > > > > > > > > Also, I believe that the core of the problem is because of the > > > > ngx_http_finalize_request(r, NGX_DONE); call in the > > > > ngx_http_upstream_process_headers function. This call is needed when > > > > doing an internal redirect after the real upstream request (to close the > > > > upstream request), but when serving from the cache, there is no upstream > > > > request to close and this call causes ngx_http_set_lingering_close to be > > > > called from the ngx_http_finalize_connection with no active request on > > > > the connection yielding to the segfault. > > > > > > Hello, > > > > > > I am Jiří's colleague, and so I have taken a closer look at the problem. Another > > > indication of the issue is the alert in the error log for non-keepalive connections, > > > stating "http request count is zero while closing request." > > > > > > Upon reviewing the nginx source code, I discovered that the function > > > ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke > > > ngx_http_finalize_request(). 
However, when there is nothing to clean up (u->cleanup == > > > NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below. > > > > > > Best, Jan Prachař > > > > > > # User Jan Prachař > > > # Date 1706877176 -3600 > > > # Fri Feb 02 13:32:56 2024 +0100 > > > # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2 > > > # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01 > > > Upstream: Fix "request count is zero" when procesing X-Accel-Redirect > > > > > > ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call > > > ngx_http_finalize_request(). > > > > > > diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c > > > --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100 > > > +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100 > > > @@ -4340,6 +4340,11 @@ > > > > > > if (u->cleanup == NULL) { > > > /* the request was already finalized */ > > > + > > > + if (rc == NGX_DECLINED) { > > > + return; > > > + } > > > + > > > ngx_http_finalize_request(r, NGX_DONE); > > > return; > > > } > > > > I somewhat agree: the approach suggested by Jiří certainly looks > > incorrect. The ngx_http_upstream_cache_send() function, which > > calls ngx_http_upstream_process_headers() with r->cached set, can > > be used in two contexts: before the cleanup handler is installed > > (i.e., when sending a cached response during upstream request > > initialization) and after it is installed (i.e., when sending a > > stale cached response on upstream errors). In the latter case > > skipping finalization would mean a socket leak. > > > > Still, checking for NGX_DECLINED explicitly also looks wrong, for > > a number of reasons. > > > > First, the specific code path isn't just for "nothing to clean > > up", it's for the very specific case when the request was already > > finalized due to filter finalization, see 5994:5abf5af257a7. This > > code path is not expected to be triggered when the cleanup handler > > isn't installed yet - before the cleanup handler is installed, > > upstream code is expected to call ngx_http_finalize_request() > > directly instead. And it would be semantically wrong to check for > > NGX_DECLINED: if it's here, it means something already gone wrong. > > > > I think the generic issue here is that > > ngx_http_upstream_process_headers(), which is normally used for > > upstream responses and calls ngx_http_upstream_finalize_request(), > > is also used for cached responses. Still, it assumes it is used > > for an upstream response, and calls > > ngx_http_upstream_finalize_request(). > > > > As can be seen from the rest of the > > ngx_http_upstream_process_headers() code, apart from the issue > > with X-Accel-Redirect, it can also call > > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) > > when hh->copy_handler() or ngx_http_upstream_copy_header_line() > > fails. This will similarly end up in > > ngx_http_finalize_request(NGX_DONE) since there is no u->cleanup, > > leading to a request hang. And it would be certainly wrong to > > check for NGX_HTTP_INTERNAL_SERVER_ERROR similarly to NGX_DECLINED > > in your patch, because it can theoretically happen after filter > > finalization. > > > > Proper solution would probably require re-thinking > > ngx_http_upstream_process_headers() interface. 
> > > > Some preliminary code below: it disables X-Accel-Redirect > > processing altogether if ngx_http_upstream_process_headers() is > > called when returning a cached response (this essentially means > > that "proxy_ignore_headers X-Accel-Expires" is preserved in the > > cache file, which seems to be the right thing to do as we don't > > save responses with X-Accel-Redirect to cache unless it is > > ignored), and returns NGX_ERROR in other places to trigger > > appropriate error handling instead of calling > > ngx_http_upstream_finalize_request() directly (this no longer > > tries to return 500 Internal Server Error response though, as > > doing so might be unsafe after copying some of the cached headers > > to the response). > > > > Please take a look if it works for you. > > The provided patch works as expected, with no observed issues. > > Considering that proxy_ignore_headers for caching headers is preserved with the > cached file, it seems reasonable to extend the same behavior to > X-Accel-Redirect. Yes, such handling is (mostly) in line with some proxy_ignore_headers handling, that is, X-Accel-Expires, Expires, Cache-Control, Set-Cookie, Vary, and X-Accel-Buffering, as these affect creation of a cache file, but not sending an already cached response to clients. Still, X-Accel-Limit-Rate from a cache file will be applied to the response if not ignored by the current configuration. Similarly, X-Accel-Charset is also applied as long as no longer ignored. As such, I mostly consider this to be a neutral argument. Further, we might reconsider X-Accel-Redirect handling if caching of X-Accel-Redirect responses will be introduced (see https://trac.nginx.org/nginx/ticket/407 for a feature request). > From my perspective, the updated code in ngx_http_upstream_process_headers() is > a bit confusing. The function can return NGX_DONE, but this return code is only > handled in one place where ngx_http_upstream_process_headers() is called. I've removed NGX_DONE handling from the other call since NGX_DONE return code isn't possible there due to r->cached being set just before the call. We can instead assume it can be returned and handle appropriately: this will also make handling X-Accel-Redirect from cached files easier if we'll decide to (instead of checking r->cached, we'll have to call ngx_http_upstream_finalize_request(NGX_DECLINED) conditionally, only if u->cleanup is set). > If I may suggest, splitting the function might be helpful – redirect processing > would only occur for direct upstream responses, while the rest of the header > processing would be called always (i.e., also for cached responses). I can't say I like this idea. Processing of X-Accel-Redirect is a part of headers processing, and quite naturally handled in ngx_http_upstream_process_headers(). Moving it to a separate function will needlessly complicate things. > Additionally, I believe the special handling of NGX_DECLINED in > ngx_http_upstream_finalize_request() can be removed. The updated patch is > provided below. Not really. The ngx_http_upstream_finalize_request(NGX_DECLINED) call ensures that the upstream handling is properly finalized, notably the upstream connection is closed. For short responses after X-Accel-Redirect, this might not be important, because the upstream connection will be closed anyway during request finalization. 
But if the redirected request processing takes a while, the upstream connection will still be open, and might receive further events - leading to unexpected behaviour (not to mention that various upstream timing variables, such as $upstream_response_time, will be wrong). Below is a patch which preserves proper NGX_DONE processing, and handles X-Accel-Redirect from cached files by checking u->cleanup when calling ngx_http_upstream_finalize_request(NGX_DECLINED). I tend to think this might be the best solution after all, providing better compatibility for further improvements. # HG changeset patch # User Maxim Dounin # Date 1707167064 -10800 # Tue Feb 06 00:04:24 2024 +0300 # Node ID 6e7f0d6d857473517048b8838923253d5230ace0 # Parent 631ee3c6d38cfdf97dec67c3d2c457af5d91db01 Upstream: fixed X-Accel-Redirect handling from cache files. The X-Accel-Redirect header might appear in cache files if its handling is ignored with the "proxy_ignore_headers" directive. If the cache file is later served with different settings, ngx_http_upstream_process_headers() used to call ngx_http_upstream_finalize_request(NGX_DECLINED), which is not expected to happen before the cleanup handler is installed and resulted in ngx_http_finalize_request(NGX_DONE), leading to unexpected request counter decrement, "request count is zero" alerts, and segmentation faults. Similarly, errors in ngx_http_upstream_process_headers() resulted in ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) being called. This is also not expected to happen before the cleanup handler is installed, and resulted in ngx_http_finalize_request(NGX_DONE) without proper request finalization. Fix is to avoid calling ngx_http_upstream_finalize_request() from ngx_http_upstream_process_headers(), notably when the cleanup handler is not yet installed. Errors now simply return NGX_ERROR, so the caller is responsible for proper finalization by calling either ngx_http_finalize_request() or ngx_http_upstream_finalize_request(). And X-Accel-Redirect handling now does not call ngx_http_upstream_finalize_request(NGX_DECLINED) if no cleanup handler is installed. Reported by Jiří Setnička (https://mailman.nginx.org/pipermail/nginx-devel/2024-February/HWLYHOO3DDB3XTFT6X3GRMXIEJ3SJRUA.html).
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1087,8 +1087,10 @@ ngx_http_upstream_cache_send(ngx_http_re if (rc == NGX_OK) { - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { - return NGX_DONE; + rc = ngx_http_upstream_process_headers(r, u); + + if (rc != NGX_OK) { + return rc; } return ngx_http_cache_send(r); @@ -2516,7 +2518,14 @@ ngx_http_upstream_process_header(ngx_htt } } - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { + rc = ngx_http_upstream_process_headers(r, u); + + if (rc == NGX_DONE) { + return; + } + + if (rc == NGX_ERROR) { + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2829,7 +2838,9 @@ ngx_http_upstream_process_headers(ngx_ht if (u->headers_in.x_accel_redirect && !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT)) { - ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); + if (u->cleanup) { + ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); + } part = &u->headers_in.headers.part; h = part->elts; @@ -2918,18 +2929,14 @@ ngx_http_upstream_process_headers(ngx_ht if (hh) { if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } continue; } if (ngx_http_upstream_copy_header_line(r, &h[i], 0) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } } -- Maxim Dounin http://mdounin.ru/ From jan.prachar at gmail.com Tue Feb 6 10:42:40 2024 From: jan.prachar at gmail.com (Jan =?UTF-8?Q?Pracha=C5=99?=) Date: Tue, 06 Feb 2024 11:42:40 +0100 Subject: Segfault when interpreting cached X-Accel-Redirect response In-Reply-To: References: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> <8b0858db7e3e2bb82e814d57850d752162edd0c8.camel@gmail.com> Message-ID: Hello Maxim, On Tue, 2024-02-06 at 00:46 +0300, Maxim Dounin wrote: > Hello! > > On Mon, Feb 05, 2024 at 06:01:54PM +0100, Jan Prachař wrote: > > > Hello, > > > > thank you for your responses. > > > > On Sat, 2024-02-03 at 04:25 +0300, Maxim Dounin wrote: > > > Hello! > > > > > > On Fri, Feb 02, 2024 at 01:47:51PM +0100, Jan Prachař wrote: > > > > > > > On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote: > > > > > Hello, > > > > > > > > > > Also, I believe that the core of the problem is because of the > > > > > ngx_http_finalize_request(r, NGX_DONE); call in the > > > > > ngx_http_upstream_process_headers function. This call is needed when > > > > > doing an internal redirect after the real upstream request (to close the > > > > > upstream request), but when serving from the cache, there is no upstream > > > > > request to close and this call causes ngx_http_set_lingering_close to be > > > > > called from the ngx_http_finalize_connection with no active request on > > > > > the connection yielding to the segfault. > > > > > > > > Hello, > > > > > > > > I am Jiří's colleague, and so I have taken a closer look at the problem. Another > > > > indication of the issue is the alert in the error log for non-keepalive connections, > > > > stating "http request count is zero while closing request." > > > > > > > > Upon reviewing the nginx source code, I discovered that the function > > > > ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke > > > > ngx_http_finalize_request(). 
However, when there is nothing to clean up (u->cleanup == > > > > NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below. > > > > > > > > Best, Jan Prachař > > > > > > > > # User Jan Prachař > > > > # Date 1706877176 -3600 > > > > # Fri Feb 02 13:32:56 2024 +0100 > > > > # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2 > > > > # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01 > > > > Upstream: Fix "request count is zero" when procesing X-Accel-Redirect > > > > > > > > ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call > > > > ngx_http_finalize_request(). > > > > > > > > diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c > > > > --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100 > > > > +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100 > > > > @@ -4340,6 +4340,11 @@ > > > > > > > > if (u->cleanup == NULL) { > > > > /* the request was already finalized */ > > > > + > > > > + if (rc == NGX_DECLINED) { > > > > + return; > > > > + } > > > > + > > > > ngx_http_finalize_request(r, NGX_DONE); > > > > return; > > > > } > > > > > > I somewhat agree: the approach suggested by Jiří certainly looks > > > incorrect. The ngx_http_upstream_cache_send() function, which > > > calls ngx_http_upstream_process_headers() with r->cached set, can > > > be used in two contexts: before the cleanup handler is installed > > > (i.e., when sending a cached response during upstream request > > > initialization) and after it is installed (i.e., when sending a > > > stale cached response on upstream errors). In the latter case > > > skipping finalization would mean a socket leak. > > > > > > Still, checking for NGX_DECLINED explicitly also looks wrong, for > > > a number of reasons. > > > > > > First, the specific code path isn't just for "nothing to clean > > > up", it's for the very specific case when the request was already > > > finalized due to filter finalization, see 5994:5abf5af257a7. This > > > code path is not expected to be triggered when the cleanup handler > > > isn't installed yet - before the cleanup handler is installed, > > > upstream code is expected to call ngx_http_finalize_request() > > > directly instead. And it would be semantically wrong to check for > > > NGX_DECLINED: if it's here, it means something already gone wrong. > > > > > > I think the generic issue here is that > > > ngx_http_upstream_process_headers(), which is normally used for > > > upstream responses and calls ngx_http_upstream_finalize_request(), > > > is also used for cached responses. Still, it assumes it is used > > > for an upstream response, and calls > > > ngx_http_upstream_finalize_request(). > > > > > > As can be seen from the rest of the > > > ngx_http_upstream_process_headers() code, apart from the issue > > > with X-Accel-Redirect, it can also call > > > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) > > > when hh->copy_handler() or ngx_http_upstream_copy_header_line() > > > fails. This will similarly end up in > > > ngx_http_finalize_request(NGX_DONE) since there is no u->cleanup, > > > leading to a request hang. And it would be certainly wrong to > > > check for NGX_HTTP_INTERNAL_SERVER_ERROR similarly to NGX_DECLINED > > > in your patch, because it can theoretically happen after filter > > > finalization. > > > > > > Proper solution would probably require re-thinking > > > ngx_http_upstream_process_headers() interface. 
> > > > > > Some preliminary code below: it disables X-Accel-Redirect > > > processing altogether if ngx_http_upstream_process_headers() is > > > called when returning a cached response (this essentially means > > > that "proxy_ignore_headers X-Accel-Expires" is preserved in the > > > cache file, which seems to be the right thing to do as we don't > > > save responses with X-Accel-Redirect to cache unless it is > > > ignored), and returns NGX_ERROR in other places to trigger > > > appropriate error handling instead of calling > > > ngx_http_upstream_finalize_request() directly (this no longer > > > tries to return 500 Internal Server Error response though, as > > > doing so might be unsafe after copying some of the cached headers > > > to the response). > > > > > > Please take a look if it works for you. > > > > The provided patch works as expected, with no observed issues. > > > > Considering that proxy_ignore_headers for caching headers is preserved with the > > cached file, it seems reasonable to extend the same behavior to > > X-Accel-Redirect. > > Yes, such handling is (mostly) in line with some > proxy_ignore_headers handling, that is, X-Accel-Expires, Expires, > Cache-Control, Set-Cookie, Vary, and X-Accel-Buffering, as these > affect creation of a cache file, but not sending an already cached > response to clients. > > Still, X-Accel-Limit-Rate from a cache file will be applied to the > response if not ignored by the current configuration. Similarly, > X-Accel-Charset is also applied as long as no longer ignored. > > As such, I mostly consider this to be a neutral argument. > > Further, we might reconsider X-Accel-Redirect handling if caching > of X-Accel-Redirect responses will be introduced (see > https://trac.nginx.org/nginx/ticket/407 for a feature request). > > > From my perspective, the updated code in ngx_http_upstream_process_headers() is > > a bit confusing. The function can return NGX_DONE, but this return code is only > > handled in one place where ngx_http_upstream_process_headers() is called. > > I've removed NGX_DONE handling from the other call since NGX_DONE > return code isn't possible there due to r->cached being set just > before the call. > > We can instead assume it can be returned and handle appropriately: > this will also make handling X-Accel-Redirect from cached files > easier if we'll decide to (instead of checking r->cached, we'll > have to call ngx_http_upstream_finalize_request(NGX_DECLINED) > conditionally, only if u->cleanup is set). > > > If I may suggest, splitting the function might be helpful – redirect processing > > would only occur for direct upstream responses, while the rest of the header > > processing would be called always (i.e., also for cached responses). > > I can't say I like this idea. Processing of X-Accel-Redirect is a > part of headers processing, and quite naturally handled in > ngx_http_upstream_process_headers(). Moving it to a separate function > will needlessly complicate things. > > > Additionally, I believe the special handling of NGX_DECLINED in > > ngx_http_upstream_finalize_request() can be removed. The updated patch is > > provided below. > > Not really. The ngx_http_upstream_finalize_request(NGX_DECLINED) > call ensures that the upstream handling is properly finalized, > notably the upstream connection is closed. For short responses > after X-Accel-Redirect, this might not be important, because the > upstream connection will be closed anyway during request > finalization. 
But if the redirected request processing takes a > while, the upstream connection will still be open, and might > receive further events - leading to unexpected behaviour (not to > mention that various upstream timing variables, such as > $upstream_response_time, will be wrong). In my previous patch I replaced ngx_http_upstream_finalize_request(NGX_DECLINED); with r->count++; ngx_http_upstream_finalize_request(NGX_DONE); The upstream connection is still finalized and closed, allowing for the removal of the special handling of NGX_DECLINED from ngx_http_upstream_finalize_request(). > > Below is a patch which preserves proper NGX_DONE processing, and > handles X-Accel-Redirect from cached files by checking u->cleanup > when calling ngx_http_upstream_finalize_request(NGX_DECLINED). I > tend to think this might be the best solution after all, providing > better compatibility for further improvements. > > # HG changeset patch > # User Maxim Dounin > # Date 1707167064 -10800 > # Tue Feb 06 00:04:24 2024 +0300 > # Node ID 6e7f0d6d857473517048b8838923253d5230ace0 > # Parent 631ee3c6d38cfdf97dec67c3d2c457af5d91db01 > Upstream: fixed X-Accel-Redirect handling from cache files. > > The X-Accel-Redirect header might appear in cache files if its handling > is ignored with the "proxy_ignore_headers" directive. If the cache file > is later served with different settings, ngx_http_upstream_process_headers() > used to call ngx_http_upstream_finalize_request(NGX_DECLINED), which > is not expected to happen before the cleanup handler is installed and > resulted in ngx_http_finalize_request(NGX_DONE), leading to unexpected > request counter decrement, "request count is zero" alerts, and segmentation > faults. > > Similarly, errors in ngx_http_upstream_process_headers() resulted in > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) being > called. This is also not expected to happen before the cleanup handler is > installed, and resulted in ngx_http_finalize_request(NGX_DONE) without > proper request finalization. > > Fix is to avoid calling ngx_http_upstream_finalize_request() from > ngx_http_upstream_process_headers(), notably when the cleanup handler > is not yet installed. Errors now simply return NGX_ERROR, so the > caller is responsible for proper finalization by calling either > ngx_http_finalize_request() or ngx_http_upstream_finalize_request(). > And X-Accel-Redirect handling now does not call > ngx_http_upstream_finalize_request(NGX_DECLINED) if no cleanup handler > is installed. > > Reported by Jiří Setnička > (https://mailman.nginx.org/pipermail/nginx-devel/2024-February/HWLYHOO3DDB3XTFT6X3GRMXIEJ3SJRUA.html).
> > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c > +++ b/src/http/ngx_http_upstream.c > @@ -1087,8 +1087,10 @@ ngx_http_upstream_cache_send(ngx_http_re > > if (rc == NGX_OK) { > > - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { > - return NGX_DONE; > + rc = ngx_http_upstream_process_headers(r, u); > + > + if (rc != NGX_OK) { > + return rc; > } > > return ngx_http_cache_send(r); > @@ -2516,7 +2518,14 @@ ngx_http_upstream_process_header(ngx_htt > } > } > > - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { > + rc = ngx_http_upstream_process_headers(r, u); > + > + if (rc == NGX_DONE) { > + return; > + } > + > + if (rc == NGX_ERROR) { > + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); > return; > } > > @@ -2829,7 +2838,9 @@ ngx_http_upstream_process_headers(ngx_ht > if (u->headers_in.x_accel_redirect > && !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT)) > { > - ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); > + if (u->cleanup) { > + ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); > + } > > part = &u->headers_in.headers.part; > h = part->elts; Just a note. If you move ngx_http_upstream_finalize_request() bellow the for loop that copies upstream headers, then this change is also possible: @@ -2855,13 +2851,15 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, ngx_http_upstream_t *u) if (hh && hh->redirect) { if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) { - ngx_http_finalize_request(r, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } } > @@ -2918,18 +2929,14 @@ ngx_http_upstream_process_headers(ngx_ht > > if (hh) { > if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) { > - ngx_http_upstream_finalize_request(r, u, > - NGX_HTTP_INTERNAL_SERVER_ERROR); > - return NGX_DONE; > + return NGX_ERROR; > } > > continue; > } > > if (ngx_http_upstream_copy_header_line(r, &h[i], 0) != NGX_OK) { > - ngx_http_upstream_finalize_request(r, u, > - NGX_HTTP_INTERNAL_SERVER_ERROR); > - return NGX_DONE; > + return NGX_ERROR; > } > } > > From mdounin at mdounin.ru Tue Feb 6 11:08:11 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Feb 2024 14:08:11 +0300 Subject: Segfault when interpreting cached X-Accel-Redirect response In-Reply-To: References: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> <8b0858db7e3e2bb82e814d57850d752162edd0c8.camel@gmail.com> Message-ID: Hello! On Tue, Feb 06, 2024 at 11:42:40AM +0100, Jan Prachař wrote: > Hello Maxim, > > On Tue, 2024-02-06 at 00:46 +0300, Maxim Dounin wrote: > > Hello! > > > > On Mon, Feb 05, 2024 at 06:01:54PM +0100, Jan Prachař wrote: > > > > > Hello, > > > > > > thank you for your responses. > > > > > > On Sat, 2024-02-03 at 04:25 +0300, Maxim Dounin wrote: > > > > Hello! > > > > > > > > On Fri, Feb 02, 2024 at 01:47:51PM +0100, Jan Prachař wrote: > > > > > > > > > On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote: > > > > > > Hello, > > > > > > > > > > > > Also, I believe that the core of the problem is because of the > > > > > > ngx_http_finalize_request(r, NGX_DONE); call in the > > > > > > ngx_http_upstream_process_headers function. 
This call is needed when > > > > > > doing an internal redirect after the real upstream request (to close the > > > > > > upstream request), but when serving from the cache, there is no upstream > > > > > > request to close and this call causes ngx_http_set_lingering_close to be > > > > > > called from the ngx_http_finalize_connection with no active request on > > > > > > the connection yielding to the segfault. > > > > > > > > > > Hello, > > > > > > > > > > I am Jiří's colleague, and so I have taken a closer look at the problem. Another > > > > > indication of the issue is the alert in the error log for non-keepalive connections, > > > > > stating "http request count is zero while closing request." > > > > > > > > > > Upon reviewing the nginx source code, I discovered that the function > > > > > ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke > > > > > ngx_http_finalize_request(). However, when there is nothing to clean up (u->cleanup == > > > > > NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below. > > > > > > > > > > Best, Jan Prachař > > > > > > > > > > # User Jan Prachař > > > > > # Date 1706877176 -3600 > > > > > # Fri Feb 02 13:32:56 2024 +0100 > > > > > # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2 > > > > > # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01 > > > > > Upstream: Fix "request count is zero" when procesing X-Accel-Redirect > > > > > > > > > > ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call > > > > > ngx_http_finalize_request(). > > > > > > > > > > diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c > > > > > --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100 > > > > > +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100 > > > > > @@ -4340,6 +4340,11 @@ > > > > > > > > > > if (u->cleanup == NULL) { > > > > > /* the request was already finalized */ > > > > > + > > > > > + if (rc == NGX_DECLINED) { > > > > > + return; > > > > > + } > > > > > + > > > > > ngx_http_finalize_request(r, NGX_DONE); > > > > > return; > > > > > } > > > > > > > > I somewhat agree: the approach suggested by Jiří certainly looks > > > > incorrect. The ngx_http_upstream_cache_send() function, which > > > > calls ngx_http_upstream_process_headers() with r->cached set, can > > > > be used in two contexts: before the cleanup handler is installed > > > > (i.e., when sending a cached response during upstream request > > > > initialization) and after it is installed (i.e., when sending a > > > > stale cached response on upstream errors). In the latter case > > > > skipping finalization would mean a socket leak. > > > > > > > > Still, checking for NGX_DECLINED explicitly also looks wrong, for > > > > a number of reasons. > > > > > > > > First, the specific code path isn't just for "nothing to clean > > > > up", it's for the very specific case when the request was already > > > > finalized due to filter finalization, see 5994:5abf5af257a7. This > > > > code path is not expected to be triggered when the cleanup handler > > > > isn't installed yet - before the cleanup handler is installed, > > > > upstream code is expected to call ngx_http_finalize_request() > > > > directly instead. And it would be semantically wrong to check for > > > > NGX_DECLINED: if it's here, it means something already gone wrong. 
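To keep the two cases apart while reading the rest of the thread, this is the guard being discussed, copied from the hunk context quoted above with annotations added (an excerpt, not standalone code):

    if (u->cleanup == NULL) {
        /* the request was already finalized */

        /* intended case: filter finalization already finalized the request
         * while the upstream request was active, and only the matching
         * ngx_http_finalize_request(NGX_DONE) remains to be done */

        /* problem case described above: the cleanup handler was never
         * installed in the first place (e.g. X-Accel-Redirect read back
         * from a cache file during upstream initialization), so this
         * NGX_DONE finalization has no matching reference and drops
         * r->count one time too many */

        ngx_http_finalize_request(r, NGX_DONE);
        return;
    }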
> > > > > > > > I think the generic issue here is that > > > > ngx_http_upstream_process_headers(), which is normally used for > > > > upstream responses and calls ngx_http_upstream_finalize_request(), > > > > is also used for cached responses. Still, it assumes it is used > > > > for an upstream response, and calls > > > > ngx_http_upstream_finalize_request(). > > > > > > > > As can be seen from the rest of the > > > > ngx_http_upstream_process_headers() code, apart from the issue > > > > with X-Accel-Redirect, it can also call > > > > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) > > > > when hh->copy_handler() or ngx_http_upstream_copy_header_line() > > > > fails. This will similarly end up in > > > > ngx_http_finalize_request(NGX_DONE) since there is no u->cleanup, > > > > leading to a request hang. And it would be certainly wrong to > > > > check for NGX_HTTP_INTERNAL_SERVER_ERROR similarly to NGX_DECLINED > > > > in your patch, because it can theoretically happen after filter > > > > finalization. > > > > > > > > Proper solution would probably require re-thinking > > > > ngx_http_upstream_process_headers() interface. > > > > > > > > Some preliminary code below: it disables X-Accel-Redirect > > > > processing altogether if ngx_http_upstream_process_headers() is > > > > called when returning a cached response (this essentially means > > > > that "proxy_ignore_headers X-Accel-Expires" is preserved in the > > > > cache file, which seems to be the right thing to do as we don't > > > > save responses with X-Accel-Redirect to cache unless it is > > > > ignored), and returns NGX_ERROR in other places to trigger > > > > appropriate error handling instead of calling > > > > ngx_http_upstream_finalize_request() directly (this no longer > > > > tries to return 500 Internal Server Error response though, as > > > > doing so might be unsafe after copying some of the cached headers > > > > to the response). > > > > > > > > Please take a look if it works for you. > > > > > > The provided patch works as expected, with no observed issues. > > > > > > Considering that proxy_ignore_headers for caching headers is preserved with the > > > cached file, it seems reasonable to extend the same behavior to > > > X-Accel-Redirect. > > > > Yes, such handling is (mostly) in line with some > > proxy_ignore_headers handling, that is, X-Accel-Expires, Expires, > > Cache-Control, Set-Cookie, Vary, and X-Accel-Buffering, as these > > affect creation of a cache file, but not sending an already cached > > response to clients. > > > > Still, X-Accel-Limit-Rate from a cache file will be applied to the > > response if not ignored by the current configuration. Similarly, > > X-Accel-Charset is also applied as long as no longer ignored. > > > > As such, I mostly consider this to be a neutral argument. > > > > Further, we might reconsider X-Accel-Redirect handling if caching > > of X-Accel-Redirect responses will be introduced (see > > https://trac.nginx.org/nginx/ticket/407 for a feature request). > > > > > From my perspective, the updated code in ngx_http_upstream_process_headers() is > > > a bit confusing. The function can return NGX_DONE, but this return code is only > > > handled in one place where ngx_http_upstream_process_headers() is called. > > > > I've removed NGX_DONE handling from the other call since NGX_DONE > > return code isn't possible there due to r->cached being set just > > before the call. 
> > > > We can instead assume it can be returned and handle appropriately: > > this will also make handling X-Accel-Redirect from cached files > > easier if we'll decide to (instead of checking r->cached, we'll > > have to call ngx_http_upstream_finalize_request(NGX_DECLINED) > > conditionally, only if u->cleanup is set). > > > > > If I may suggest, splitting the function might be helpful – redirect processing > > > would only occur for direct upstream responses, while the rest of the header > > > processing would be called always (i.e., also for cached responses). > > > > I can't say I like this idea. Processing of X-Accel-Redirect is a > > part of headers processing, and quite naturally handled in > > ngx_http_upstream_process_headers(). Moving it to a separate function > > will needlessly complicate things. > > > > > Additionally, I believe the special handling of NGX_DECLINED in > > > ngx_http_upstream_finalize_request() can be removed. The updated patch is > > > provided below. > > > > Not really. The ngx_http_upstream_finalize_request(NGX_DECLINED) > > call ensures that the upstream handling is properly finalized, > > notably the upstream connection is closed. For short responses > > after X-Accel-Redirect, this might not be important, because the > > upstream connection will be closed anyway during request > > finalization. But if the redirected request processing takes a > > while, the upstream connection will still be open, and might > > receive further events - leading to unexpected behaviour (not to > > mention that various upstream timing variables, such as > > $upstream_response_time, will be wrong). > > In my previous patch I replaced > > ngx_http_upstream_finalize_request(NGX_DECLINED); > > by > > r->count++; > ngx_http_upstream_finalize_request(NGX_DONE); > > The upstream connection is still finalized and closed, allowing > for the removal of the special handling of NGX_DECLINED from > ngx_http_upstream_finalize_request(). Ah, sorry, missed this. Yes, r->count++ followed by a real request finalization is a possible alternative to special handling of NGX_DECLINED without calling ngx_http_finalize_request(). Still, without special handling in ngx_http_upstream_finalize_request() this won't be entirely correct: as can be seen from the code, c->log->action will be incorrectly set to "sending to client". > > > > Below is a patch which preserves proper NGX_DONE processing, and > > handles X-Accel-Redirect from cached files by checking r->cleanup > > when calling ngx_http_upstream_finalize_request(NGX_DECLINED). I > > tend to think this might be the best solution after all, providing > > better compatibility for further improvements. > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1707167064 -10800 > > # Tue Feb 06 00:04:24 2024 +0300 > > # Node ID 6e7f0d6d857473517048b8838923253d5230ace0 > > # Parent 631ee3c6d38cfdf97dec67c3d2c457af5d91db01 > > Upstream: fixed X-Accel-Redirect handling from cache files. > > > > The X-Accel-Redirect header might appear in cache files if its handling > > is ignored with the "proxy_ignore_headers" directive. If the cache file > > is later served with different settings, ngx_http_upstream_process_headers() > > used to call ngx_http_upstream_finalize_request(NGX_DECLINED), which > > is not expected to happen before the cleanup handler is installed and > > resulted in ngx_http_finalize_request(NGX_DONE), leading to unexpected > > request counter decrement, "request count is zero" alerts, and segmentation > > faults. 
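The "request count is zero" wording above refers to the reference count kept in r->count. A tiny standalone model (plain C, invented names, not nginx code) shows why a single unmatched finalization is enough to produce both the alert and a later operation on an already-closed request:

    #include <stdio.h>

    /* toy stand-in for ngx_http_request_t: just the reference count */
    typedef struct {
        int  count;
        int  closed;
    } toy_request_t;

    static void
    toy_finalize(toy_request_t *r, const char *who)
    {
        if (r->count == 0) {
            /* counterpart of the "http request count is zero while closing
             * request" alert mentioned earlier in the thread */
            printf("%s: request count is zero while closing request\n", who);
            return;
        }

        if (--r->count == 0) {
            r->closed = 1;
            printf("%s: last reference dropped, request closed\n", who);
            return;
        }

        printf("%s: count is now %d\n", who, r->count);
    }

    int
    main(void)
    {
        toy_request_t  r = { 1, 0 };

        /* unmatched finalization: no upstream request was ever started,
         * yet the request is finalized with NGX_DONE semantics */
        toy_finalize(&r, "spurious finalization");

        /* the redirected request is still being processed; its own,
         * legitimate finalization now runs on a closed request */
        toy_finalize(&r, "normal finalization");

        return 0;
    }

Compiled with any C compiler, the second call prints the "count is zero" line, which is the toy counterpart of the alert and of the follow-up crashes described above.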
> > > > Similarly, errors in ngx_http_upstream_process_headers() resulted in > > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) being > > called. This is also not expected to happen before the cleanup handler is > > installed, and resulted in ngx_http_finalize_request(NGX_DONE) without > > proper request finalization. > > > > Fix is to avoid calling ngx_http_upstream_finalize_request() from > > ngx_http_upstream_process_headers(), notably when the cleanup handler > > is not yet installed. Errors are now simply return NGX_ERROR, so the > > caller is responsible for proper finalization by calling either > > ngx_http_finalize_request() or ngx_http_upstream_finalize_request(). > > And X-Accel-Redirect handling now does not call > > ngx_http_upstream_finalize_request(NGX_DECLINED) if no cleanup handler > > is installed. > > > > Reported by Jiří Setnička > > (https://mailman.nginx.org/pipermail/nginx-devel/2024-February/HWLYHOO3DDB3XTFT6X3GRMXIEJ3SJRUA.html). > > > > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > > --- a/src/http/ngx_http_upstream.c > > +++ b/src/http/ngx_http_upstream.c > > @@ -1087,8 +1087,10 @@ ngx_http_upstream_cache_send(ngx_http_re > > > > if (rc == NGX_OK) { > > > > - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { > > - return NGX_DONE; > > + rc = ngx_http_upstream_process_headers(r, u); > > + > > + if (rc != NGX_OK) { > > + return rc; > > } > > > > return ngx_http_cache_send(r); > > @@ -2516,7 +2518,14 @@ ngx_http_upstream_process_header(ngx_htt > > } > > } > > > > - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { > > + rc = ngx_http_upstream_process_headers(r, u); > > + > > + if (rc == NGX_DONE) { > > + return; > > + } > > + > > + if (rc == NGX_ERROR) { > > + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); > > return; > > } > > > > @@ -2829,7 +2838,9 @@ ngx_http_upstream_process_headers(ngx_ht > > if (u->headers_in.x_accel_redirect > > && !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT)) > > { > > - ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); > > + if (u->cleanup) { > > + ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); > > + } > > > > part = &u->headers_in.headers.part; > > h = part->elts; > > Just a note. If you move ngx_http_upstream_finalize_request() bellow > the for loop that copies upstream headers, then this change is also possible: > > @@ -2855,13 +2851,15 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, > ngx_http_upstream_t *u) > > if (hh && hh->redirect) { > if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) { > - ngx_http_finalize_request(r, > - NGX_HTTP_INTERNAL_SERVER_ERROR); > - return NGX_DONE; > + return NGX_ERROR; > } > } > > I don't think it worth the effort, especially given ngx_http_finalize_request(NGX_HTTP_NOT_FOUND) below. [...] -- Maxim Dounin http://mdounin.ru/ From yar at nginx.com Tue Feb 6 11:44:56 2024 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Tue, 6 Feb 2024 11:44:56 +0000 Subject: [PATCH] Documented opensourcing of the OTel module In-Reply-To: References: <00807e94be3622a79d77.1706017747@ORK-ML-00007151> Message-ID: <6FBEDBE8-90CA-4B3F-AE56-4796D3283B7C@nginx.com> Hi Maxim, Thank you for your comments, fixed, new version ready. 
>> >> -The ngx_otel_module module (1.23.4) provides >> +The ngx_otel_module module (1.23.4) is nginx-authored > > Quoting from > https://mailman.nginx.org/pipermail/nginx-devel/2023-October/4AGH5XVKNP6UDFE32PZIXYO7JQ4RE37P.html: > > : Note that "nginx-authored" here looks misleading, as no nginx core > : developers work on this module. Fixed, thanks > >> +third-party module >> +that provides >> OpenTelemetry >> distributed tracing support. >> The module supports >> @@ -23,12 +25,20 @@ >> >> >> >> +The module is open source since 1.25.2. >> +Download and install instructions are available >> +here. >> +The module is also available as a prebuilt >> +nginx-module-otel dynamic module >> +package (1.25.4). >> + >> + >> + >> >> This module is available as part of our >> commercial subscription >> -in nginx-plus-module-otel package. >> -After installation, the module can be loaded >> -dynamically. >> +(the >> +nginx-plus-module-otel package). > > I don't see reasons to provide additional links here. Rather, the > note probably can be removed altogether, or changed to something > like "In previuos versions, this module is available...". Removed, thanks [...] New version: # HG changeset patch # User Yaroslav Zhuravlev # Date 1704815768 0 # Tue Jan 09 15:56:08 2024 +0000 # Node ID 014598746fcb5dc953b15a6ea0de5410a7ecae6a # Parent e6b785b7e3082fcde152b59b460448a33ec7df64 Documented opensourcing of the OTel module. diff --git a/xml/en/docs/index.xml b/xml/en/docs/index.xml --- a/xml/en/docs/index.xml +++ b/xml/en/docs/index.xml @@ -8,7 +8,7 @@
@@ -681,6 +681,12 @@ ngx_mgmt_module + + + + + + ngx_otel_module diff --git a/xml/en/docs/ngx_otel_module.xml b/xml/en/docs/ngx_otel_module.xml --- a/xml/en/docs/ngx_otel_module.xml +++ b/xml/en/docs/ngx_otel_module.xml @@ -9,12 +9,14 @@ + rev="2">
-The ngx_otel_module module (1.23.4) provides +The ngx_otel_module module (1.23.4) is a +third-party module +that provides OpenTelemetry distributed tracing support. The module supports @@ -23,13 +25,11 @@ - -This module is available as part of our -commercial subscription -in nginx-plus-module-otel package. -After installation, the module can be loaded -dynamically. - +Download and install instructions are available +here. +The module is also available as a prebuilt +nginx-module-otel dynamic module +package (1.25.3).
diff --git a/xml/ru/docs/index.xml b/xml/ru/docs/index.xml --- a/xml/ru/docs/index.xml +++ b/xml/ru/docs/index.xml @@ -8,7 +8,7 @@
@@ -687,9 +687,15 @@ ngx_mgmt_module [en] + + + + + + -ngx_otel_module [en] +ngx_otel_module [en] From jan.prachar at gmail.com Tue Feb 6 12:36:20 2024 From: jan.prachar at gmail.com (Jan =?UTF-8?Q?Pracha=C5=99?=) Date: Tue, 06 Feb 2024 13:36:20 +0100 Subject: Segfault when interpreting cached X-Accel-Redirect response In-Reply-To: References: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> <8b0858db7e3e2bb82e814d57850d752162edd0c8.camel@gmail.com> Message-ID: <64aa3a5184956bc21daff13da09a54d4cb2a9c5b.camel@gmail.com> Hello, I have one last note bellow. On Tue, 2024-02-06 at 14:08 +0300, Maxim Dounin wrote: > Hello! > > On Tue, Feb 06, 2024 at 11:42:40AM +0100, Jan Prachař wrote: > > > Hello Maxim, > > > > On Tue, 2024-02-06 at 00:46 +0300, Maxim Dounin wrote: > > > Hello! > > > > > > On Mon, Feb 05, 2024 at 06:01:54PM +0100, Jan Prachař wrote: > > > > > > > Hello, > > > > > > > > thank you for your responses. > > > > > > > > On Sat, 2024-02-03 at 04:25 +0300, Maxim Dounin wrote: > > > > > Hello! > > > > > > > > > > On Fri, Feb 02, 2024 at 01:47:51PM +0100, Jan Prachař wrote: > > > > > > > > > > > On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote: > > > > > > > Hello, > > > > > > > > > > > > > > Also, I believe that the core of the problem is because of the > > > > > > > ngx_http_finalize_request(r, NGX_DONE); call in the > > > > > > > ngx_http_upstream_process_headers function. This call is needed when > > > > > > > doing an internal redirect after the real upstream request (to close the > > > > > > > upstream request), but when serving from the cache, there is no upstream > > > > > > > request to close and this call causes ngx_http_set_lingering_close to be > > > > > > > called from the ngx_http_finalize_connection with no active request on > > > > > > > the connection yielding to the segfault. > > > > > > > > > > > > Hello, > > > > > > > > > > > > I am Jiří's colleague, and so I have taken a closer look at the problem. Another > > > > > > indication of the issue is the alert in the error log for non-keepalive connections, > > > > > > stating "http request count is zero while closing request." > > > > > > > > > > > > Upon reviewing the nginx source code, I discovered that the function > > > > > > ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke > > > > > > ngx_http_finalize_request(). However, when there is nothing to clean up (u->cleanup == > > > > > > NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below. > > > > > > > > > > > > Best, Jan Prachař > > > > > > > > > > > > # User Jan Prachař > > > > > > # Date 1706877176 -3600 > > > > > > # Fri Feb 02 13:32:56 2024 +0100 > > > > > > # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2 > > > > > > # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01 > > > > > > Upstream: Fix "request count is zero" when procesing X-Accel-Redirect > > > > > > > > > > > > ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call > > > > > > ngx_http_finalize_request(). 
> > > > > > > > > > > > diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c > > > > > > --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100 > > > > > > +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100 > > > > > > @@ -4340,6 +4340,11 @@ > > > > > > > > > > > > if (u->cleanup == NULL) { > > > > > > /* the request was already finalized */ > > > > > > + > > > > > > + if (rc == NGX_DECLINED) { > > > > > > + return; > > > > > > + } > > > > > > + > > > > > > ngx_http_finalize_request(r, NGX_DONE); > > > > > > return; > > > > > > } > > > > > > > > > > I somewhat agree: the approach suggested by Jiří certainly looks > > > > > incorrect. The ngx_http_upstream_cache_send() function, which > > > > > calls ngx_http_upstream_process_headers() with r->cached set, can > > > > > be used in two contexts: before the cleanup handler is installed > > > > > (i.e., when sending a cached response during upstream request > > > > > initialization) and after it is installed (i.e., when sending a > > > > > stale cached response on upstream errors). In the latter case > > > > > skipping finalization would mean a socket leak. > > > > > > > > > > Still, checking for NGX_DECLINED explicitly also looks wrong, for > > > > > a number of reasons. > > > > > > > > > > First, the specific code path isn't just for "nothing to clean > > > > > up", it's for the very specific case when the request was already > > > > > finalized due to filter finalization, see 5994:5abf5af257a7. This > > > > > code path is not expected to be triggered when the cleanup handler > > > > > isn't installed yet - before the cleanup handler is installed, > > > > > upstream code is expected to call ngx_http_finalize_request() > > > > > directly instead. And it would be semantically wrong to check for > > > > > NGX_DECLINED: if it's here, it means something already gone wrong. > > > > > > > > > > I think the generic issue here is that > > > > > ngx_http_upstream_process_headers(), which is normally used for > > > > > upstream responses and calls ngx_http_upstream_finalize_request(), > > > > > is also used for cached responses. Still, it assumes it is used > > > > > for an upstream response, and calls > > > > > ngx_http_upstream_finalize_request(). > > > > > > > > > > As can be seen from the rest of the > > > > > ngx_http_upstream_process_headers() code, apart from the issue > > > > > with X-Accel-Redirect, it can also call > > > > > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) > > > > > when hh->copy_handler() or ngx_http_upstream_copy_header_line() > > > > > fails. This will similarly end up in > > > > > ngx_http_finalize_request(NGX_DONE) since there is no u->cleanup, > > > > > leading to a request hang. And it would be certainly wrong to > > > > > check for NGX_HTTP_INTERNAL_SERVER_ERROR similarly to NGX_DECLINED > > > > > in your patch, because it can theoretically happen after filter > > > > > finalization. > > > > > > > > > > Proper solution would probably require re-thinking > > > > > ngx_http_upstream_process_headers() interface. 
> > > > > > > > > > Some preliminary code below: it disables X-Accel-Redirect > > > > > processing altogether if ngx_http_upstream_process_headers() is > > > > > called when returning a cached response (this essentially means > > > > > that "proxy_ignore_headers X-Accel-Expires" is preserved in the > > > > > cache file, which seems to be the right thing to do as we don't > > > > > save responses with X-Accel-Redirect to cache unless it is > > > > > ignored), and returns NGX_ERROR in other places to trigger > > > > > appropriate error handling instead of calling > > > > > ngx_http_upstream_finalize_request() directly (this no longer > > > > > tries to return 500 Internal Server Error response though, as > > > > > doing so might be unsafe after copying some of the cached headers > > > > > to the response). > > > > > > > > > > Please take a look if it works for you. > > > > > > > > The provided patch works as expected, with no observed issues. > > > > > > > > Considering that proxy_ignore_headers for caching headers is preserved with the > > > > cached file, it seems reasonable to extend the same behavior to > > > > X-Accel-Redirect. > > > > > > Yes, such handling is (mostly) in line with some > > > proxy_ignore_headers handling, that is, X-Accel-Expires, Expires, > > > Cache-Control, Set-Cookie, Vary, and X-Accel-Buffering, as these > > > affect creation of a cache file, but not sending an already cached > > > response to clients. > > > > > > Still, X-Accel-Limit-Rate from a cache file will be applied to the > > > response if not ignored by the current configuration. Similarly, > > > X-Accel-Charset is also applied as long as no longer ignored. > > > > > > As such, I mostly consider this to be a neutral argument. > > > > > > Further, we might reconsider X-Accel-Redirect handling if caching > > > of X-Accel-Redirect responses will be introduced (see > > > https://trac.nginx.org/nginx/ticket/407 for a feature request). > > > > > > > From my perspective, the updated code in ngx_http_upstream_process_headers() is > > > > a bit confusing. The function can return NGX_DONE, but this return code is only > > > > handled in one place where ngx_http_upstream_process_headers() is called. > > > > > > I've removed NGX_DONE handling from the other call since NGX_DONE > > > return code isn't possible there due to r->cached being set just > > > before the call. > > > > > > We can instead assume it can be returned and handle appropriately: > > > this will also make handling X-Accel-Redirect from cached files > > > easier if we'll decide to (instead of checking r->cached, we'll > > > have to call ngx_http_upstream_finalize_request(NGX_DECLINED) > > > conditionally, only if u->cleanup is set). > > > > > > > If I may suggest, splitting the function might be helpful – redirect processing > > > > would only occur for direct upstream responses, while the rest of the header > > > > processing would be called always (i.e., also for cached responses). > > > > > > I can't say I like this idea. Processing of X-Accel-Redirect is a > > > part of headers processing, and quite naturally handled in > > > ngx_http_upstream_process_headers(). Moving it to a separate function > > > will needlessly complicate things. > > > > > > > Additionally, I believe the special handling of NGX_DECLINED in > > > > ngx_http_upstream_finalize_request() can be removed. The updated patch is > > > > provided below. > > > > > > Not really. 
The ngx_http_upstream_finalize_request(NGX_DECLINED) > > > call ensures that the upstream handling is properly finalized, > > > notably the upstream connection is closed. For short responses > > > after X-Accel-Redirect, this might not be important, because the > > > upstream connection will be closed anyway during request > > > finalization. But if the redirected request processing takes a > > > while, the upstream connection will still be open, and might > > > receive further events - leading to unexpected behaviour (not to > > > mention that various upstream timing variables, such as > > > $upstream_response_time, will be wrong). > > > > In my previous patch I replaced > > > > ngx_http_upstream_finalize_request(NGX_DECLINED); > > > > by > > > > r->count++; > > ngx_http_upstream_finalize_request(NGX_DONE); > > > > The upstream connection is still finalized and closed, allowing > > for the removal of the special handling of NGX_DECLINED from > > ngx_http_upstream_finalize_request(). > > Ah, sorry, missed this. > > Yes, r->count++ followed by a real request finalization is a > possible alternative to special handling of NGX_DECLINED without > calling ngx_http_finalize_request(). Still, without special > handling in ngx_http_upstream_finalize_request() this won't be > entirely correct: as can be seen from the code, c->log->action > will be incorrectly set to "sending to client". > > > > > > > Below is a patch which preserves proper NGX_DONE processing, and > > > handles X-Accel-Redirect from cached files by checking r->cleanup > > > when calling ngx_http_upstream_finalize_request(NGX_DECLINED). I > > > tend to think this might be the best solution after all, providing > > > better compatibility for further improvements. > > > > > > # HG changeset patch > > > # User Maxim Dounin > > > # Date 1707167064 -10800 > > > # Tue Feb 06 00:04:24 2024 +0300 > > > # Node ID 6e7f0d6d857473517048b8838923253d5230ace0 > > > # Parent 631ee3c6d38cfdf97dec67c3d2c457af5d91db01 > > > Upstream: fixed X-Accel-Redirect handling from cache files. > > > > > > The X-Accel-Redirect header might appear in cache files if its handling > > > is ignored with the "proxy_ignore_headers" directive. If the cache file > > > is later served with different settings, ngx_http_upstream_process_headers() > > > used to call ngx_http_upstream_finalize_request(NGX_DECLINED), which > > > is not expected to happen before the cleanup handler is installed and > > > resulted in ngx_http_finalize_request(NGX_DONE), leading to unexpected > > > request counter decrement, "request count is zero" alerts, and segmentation > > > faults. > > > > > > Similarly, errors in ngx_http_upstream_process_headers() resulted in > > > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) being > > > called. This is also not expected to happen before the cleanup handler is > > > installed, and resulted in ngx_http_finalize_request(NGX_DONE) without > > > proper request finalization. > > > > > > Fix is to avoid calling ngx_http_upstream_finalize_request() from > > > ngx_http_upstream_process_headers(), notably when the cleanup handler > > > is not yet installed. Errors are now simply return NGX_ERROR, so the > > > caller is responsible for proper finalization by calling either > > > ngx_http_finalize_request() or ngx_http_upstream_finalize_request(). > > > And X-Accel-Redirect handling now does not call > > > ngx_http_upstream_finalize_request(NGX_DECLINED) if no cleanup handler > > > is installed. 
> > > > > > Reported by Jiří Setnička > > > (https://mailman.nginx.org/pipermail/nginx-devel/2024-February/HWLYHOO3DDB3XTFT6X3GRMXIEJ3SJRUA.html). It might be worth mentioning that it has been broken since commit 5994:5abf5af257a7. > > > > > > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > > > --- a/src/http/ngx_http_upstream.c > > > +++ b/src/http/ngx_http_upstream.c > > > @@ -1087,8 +1087,10 @@ ngx_http_upstream_cache_send(ngx_http_re > > > > > > if (rc == NGX_OK) { > > > > > > - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { > > > - return NGX_DONE; > > > + rc = ngx_http_upstream_process_headers(r, u); > > > + > > > + if (rc != NGX_OK) { > > > + return rc; > > > } > > > > > > return ngx_http_cache_send(r); > > > @@ -2516,7 +2518,14 @@ ngx_http_upstream_process_header(ngx_htt > > > } > > > } > > > > > > - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { > > > + rc = ngx_http_upstream_process_headers(r, u); > > > + > > > + if (rc == NGX_DONE) { > > > + return; > > > + } > > > + > > > + if (rc == NGX_ERROR) { > > > + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); > > > return; > > > } > > > > > > @@ -2829,7 +2838,9 @@ ngx_http_upstream_process_headers(ngx_ht > > > if (u->headers_in.x_accel_redirect > > > && !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT)) > > > { > > > - ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); > > > + if (u->cleanup) { > > > + ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); > > > + } > > > > > > part = &u->headers_in.headers.part; > > > h = part->elts; > > > > Just a note. If you move ngx_http_upstream_finalize_request() bellow > > the for loop that copies upstream headers, then this change is also possible: > > > > @@ -2855,13 +2851,15 @@ ngx_http_upstream_process_headers(ngx_http_request_t *r, > > ngx_http_upstream_t *u) > > > > if (hh && hh->redirect) { > > if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) { > > - ngx_http_finalize_request(r, > > - NGX_HTTP_INTERNAL_SERVER_ERROR); > > - return NGX_DONE; > > + return NGX_ERROR; > > } > > } > > > > > > I don't think it worth the effort, especially given > ngx_http_finalize_request(NGX_HTTP_NOT_FOUND) below. > > [...] > From mdounin at mdounin.ru Tue Feb 6 16:33:09 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Feb 2024 19:33:09 +0300 Subject: Segfault when interpreting cached X-Accel-Redirect response In-Reply-To: <64aa3a5184956bc21daff13da09a54d4cb2a9c5b.camel@gmail.com> References: <6f1c6e52-e8db-4158-9c97-ae6015f7e175@cdn77.com> <883bd10bacbd455bd00d6cbdb9ec42d38eaeb263.camel@gmail.com> <8b0858db7e3e2bb82e814d57850d752162edd0c8.camel@gmail.com> <64aa3a5184956bc21daff13da09a54d4cb2a9c5b.camel@gmail.com> Message-ID: Hello! On Tue, Feb 06, 2024 at 01:36:20PM +0100, Jan Prachař wrote: > Hello, > > I have one last note bellow. > > On Tue, 2024-02-06 at 14:08 +0300, Maxim Dounin wrote: > > Hello! > > > > On Tue, Feb 06, 2024 at 11:42:40AM +0100, Jan Prachař wrote: > > > > > Hello Maxim, > > > > > > On Tue, 2024-02-06 at 00:46 +0300, Maxim Dounin wrote: > > > > Hello! > > > > > > > > On Mon, Feb 05, 2024 at 06:01:54PM +0100, Jan Prachař wrote: > > > > > > > > > Hello, > > > > > > > > > > thank you for your responses. > > > > > > > > > > On Sat, 2024-02-03 at 04:25 +0300, Maxim Dounin wrote: > > > > > > Hello! 
> > > > > > > > > > > > On Fri, Feb 02, 2024 at 01:47:51PM +0100, Jan Prachař wrote: > > > > > > > > > > > > > On Fri, 2024-02-02 at 12:48 +0100, Jiří Setnička via nginx-devel wrote: > > > > > > > > Hello, > > > > > > > > > > > > > > > > Also, I believe that the core of the problem is because of the > > > > > > > > ngx_http_finalize_request(r, NGX_DONE); call in the > > > > > > > > ngx_http_upstream_process_headers function. This call is needed when > > > > > > > > doing an internal redirect after the real upstream request (to close the > > > > > > > > upstream request), but when serving from the cache, there is no upstream > > > > > > > > request to close and this call causes ngx_http_set_lingering_close to be > > > > > > > > called from the ngx_http_finalize_connection with no active request on > > > > > > > > the connection yielding to the segfault. > > > > > > > > > > > > > > Hello, > > > > > > > > > > > > > > I am Jiří's colleague, and so I have taken a closer look at the problem. Another > > > > > > > indication of the issue is the alert in the error log for non-keepalive connections, > > > > > > > stating "http request count is zero while closing request." > > > > > > > > > > > > > > Upon reviewing the nginx source code, I discovered that the function > > > > > > > ngx_http_upstream_finalize_request(), when called with rc = NGX_DECLINED, does not invoke > > > > > > > ngx_http_finalize_request(). However, when there is nothing to clean up (u->cleanup == > > > > > > > NULL), it does. Therefore, I believe the appropriate fix is to follow the patch below. > > > > > > > > > > > > > > Best, Jan Prachař > > > > > > > > > > > > > > # User Jan Prachař > > > > > > > # Date 1706877176 -3600 > > > > > > > # Fri Feb 02 13:32:56 2024 +0100 > > > > > > > # Node ID 851c994b48c48c9cd3d32b9aa402f4821aeb8bb2 > > > > > > > # Parent cf3d537ec6706f8713a757df256f2cfccb8f9b01 > > > > > > > Upstream: Fix "request count is zero" when procesing X-Accel-Redirect > > > > > > > > > > > > > > ngx_http_upstream_finalize_request(r, u, NGX_DECLINED) should not call > > > > > > > ngx_http_finalize_request(). > > > > > > > > > > > > > > diff -r cf3d537ec670 -r 851c994b48c4 src/http/ngx_http_upstream.c > > > > > > > --- a/src/http/ngx_http_upstream.c Thu Nov 26 21:00:25 2020 +0100 > > > > > > > +++ b/src/http/ngx_http_upstream.c Fri Feb 02 13:32:56 2024 +0100 > > > > > > > @@ -4340,6 +4340,11 @@ > > > > > > > > > > > > > > if (u->cleanup == NULL) { > > > > > > > /* the request was already finalized */ > > > > > > > + > > > > > > > + if (rc == NGX_DECLINED) { > > > > > > > + return; > > > > > > > + } > > > > > > > + > > > > > > > ngx_http_finalize_request(r, NGX_DONE); > > > > > > > return; > > > > > > > } > > > > > > > > > > > > I somewhat agree: the approach suggested by Jiří certainly looks > > > > > > incorrect. The ngx_http_upstream_cache_send() function, which > > > > > > calls ngx_http_upstream_process_headers() with r->cached set, can > > > > > > be used in two contexts: before the cleanup handler is installed > > > > > > (i.e., when sending a cached response during upstream request > > > > > > initialization) and after it is installed (i.e., when sending a > > > > > > stale cached response on upstream errors). In the latter case > > > > > > skipping finalization would mean a socket leak. > > > > > > > > > > > > Still, checking for NGX_DECLINED explicitly also looks wrong, for > > > > > > a number of reasons. 
> > > > > > > > > > > > First, the specific code path isn't just for "nothing to clean > > > > > > up", it's for the very specific case when the request was already > > > > > > finalized due to filter finalization, see 5994:5abf5af257a7. This > > > > > > code path is not expected to be triggered when the cleanup handler > > > > > > isn't installed yet - before the cleanup handler is installed, > > > > > > upstream code is expected to call ngx_http_finalize_request() > > > > > > directly instead. And it would be semantically wrong to check for > > > > > > NGX_DECLINED: if it's here, it means something already gone wrong. > > > > > > > > > > > > I think the generic issue here is that > > > > > > ngx_http_upstream_process_headers(), which is normally used for > > > > > > upstream responses and calls ngx_http_upstream_finalize_request(), > > > > > > is also used for cached responses. Still, it assumes it is used > > > > > > for an upstream response, and calls > > > > > > ngx_http_upstream_finalize_request(). > > > > > > > > > > > > As can be seen from the rest of the > > > > > > ngx_http_upstream_process_headers() code, apart from the issue > > > > > > with X-Accel-Redirect, it can also call > > > > > > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) > > > > > > when hh->copy_handler() or ngx_http_upstream_copy_header_line() > > > > > > fails. This will similarly end up in > > > > > > ngx_http_finalize_request(NGX_DONE) since there is no u->cleanup, > > > > > > leading to a request hang. And it would be certainly wrong to > > > > > > check for NGX_HTTP_INTERNAL_SERVER_ERROR similarly to NGX_DECLINED > > > > > > in your patch, because it can theoretically happen after filter > > > > > > finalization. > > > > > > > > > > > > Proper solution would probably require re-thinking > > > > > > ngx_http_upstream_process_headers() interface. > > > > > > > > > > > > Some preliminary code below: it disables X-Accel-Redirect > > > > > > processing altogether if ngx_http_upstream_process_headers() is > > > > > > called when returning a cached response (this essentially means > > > > > > that "proxy_ignore_headers X-Accel-Expires" is preserved in the > > > > > > cache file, which seems to be the right thing to do as we don't > > > > > > save responses with X-Accel-Redirect to cache unless it is > > > > > > ignored), and returns NGX_ERROR in other places to trigger > > > > > > appropriate error handling instead of calling > > > > > > ngx_http_upstream_finalize_request() directly (this no longer > > > > > > tries to return 500 Internal Server Error response though, as > > > > > > doing so might be unsafe after copying some of the cached headers > > > > > > to the response). > > > > > > > > > > > > Please take a look if it works for you. > > > > > > > > > > The provided patch works as expected, with no observed issues. > > > > > > > > > > Considering that proxy_ignore_headers for caching headers is preserved with the > > > > > cached file, it seems reasonable to extend the same behavior to > > > > > X-Accel-Redirect. > > > > > > > > Yes, such handling is (mostly) in line with some > > > > proxy_ignore_headers handling, that is, X-Accel-Expires, Expires, > > > > Cache-Control, Set-Cookie, Vary, and X-Accel-Buffering, as these > > > > affect creation of a cache file, but not sending an already cached > > > > response to clients. > > > > > > > > Still, X-Accel-Limit-Rate from a cache file will be applied to the > > > > response if not ignored by the current configuration. 
Similarly, > > > > X-Accel-Charset is also applied as long as no longer ignored. > > > > > > > > As such, I mostly consider this to be a neutral argument. > > > > > > > > Further, we might reconsider X-Accel-Redirect handling if caching > > > > of X-Accel-Redirect responses will be introduced (see > > > > https://trac.nginx.org/nginx/ticket/407 for a feature request). > > > > > > > > > From my perspective, the updated code in ngx_http_upstream_process_headers() is > > > > > a bit confusing. The function can return NGX_DONE, but this return code is only > > > > > handled in one place where ngx_http_upstream_process_headers() is called. > > > > > > > > I've removed NGX_DONE handling from the other call since NGX_DONE > > > > return code isn't possible there due to r->cached being set just > > > > before the call. > > > > > > > > We can instead assume it can be returned and handle appropriately: > > > > this will also make handling X-Accel-Redirect from cached files > > > > easier if we'll decide to (instead of checking r->cached, we'll > > > > have to call ngx_http_upstream_finalize_request(NGX_DECLINED) > > > > conditionally, only if u->cleanup is set). > > > > > > > > > If I may suggest, splitting the function might be helpful – redirect processing > > > > > would only occur for direct upstream responses, while the rest of the header > > > > > processing would be called always (i.e., also for cached responses). > > > > > > > > I can't say I like this idea. Processing of X-Accel-Redirect is a > > > > part of headers processing, and quite naturally handled in > > > > ngx_http_upstream_process_headers(). Moving it to a separate function > > > > will needlessly complicate things. > > > > > > > > > Additionally, I believe the special handling of NGX_DECLINED in > > > > > ngx_http_upstream_finalize_request() can be removed. The updated patch is > > > > > provided below. > > > > > > > > Not really. The ngx_http_upstream_finalize_request(NGX_DECLINED) > > > > call ensures that the upstream handling is properly finalized, > > > > notably the upstream connection is closed. For short responses > > > > after X-Accel-Redirect, this might not be important, because the > > > > upstream connection will be closed anyway during request > > > > finalization. But if the redirected request processing takes a > > > > while, the upstream connection will still be open, and might > > > > receive further events - leading to unexpected behaviour (not to > > > > mention that various upstream timing variables, such as > > > > $upstream_response_time, will be wrong). > > > > > > In my previous patch I replaced > > > > > > ngx_http_upstream_finalize_request(NGX_DECLINED); > > > > > > by > > > > > > r->count++; > > > ngx_http_upstream_finalize_request(NGX_DONE); > > > > > > The upstream connection is still finalized and closed, allowing > > > for the removal of the special handling of NGX_DECLINED from > > > ngx_http_upstream_finalize_request(). > > > > Ah, sorry, missed this. > > > > Yes, r->count++ followed by a real request finalization is a > > possible alternative to special handling of NGX_DECLINED without > > calling ngx_http_finalize_request(). Still, without special > > handling in ngx_http_upstream_finalize_request() this won't be > > entirely correct: as can be seen from the code, c->log->action > > will be incorrectly set to "sending to client". 
> > > > > > > > > > Below is a patch which preserves proper NGX_DONE processing, and > > > > handles X-Accel-Redirect from cached files by checking r->cleanup > > > > when calling ngx_http_upstream_finalize_request(NGX_DECLINED). I > > > > tend to think this might be the best solution after all, providing > > > > better compatibility for further improvements. > > > > > > > > # HG changeset patch > > > > # User Maxim Dounin > > > > # Date 1707167064 -10800 > > > > # Tue Feb 06 00:04:24 2024 +0300 > > > > # Node ID 6e7f0d6d857473517048b8838923253d5230ace0 > > > > # Parent 631ee3c6d38cfdf97dec67c3d2c457af5d91db01 > > > > Upstream: fixed X-Accel-Redirect handling from cache files. > > > > > > > > The X-Accel-Redirect header might appear in cache files if its handling > > > > is ignored with the "proxy_ignore_headers" directive. If the cache file > > > > is later served with different settings, ngx_http_upstream_process_headers() > > > > used to call ngx_http_upstream_finalize_request(NGX_DECLINED), which > > > > is not expected to happen before the cleanup handler is installed and > > > > resulted in ngx_http_finalize_request(NGX_DONE), leading to unexpected > > > > request counter decrement, "request count is zero" alerts, and segmentation > > > > faults. > > > > > > > > Similarly, errors in ngx_http_upstream_process_headers() resulted in > > > > ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) being > > > > called. This is also not expected to happen before the cleanup handler is > > > > installed, and resulted in ngx_http_finalize_request(NGX_DONE) without > > > > proper request finalization. > > > > > > > > Fix is to avoid calling ngx_http_upstream_finalize_request() from > > > > ngx_http_upstream_process_headers(), notably when the cleanup handler > > > > is not yet installed. Errors are now simply return NGX_ERROR, so the > > > > caller is responsible for proper finalization by calling either > > > > ngx_http_finalize_request() or ngx_http_upstream_finalize_request(). > > > > And X-Accel-Redirect handling now does not call > > > > ngx_http_upstream_finalize_request(NGX_DECLINED) if no cleanup handler > > > > is installed. > > > > > > > > Reported by Jiří Setnička > > > > (https://mailman.nginx.org/pipermail/nginx-devel/2024-February/HWLYHOO3DDB3XTFT6X3GRMXIEJ3SJRUA.html). > > It might be worth mentioning that it has been broken > since commit 5994:5abf5af257a7. I don't think it really matters, given that 5994:5abf5af257a7 happened almost 10 years ago, in nginx 1.7.11. (And even before 5994:5abf5af257a7, the behaviour wasn't semantically correct, though was safe in practice.) OTOH, I have no objections to mention this, adjusted the first paragraph of the commit log as follows: ... resulted in ngx_http_finalize_request(NGX_DONE) (after 5994:5abf5af257a7, nginx 1.7.11), leading to ... Just in case, full patch below. # HG changeset patch # User Maxim Dounin # Date 1707231687 -10800 # Tue Feb 06 18:01:27 2024 +0300 # Node ID 0eb9c806b2827cd5cc409db31e87dd4a9f1d15b0 # Parent 631ee3c6d38cfdf97dec67c3d2c457af5d91db01 Upstream: fixed X-Accel-Redirect handling from cache files. The X-Accel-Redirect header might appear in cache files if its handling is ignored with the "proxy_ignore_headers" directive. 
If the cache file is later served with different settings, ngx_http_upstream_process_headers() used to call ngx_http_upstream_finalize_request(NGX_DECLINED), which is not expected to happen before the cleanup handler is installed and resulted in ngx_http_finalize_request(NGX_DONE) (after 5994:5abf5af257a7, nginx 1.7.11), leading to unexpected request counter decrement, "request count is zero" alerts, and segmentation faults. Similarly, errors in ngx_http_upstream_process_headers() resulted in ngx_http_upstream_finalize_request(NGX_HTTP_INTERNAL_SERVER_ERROR) being called. This is also not expected to happen before the cleanup handler is installed, and resulted in ngx_http_finalize_request(NGX_DONE) without proper request finalization. Fix is to avoid calling ngx_http_upstream_finalize_request() from ngx_http_upstream_process_headers(), notably when the cleanup handler is not yet installed. Errors are now simply return NGX_ERROR, so the caller is responsible for proper finalization by calling either ngx_http_finalize_request() or ngx_http_upstream_finalize_request(). And X-Accel-Redirect handling now does not call ngx_http_upstream_finalize_request(NGX_DECLINED) if no cleanup handler is installed. Reported by Jiří Setnička (https://mailman.nginx.org/pipermail/nginx-devel/2024-February/HWLYHOO3DDB3XTFT6X3GRMXIEJ3SJRUA.html). diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1087,8 +1087,10 @@ ngx_http_upstream_cache_send(ngx_http_re if (rc == NGX_OK) { - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { - return NGX_DONE; + rc = ngx_http_upstream_process_headers(r, u); + + if (rc != NGX_OK) { + return rc; } return ngx_http_cache_send(r); @@ -2516,7 +2518,14 @@ ngx_http_upstream_process_header(ngx_htt } } - if (ngx_http_upstream_process_headers(r, u) != NGX_OK) { + rc = ngx_http_upstream_process_headers(r, u); + + if (rc == NGX_DONE) { + return; + } + + if (rc == NGX_ERROR) { + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2829,7 +2838,9 @@ ngx_http_upstream_process_headers(ngx_ht if (u->headers_in.x_accel_redirect && !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT)) { - ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); + if (u->cleanup) { + ngx_http_upstream_finalize_request(r, u, NGX_DECLINED); + } part = &u->headers_in.headers.part; h = part->elts; @@ -2918,18 +2929,14 @@ ngx_http_upstream_process_headers(ngx_ht if (hh) { if (hh->copy_handler(r, &h[i], hh->conf) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } continue; } if (ngx_http_upstream_copy_header_line(r, &h[i], 0) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, - NGX_HTTP_INTERNAL_SERVER_ERROR); - return NGX_DONE; + return NGX_ERROR; } } -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Wed Feb 7 04:28:43 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Wed, 07 Feb 2024 04:28:43 +0000 Subject: [njs] Reverted changes introduced in 7eaaa7d57636 (not released) back. Message-ID: details: https://hg.nginx.org/njs/rev/25e548de3d61 branches: changeset: 2280:25e548de3d61 user: Dmitry Volyntsev date: Tue Feb 06 19:32:08 2024 -0800 description: Reverted changes introduced in 7eaaa7d57636 (not released) back. Relative importing is again supported. 
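Since the restored behaviour resolves a relative import against the directory of the importing file, the directory-stripping rule used by the njs_file_dirname()/ngx_js_file_dirname() helpers in the diff below can be tried out in isolation. The following standalone program mirrors that rule with plain C strings; the function name and sample paths are made up for illustration, and the real helpers operate on njs_str_t/ngx_str_t instead:

    #include <stdio.h>
    #include <string.h>

    /* Same rule as the helpers below: drop the basename, then any trailing
     * slashes; an empty result means the current directory ("."), and a
     * result that would be empty for a rooted path keeps the leading "/". */
    static void
    dirname_of(const char *path, char *out, size_t size)
    {
        long  i, end, len;

        len = (long) strlen(path);

        if (len == 0) {
            snprintf(out, size, ".");
            return;
        }

        i = len - 1;

        /* stripping basename */
        while (i >= 0 && path[i] != '/') { i--; }

        end = i + 1;

        if (end == 0) {
            snprintf(out, size, ".");
            return;
        }

        /* stripping trailing slashes */
        while (i >= 0 && path[i] == '/') { i--; }

        i++;

        if (i == 0) {
            i = end;
        }

        snprintf(out, size, "%.*s", (int) i, path);
    }

    int
    main(void)
    {
        static const char  *paths[] =
            { "lib/main.js", "lib/sub/foo.js", "foo.js", "/srv/js//mod.js", "" };
        char                dir[256];
        size_t              n;

        for (n = 0; n < sizeof(paths) / sizeof(paths[0]); n++) {
            dirname_of(paths[n], dir, sizeof(dir));
            printf("%-16s -> %s\n", paths[n], dir);
        }

        return 0;
    }

With these inputs the program prints "lib", "lib/sub", ".", "/srv/js" and ".", which is the directory each import would be resolved against.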
diffstat: external/njs_shell.c | 101 ++++++++++++++++++++++++++++++++++++- nginx/ngx_js.c | 88 ++++++++++++++++++++++++++++++++- nginx/ngx_js.h | 1 + nginx/t/js_import_relative.t | 100 +++++++++++++++++++++++++++++++++++++ test/js/import_relative_path.t.js | 9 +++ test/js/module/libs/hash.js | 1 + test/js/module/sub/sub3.js | 1 + 7 files changed, 296 insertions(+), 5 deletions(-) diffs (468 lines): diff -r 673d78618fc9 -r 25e548de3d61 external/njs_shell.c --- a/external/njs_shell.c Wed Jan 31 17:06:58 2024 -0800 +++ b/external/njs_shell.c Tue Feb 06 19:32:08 2024 -0800 @@ -115,6 +115,8 @@ typedef struct { njs_queue_t labels; + njs_str_t cwd; + njs_arr_t *rejected_promises; njs_bool_t suppress_stdout; @@ -693,6 +695,8 @@ njs_console_init(njs_vm_t *vm, njs_conso njs_queue_init(&console->posted_events); njs_queue_init(&console->labels); + njs_memzero(&console->cwd, sizeof(njs_str_t)); + console->rejected_promises = NULL; console->completion.completions = njs_vm_completions(vm, NULL); @@ -869,7 +873,7 @@ njs_module_path(const njs_str_t *dir, nj if (dir != NULL) { length += dir->length; - if (length == 0) { + if (length == 0 || dir->length == 0) { return NJS_DECLINED; } @@ -915,7 +919,8 @@ njs_module_path(const njs_str_t *dir, nj static njs_int_t -njs_module_lookup(njs_opts_t *opts, njs_module_info_t *info) +njs_module_lookup(njs_opts_t *opts, const njs_str_t *cwd, + njs_module_info_t *info) { njs_int_t ret; njs_str_t *path; @@ -925,6 +930,12 @@ njs_module_lookup(njs_opts_t *opts, njs_ return njs_module_path(NULL, info); } + ret = njs_module_path(cwd, info); + + if (ret != NJS_DECLINED) { + return ret; + } + path = opts->paths; for (i = 0; i < opts->n_paths; i++) { @@ -980,23 +991,86 @@ fail: } +static void +njs_file_dirname(const njs_str_t *path, njs_str_t *name) +{ + const u_char *p, *end; + + if (path->length == 0) { + goto current_dir; + } + + p = path->start + path->length - 1; + + /* Stripping basename. */ + + while (p >= path->start && *p != '/') { p--; } + + end = p + 1; + + if (end == path->start) { + goto current_dir; + } + + /* Stripping trailing slashes. 
*/ + + while (p >= path->start && *p == '/') { p--; } + + p++; + + if (p == path->start) { + p = end; + } + + name->start = path->start; + name->length = p - path->start; + + return; + +current_dir: + + *name = njs_str_value("."); +} + + +static njs_int_t +njs_console_set_cwd(njs_vm_t *vm, njs_console_t *console, njs_str_t *file) +{ + njs_str_t cwd; + + njs_file_dirname(file, &cwd); + + console->cwd.start = njs_mp_alloc(njs_vm_memory_pool(vm), cwd.length); + if (njs_slow_path(console->cwd.start == NULL)) { + return NJS_ERROR; + } + + memcpy(console->cwd.start, cwd.start, cwd.length); + console->cwd.length = cwd.length; + + return NJS_OK; +} + + static njs_mod_t * njs_module_loader(njs_vm_t *vm, njs_external_ptr_t external, njs_str_t *name) { u_char *start; njs_int_t ret; - njs_str_t text; + njs_str_t text, prev_cwd; njs_mod_t *module; njs_opts_t *opts; + njs_console_t *console; njs_module_info_t info; opts = external; + console = njs_vm_external_ptr(vm); njs_memzero(&info, sizeof(njs_module_info_t)); info.name = *name; - ret = njs_module_lookup(opts, &info); + ret = njs_module_lookup(opts, &console->cwd, &info); if (njs_slow_path(ret != NJS_OK)) { return NULL; } @@ -1010,11 +1084,23 @@ njs_module_loader(njs_vm_t *vm, njs_exte return NULL; } + prev_cwd = console->cwd; + + ret = njs_console_set_cwd(vm, console, &info.file); + if (njs_slow_path(ret != NJS_OK)) { + njs_vm_internal_error(vm, "while setting cwd for \"%V\" module", + &info.file); + return NULL; + } + start = text.start; module = njs_vm_compile_module(vm, &info.file, &start, &text.start[text.length]); + njs_mp_free(njs_vm_memory_pool(vm), console->cwd.start); + console->cwd = prev_cwd; + njs_mp_free(njs_vm_memory_pool(vm), text.start); return module; @@ -1025,6 +1111,7 @@ static njs_vm_t * njs_create_vm(njs_opts_t *opts) { njs_vm_t *vm; + njs_int_t ret; njs_vm_opt_t vm_options; njs_vm_opt_init(&vm_options); @@ -1068,6 +1155,12 @@ njs_create_vm(njs_opts_t *opts) njs_vm_external_ptr(vm)); } + ret = njs_console_set_cwd(vm, njs_vm_external_ptr(vm), &vm_options.file); + if (njs_slow_path(ret != NJS_OK)) { + njs_stderror("failed to set cwd\n"); + return NULL; + } + njs_vm_set_module_loader(vm, njs_module_loader, opts); return vm; diff -r 673d78618fc9 -r 25e548de3d61 nginx/ngx_js.c --- a/nginx/ngx_js.c Wed Jan 31 17:06:58 2024 -0800 +++ b/nginx/ngx_js.c Tue Feb 06 19:32:08 2024 -0800 @@ -1743,7 +1743,7 @@ ngx_js_module_path(const ngx_str_t *dir, if (dir != NULL) { length += dir->len; - if (length == 0) { + if (length == 0 || dir->len == 0) { return NJS_DECLINED; } @@ -1799,6 +1799,12 @@ ngx_js_module_lookup(ngx_js_loc_conf_t * return ngx_js_module_path(NULL, info); } + ret = ngx_js_module_path(&conf->cwd, info); + + if (ret != NJS_DECLINED) { + return ret; + } + ret = ngx_js_module_path((const ngx_str_t *) &ngx_cycle->conf_prefix, info); if (ret != NJS_DECLINED) { @@ -1864,12 +1870,74 @@ fail: } +static void +ngx_js_file_dirname(const njs_str_t *path, ngx_str_t *name) +{ + const u_char *p, *end; + + if (path->length == 0) { + goto current_dir; + } + + p = path->start + path->length - 1; + + /* Stripping basename. */ + + while (p >= path->start && *p != '/') { p--; } + + end = p + 1; + + if (end == path->start) { + goto current_dir; + } + + /* Stripping trailing slashes. 
*/ + + while (p >= path->start && *p == '/') { p--; } + + p++; + + if (p == path->start) { + p = end; + } + + name->data = path->start; + name->len = p - path->start; + + return; + +current_dir: + + ngx_str_set(name, "."); +} + + +static njs_int_t +ngx_js_set_cwd(njs_vm_t *vm, ngx_js_loc_conf_t *conf, njs_str_t *path) +{ + ngx_str_t cwd; + + ngx_js_file_dirname(path, &cwd); + + conf->cwd.data = njs_mp_alloc(njs_vm_memory_pool(vm), cwd.len); + if (conf->cwd.data == NULL) { + return NJS_ERROR; + } + + memcpy(conf->cwd.data, cwd.data, cwd.len); + conf->cwd.len = cwd.len; + + return NJS_OK; +} + + static njs_mod_t * ngx_js_module_loader(njs_vm_t *vm, njs_external_ptr_t external, njs_str_t *name) { u_char *start; njs_int_t ret; njs_str_t text; + ngx_str_t prev_cwd; njs_mod_t *module; ngx_js_loc_conf_t *conf; njs_module_info_t info; @@ -1894,11 +1962,23 @@ ngx_js_module_loader(njs_vm_t *vm, njs_e return NULL; } + prev_cwd = conf->cwd; + + ret = ngx_js_set_cwd(vm, conf, &info.file); + if (ret != NJS_OK) { + njs_vm_internal_error(vm, "while setting cwd for \"%V\" module", + &info.file); + return NULL; + } + start = text.start; module = njs_vm_compile_module(vm, &info.file, &start, &text.start[text.length]); + njs_mp_free(njs_vm_memory_pool(vm), conf->cwd.data); + conf->cwd = prev_cwd; + njs_mp_free(njs_vm_memory_pool(vm), text.start); return module; @@ -1985,6 +2065,12 @@ ngx_js_init_conf_vm(ngx_conf_t *cf, ngx_ njs_vm_set_rejection_tracker(conf->vm, ngx_js_rejection_tracker, NULL); + rc = ngx_js_set_cwd(conf->vm, conf, &options->file); + if (rc != NJS_OK) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, "failed to set cwd"); + return NGX_ERROR; + } + njs_vm_set_module_loader(conf->vm, ngx_js_module_loader, conf); if (conf->paths != NGX_CONF_UNSET_PTR) { diff -r 673d78618fc9 -r 25e548de3d61 nginx/ngx_js.h --- a/nginx/ngx_js.h Wed Jan 31 17:06:58 2024 -0800 +++ b/nginx/ngx_js.h Tue Feb 06 19:32:08 2024 -0800 @@ -83,6 +83,7 @@ struct ngx_js_event_s { #define _NGX_JS_COMMON_LOC_CONF \ njs_vm_t *vm; \ + ngx_str_t cwd; \ ngx_array_t *imports; \ ngx_array_t *paths; \ \ diff -r 673d78618fc9 -r 25e548de3d61 nginx/t/js_import_relative.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/nginx/t/js_import_relative.t Tue Feb 06 19:32:08 2024 -0800 @@ -0,0 +1,100 @@ +#!/usr/bin/perl + +# (C) Dmitry Volyntsev +# (c) Nginx, Inc. + +# Tests for http njs module, js_import directive, importing relative paths. 
+ +############################################################################### + +use warnings; +use strict; + +use Test::More; + +BEGIN { use FindBin; chdir($FindBin::Bin); } + +use lib 'lib'; +use Test::Nginx; + +############################################################################### + +select STDERR; $| = 1; +select STDOUT; $| = 1; + +my $t = Test::Nginx->new()->has(qw/http/) + ->write_file_expand('nginx.conf', <<'EOF'); + +%%TEST_GLOBALS%% + +daemon off; + +events { +} + +http { + %%TEST_GLOBALS_HTTP%% + + js_import main from lib/main.js; + + server { + listen 127.0.0.1:8080; + server_name localhost; + + location /local { + js_content main.test_local; + } + + location /top { + js_content main.test_top; + } + } +} + +EOF + +my $d = $t->testdir(); + +mkdir("$d/lib"); +mkdir("$d/lib/sub"); + +$t->write_file('lib/main.js', <write_file('lib/sub/foo.js', <write_file('lib/foo.js', <write_file('foo.js', <try_run('no njs available')->plan(2); + +############################################################################### + +like(http_get('/local'), qr/LOCAL/s, 'local relative import'); +like(http_get('/top'), qr/TOP/s, 'local relative import 2'); + +############################################################################### diff -r 673d78618fc9 -r 25e548de3d61 test/js/import_relative_path.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_relative_path.t.js Tue Feb 06 19:32:08 2024 -0800 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module/] +---*/ + +import hash from 'libs/hash.js'; + +assert.sameValue(hash.name, "libs.name"); diff -r 673d78618fc9 -r 25e548de3d61 test/js/module/libs/hash.js --- a/test/js/module/libs/hash.js Wed Jan 31 17:06:58 2024 -0800 +++ b/test/js/module/libs/hash.js Tue Feb 06 19:32:08 2024 -0800 @@ -4,6 +4,7 @@ function hash() { return v; } +import sub from 'sub/sub3.js'; import name from 'name.js'; import crypto from 'crypto'; diff -r 673d78618fc9 -r 25e548de3d61 test/js/module/sub/sub3.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/sub/sub3.js Tue Feb 06 19:32:08 2024 -0800 @@ -0,0 +1,1 @@ +export default { name: "SUB3" }; From xeioex at nginx.com Wed Feb 7 16:36:13 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Wed, 07 Feb 2024 16:36:13 +0000 Subject: [njs] Version 0.8.3. Message-ID: details: https://hg.nginx.org/njs/rev/3aba7ee62080 branches: changeset: 2281:3aba7ee62080 user: Dmitry Volyntsev date: Wed Feb 07 08:34:00 2024 -0800 description: Version 0.8.3. diffstat: CHANGES | 31 +++++++++++++++++++++++++++++++ 1 files changed, 31 insertions(+), 0 deletions(-) diffs (38 lines): diff -r 25e548de3d61 -r 3aba7ee62080 CHANGES --- a/CHANGES Tue Feb 06 19:32:08 2024 -0800 +++ b/CHANGES Wed Feb 07 08:34:00 2024 -0800 @@ -1,3 +1,34 @@ +Changes with njs 0.8.3 07 Feb 2024 + + nginx modules: + + *) Bugfix: fixed Headers.set(). + + *) Bugfix: fixed js_set with Buffer values. + + *) Bugfix: fixed clear() method of a shared dictionary when + a timeout is not specified. + + *) Bugfix: fixed stub_status statistics when js_periodic is + enabled. + + Core: + + *) Bugfix: fixed building with libxml2 2.12 and later. + + *) Bugfix: fixed Date constructor for overflows and with + NaN values. + + *) Bugfix: fixed underflow in querystring.parse(). + + *) Bugfix: fixed potential buffer overread in + String.prototype.match(). + + *) Bugfix: fixed parsing of for-in loops. + + *) Bugfix: fixed parsing of hexadecimal, octal, and binary + literals with no digits. 
+ Changes with njs 0.8.2 24 Oct 2023 nginx modules: From xeioex at nginx.com Wed Feb 7 16:36:15 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Wed, 07 Feb 2024 16:36:15 +0000 Subject: [njs] Added tag 0.8.3 for changeset 3aba7ee62080 Message-ID: details: https://hg.nginx.org/njs/rev/f98dd6884786 branches: changeset: 2282:f98dd6884786 user: Dmitry Volyntsev date: Wed Feb 07 08:34:17 2024 -0800 description: Added tag 0.8.3 for changeset 3aba7ee62080 diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r 3aba7ee62080 -r f98dd6884786 .hgtags --- a/.hgtags Wed Feb 07 08:34:00 2024 -0800 +++ b/.hgtags Wed Feb 07 08:34:17 2024 -0800 @@ -65,3 +65,4 @@ a1faa64d4972020413fd168e2b542bcc150819c0 0ed1952588ab1e0e1c18425fe7923b2b76f38a65 0.8.0 a52b49f9afcf410597dc6657ad39ae3dbbfeec56 0.8.1 45f81882c780a12e56be519cd3106c4fe5567a64 0.8.2 +3aba7ee620807ad10bc34bff3677350fa8a3c3b2 0.8.3 From xeioex at nginx.com Thu Feb 8 01:57:51 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 08 Feb 2024 01:57:51 +0000 Subject: [njs] Version bump. Message-ID: details: https://hg.nginx.org/njs/rev/93562e512d26 branches: changeset: 2283:93562e512d26 user: Dmitry Volyntsev date: Wed Feb 07 17:56:59 2024 -0800 description: Version bump. diffstat: src/njs.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r f98dd6884786 -r 93562e512d26 src/njs.h --- a/src/njs.h Wed Feb 07 08:34:17 2024 -0800 +++ b/src/njs.h Wed Feb 07 17:56:59 2024 -0800 @@ -11,8 +11,8 @@ #include -#define NJS_VERSION "0.8.3" -#define NJS_VERSION_NUMBER 0x000803 +#define NJS_VERSION "0.8.4" +#define NJS_VERSION_NUMBER 0x000804 #include From xeioex at nginx.com Thu Feb 8 01:57:53 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 08 Feb 2024 01:57:53 +0000 Subject: [njs] Removed njs_file.c not needed after 8aad26845b18 (0.8.3). Message-ID: details: https://hg.nginx.org/njs/rev/45f72ce8761b branches: changeset: 2284:45f72ce8761b user: Dmitry Volyntsev date: Wed Feb 07 17:57:01 2024 -0800 description: Removed njs_file.c not needed after 8aad26845b18 (0.8.3). diffstat: auto/sources | 1 - src/njs_file.c | 69 --------------------------------- src/njs_file.h | 15 ------- src/njs_main.h | 1 - src/test/njs_unit_test.c | 98 ------------------------------------------------ 5 files changed, 0 insertions(+), 184 deletions(-) diffs (236 lines): diff -r 93562e512d26 -r 45f72ce8761b auto/sources --- a/auto/sources Wed Feb 07 17:56:59 2024 -0800 +++ b/auto/sources Wed Feb 07 17:57:01 2024 -0800 @@ -17,7 +17,6 @@ NJS_LIB_SRCS=" \ src/njs_sha1.c \ src/njs_sha2.c \ src/njs_time.c \ - src/njs_file.c \ src/njs_malloc.c \ src/njs_mp.c \ src/njs_sprintf.c \ diff -r 93562e512d26 -r 45f72ce8761b src/njs_file.c --- a/src/njs_file.c Wed Feb 07 17:56:59 2024 -0800 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,69 +0,0 @@ - -/* - * Copyright (C) Igor Sysoev - * Copyright (C) NGINX, Inc. - */ - - -#include - - -void -njs_file_basename(const njs_str_t *path, njs_str_t *name) -{ - const u_char *p, *end; - - end = path->start + path->length; - p = end - 1; - - /* Stripping dir prefix. */ - - while (p >= path->start && *p != '/') { p--; } - - p++; - - name->start = (u_char *) p; - name->length = end - p; -} - - -void -njs_file_dirname(const njs_str_t *path, njs_str_t *name) -{ - const u_char *p, *end; - - if (path->length == 0) { - goto current_dir; - } - - p = path->start + path->length - 1; - - /* Stripping basename. 
*/ - - while (p >= path->start && *p != '/') { p--; } - - end = p + 1; - - if (end == path->start) { - goto current_dir; - } - - /* Stripping trailing slashes. */ - - while (p >= path->start && *p == '/') { p--; } - - p++; - - if (p == path->start) { - p = end; - } - - name->start = path->start; - name->length = p - path->start; - - return; - -current_dir: - - *name = njs_str_value("."); -} diff -r 93562e512d26 -r 45f72ce8761b src/njs_file.h --- a/src/njs_file.h Wed Feb 07 17:56:59 2024 -0800 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,15 +0,0 @@ - -/* - * Copyright (C) Igor Sysoev - * Copyright (C) NGINX, Inc. - */ - -#ifndef _NJS_FILE_H_INCLUDED_ -#define _NJS_FILE_H_INCLUDED_ - - -void njs_file_basename(const njs_str_t *path, njs_str_t *name); -void njs_file_dirname(const njs_str_t *path, njs_str_t *name); - - -#endif /* _NJS_FILE_H_INCLUDED_ */ diff -r 93562e512d26 -r 45f72ce8761b src/njs_main.h --- a/src/njs_main.h Wed Feb 07 17:56:59 2024 -0800 +++ b/src/njs_main.h Wed Feb 07 17:57:01 2024 -0800 @@ -28,7 +28,6 @@ #include #include #include -#include #include #include #include diff -r 93562e512d26 -r 45f72ce8761b src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Feb 07 17:56:59 2024 -0800 +++ b/src/test/njs_unit_test.c Wed Feb 07 17:57:01 2024 -0800 @@ -6,7 +6,6 @@ #include #include -#include #include #include #include @@ -24449,99 +24448,6 @@ njs_vm_object_alloc_test(njs_vm_t *vm, n static njs_int_t -njs_file_basename_test(njs_vm_t *vm, njs_opts_t *opts, njs_stat_t *stat) -{ - njs_str_t name; - njs_bool_t success; - njs_uint_t i; - - static const struct { - njs_str_t path; - njs_str_t expected; - } tests[] = { - { njs_str(""), njs_str("") }, - { njs_str("/"), njs_str("") }, - { njs_str("/a"), njs_str("a") }, - { njs_str("///"), njs_str("") }, - { njs_str("///a"), njs_str("a") }, - { njs_str("///a/"), njs_str("") }, - { njs_str("a"), njs_str("a") }, - { njs_str("a/"), njs_str("") }, - { njs_str("a//"), njs_str("") }, - { njs_str("path/name"), njs_str("name") }, - { njs_str("/path/name"), njs_str("name") }, - { njs_str("/path/name/"), njs_str("") }, - }; - - for (i = 0; i < njs_nitems(tests); i++) { - njs_file_basename(&tests[i].path, &name); - - success = njs_strstr_eq(&tests[i].expected, &name); - - if (!success) { - njs_printf("njs_file_basename_test(\"%V\"):\n" - "expected: \"%V\"\n got: \"%V\"\n", - &tests[i].path, &tests[i].expected, &name); - - stat->failed++; - - } else { - stat->passed++; - } - } - - return NJS_OK; -} - - -static njs_int_t -njs_file_dirname_test(njs_vm_t *vm, njs_opts_t *opts, njs_stat_t *stat) -{ - njs_str_t name; - njs_bool_t success; - njs_uint_t i; - - static const struct { - njs_str_t path; - njs_str_t expected; - } tests[] = { - { njs_str(""), njs_str(".") }, - { njs_str("/"), njs_str("/") }, - { njs_str("/a"), njs_str("/") }, - { njs_str("///"), njs_str("///") }, - { njs_str("///a"), njs_str("///") }, - { njs_str("///a/"), njs_str("///a") }, - { njs_str("a"), njs_str(".") }, - { njs_str("a/"), njs_str("a") }, - { njs_str("a//"), njs_str("a") }, - { njs_str("p1/p2/name"), njs_str("p1/p2") }, - { njs_str("/p1/p2/name"), njs_str("/p1/p2") }, - { njs_str("/p1/p2///name"), njs_str("/p1/p2") }, - { njs_str("/p1/p2/name/"), njs_str("/p1/p2/name") }, - }; - - for (i = 0; i < njs_nitems(tests); i++) { - njs_file_dirname(&tests[i].path, &name); - - success = njs_strstr_eq(&tests[i].expected, &name); - - if (!success) { - njs_printf("njs_file_dirname_test(\"%V\"):\n" - "expected: \"%V\"\n got: \"%V\"\n", - &tests[i].path, &tests[i].expected, 
&name); - - stat->failed++; - } else { - stat->passed++; - } - - } - - return NJS_OK; -} - - -static njs_int_t njs_chb_test(njs_vm_t *vm, njs_opts_t *opts, njs_stat_t *stat) { u_char *p; @@ -24935,10 +24841,6 @@ njs_vm_internal_api_test(njs_unit_test_t } tests[] = { { njs_vm_object_alloc_test, njs_str("njs_vm_object_alloc_test") }, - { njs_file_basename_test, - njs_str("njs_file_basename_test") }, - { njs_file_dirname_test, - njs_str("njs_file_dirname_test") }, { njs_chb_test, njs_str("njs_chb_test") }, { njs_sort_test, From xeioex at nginx.com Thu Feb 8 01:57:55 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 08 Feb 2024 01:57:55 +0000 Subject: [njs] Test262: simplified import_chain.t.js. Message-ID: details: https://hg.nginx.org/njs/rev/82ae061db9d0 branches: changeset: 2285:82ae061db9d0 user: Dmitry Volyntsev date: Wed Feb 07 17:57:01 2024 -0800 description: Test262: simplified import_chain.t.js. Avoid using "crypto" module, which unnessesary complicates the test. diffstat: test/js/import_chain.t.js | 8 ++------ test/js/module/libs/hash.js | 5 +---- test/js/module/sub/sub1.js | 3 +-- 3 files changed, 4 insertions(+), 12 deletions(-) diffs (49 lines): diff -r 45f72ce8761b -r 82ae061db9d0 test/js/import_chain.t.js --- a/test/js/import_chain.t.js Wed Feb 07 17:57:01 2024 -0800 +++ b/test/js/import_chain.t.js Wed Feb 07 17:57:01 2024 -0800 @@ -4,10 +4,6 @@ flags: [] paths: [test/js/module/, test/js/module/libs/, test/js/module/sub] ---*/ -import lib2 from 'lib2.js'; +import lib2 from 'lib2.js'; -import crypto from 'crypto'; -var h = crypto.createHash('md5'); -var hash = h.update('AB').digest('hex'); - -assert.sameValue(lib2.hash(), hash); +assert.sameValue(lib2.hash(), "XXX"); diff -r 45f72ce8761b -r 82ae061db9d0 test/js/module/libs/hash.js --- a/test/js/module/libs/hash.js Wed Feb 07 17:57:01 2024 -0800 +++ b/test/js/module/libs/hash.js Wed Feb 07 17:57:01 2024 -0800 @@ -1,11 +1,8 @@ function hash() { - var h = crypto.createHash('md5'); - var v = h.update('AB').digest('hex'); - return v; + return "XXX"; } import sub from 'sub/sub3.js'; import name from 'name.js'; -import crypto from 'crypto'; export default {hash, name}; diff -r 45f72ce8761b -r 82ae061db9d0 test/js/module/sub/sub1.js --- a/test/js/module/sub/sub1.js Wed Feb 07 17:57:01 2024 -0800 +++ b/test/js/module/sub/sub1.js Wed Feb 07 17:57:01 2024 -0800 @@ -1,5 +1,5 @@ function hash() { - return sub2.hash(crypto); + return sub2.hash(); } function error() { @@ -7,6 +7,5 @@ function error() { } import sub2 from 'sub2.js'; -import crypto from 'crypto'; export default {hash, error}; From xeioex at nginx.com Thu Feb 8 01:57:57 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 08 Feb 2024 01:57:57 +0000 Subject: [njs] Test262: fix import_global_ref_var.t.js. Message-ID: details: https://hg.nginx.org/njs/rev/d3a9f2f153f8 branches: changeset: 2286:d3a9f2f153f8 user: Dmitry Volyntsev date: Wed Feb 07 17:57:02 2024 -0800 description: Test262: fix import_global_ref_var.t.js. 
diffstat: test/js/import_global_ref_var.t.js | 2 +- test/js/module/export_global_a.js | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diffs (22 lines): diff -r 82ae061db9d0 -r d3a9f2f153f8 test/js/import_global_ref_var.t.js --- a/test/js/import_global_ref_var.t.js Wed Feb 07 17:57:01 2024 -0800 +++ b/test/js/import_global_ref_var.t.js Wed Feb 07 17:57:02 2024 -0800 @@ -4,7 +4,7 @@ flags: [] paths: [test/js/module/] ---*/ -var a = 42; +globalThis.a = 42; import m from 'export_global_a.js'; assert.sameValue(m.f(), 42); diff -r 82ae061db9d0 -r d3a9f2f153f8 test/js/module/export_global_a.js --- a/test/js/module/export_global_a.js Wed Feb 07 17:57:01 2024 -0800 +++ b/test/js/module/export_global_a.js Wed Feb 07 17:57:02 2024 -0800 @@ -1,5 +1,5 @@ function f() { - return a; + return globalThis.a; } export default {f}; From arut at nginx.com Fri Feb 9 09:56:42 2024 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Fri, 09 Feb 2024 13:56:42 +0400 Subject: [PATCH] QUIC: fixed unsent MTU probe acknowledgement Message-ID: <9b89f44ddd3637afc939.1707472602@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1707472496 -14400 # Fri Feb 09 13:54:56 2024 +0400 # Node ID 9b89f44ddd3637afc939e31de348c7986ae9e76d # Parent 73eb75bee30f4aee66edfb500270dbb14710aafd QUIC: fixed unsent MTU probe acknowledgement. Previously if an MTU probe send failed early in ngx_quic_frame_sendto() due to allocation error or congestion control, the application level packet number was not increased, but was still saved as MTU probe packet number. Later when a packet with this number was acknowledged, the unsent MTU probe was acknowledged as well. This could result in discovering a bigger MTU than supported by the path, which could lead to EMSGSIZE (Message too long) errors while sending further packets. The problem existed since PMTUD was introduced in 58afcd72446f (1.25.2). Back then only the unlikely memory allocation error could trigger it. However in efcdaa66df2e congestion control was added to ngx_quic_frame_sendto() which can now trigger the issue with a higher probability. 
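For context on how an unsent probe can end up "acknowledged": the acknowledgement side, ngx_quic_handle_path_mtu(), walks the saved probe packet numbers and compares each against the acknowledged range, accepting the probed size on a match. A rough sketch of that check (the surrounding loop and the exact promotion step are paraphrased here, not copied from the source):

    /* for each saved probe slot i, with [min, max] the acknowledged range */

    pnum = path->mtu_pnum[i];

    if (pnum == NGX_QUIC_UNSET_PN) {
        break;
    }

    if (pnum < min || pnum > max) {
        continue;
    }

    /* otherwise the probe is considered delivered and the probed size
       (path->mtud) is accepted for the path */

A slot filled for a probe that never left the machine can therefore still fall into an acknowledged range and be confirmed, which is the false discovery described above. The patch below avoids recording packet numbers for probes that were not actually sent.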
diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c +++ b/src/event/quic/ngx_event_quic_migration.c @@ -925,12 +925,6 @@ ngx_quic_send_path_mtu_probe(ngx_connect qc = ngx_quic_get_connection(c); ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); - path->mtu_pnum[path->tries] = ctx->pnum; - - ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic path seq:%uL send probe " - "mtu:%uz pnum:%uL tries:%ui", - path->seqnum, path->mtud, ctx->pnum, path->tries); log_error = c->log_error; c->log_error = NGX_ERROR_IGNORE_EMSGSIZE; @@ -943,14 +937,26 @@ ngx_quic_send_path_mtu_probe(ngx_connect path->mtu = mtu; c->log_error = log_error; + if (rc == NGX_OK) { + path->mtu_pnum[path->tries] = ctx->pnum; + + ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic path seq:%uL send probe " + "mtu:%uz pnum:%uL tries:%ui", + path->seqnum, path->mtud, ctx->pnum, path->tries); + + return NGX_OK; + } + + path->mtu_pnum[path->tries] = NGX_QUIC_UNSET_PN; + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic path seq:%uL rejected mtu:%uz", + path->seqnum, path->mtud); + if (rc == NGX_ERROR) { if (c->write->error) { c->write->error = 0; - - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic path seq:%uL rejected mtu:%uz", - path->seqnum, path->mtud); - return NGX_DECLINED; } From sambhavgupta at microsoft.com Fri Feb 9 12:15:54 2024 From: sambhavgupta at microsoft.com (Sambhav Gupta) Date: Fri, 9 Feb 2024 12:15:54 +0000 Subject: Upstream response time populated even when upstream connection timed out. Message-ID: Hi Nginx experts, I have noticed that upstream response time is populated to a non-zero and non-negative value in Nginx even when upstream connection times out. This is not true with other values like upstream connect time and upstream header time. Is this intentional ? Get Outlook for Android -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikulaspoul at gmail.com Fri Feb 9 14:53:57 2024 From: mikulaspoul at gmail.com (=?UTF-8?B?TWlrdWzDocWhIFBvdWw=?=) Date: Fri, 9 Feb 2024 14:53:57 +0000 Subject: Nginx Packages Repos Are Down Message-ID: Good afternoon (or whatever time of day is in your time-zone), I believe the Nginx-provided repos for installing nginx packages from linux are down. When following the appropriate guide (for example on debian this one https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/#installing-prebuilt-debian-packages), the installation fails with E: The repository 'http://nginx.org/packages/debian bookworm Release' does not have a Release file. Looking at https://stackoverflow.com/questions/77968545/how-to-install-nginx-is-nginx-repos-down the centos and ubuntu packages are also down. Is there some planned maintenance going on, or are the URLs changing? With regards, Mikuláš -------------- next part -------------- An HTML attachment was scrubbed... URL: From sb at nginx.com Fri Feb 9 16:15:40 2024 From: sb at nginx.com (Sergey Budnevich) Date: Fri, 9 Feb 2024 16:15:40 +0000 Subject: Nginx Packages Repos Are Down In-Reply-To: References: Message-ID: <8479FDA4-3C6E-493A-9D3B-6205FFE2CDC2@nginx.com> Hello, This issue was resolved some time ago. Sorry for inconvenience. P.S. 
it is better to write non-devel messages to nginx at nginx.org > On 9 Feb 2024, at 14:53, Mikuláš Poul wrote: > > Good afternoon (or whatever time of day is in your time-zone), > > I believe the Nginx-provided repos for installing nginx packages from linux are down. When following the appropriate guide (for example on debian this one https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/#installing-prebuilt-debian-packages), the installation fails with > > E: The repository 'http://nginx.org/packages/debian bookworm Release' does not have a Release file. > > Looking at https://stackoverflow.com/questions/77968545/how-to-install-nginx-is-nginx-repos-down the centos and ubuntu packages are also down. > > Is there some planned maintenance going on, or are the URLs changing? > > With regards, > Mikuláš > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Feb 10 14:42:44 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 10 Feb 2024 17:42:44 +0300 Subject: Upstream response time populated even when upstream connection timed out. In-Reply-To: References: Message-ID: Hello! On Fri, Feb 09, 2024 at 12:15:54PM +0000, Sambhav Gupta via nginx-devel wrote: > I have noticed that upstream response time is populated to a > non-zero and non-negative value in Nginx even when upstream > connection times out. > > This is not true with other values like upstream connect time > and upstream header time. > > Is this intentional ? Yes. The $upstream_response_time variable shows total time spent when working with the upstream, with all the activities included. It is always set as long as nginx was working with an upstream, even if there was an error or a timeout. In contrast, $upstream_connect_time and $upstream_header_time are only set when the connection was established or the header was successfully received, respectively. -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Tue Feb 13 10:46:35 2024 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 13 Feb 2024 14:46:35 +0400 Subject: [PATCH 3 of 3] Stream: ngx_stream_pass_module In-Reply-To: <3cab85fe55272835674b.1699610841@arut-laptop> References: <3cab85fe55272835674b.1699610841@arut-laptop> Message-ID: <8C0B7CF6-7BE8-4B63-8BA7-9608C455D30A@nginx.com> > On 10 Nov 2023, at 14:07, Roman Arutyunyan wrote: > > # HG changeset patch > # User Roman Arutyunyan > # Date 1699543504 -14400 > # Thu Nov 09 19:25:04 2023 +0400 > # Node ID 3cab85fe55272835674b7f1c296796955256d019 > # Parent 1d3464283405a4d8ac54caae9bf1815c723f04c5 > Stream: ngx_stream_pass_module. > > The module allows to pass connections from Stream to other modules such as HTTP > or Mail, as well as back to Stream. Previously, this was only possible with > proxying. Connections with preread buffer read out from socket cannot be > passed. > > The module allows to terminate SSL selectively based on SNI. > > stream { > server { > listen 8000 default_server; > ssl_preread on; > ... > } > > server { > listen 8000; > server_name foo.example.com; > pass 8001; # to HTTP > } > > server { > listen 8000; > server_name bar.example.com; > ... > } > } > > http { > server { > listen 8001 ssl; > ... 
> > location / { > root html; > } > } > } > > diff --git a/auto/modules b/auto/modules > --- a/auto/modules > +++ b/auto/modules > @@ -1166,6 +1166,16 @@ if [ $STREAM != NO ]; then > . auto/module > fi > > + if [ $STREAM_PASS = YES ]; then > + ngx_module_name=ngx_stream_pass_module > + ngx_module_deps= > + ngx_module_srcs=src/stream/ngx_stream_pass_module.c > + ngx_module_libs= > + ngx_module_link=$STREAM_PASS > + > + . auto/module > + fi > + > if [ $STREAM_SET = YES ]; then > ngx_module_name=ngx_stream_set_module > ngx_module_deps= > diff --git a/auto/options b/auto/options > --- a/auto/options > +++ b/auto/options > @@ -127,6 +127,7 @@ STREAM_GEOIP=NO > STREAM_MAP=YES > STREAM_SPLIT_CLIENTS=YES > STREAM_RETURN=YES > +STREAM_PASS=YES > STREAM_SET=YES > STREAM_UPSTREAM_HASH=YES > STREAM_UPSTREAM_LEAST_CONN=YES > @@ -337,6 +338,7 @@ use the \"--with-mail_ssl_module\" optio > --without-stream_split_clients_module) > STREAM_SPLIT_CLIENTS=NO ;; > --without-stream_return_module) STREAM_RETURN=NO ;; > + --without-stream_pass_module) STREAM_PASS=NO ;; > --without-stream_set_module) STREAM_SET=NO ;; > --without-stream_upstream_hash_module) > STREAM_UPSTREAM_HASH=NO ;; > @@ -556,6 +558,7 @@ cat << END > --without-stream_split_clients_module > disable ngx_stream_split_clients_module > --without-stream_return_module disable ngx_stream_return_module > + --without-stream_pass_module disable ngx_stream_pass_module > --without-stream_set_module disable ngx_stream_set_module > --without-stream_upstream_hash_module > disable ngx_stream_upstream_hash_module > diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c > new file mode 100644 > --- /dev/null > +++ b/src/stream/ngx_stream_pass_module.c > @@ -0,0 +1,245 @@ > + > +/* > + * Copyright (C) Roman Arutyunyan > + * Copyright (C) Nginx, Inc. 
> + */ > + > + > +#include > +#include > +#include > + > + > +typedef struct { > + ngx_addr_t *addr; > + ngx_stream_complex_value_t *addr_value; > +} ngx_stream_pass_srv_conf_t; > + > + > +static void ngx_stream_pass_handler(ngx_stream_session_t *s); > +static void *ngx_stream_pass_create_srv_conf(ngx_conf_t *cf); > +static char *ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); > + > + > +static ngx_command_t ngx_stream_pass_commands[] = { > + > + { ngx_string("pass"), > + NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, > + ngx_stream_pass, > + NGX_STREAM_SRV_CONF_OFFSET, > + 0, > + NULL }, > + > + ngx_null_command > +}; > + > + > +static ngx_stream_module_t ngx_stream_pass_module_ctx = { > + NULL, /* preconfiguration */ > + NULL, /* postconfiguration */ > + > + NULL, /* create main configuration */ > + NULL, /* init main configuration */ > + > + ngx_stream_pass_create_srv_conf, /* create server configuration */ > + NULL /* merge server configuration */ > +}; > + > + > +ngx_module_t ngx_stream_pass_module = { > + NGX_MODULE_V1, > + &ngx_stream_pass_module_ctx, /* module conaddr */ > + ngx_stream_pass_commands, /* module directives */ > + NGX_STREAM_MODULE, /* module type */ > + NULL, /* init master */ > + NULL, /* init module */ > + NULL, /* init process */ > + NULL, /* init thread */ > + NULL, /* exit thread */ > + NULL, /* exit process */ > + NULL, /* exit master */ > + NGX_MODULE_V1_PADDING > +}; > + > + > +static void > +ngx_stream_pass_handler(ngx_stream_session_t *s) > +{ > + ngx_url_t u; > + ngx_str_t url; > + ngx_addr_t *addr; > + ngx_uint_t i; > + ngx_listening_t *ls; > + ngx_connection_t *c; > + ngx_stream_pass_srv_conf_t *pscf; > + > + c = s->connection; > + > + c->log->action = "passing connection to another module"; > + > + if (c->buffer && c->buffer->pos != c->buffer->last) { > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > + "cannot pass connection with preread data"); > + goto failed; > + } > + > + pscf = ngx_stream_get_module_srv_conf(s, ngx_stream_pass_module); > + > + addr = pscf->addr; > + > + if (addr == NULL) { > + if (ngx_stream_complex_value(s, pscf->addr_value, &url) != NGX_OK) { > + goto failed; > + } > + > + ngx_memzero(&u, sizeof(ngx_url_t)); > + > + u.url = url; > + u.listen = 1; > + u.no_resolve = 1; > + > + if (ngx_parse_url(s->connection->pool, &u) != NGX_OK) { > + if (u.err) { > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > + "%s in pass \"%V\"", u.err, &u.url); > + } > + > + goto failed; > + } > + > + if (u.naddrs == 0) { > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > + "no addresses in pass \"%V\"", &u.url); > + goto failed; > + } > + > + addr = &u.addrs[0]; > + } > + > + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, c->log, 0, > + "stream pass addr: \"%V\"", &addr->name); > + > + ls = ngx_cycle->listening.elts; > + > + for (i = 0; i < ngx_cycle->listening.nelts; i++) { > + if (ngx_cmp_sockaddr(ls[i].sockaddr, ls[i].socklen, > + addr->sockaddr, addr->socklen, 1) > + == NGX_OK) > + { > + c->listening = &ls[i]; The address configuration (addr_conf) is stored depending on the protocol family of the listening socket, it's different for AF_INET6. So, if the protocol family is switched when passing a connection, it may happen that c->local_sockaddr->sa_family will keep a wrong value, the listen handler will dereference addr_conf incorrectly. Consider the following example: server { listen 127.0.0.1:8081; pass [::1]:8091; } server { listen [::1]:8091; ... 
} When ls->handler is invoked, c->local_sockaddr is kept inherited from the originally accepted connection, which is of AF_INET. To fix this, c->local_sockaddr and c->local_socklen should be updated according to the new listen socket configuration. OTOH, c->sockaddr / c->socklen should be kept intact. Note that this makes possible cross protocol family configurations in e.g. realip and access modules; from now on this will have to be taken into account. > + > + c->data = NULL; > + c->buffer = NULL; > + > + *c->log = c->listening->log; > + c->log->handler = NULL; > + c->log->data = NULL; > + > + c->listening->handler(c); > + > + return; > + } > + } > + > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > + "listen not found for \"%V\"", &addr->name); > + > + ngx_stream_finalize_session(s, NGX_STREAM_OK); > + > + return; > + > +failed: > + > + ngx_stream_finalize_session(s, NGX_STREAM_INTERNAL_SERVER_ERROR); > +} > + > + > +static void * > +ngx_stream_pass_create_srv_conf(ngx_conf_t *cf) > +{ > + ngx_stream_pass_srv_conf_t *conf; > + > + conf = ngx_pcalloc(cf->pool, sizeof(ngx_stream_pass_srv_conf_t)); > + if (conf == NULL) { > + return NULL; > + } > + > + /* > + * set by ngx_pcalloc(): > + * > + * conf->addr = NULL; > + * conf->addr_value = NULL; > + */ > + > + return conf; > +} > + > + > +static char * > +ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > +{ > + ngx_stream_pass_srv_conf_t *pscf = conf; > + > + ngx_url_t u; > + ngx_str_t *value, *url; > + ngx_stream_complex_value_t cv; > + ngx_stream_core_srv_conf_t *cscf; > + ngx_stream_compile_complex_value_t ccv; > + > + if (pscf->addr || pscf->addr_value) { > + return "is duplicate"; > + } > + > + cscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_core_module); > + > + cscf->handler = ngx_stream_pass_handler; > + > + value = cf->args->elts; > + > + url = &value[1]; > + > + ngx_memzero(&ccv, sizeof(ngx_stream_compile_complex_value_t)); > + > + ccv.cf = cf; > + ccv.value = url; > + ccv.complex_value = &cv; > + > + if (ngx_stream_compile_complex_value(&ccv) != NGX_OK) { > + return NGX_CONF_ERROR; > + } > + > + if (cv.lengths) { > + pscf->addr_value = ngx_palloc(cf->pool, > + sizeof(ngx_stream_complex_value_t)); > + if (pscf->addr_value == NULL) { > + return NGX_CONF_ERROR; > + } > + > + *pscf->addr_value = cv; > + > + return NGX_CONF_OK; > + } > + > + ngx_memzero(&u, sizeof(ngx_url_t)); > + > + u.url = *url; > + u.listen = 1; > + > + if (ngx_parse_url(cf->pool, &u) != NGX_OK) { > + if (u.err) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "%s in \"%V\" of the \"pass\" directive", > + u.err, &u.url); > + } > + > + return NGX_CONF_ERROR; > + } > + > + if (u.naddrs == 0) { > + return "has no addresses"; > + } > + > + pscf->addr = &u.addrs[0]; > + > + return NGX_CONF_OK; > +} -- Sergey Kandaurov From pluknet at nginx.com Tue Feb 13 12:54:24 2024 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 13 Feb 2024 16:54:24 +0400 Subject: [PATCH] QUIC: fixed unsent MTU probe acknowledgement In-Reply-To: <9b89f44ddd3637afc939.1707472602@arut-laptop> References: <9b89f44ddd3637afc939.1707472602@arut-laptop> Message-ID: > On 9 Feb 2024, at 13:56, Roman Arutyunyan wrote: > > # HG changeset patch > # User Roman Arutyunyan > # Date 1707472496 -14400 > # Fri Feb 09 13:54:56 2024 +0400 > # Node ID 9b89f44ddd3637afc939e31de348c7986ae9e76d > # Parent 73eb75bee30f4aee66edfb500270dbb14710aafd > QUIC: fixed unsent MTU probe acknowledgement. 
> > Previously if an MTU probe send failed early in ngx_quic_frame_sendto() > due to allocation error or congestion control, the application level packet > number was not increased, but was still saved as MTU probe packet number. > Later when a packet with this number was acknowledged, the unsent MTU probe > was acknowledged as well. This could result in discovering a bigger MTU than > supported by the path, which could lead to EMSGSIZE (Message too long) errors > while sending further packets. > > The problem existed since PMTUD was introduced in 58afcd72446f (1.25.2). > Back then only the unlikely memory allocation error could trigger it. However > in efcdaa66df2e congestion control was added to ngx_quic_frame_sendto() which > can now trigger the issue with a higher probability. > > diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c > --- a/src/event/quic/ngx_event_quic_migration.c > +++ b/src/event/quic/ngx_event_quic_migration.c > @@ -925,12 +925,6 @@ ngx_quic_send_path_mtu_probe(ngx_connect > > qc = ngx_quic_get_connection(c); > ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); > - path->mtu_pnum[path->tries] = ctx->pnum; > - > - ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, > - "quic path seq:%uL send probe " > - "mtu:%uz pnum:%uL tries:%ui", > - path->seqnum, path->mtud, ctx->pnum, path->tries); > > log_error = c->log_error; > c->log_error = NGX_ERROR_IGNORE_EMSGSIZE; > @@ -943,14 +937,26 @@ ngx_quic_send_path_mtu_probe(ngx_connect > path->mtu = mtu; > c->log_error = log_error; > > + if (rc == NGX_OK) { > + path->mtu_pnum[path->tries] = ctx->pnum; It's too late to save the packet number here after we've sent it. A successful call to ngx_quic_output_packet() or ngx_quic_frame_sendto() updates ctx->pnum to contain the next packet number, so it's off-by-one. It may have sense to preserve mtu_pnum, and restore it on non-success, but see below. > + > + ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic path seq:%uL send probe " > + "mtu:%uz pnum:%uL tries:%ui", > + path->seqnum, path->mtud, ctx->pnum, path->tries); IMHO, such late logging makes hard to follow through debug log messages. I'd prefer to keep it first, before all the underlined logging. > + > + return NGX_OK; > + } > + > + path->mtu_pnum[path->tries] = NGX_QUIC_UNSET_PN; This will break matching ACK'ed probes on subsequent retries in ngx_quic_handle_path_mtu(), it stops looking after NGX_QUIC_UNSET_PN. Possible solutions are to rollback path->tries after NGX_AGAIN from ngx_quic_frame_sendto(), or to ignore NGX_QUIC_UNSET_PN in ngx_quic_handle_path_mtu(). 
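The second option would be a one-line change in the matching loop of ngx_quic_handle_path_mtu(); roughly, as a sketch based on the existing checks there (surrounding loop omitted):

    pnum = path->mtu_pnum[i];

    if (pnum == NGX_QUIC_UNSET_PN) {
        continue;    /* was: break; skip slots of probes that were never sent */
    }

    if (pnum < min || pnum > max) {
        continue;
    }

This keeps path->tries untouched while making acknowledgement matching tolerant of gaps left by failed sends.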
> + > + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic path seq:%uL rejected mtu:%uz", > + path->seqnum, path->mtud); > + > if (rc == NGX_ERROR) { > if (c->write->error) { > c->write->error = 0; > - > - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, > - "quic path seq:%uL rejected mtu:%uz", > - path->seqnum, path->mtud); > - > return NGX_DECLINED; > } > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel -- Sergey Kandaurov From arut at nginx.com Wed Feb 14 13:09:35 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 14 Feb 2024 17:09:35 +0400 Subject: [PATCH] QUIC: fixed unsent MTU probe acknowledgement In-Reply-To: References: <9b89f44ddd3637afc939.1707472602@arut-laptop> Message-ID: <20240214130935.p7777bltaqpdekdc@N00W24XTQX> Hi, On Tue, Feb 13, 2024 at 04:54:24PM +0400, Sergey Kandaurov wrote: > > > On 9 Feb 2024, at 13:56, Roman Arutyunyan wrote: > > > > # HG changeset patch > > # User Roman Arutyunyan > > # Date 1707472496 -14400 > > # Fri Feb 09 13:54:56 2024 +0400 > > # Node ID 9b89f44ddd3637afc939e31de348c7986ae9e76d > > # Parent 73eb75bee30f4aee66edfb500270dbb14710aafd > > QUIC: fixed unsent MTU probe acknowledgement. > > > > Previously if an MTU probe send failed early in ngx_quic_frame_sendto() > > due to allocation error or congestion control, the application level packet > > number was not increased, but was still saved as MTU probe packet number. > > Later when a packet with this number was acknowledged, the unsent MTU probe > > was acknowledged as well. This could result in discovering a bigger MTU than > > supported by the path, which could lead to EMSGSIZE (Message too long) errors > > while sending further packets. > > > > The problem existed since PMTUD was introduced in 58afcd72446f (1.25.2). > > Back then only the unlikely memory allocation error could trigger it. However > > in efcdaa66df2e congestion control was added to ngx_quic_frame_sendto() which > > can now trigger the issue with a higher probability. > > > > diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c > > --- a/src/event/quic/ngx_event_quic_migration.c > > +++ b/src/event/quic/ngx_event_quic_migration.c > > @@ -925,12 +925,6 @@ ngx_quic_send_path_mtu_probe(ngx_connect > > > > qc = ngx_quic_get_connection(c); > > ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); > > - path->mtu_pnum[path->tries] = ctx->pnum; > > - > > - ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, > > - "quic path seq:%uL send probe " > > - "mtu:%uz pnum:%uL tries:%ui", > > - path->seqnum, path->mtud, ctx->pnum, path->tries); > > > > log_error = c->log_error; > > c->log_error = NGX_ERROR_IGNORE_EMSGSIZE; > > @@ -943,14 +937,26 @@ ngx_quic_send_path_mtu_probe(ngx_connect > > path->mtu = mtu; > > c->log_error = log_error; > > > > + if (rc == NGX_OK) { > > + path->mtu_pnum[path->tries] = ctx->pnum; > > It's too late to save the packet number here after we've sent it. > A successful call to ngx_quic_output_packet() or ngx_quic_frame_sendto() > updates ctx->pnum to contain the next packet number, so it's off-by-one. > It may have sense to preserve mtu_pnum, and restore it on non-success, > but see below. Indeed, thanks. 
> > + > > + ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, > > + "quic path seq:%uL send probe " > > + "mtu:%uz pnum:%uL tries:%ui", > > + path->seqnum, path->mtud, ctx->pnum, path->tries); > > IMHO, such late logging makes hard to follow through debug log messages. > I'd prefer to keep it first, before all the underlined logging. Logging early may report the pnum that will not be sent. However we can leave it as is for simplicity. > > + > > + return NGX_OK; > > + } > > + > > + path->mtu_pnum[path->tries] = NGX_QUIC_UNSET_PN; > > This will break matching ACK'ed probes on subsequent retries in > ngx_quic_handle_path_mtu(), it stops looking after NGX_QUIC_UNSET_PN. > Possible solutions are to rollback path->tries after NGX_AGAIN from > ngx_quic_frame_sendto(), or to ignore NGX_QUIC_UNSET_PN in > ngx_quic_handle_path_mtu(). Rolling back path->tries is hardly possible. We need to skip NGX_QUIC_UNSET_PN ngx_quic_handle_path_mtu(). > > + > > + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, > > + "quic path seq:%uL rejected mtu:%uz", > > + path->seqnum, path->mtud); > > + > > if (rc == NGX_ERROR) { > > if (c->write->error) { > > c->write->error = 0; > > - > > - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, > > - "quic path seq:%uL rejected mtu:%uz", > > - path->seqnum, path->mtud); > > - > > return NGX_DECLINED; > > } > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx-devel > > -- > Sergey Kandaurov > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel -- Roman Arutyunyan -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1707915388 -14400 # Wed Feb 14 16:56:28 2024 +0400 # Node ID 2ed3f57dca0a664340bca2236c7d614902db4180 # Parent 73eb75bee30f4aee66edfb500270dbb14710aafd QUIC: fixed unsent MTU probe acknowledgement. Previously if an MTU probe send failed early in ngx_quic_frame_sendto() due to allocation error or congestion control, the application level packet number was not increased, but was still saved as MTU probe packet number. Later when a packet with this number was acknowledged, the unsent MTU probe was acknowledged as well. This could result in discovering a bigger MTU than supported by the path, which could lead to EMSGSIZE (Message too long) errors while sending further packets. The problem existed since PMTUD was introduced in 58afcd72446f (1.25.2). Back then only the unlikely memory allocation error could trigger it. However in efcdaa66df2e congestion control was added to ngx_quic_frame_sendto() which can now trigger the issue with a higher probability. 
diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c +++ b/src/event/quic/ngx_event_quic_migration.c @@ -909,6 +909,7 @@ static ngx_int_t ngx_quic_send_path_mtu_probe(ngx_connection_t *c, ngx_quic_path_t *path) { size_t mtu; + uint64_t pnum; ngx_int_t rc; ngx_uint_t log_error; ngx_quic_frame_t *frame; @@ -925,7 +926,7 @@ ngx_quic_send_path_mtu_probe(ngx_connect qc = ngx_quic_get_connection(c); ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); - path->mtu_pnum[path->tries] = ctx->pnum; + pnum = ctx->pnum; ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic path seq:%uL send probe " @@ -943,14 +944,18 @@ ngx_quic_send_path_mtu_probe(ngx_connect path->mtu = mtu; c->log_error = log_error; + if (rc == NGX_OK) { + path->mtu_pnum[path->tries] = pnum; + return NGX_OK; + } + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic path seq:%uL rejected mtu:%uz", + path->seqnum, path->mtud); + if (rc == NGX_ERROR) { if (c->write->error) { c->write->error = 0; - - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic path seq:%uL rejected mtu:%uz", - path->seqnum, path->mtud); - return NGX_DECLINED; } @@ -976,7 +981,7 @@ ngx_quic_handle_path_mtu(ngx_connection_ pnum = path->mtu_pnum[i]; if (pnum == NGX_QUIC_UNSET_PN) { - break; + continue; } if (pnum < min || pnum > max) { From pluknet at nginx.com Wed Feb 14 14:08:49 2024 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 14 Feb 2024 18:08:49 +0400 Subject: [PATCH] QUIC: fixed unsent MTU probe acknowledgement In-Reply-To: <20240214130935.p7777bltaqpdekdc@N00W24XTQX> References: <9b89f44ddd3637afc939.1707472602@arut-laptop> <20240214130935.p7777bltaqpdekdc@N00W24XTQX> Message-ID: <20240214140849.nv7jhlx3pw4tyhpa@Y9MQ9X2QVV> On Wed, Feb 14, 2024 at 05:09:35PM +0400, Roman Arutyunyan wrote: > Hi, > > On Tue, Feb 13, 2024 at 04:54:24PM +0400, Sergey Kandaurov wrote: > > > > > On 9 Feb 2024, at 13:56, Roman Arutyunyan wrote: > > > > > > # HG changeset patch > > > # User Roman Arutyunyan > > > # Date 1707472496 -14400 > > > # Fri Feb 09 13:54:56 2024 +0400 > > > # Node ID 9b89f44ddd3637afc939e31de348c7986ae9e76d > > > # Parent 73eb75bee30f4aee66edfb500270dbb14710aafd > > > QUIC: fixed unsent MTU probe acknowledgement. > > > > > > Previously if an MTU probe send failed early in ngx_quic_frame_sendto() > > > due to allocation error or congestion control, the application level packet > > > number was not increased, but was still saved as MTU probe packet number. > > > Later when a packet with this number was acknowledged, the unsent MTU probe > > > was acknowledged as well. This could result in discovering a bigger MTU than > > > supported by the path, which could lead to EMSGSIZE (Message too long) errors > > > while sending further packets. > > > > > > The problem existed since PMTUD was introduced in 58afcd72446f (1.25.2). > > > Back then only the unlikely memory allocation error could trigger it. However > > > in efcdaa66df2e congestion control was added to ngx_quic_frame_sendto() which > > > can now trigger the issue with a higher probability. 
> > > > > > diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c > > > --- a/src/event/quic/ngx_event_quic_migration.c > > > +++ b/src/event/quic/ngx_event_quic_migration.c > > > @@ -925,12 +925,6 @@ ngx_quic_send_path_mtu_probe(ngx_connect > > > > > > qc = ngx_quic_get_connection(c); > > > ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); > > > - path->mtu_pnum[path->tries] = ctx->pnum; > > > - > > > - ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, > > > - "quic path seq:%uL send probe " > > > - "mtu:%uz pnum:%uL tries:%ui", > > > - path->seqnum, path->mtud, ctx->pnum, path->tries); > > > > > > log_error = c->log_error; > > > c->log_error = NGX_ERROR_IGNORE_EMSGSIZE; > > > @@ -943,14 +937,26 @@ ngx_quic_send_path_mtu_probe(ngx_connect > > > path->mtu = mtu; > > > c->log_error = log_error; > > > > > > + if (rc == NGX_OK) { > > > + path->mtu_pnum[path->tries] = ctx->pnum; > > > > It's too late to save the packet number here after we've sent it. > > A successful call to ngx_quic_output_packet() or ngx_quic_frame_sendto() > > updates ctx->pnum to contain the next packet number, so it's off-by-one. > > It may have sense to preserve mtu_pnum, and restore it on non-success, > > but see below. > > Indeed, thanks. > > > > + > > > + ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, > > > + "quic path seq:%uL send probe " > > > + "mtu:%uz pnum:%uL tries:%ui", > > > + path->seqnum, path->mtud, ctx->pnum, path->tries); > > > > IMHO, such late logging makes hard to follow through debug log messages. > > I'd prefer to keep it first, before all the underlined logging. > > Logging early may report the pnum that will not be sent. > However we can leave it as is for simplicity. > > > > + > > > + return NGX_OK; > > > + } > > > + > > > + path->mtu_pnum[path->tries] = NGX_QUIC_UNSET_PN; > > > > This will break matching ACK'ed probes on subsequent retries in > > ngx_quic_handle_path_mtu(), it stops looking after NGX_QUIC_UNSET_PN. > > Possible solutions are to rollback path->tries after NGX_AGAIN from > > ngx_quic_frame_sendto(), or to ignore NGX_QUIC_UNSET_PN in > > ngx_quic_handle_path_mtu(). > > Rolling back path->tries is hardly possible. We need to skip NGX_QUIC_UNSET_PN > ngx_quic_handle_path_mtu(). > > > > + > > > + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, > > > + "quic path seq:%uL rejected mtu:%uz", > > > + path->seqnum, path->mtud); > > > + > > > if (rc == NGX_ERROR) { > > > if (c->write->error) { > > > c->write->error = 0; > > > - > > > - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, > > > - "quic path seq:%uL rejected mtu:%uz", > > > - path->seqnum, path->mtud); > > > - > > > return NGX_DECLINED; > > > } > > > > # HG changeset patch > # User Roman Arutyunyan > # Date 1707915388 -14400 > # Wed Feb 14 16:56:28 2024 +0400 > # Node ID 2ed3f57dca0a664340bca2236c7d614902db4180 > # Parent 73eb75bee30f4aee66edfb500270dbb14710aafd > QUIC: fixed unsent MTU probe acknowledgement. > > Previously if an MTU probe send failed early in ngx_quic_frame_sendto() > due to allocation error or congestion control, the application level packet > number was not increased, but was still saved as MTU probe packet number. > Later when a packet with this number was acknowledged, the unsent MTU probe > was acknowledged as well. This could result in discovering a bigger MTU than > supported by the path, which could lead to EMSGSIZE (Message too long) errors > while sending further packets. > > The problem existed since PMTUD was introduced in 58afcd72446f (1.25.2). 
> Back then only the unlikely memory allocation error could trigger it. However > in efcdaa66df2e congestion control was added to ngx_quic_frame_sendto() which > can now trigger the issue with a higher probability. > > diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c > --- a/src/event/quic/ngx_event_quic_migration.c > +++ b/src/event/quic/ngx_event_quic_migration.c > @@ -909,6 +909,7 @@ static ngx_int_t > ngx_quic_send_path_mtu_probe(ngx_connection_t *c, ngx_quic_path_t *path) > { > size_t mtu; > + uint64_t pnum; > ngx_int_t rc; > ngx_uint_t log_error; > ngx_quic_frame_t *frame; > @@ -925,7 +926,7 @@ ngx_quic_send_path_mtu_probe(ngx_connect > > qc = ngx_quic_get_connection(c); > ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); > - path->mtu_pnum[path->tries] = ctx->pnum; > + pnum = ctx->pnum; > > ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, > "quic path seq:%uL send probe " > @@ -943,14 +944,18 @@ ngx_quic_send_path_mtu_probe(ngx_connect > path->mtu = mtu; > c->log_error = log_error; > > + if (rc == NGX_OK) { > + path->mtu_pnum[path->tries] = pnum; > + return NGX_OK; > + } > + > + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic path seq:%uL rejected mtu:%uz", > + path->seqnum, path->mtud); > + > if (rc == NGX_ERROR) { > if (c->write->error) { > c->write->error = 0; > - > - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, > - "quic path seq:%uL rejected mtu:%uz", > - path->seqnum, path->mtud); > - > return NGX_DECLINED; > } > > @@ -976,7 +981,7 @@ ngx_quic_handle_path_mtu(ngx_connection_ > pnum = path->mtu_pnum[i]; > > if (pnum == NGX_QUIC_UNSET_PN) { > - break; > + continue; > } > > if (pnum < min || pnum > max) { Looks good. From arut at nginx.com Wed Feb 14 14:11:46 2024 From: arut at nginx.com (=?utf-8?q?Roman_Arutyunyan?=) Date: Wed, 14 Feb 2024 14:11:46 +0000 Subject: [nginx] QUIC: fixed unsent MTU probe acknowledgement. Message-ID: details: https://hg.nginx.org/nginx/rev/2ed3f57dca0a branches: changeset: 9208:2ed3f57dca0a user: Roman Arutyunyan date: Wed Feb 14 16:56:28 2024 +0400 description: QUIC: fixed unsent MTU probe acknowledgement. Previously if an MTU probe send failed early in ngx_quic_frame_sendto() due to allocation error or congestion control, the application level packet number was not increased, but was still saved as MTU probe packet number. Later when a packet with this number was acknowledged, the unsent MTU probe was acknowledged as well. This could result in discovering a bigger MTU than supported by the path, which could lead to EMSGSIZE (Message too long) errors while sending further packets. The problem existed since PMTUD was introduced in 58afcd72446f (1.25.2). Back then only the unlikely memory allocation error could trigger it. However in efcdaa66df2e congestion control was added to ngx_quic_frame_sendto() which can now trigger the issue with a higher probability. 
diffstat: src/event/quic/ngx_event_quic_migration.c | 19 ++++++++++++------- 1 files changed, 12 insertions(+), 7 deletions(-) diffs (53 lines): diff -r 73eb75bee30f -r 2ed3f57dca0a src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c Tue Jan 30 19:19:26 2024 +0400 +++ b/src/event/quic/ngx_event_quic_migration.c Wed Feb 14 16:56:28 2024 +0400 @@ -909,6 +909,7 @@ static ngx_int_t ngx_quic_send_path_mtu_probe(ngx_connection_t *c, ngx_quic_path_t *path) { size_t mtu; + uint64_t pnum; ngx_int_t rc; ngx_uint_t log_error; ngx_quic_frame_t *frame; @@ -925,7 +926,7 @@ ngx_quic_send_path_mtu_probe(ngx_connect qc = ngx_quic_get_connection(c); ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); - path->mtu_pnum[path->tries] = ctx->pnum; + pnum = ctx->pnum; ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic path seq:%uL send probe " @@ -943,14 +944,18 @@ ngx_quic_send_path_mtu_probe(ngx_connect path->mtu = mtu; c->log_error = log_error; + if (rc == NGX_OK) { + path->mtu_pnum[path->tries] = pnum; + return NGX_OK; + } + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic path seq:%uL rejected mtu:%uz", + path->seqnum, path->mtud); + if (rc == NGX_ERROR) { if (c->write->error) { c->write->error = 0; - - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic path seq:%uL rejected mtu:%uz", - path->seqnum, path->mtud); - return NGX_DECLINED; } @@ -976,7 +981,7 @@ ngx_quic_handle_path_mtu(ngx_connection_ pnum = path->mtu_pnum[i]; if (pnum == NGX_QUIC_UNSET_PN) { - break; + continue; } if (pnum < min || pnum > max) { From pluknet at nginx.com Wed Feb 14 16:15:28 2024 From: pluknet at nginx.com (=?utf-8?q?Sergey_Kandaurov?=) Date: Wed, 14 Feb 2024 16:15:28 +0000 Subject: [nginx] QUIC: trial packet decryption in response to invalid key update. Message-ID: details: https://hg.nginx.org/nginx/rev/1bf1b423f268 branches: changeset: 9209:1bf1b423f268 user: Sergey Kandaurov date: Wed Feb 14 15:55:34 2024 +0400 description: QUIC: trial packet decryption in response to invalid key update. Inspired by RFC 9001, Section 6.3, trial packet decryption with the current keys is now used to avoid a timing side-channel signal. Further, this fixes segfault while accessing missing next keys (ticket #2585). diffstat: src/event/quic/ngx_event_quic_protection.c | 15 +++++++++++++-- 1 files changed, 13 insertions(+), 2 deletions(-) diffs (25 lines): diff -r 2ed3f57dca0a -r 1bf1b423f268 src/event/quic/ngx_event_quic_protection.c --- a/src/event/quic/ngx_event_quic_protection.c Wed Feb 14 16:56:28 2024 +0400 +++ b/src/event/quic/ngx_event_quic_protection.c Wed Feb 14 15:55:34 2024 +0400 @@ -1144,8 +1144,19 @@ ngx_quic_decrypt(ngx_quic_header_t *pkt, key_phase = (pkt->flags & NGX_QUIC_PKT_KPHASE) != 0; if (key_phase != pkt->key_phase) { - secret = &pkt->keys->next_key.client; - pkt->key_update = 1; + if (pkt->keys->next_key.client.ctx != NULL) { + secret = &pkt->keys->next_key.client; + pkt->key_update = 1; + + } else { + /* + * RFC 9001, 6.3. Timing of Receive Key Generation. + * + * Trial decryption to avoid timing side-channel. + */ + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, pkt->log, 0, + "quic next key missing"); + } } } From pluknet at nginx.com Wed Feb 14 16:15:31 2024 From: pluknet at nginx.com (=?utf-8?q?Sergey_Kandaurov?=) Date: Wed, 14 Feb 2024 16:15:31 +0000 Subject: [nginx] QUIC: fixed stream cleanup (ticket #2586). 
Message-ID: details: https://hg.nginx.org/nginx/rev/4ed4e1e7f115 branches: changeset: 9210:4ed4e1e7f115 user: Roman Arutyunyan date: Wed Feb 14 15:55:37 2024 +0400 description: QUIC: fixed stream cleanup (ticket #2586). Stream connection cleanup handler ngx_quic_stream_cleanup_handler() calls ngx_quic_shutdown_stream() after which it resets the pointer from quic stream to the connection (sc->connection = NULL). Previously if this call failed, sc->connection retained the old value, while the connection was freed by the application code. This resulted later in a second attempt to close the freed connection, which lead to allocator double free error. The fix is to reset the sc->connection pointer in case of error. diffstat: src/event/quic/ngx_event_quic_streams.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r 1bf1b423f268 -r 4ed4e1e7f115 src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c Wed Feb 14 15:55:34 2024 +0400 +++ b/src/event/quic/ngx_event_quic_streams.c Wed Feb 14 15:55:37 2024 +0400 @@ -1097,6 +1097,7 @@ ngx_quic_stream_cleanup_handler(void *da "quic stream id:0x%xL cleanup", qs->id); if (ngx_quic_shutdown_stream(c, NGX_RDWR_SHUTDOWN) != NGX_OK) { + qs->connection = NULL; goto failed; } From pluknet at nginx.com Wed Feb 14 16:15:34 2024 From: pluknet at nginx.com (=?utf-8?q?Sergey_Kandaurov?=) Date: Wed, 14 Feb 2024 16:15:34 +0000 Subject: [nginx] Updated OpenSSL and zlib used for win32 builds. Message-ID: details: https://hg.nginx.org/nginx/rev/0d9e536ec628 branches: changeset: 9211:0d9e536ec628 user: Sergey Kandaurov date: Wed Feb 14 15:55:42 2024 +0400 description: Updated OpenSSL and zlib used for win32 builds. diffstat: misc/GNUmakefile | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 4ed4e1e7f115 -r 0d9e536ec628 misc/GNUmakefile --- a/misc/GNUmakefile Wed Feb 14 15:55:37 2024 +0400 +++ b/misc/GNUmakefile Wed Feb 14 15:55:42 2024 +0400 @@ -6,8 +6,8 @@ TEMP = tmp CC = cl OBJS = objs.msvc8 -OPENSSL = openssl-3.0.11 -ZLIB = zlib-1.3 +OPENSSL = openssl-3.0.13 +ZLIB = zlib-1.3.1 PCRE = pcre2-10.39 From pluknet at nginx.com Wed Feb 14 16:15:37 2024 From: pluknet at nginx.com (=?utf-8?q?Sergey_Kandaurov?=) Date: Wed, 14 Feb 2024 16:15:37 +0000 Subject: [nginx] nginx-1.25.4-RELEASE Message-ID: details: https://hg.nginx.org/nginx/rev/173a0a7dbce5 branches: changeset: 9212:173a0a7dbce5 user: Sergey Kandaurov date: Wed Feb 14 15:55:46 2024 +0400 description: nginx-1.25.4-RELEASE diffstat: docs/xml/nginx/changes.xml | 76 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 76 insertions(+), 0 deletions(-) diffs (86 lines): diff -r 0d9e536ec628 -r 173a0a7dbce5 docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml Wed Feb 14 15:55:42 2024 +0400 +++ b/docs/xml/nginx/changes.xml Wed Feb 14 15:55:46 2024 +0400 @@ -5,6 +5,82 @@ + + + + +при использовании HTTP/3 в рабочем процессе мог произойти segmentation fault +во время обработки специально созданной QUIC-сессии +(CVE-2024-24989, CVE-2024-24990). + + +when using HTTP/3 a segmentation fault might occur in a worker process +while processing a specially crafted QUIC session +(CVE-2024-24989, CVE-2024-24990). + + + + + +соединения с незавершенными AIO-операциями могли закрываться преждевременно +во время плавного завершения старых рабочих процессов. + + +connections with pending AIO operations might be closed prematurely +during graceful shutdown of old worker processes. 
+ + + + + +теперь nginx не пишет в лог сообщения об утечке сокетов, +если во время плавного завершения старых рабочих процессов +было запрошено быстрое завершение. + + +socket leak alerts no longer logged when fast shutdown +was requested after graceful shutdown of old worker processes. + + + + + +при использовании AIO в подзапросе могла происходить +ошибка на сокете, утечка сокетов, +либо segmentation fault в рабочем процессе (при SSL-проксировании). + + +a socket descriptor error, a socket leak, +or a segmentation fault in a worker process (for SSL proxying) +might occur if AIO was used in a subrequest. + + + + + +в рабочем процессе мог произойти segmentation fault, +если использовалось SSL-проксирование и директива image_filter, +а ошибки с кодом 415 перенаправлялись с помощью директивы error_page. + + +a segmentation fault might occur in a worker process +if SSL proxying was used along with the "image_filter" directive +and errors with code 415 were redirected with the "error_page" directive. + + + + + +Исправления и улучшения в HTTP/3. + + +Bugfixes and improvements in HTTP/3. + + + + + + From pluknet at nginx.com Wed Feb 14 16:15:40 2024 From: pluknet at nginx.com (=?utf-8?q?Sergey_Kandaurov?=) Date: Wed, 14 Feb 2024 16:15:40 +0000 Subject: [nginx] release-1.25.4 tag Message-ID: details: https://hg.nginx.org/nginx/rev/89bff782528a branches: changeset: 9213:89bff782528a user: Sergey Kandaurov date: Wed Feb 14 20:03:00 2024 +0400 description: release-1.25.4 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r 173a0a7dbce5 -r 89bff782528a .hgtags --- a/.hgtags Wed Feb 14 15:55:46 2024 +0400 +++ b/.hgtags Wed Feb 14 20:03:00 2024 +0400 @@ -476,3 +476,4 @@ 12dcf92b0c2c68552398f19644ce3104459807d7 f8134640e8615448205785cf00b0bc810489b495 release-1.25.1 1d839f05409d1a50d0f15a2bf36547001f99ae40 release-1.25.2 294a3d07234f8f65d7b0e0b0e2c5b05c12c5da0a release-1.25.3 +173a0a7dbce569adbb70257c6ec4f0f6bc585009 release-1.25.4 From mdounin at mdounin.ru Wed Feb 14 18:03:11 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 14 Feb 2024 21:03:11 +0300 Subject: announcing freenginx.org Message-ID: Hello! As you probably know, F5 closed Moscow office in 2022, and I no longer work for F5 since then. Still, we’ve reached an agreement that I will maintain my role in nginx development as a volunteer. And for almost two years I was working on improving nginx and making it better for everyone, for free. Unfortunately, some new non-technical management at F5 recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position. That’s quite understandable: they own the project, and can do anything with it, including doing marketing-motivated actions, ignoring developers position and community. Still, this contradicts our agreement. And, more importantly, I no longer able to control which changes are made in nginx within F5, and no longer see nginx as a free and open source project developed and maintained for the public good. As such, starting from today, I will no longer participate in nginx development as run by F5. Instead, I’m starting an alternative project, which is going to be run by developers, and not corporate entities: http://freenginx.org/ The goal is to keep nginx development free from arbitrary corporate actions. Help and contributions are welcome. Hope it will be beneficial for everyone. 
-- Maxim Dounin http://freenginx.org/ From serg.brester at sebres.de Wed Feb 14 21:45:37 2024 From: serg.brester at sebres.de (Sergey Brester) Date: Wed, 14 Feb 2024 22:45:37 +0100 Subject: announcing freenginx.org In-Reply-To: References: Message-ID: <6c0cfb0380b84175709b1bd80ba27397@sebres.de> Hi Maxim, it is pity to hear such news... I have few comments and questions about, which I enclosed inline below... Regards, Serg. 14.02.2024 19:03, Maxim Dounin wrote: > Hello! > > As you probably know, F5 closed Moscow office in 2022, and I no > longer work for F5 since then. Still, we've reached an agreement > that I will maintain my role in nginx development as a volunteer. > And for almost two years I was working on improving nginx and > making it better for everyone, for free. And you did a very good job! > Unfortunately, some new non-technical management at F5 recently > decided that they know better how to run open source projects. In > particular, they decided to interfere with security policy nginx > uses for years, ignoring both the policy and developers' position. Can you explain a bit more about that (or provide some examples or a link to a public discussion about, if it exists)? > That's quite understandable: they own the project, and can do > anything with it, including doing marketing-motivated actions, > ignoring developers position and community. Still, this > contradicts our agreement. And, more importantly, I no longer able > to control which changes are made in nginx within F5, and no longer > see nginx as a free and open source project developed and > maintained for the public good. Do you speak only about you?.. Or are there also other developers which share your point of view? Just for the record... What is about R. Arutyunyan, V. Bartenev and others? Could one expect any statement from Igor (Sysoev) about the subject? > As such, starting from today, I will no longer participate in nginx > development as run by F5. Instead, I'm starting an alternative > project, which is going to be run by developers, and not corporate > entities: > > http://freenginx.org/ [1] Why yet another fork? I mean why just not "angie", for instance? Additionally I'd like to ask whether the name "freenginx" is really well thought-out? I mean: - it can be easy confused with free nginx (compared to nginx plus) - the search for that will be horrible (if you would try to search for freenginx, even as exact (within quotes, with plus etc), many internet search engine would definitely include free nginx in the result. - possibly copyright or trademark problems, etc > The goal is to keep nginx development free from arbitrary corporate > actions. Help and contributions are welcome. Hope it will be > beneficial for everyone. Just as an idea: switch the primary dev to GH (github)... (and commonly from hg to git). I'm sure it would boost the development drastically, as well as bring many new developers and let grow the community. Links: ------ [1] http://freenginx.org/ From mdounin at mdounin.ru Wed Feb 14 22:21:10 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Feb 2024 01:21:10 +0300 Subject: announcing freenginx.org In-Reply-To: <6c0cfb0380b84175709b1bd80ba27397@sebres.de> References: <6c0cfb0380b84175709b1bd80ba27397@sebres.de> Message-ID: Hello! On Wed, Feb 14, 2024 at 10:45:37PM +0100, Sergey Brester wrote: > Hi Maxim, > > it is pity to hear such news... > > I have few comments and questions about, which I enclosed inline below... > > Regards, > Serg. 
> > 14.02.2024 19:03, Maxim Dounin wrote: > > > Hello! > > > > As you probably know, F5 closed Moscow office in 2022, and I no > > longer work for F5 since then. Still, we've reached an agreement > > that I will maintain my role in nginx development as a volunteer. > > And for almost two years I was working on improving nginx and > > making it better for everyone, for free. > > And you did a very good job! Thanks. > > Unfortunately, some new non-technical management at F5 recently > > decided that they know better how to run open source projects. In > > particular, they decided to interfere with security policy nginx > > uses for years, ignoring both the policy and developers' position. > > Can you explain a bit more about that (or provide some examples > or a link to a public discussion about, if it exists)? I've already provided some details here: https://freenginx.org/pipermail/nginx/2024-February/000007.html : The most recent "security advisory" was released despite the fact : that the particular bug in the experimental HTTP/3 code is : expected to be fixed as a normal bug as per the existing security : policy, and all the developers, including me, agree on this. : : And, while the particular action isn't exactly very bad, the : approach in general is quite problematic. There was no public discussion. The only discussion I'm aware of happened on the security-alert@ list, and the consensus was that the bug should be fixed as a normal bug. Still, I was reached several days ago with the information that some unnamed management requested an advisory and security release anyway, regardless of the policy and developers position. > > That's quite understandable: they own the project, and can do > > anything with it, including doing marketing-motivated actions, > > ignoring developers position and community. Still, this > > contradicts our agreement. And, more importantly, I no longer able > > to control which changes are made in nginx within F5, and no longer > > see nginx as a free and open source project developed and > > maintained for the public good. > > Do you speak only about you?.. Or are there also other developers which > share your point of view? Just for the record... > What is about R. Arutyunyan, V. Bartenev and others? > Could one expect any statement from Igor (Sysoev) about the subject? I speak only about me. Others, if they are interested in, are welcome to join. > > As such, starting from today, I will no longer participate in nginx > > development as run by F5. Instead, I'm starting an alternative > > project, which is going to be run by developers, and not corporate > > entities: > > > > http://freenginx.org/ [1] > > Why yet another fork? I mean why just not "angie", for instance? The "angie" fork shares the same problem as nginx run by F5: it's run by a for-profit corporate entity. Even if it's good enough now, things might change unexpectedly, like it happened with F5. > Additionally I'd like to ask whether the name "freenginx" is really well > thought-out? > I mean: > - it can be easy confused with free nginx (compared to nginx plus) > - the search for that will be horrible (if you would try to search for > freenginx, > even as exact (within quotes, with plus etc), many internet search > engine > would definitely include free nginx in the result. > - possibly copyright or trademark problems, etc Apart from potential trademark concerns (which I believe do not apply here, but IANAL), these does not seem to be significant (and search results are already good enough). 
Still, the name aligns well with project goals. > > The goal is to keep nginx development free from arbitrary corporate > > actions. Help and contributions are welcome. Hope it will be > > beneficial for everyone. > > Just as an idea: switch the primary dev to GH (github)... (and commonly from > hg to git). > I'm sure it would boost the development drastically, as well as bring many > new > developers and let grow the community. While I understand the suggestion and potential benefits, I'm not a fun of git and github, and prefer Mercurial. -- Maxim Dounin http://mdounin.ru/ From vasiliy.soshnikov at gmail.com Wed Feb 14 22:33:08 2024 From: vasiliy.soshnikov at gmail.com (Vasiliy Soshnikov) Date: Thu, 15 Feb 2024 01:33:08 +0300 Subject: announcing freenginx.org In-Reply-To: References: <6c0cfb0380b84175709b1bd80ba27397@sebres.de> Message-ID: Hello Maxim, Sad to read it. I can't promise that, but I will try to support your project by using freenginx at least. I wish good luck to freenginx! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 14 22:35:29 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Feb 2024 01:35:29 +0300 Subject: announcing freenginx.org In-Reply-To: References: <6c0cfb0380b84175709b1bd80ba27397@sebres.de> Message-ID: Hello! On Thu, Feb 15, 2024 at 01:33:08AM +0300, Vasiliy Soshnikov wrote: > Hello Maxim, > Sad to read it. I can't promise that, but I will try to support your > project by using freenginx at least. > I wish good luck to freenginx! Thanks, appreciated. -- Maxim Dounin http://mdounin.ru/ From benjamin.p.kallus.gr at dartmouth.edu Thu Feb 15 00:44:22 2024 From: benjamin.p.kallus.gr at dartmouth.edu (Ben Kallus) Date: Wed, 14 Feb 2024 19:44:22 -0500 Subject: [PATCH] Enforce that CR precede LF in chunk lines In-Reply-To: References: Message-ID: > Overall, I don't think there is a big difference here. All I can say is that the hardest part of pulling off that type of attack is guessing the length correctly. If you want to make that job marginally easier, that's fine by me :) > It won't, because "-C" is a non-portable flag provided by a Debian-specific patch. There is a CRLF option for nmap-ncat, openbsd netcat, and netcat-traditional, as well as whatever nc ships with macOS. GNU netcat doesn't support it, but it's unmaintained anyway. > And even if it will work for some, this will still complicate testing. Most of the tests already use CRLF appropriately. Test cases that use bare LF in chunks are inadvertently also testing an Nginx quirk in addition to what they are intending to test, which is probably undesirable. -Ben From xeioex at nginx.com Thu Feb 15 05:35:03 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 15 Feb 2024 05:35:03 +0000 Subject: [njs] Moved njs_time() out of the core as it is not a part of the spec. Message-ID: details: https://hg.nginx.org/njs/rev/6fa96ea99037 branches: changeset: 2287:6fa96ea99037 user: Dmitry Volyntsev date: Wed Feb 14 21:33:56 2024 -0800 description: Moved njs_time() out of the core as it is not a part of the spec. 
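A note for code outside the njs tree: njs_time() is no longer provided by the njs core after this change, so an embedder that happened to call it (an assumption — only njs_shell.c and njs_benchmark.c are adjusted in the diff below) would need to carry its own helper. A minimal standalone sketch, with a hypothetical name:

    #include <stdint.h>
    #include <time.h>
    #include <sys/time.h>

    /* Timestamp in nanoseconds: prefers CLOCK_MONOTONIC, falls back to
     * gettimeofday(), mirroring the helper added to njs_shell.c below. */
    static uint64_t
    embedder_time_ns(void)
    {
    #if defined(CLOCK_MONOTONIC)
        struct timespec  ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);

        return (uint64_t) ts.tv_sec * 1000000000 + ts.tv_nsec;
    #else
        struct timeval   tv;

        gettimeofday(&tv, NULL);

        return (uint64_t) tv.tv_sec * 1000000000 + tv.tv_usec * 1000;
    #endif
    }

The gettimeofday() fallback is not monotonic, so callers that depend on strictly non-decreasing timestamps should treat it as best-effort only.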
diffstat: auto/sources | 1 - external/njs_shell.c | 21 ++++++++++++++++++++- src/njs_date.c | 13 +++++++++++++ src/njs_main.h | 1 - src/njs_time.c | 27 --------------------------- src/njs_time.h | 27 --------------------------- src/test/njs_benchmark.c | 19 +++++++++++++++++++ 7 files changed, 52 insertions(+), 57 deletions(-) diffs (178 lines): diff -r d3a9f2f153f8 -r 6fa96ea99037 auto/sources --- a/auto/sources Wed Feb 07 17:57:02 2024 -0800 +++ b/auto/sources Wed Feb 14 21:33:56 2024 -0800 @@ -16,7 +16,6 @@ NJS_LIB_SRCS=" \ src/njs_md5.c \ src/njs_sha1.c \ src/njs_sha2.c \ - src/njs_time.c \ src/njs_malloc.c \ src/njs_mp.c \ src/njs_sprintf.c \ diff -r d3a9f2f153f8 -r 6fa96ea99037 external/njs_shell.c --- a/external/njs_shell.c Wed Feb 07 17:57:02 2024 -0800 +++ b/external/njs_shell.c Wed Feb 14 21:33:56 2024 -0800 @@ -7,7 +7,6 @@ #include #include -#include #include #include #include @@ -169,6 +168,7 @@ static void njs_console_logger(njs_log_l static intptr_t njs_event_rbtree_compare(njs_rbtree_node_t *node1, njs_rbtree_node_t *node2); +static uint64_t njs_time(void); njs_int_t njs_array_buffer_detach(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused, njs_value_t *retval); @@ -2122,3 +2122,22 @@ njs_event_rbtree_compare(njs_rbtree_node return 0; } + + +static uint64_t +njs_time(void) +{ +#if (NJS_HAVE_CLOCK_MONOTONIC) + struct timespec ts; + + clock_gettime(CLOCK_MONOTONIC, &ts); + + return (uint64_t) ts.tv_sec * 1000000000 + ts.tv_nsec; +#else + struct timeval tv; + + gettimeofday(&tv, NULL); + + return (uint64_t) tv.tv_sec * 1000000000 + tv.tv_usec * 1000; +#endif +} diff -r d3a9f2f153f8 -r 6fa96ea99037 src/njs_date.c --- a/src/njs_date.c Wed Feb 07 17:57:02 2024 -0800 +++ b/src/njs_date.c Wed Feb 14 21:33:56 2024 -0800 @@ -22,6 +22,19 @@ #define NJS_DATE_MSEC 7 +#if (NJS_HAVE_TM_GMTOFF) + +#define njs_timezone(tm) \ + ((tm)->tm_gmtoff) + +#elif (NJS_HAVE_ALTZONE) + +#define njs_timezone(tm) \ + (-(((tm)->tm_isdst > 0) ? altzone : timezone)) + +#endif + + #define njs_date_magic(field, local) \ ((local << 6) + field) diff -r d3a9f2f153f8 -r 6fa96ea99037 src/njs_main.h --- a/src/njs_main.h Wed Feb 07 17:57:02 2024 -0800 +++ b/src/njs_main.h Wed Feb 14 21:33:56 2024 -0800 @@ -27,7 +27,6 @@ #include #include #include -#include #include #include #include diff -r d3a9f2f153f8 -r 6fa96ea99037 src/njs_time.c --- a/src/njs_time.c Wed Feb 07 17:57:02 2024 -0800 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,27 +0,0 @@ - -/* - * Copyright (C) Igor Sysoev - * Copyright (C) NGINX, Inc. - */ - - -#include - - -uint64_t -njs_time(void) -{ -#if (NJS_HAVE_CLOCK_MONOTONIC) - struct timespec ts; - - clock_gettime(CLOCK_MONOTONIC, &ts); - - return (uint64_t) ts.tv_sec * 1000000000 + ts.tv_nsec; -#else - struct timeval tv; - - gettimeofday(&tv, NULL); - - return (uint64_t) tv.tv_sec * 1000000000 + tv.tv_usec * 1000; -#endif -} diff -r d3a9f2f153f8 -r 6fa96ea99037 src/njs_time.h --- a/src/njs_time.h Wed Feb 07 17:57:02 2024 -0800 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,27 +0,0 @@ - -/* - * Copyright (C) Igor Sysoev - * Copyright (C) NGINX, Inc. - */ - -#ifndef _NJS_TIME_H_INCLUDED_ -#define _NJS_TIME_H_INCLUDED_ - - -#if (NJS_HAVE_TM_GMTOFF) - -#define njs_timezone(tm) \ - ((tm)->tm_gmtoff) - -#elif (NJS_HAVE_ALTZONE) - -#define njs_timezone(tm) \ - (-(((tm)->tm_isdst > 0) ? 
altzone : timezone)) - -#endif - - -uint64_t njs_time(void); - - -#endif /* _NJS_TIME_H_INCLUDED_ */ diff -r d3a9f2f153f8 -r 6fa96ea99037 src/test/njs_benchmark.c --- a/src/test/njs_benchmark.c Wed Feb 07 17:57:02 2024 -0800 +++ b/src/test/njs_benchmark.c Wed Feb 14 21:33:56 2024 -0800 @@ -35,6 +35,25 @@ njs_module_t *njs_benchmark_addon_extern }; +static uint64_t +njs_time(void) +{ +#if (NJS_HAVE_CLOCK_MONOTONIC) + struct timespec ts; + + clock_gettime(CLOCK_MONOTONIC, &ts); + + return (uint64_t) ts.tv_sec * 1000000000 + ts.tv_nsec; +#else + struct timeval tv; + + gettimeofday(&tv, NULL); + + return (uint64_t) tv.tv_sec * 1000000000 + tv.tv_usec * 1000; +#endif +} + + static njs_int_t njs_benchmark_test(njs_vm_t *parent, njs_opts_t *opts, njs_value_t *report, njs_benchmark_test_t *test) From xeioex at nginx.com Thu Feb 15 05:35:05 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 15 Feb 2024 05:35:05 +0000 Subject: [njs] Moving hash code out of the njs core. Message-ID: details: https://hg.nginx.org/njs/rev/0479e5821ab2 branches: changeset: 2288:0479e5821ab2 user: Dmitry Volyntsev date: Wed Feb 14 21:34:02 2024 -0800 description: Moving hash code out of the njs core. diffstat: auto/modules | 5 +- auto/sources | 3 - external/njs_crypto_module.c | 2 +- external/njs_hash.h | 32 ++++ external/njs_md5.c | 270 ++++++++++++++++++++++++++++++++++++ external/njs_sha1.c | 298 ++++++++++++++++++++++++++++++++++++++++ external/njs_sha2.c | 320 +++++++++++++++++++++++++++++++++++++++++++ src/njs_hash.h | 32 ---- src/njs_main.h | 2 - src/njs_md5.c | 266 ----------------------------------- src/njs_sha1.c | 294 --------------------------------------- src/njs_sha2.c | 316 ------------------------------------------ 12 files changed, 925 insertions(+), 915 deletions(-) diffs (truncated from 1914 to 1000 lines): diff -r 6fa96ea99037 -r 0479e5821ab2 auto/modules --- a/auto/modules Wed Feb 14 21:33:56 2024 -0800 +++ b/auto/modules Wed Feb 14 21:34:02 2024 -0800 @@ -9,7 +9,10 @@ njs_module_srcs=src/njs_buffer.c njs_module_name=njs_crypto_module njs_module_incs= -njs_module_srcs=external/njs_crypto_module.c +njs_module_srcs="external/njs_crypto_module.c \ + external/njs_md5.c \ + external/njs_sha1.c \ + external/njs_sha2.c" . auto/module diff -r 6fa96ea99037 -r 0479e5821ab2 auto/sources --- a/auto/sources Wed Feb 14 21:33:56 2024 -0800 +++ b/auto/sources Wed Feb 14 21:34:02 2024 -0800 @@ -13,9 +13,6 @@ NJS_LIB_SRCS=" \ src/njs_flathsh.c \ src/njs_trace.c \ src/njs_random.c \ - src/njs_md5.c \ - src/njs_sha1.c \ - src/njs_sha2.c \ src/njs_malloc.c \ src/njs_mp.c \ src/njs_sprintf.c \ diff -r 6fa96ea99037 -r 0479e5821ab2 external/njs_crypto_module.c --- a/external/njs_crypto_module.c Wed Feb 14 21:33:56 2024 -0800 +++ b/external/njs_crypto_module.c Wed Feb 14 21:34:02 2024 -0800 @@ -6,9 +6,9 @@ #include -#include #include #include +#include "njs_hash.h" typedef void (*njs_hash_init)(njs_hash_t *ctx); diff -r 6fa96ea99037 -r 0479e5821ab2 external/njs_hash.h --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/external/njs_hash.h Wed Feb 14 21:34:02 2024 -0800 @@ -0,0 +1,32 @@ + +/* + * Copyright (C) Dmitry Volyntsev + * Copyright (C) NGINX, Inc. 
+ */ + + +#ifndef _NJS_HASH_H_INCLUDED_ +#define _NJS_HASH_H_INCLUDED_ + + +typedef struct { + uint64_t bytes; + uint32_t a, b, c, d, e, f, g, h; + u_char buffer[64]; +} njs_hash_t; + + +NJS_EXPORT void njs_md5_init(njs_hash_t *ctx); +NJS_EXPORT void njs_md5_update(njs_hash_t *ctx, const void *data, size_t size); +NJS_EXPORT void njs_md5_final(u_char result[32], njs_hash_t *ctx); + +NJS_EXPORT void njs_sha1_init(njs_hash_t *ctx); +NJS_EXPORT void njs_sha1_update(njs_hash_t *ctx, const void *data, size_t size); +NJS_EXPORT void njs_sha1_final(u_char result[32], njs_hash_t *ctx); + +NJS_EXPORT void njs_sha2_init(njs_hash_t *ctx); +NJS_EXPORT void njs_sha2_update(njs_hash_t *ctx, const void *data, size_t size); +NJS_EXPORT void njs_sha2_final(u_char result[32], njs_hash_t *ctx); + + +#endif /* _NJS_HASH_H_INCLUDED_ */ diff -r 6fa96ea99037 -r 0479e5821ab2 external/njs_md5.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/external/njs_md5.c Wed Feb 14 21:34:02 2024 -0800 @@ -0,0 +1,270 @@ + +/* + * An internal implementation, based on Alexander Peslyak's + * public domain implementation: + * http://openwall.info/wiki/people/solar/software/public-domain-source-code/md5 + */ + + +#include +#include +#include +#include +#include "njs_hash.h" + + +static const u_char *njs_md5_body(njs_hash_t *ctx, const u_char *data, + size_t size); + + +void +njs_md5_init(njs_hash_t *ctx) +{ + ctx->a = 0x67452301; + ctx->b = 0xefcdab89; + ctx->c = 0x98badcfe; + ctx->d = 0x10325476; + + ctx->bytes = 0; +} + + +void +njs_md5_update(njs_hash_t *ctx, const void *data, size_t size) +{ + size_t used, free; + + used = (size_t) (ctx->bytes & 0x3f); + ctx->bytes += size; + + if (used) { + free = 64 - used; + + if (size < free) { + memcpy(&ctx->buffer[used], data, size); + return; + } + + memcpy(&ctx->buffer[used], data, free); + data = (u_char *) data + free; + size -= free; + (void) njs_md5_body(ctx, ctx->buffer, 64); + } + + if (size >= 64) { + data = njs_md5_body(ctx, data, size & ~(size_t) 0x3f); + size &= 0x3f; + } + + memcpy(ctx->buffer, data, size); +} + + +void +njs_md5_final(u_char result[32], njs_hash_t *ctx) +{ + size_t used, free; + + used = (size_t) (ctx->bytes & 0x3f); + + ctx->buffer[used++] = 0x80; + + free = 64 - used; + + if (free < 8) { + njs_memzero(&ctx->buffer[used], free); + (void) njs_md5_body(ctx, ctx->buffer, 64); + used = 0; + free = 64; + } + + njs_memzero(&ctx->buffer[used], free - 8); + + ctx->bytes <<= 3; + ctx->buffer[56] = (u_char) ctx->bytes; + ctx->buffer[57] = (u_char) (ctx->bytes >> 8); + ctx->buffer[58] = (u_char) (ctx->bytes >> 16); + ctx->buffer[59] = (u_char) (ctx->bytes >> 24); + ctx->buffer[60] = (u_char) (ctx->bytes >> 32); + ctx->buffer[61] = (u_char) (ctx->bytes >> 40); + ctx->buffer[62] = (u_char) (ctx->bytes >> 48); + ctx->buffer[63] = (u_char) (ctx->bytes >> 56); + + (void) njs_md5_body(ctx, ctx->buffer, 64); + + result[0] = (u_char) ctx->a; + result[1] = (u_char) (ctx->a >> 8); + result[2] = (u_char) (ctx->a >> 16); + result[3] = (u_char) (ctx->a >> 24); + result[4] = (u_char) ctx->b; + result[5] = (u_char) (ctx->b >> 8); + result[6] = (u_char) (ctx->b >> 16); + result[7] = (u_char) (ctx->b >> 24); + result[8] = (u_char) ctx->c; + result[9] = (u_char) (ctx->c >> 8); + result[10] = (u_char) (ctx->c >> 16); + result[11] = (u_char) (ctx->c >> 24); + result[12] = (u_char) ctx->d; + result[13] = (u_char) (ctx->d >> 8); + result[14] = (u_char) (ctx->d >> 16); + result[15] = (u_char) (ctx->d >> 24); + + njs_explicit_memzero(ctx, sizeof(*ctx)); +} + + +/* + * The basic MD5 
functions. + * + * F and G are optimized compared to their RFC 1321 definitions for + * architectures that lack an AND-NOT instruction, just like in + * Colin Plumb's implementation. + */ + +#define F(x, y, z) ((z) ^ ((x) & ((y) ^ (z)))) +#define G(x, y, z) ((y) ^ ((z) & ((x) ^ (y)))) +#define H(x, y, z) ((x) ^ (y) ^ (z)) +#define I(x, y, z) ((y) ^ ((x) | ~(z))) + +/* + * The MD5 transformation for all four rounds. + */ + +#define STEP(f, a, b, c, d, x, t, s) \ + (a) += f((b), (c), (d)) + (x) + (t); \ + (a) = (((a) << (s)) | (((a) & 0xffffffff) >> (32 - (s)))); \ + (a) += (b) + +/* + * SET() reads 4 input bytes in little-endian byte order and stores them + * in a properly aligned word in host byte order. + */ + +#define SET(n) \ + (block[n] = \ + ( (uint32_t) p[n * 4] \ + | ((uint32_t) p[n * 4 + 1] << 8) \ + | ((uint32_t) p[n * 4 + 2] << 16) \ + | ((uint32_t) p[n * 4 + 3] << 24))) \ + +#define GET(n) block[n] + + +/* + * This processes one or more 64-byte data blocks, but does not update + * the bit counters. There are no alignment requirements. + */ + +static const u_char * +njs_md5_body(njs_hash_t *ctx, const u_char *data, size_t size) +{ + uint32_t a, b, c, d; + uint32_t saved_a, saved_b, saved_c, saved_d; + const u_char *p; + uint32_t block[16]; + + p = data; + + a = ctx->a; + b = ctx->b; + c = ctx->c; + d = ctx->d; + + do { + saved_a = a; + saved_b = b; + saved_c = c; + saved_d = d; + + /* Round 1 */ + + STEP(F, a, b, c, d, SET(0), 0xd76aa478, 7); + STEP(F, d, a, b, c, SET(1), 0xe8c7b756, 12); + STEP(F, c, d, a, b, SET(2), 0x242070db, 17); + STEP(F, b, c, d, a, SET(3), 0xc1bdceee, 22); + STEP(F, a, b, c, d, SET(4), 0xf57c0faf, 7); + STEP(F, d, a, b, c, SET(5), 0x4787c62a, 12); + STEP(F, c, d, a, b, SET(6), 0xa8304613, 17); + STEP(F, b, c, d, a, SET(7), 0xfd469501, 22); + STEP(F, a, b, c, d, SET(8), 0x698098d8, 7); + STEP(F, d, a, b, c, SET(9), 0x8b44f7af, 12); + STEP(F, c, d, a, b, SET(10), 0xffff5bb1, 17); + STEP(F, b, c, d, a, SET(11), 0x895cd7be, 22); + STEP(F, a, b, c, d, SET(12), 0x6b901122, 7); + STEP(F, d, a, b, c, SET(13), 0xfd987193, 12); + STEP(F, c, d, a, b, SET(14), 0xa679438e, 17); + STEP(F, b, c, d, a, SET(15), 0x49b40821, 22); + + /* Round 2 */ + + STEP(G, a, b, c, d, GET(1), 0xf61e2562, 5); + STEP(G, d, a, b, c, GET(6), 0xc040b340, 9); + STEP(G, c, d, a, b, GET(11), 0x265e5a51, 14); + STEP(G, b, c, d, a, GET(0), 0xe9b6c7aa, 20); + STEP(G, a, b, c, d, GET(5), 0xd62f105d, 5); + STEP(G, d, a, b, c, GET(10), 0x02441453, 9); + STEP(G, c, d, a, b, GET(15), 0xd8a1e681, 14); + STEP(G, b, c, d, a, GET(4), 0xe7d3fbc8, 20); + STEP(G, a, b, c, d, GET(9), 0x21e1cde6, 5); + STEP(G, d, a, b, c, GET(14), 0xc33707d6, 9); + STEP(G, c, d, a, b, GET(3), 0xf4d50d87, 14); + STEP(G, b, c, d, a, GET(8), 0x455a14ed, 20); + STEP(G, a, b, c, d, GET(13), 0xa9e3e905, 5); + STEP(G, d, a, b, c, GET(2), 0xfcefa3f8, 9); + STEP(G, c, d, a, b, GET(7), 0x676f02d9, 14); + STEP(G, b, c, d, a, GET(12), 0x8d2a4c8a, 20); + + /* Round 3 */ + + STEP(H, a, b, c, d, GET(5), 0xfffa3942, 4); + STEP(H, d, a, b, c, GET(8), 0x8771f681, 11); + STEP(H, c, d, a, b, GET(11), 0x6d9d6122, 16); + STEP(H, b, c, d, a, GET(14), 0xfde5380c, 23); + STEP(H, a, b, c, d, GET(1), 0xa4beea44, 4); + STEP(H, d, a, b, c, GET(4), 0x4bdecfa9, 11); + STEP(H, c, d, a, b, GET(7), 0xf6bb4b60, 16); + STEP(H, b, c, d, a, GET(10), 0xbebfbc70, 23); + STEP(H, a, b, c, d, GET(13), 0x289b7ec6, 4); + STEP(H, d, a, b, c, GET(0), 0xeaa127fa, 11); + STEP(H, c, d, a, b, GET(3), 0xd4ef3085, 16); + STEP(H, b, c, d, a, GET(6), 0x04881d05, 23); + STEP(H, a, 
b, c, d, GET(9), 0xd9d4d039, 4); + STEP(H, d, a, b, c, GET(12), 0xe6db99e5, 11); + STEP(H, c, d, a, b, GET(15), 0x1fa27cf8, 16); + STEP(H, b, c, d, a, GET(2), 0xc4ac5665, 23); + + /* Round 4 */ + + STEP(I, a, b, c, d, GET(0), 0xf4292244, 6); + STEP(I, d, a, b, c, GET(7), 0x432aff97, 10); + STEP(I, c, d, a, b, GET(14), 0xab9423a7, 15); + STEP(I, b, c, d, a, GET(5), 0xfc93a039, 21); + STEP(I, a, b, c, d, GET(12), 0x655b59c3, 6); + STEP(I, d, a, b, c, GET(3), 0x8f0ccc92, 10); + STEP(I, c, d, a, b, GET(10), 0xffeff47d, 15); + STEP(I, b, c, d, a, GET(1), 0x85845dd1, 21); + STEP(I, a, b, c, d, GET(8), 0x6fa87e4f, 6); + STEP(I, d, a, b, c, GET(15), 0xfe2ce6e0, 10); + STEP(I, c, d, a, b, GET(6), 0xa3014314, 15); + STEP(I, b, c, d, a, GET(13), 0x4e0811a1, 21); + STEP(I, a, b, c, d, GET(4), 0xf7537e82, 6); + STEP(I, d, a, b, c, GET(11), 0xbd3af235, 10); + STEP(I, c, d, a, b, GET(2), 0x2ad7d2bb, 15); + STEP(I, b, c, d, a, GET(9), 0xeb86d391, 21); + + a += saved_a; + b += saved_b; + c += saved_c; + d += saved_d; + + p += 64; + + } while (size -= 64); + + ctx->a = a; + ctx->b = b; + ctx->c = c; + ctx->d = d; + + return p; +} diff -r 6fa96ea99037 -r 0479e5821ab2 external/njs_sha1.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/external/njs_sha1.c Wed Feb 14 21:34:02 2024 -0800 @@ -0,0 +1,298 @@ + +/* + * Copyright (C) Maxim Dounin + * Copyright (C) NGINX, Inc. + * + * An internal SHA1 implementation. + */ + + +#include +#include +#include +#include +#include "njs_hash.h" + + +static const u_char *njs_sha1_body(njs_hash_t *ctx, const u_char *data, + size_t size); + + +void +njs_sha1_init(njs_hash_t *ctx) +{ + ctx->a = 0x67452301; + ctx->b = 0xefcdab89; + ctx->c = 0x98badcfe; + ctx->d = 0x10325476; + ctx->e = 0xc3d2e1f0; + + ctx->bytes = 0; +} + + +void +njs_sha1_update(njs_hash_t *ctx, const void *data, size_t size) +{ + size_t used, free; + + used = (size_t) (ctx->bytes & 0x3f); + ctx->bytes += size; + + if (used) { + free = 64 - used; + + if (size < free) { + memcpy(&ctx->buffer[used], data, size); + return; + } + + memcpy(&ctx->buffer[used], data, free); + data = (u_char *) data + free; + size -= free; + (void) njs_sha1_body(ctx, ctx->buffer, 64); + } + + if (size >= 64) { + data = njs_sha1_body(ctx, data, size & ~(size_t) 0x3f); + size &= 0x3f; + } + + memcpy(ctx->buffer, data, size); +} + + +void +njs_sha1_final(u_char result[32], njs_hash_t *ctx) +{ + size_t used, free; + + used = (size_t) (ctx->bytes & 0x3f); + + ctx->buffer[used++] = 0x80; + + free = 64 - used; + + if (free < 8) { + njs_memzero(&ctx->buffer[used], free); + (void) njs_sha1_body(ctx, ctx->buffer, 64); + used = 0; + free = 64; + } + + njs_memzero(&ctx->buffer[used], free - 8); + + ctx->bytes <<= 3; + ctx->buffer[56] = (u_char) (ctx->bytes >> 56); + ctx->buffer[57] = (u_char) (ctx->bytes >> 48); + ctx->buffer[58] = (u_char) (ctx->bytes >> 40); + ctx->buffer[59] = (u_char) (ctx->bytes >> 32); + ctx->buffer[60] = (u_char) (ctx->bytes >> 24); + ctx->buffer[61] = (u_char) (ctx->bytes >> 16); + ctx->buffer[62] = (u_char) (ctx->bytes >> 8); + ctx->buffer[63] = (u_char) ctx->bytes; + + (void) njs_sha1_body(ctx, ctx->buffer, 64); + + result[0] = (u_char) (ctx->a >> 24); + result[1] = (u_char) (ctx->a >> 16); + result[2] = (u_char) (ctx->a >> 8); + result[3] = (u_char) ctx->a; + result[4] = (u_char) (ctx->b >> 24); + result[5] = (u_char) (ctx->b >> 16); + result[6] = (u_char) (ctx->b >> 8); + result[7] = (u_char) ctx->b; + result[8] = (u_char) (ctx->c >> 24); + result[9] = (u_char) (ctx->c >> 16); + result[10] = (u_char) (ctx->c >> 8); + 
result[11] = (u_char) ctx->c; + result[12] = (u_char) (ctx->d >> 24); + result[13] = (u_char) (ctx->d >> 16); + result[14] = (u_char) (ctx->d >> 8); + result[15] = (u_char) ctx->d; + result[16] = (u_char) (ctx->e >> 24); + result[17] = (u_char) (ctx->e >> 16); + result[18] = (u_char) (ctx->e >> 8); + result[19] = (u_char) ctx->e; + + njs_explicit_memzero(ctx, sizeof(*ctx)); +} + + +/* + * Helper functions. + */ + +#define ROTATE(bits, word) (((word) << (bits)) | ((word) >> (32 - (bits)))) + +#define F1(b, c, d) (((b) & (c)) | ((~(b)) & (d))) +#define F2(b, c, d) ((b) ^ (c) ^ (d)) +#define F3(b, c, d) (((b) & (c)) | ((b) & (d)) | ((c) & (d))) + +#define STEP(f, a, b, c, d, e, w, t) \ + temp = ROTATE(5, (a)) + f((b), (c), (d)) + (e) + (w) + (t); \ + (e) = (d); \ + (d) = (c); \ + (c) = ROTATE(30, (b)); \ + (b) = (a); \ + (a) = temp; + + +/* + * GET() reads 4 input bytes in big-endian byte order and returns + * them as uint32_t. + */ + +#define GET(n) \ + ( ((uint32_t) p[n * 4 + 3]) \ + | ((uint32_t) p[n * 4 + 2] << 8) \ + | ((uint32_t) p[n * 4 + 1] << 16) \ + | ((uint32_t) p[n * 4] << 24)) + + +/* + * This processes one or more 64-byte data blocks, but does not update + * the bit counters. There are no alignment requirements. + */ + +static const u_char * +njs_sha1_body(njs_hash_t *ctx, const u_char *data, size_t size) +{ + uint32_t a, b, c, d, e, temp; + uint32_t saved_a, saved_b, saved_c, saved_d, saved_e; + uint32_t words[80]; + njs_uint_t i; + const u_char *p; + + p = data; + + a = ctx->a; + b = ctx->b; + c = ctx->c; + d = ctx->d; + e = ctx->e; + + do { + saved_a = a; + saved_b = b; + saved_c = c; + saved_d = d; + saved_e = e; + + /* Load data block into the words array */ + + for (i = 0; i < 16; i++) { + words[i] = GET(i); + } + + for (i = 16; i < 80; i++) { + words[i] = ROTATE(1, words[i - 3] + ^ words[i - 8] + ^ words[i - 14] + ^ words[i - 16]); + } + + /* Transformations */ + + STEP(F1, a, b, c, d, e, words[0], 0x5a827999); + STEP(F1, a, b, c, d, e, words[1], 0x5a827999); + STEP(F1, a, b, c, d, e, words[2], 0x5a827999); + STEP(F1, a, b, c, d, e, words[3], 0x5a827999); + STEP(F1, a, b, c, d, e, words[4], 0x5a827999); + STEP(F1, a, b, c, d, e, words[5], 0x5a827999); + STEP(F1, a, b, c, d, e, words[6], 0x5a827999); + STEP(F1, a, b, c, d, e, words[7], 0x5a827999); + STEP(F1, a, b, c, d, e, words[8], 0x5a827999); + STEP(F1, a, b, c, d, e, words[9], 0x5a827999); + STEP(F1, a, b, c, d, e, words[10], 0x5a827999); + STEP(F1, a, b, c, d, e, words[11], 0x5a827999); + STEP(F1, a, b, c, d, e, words[12], 0x5a827999); + STEP(F1, a, b, c, d, e, words[13], 0x5a827999); + STEP(F1, a, b, c, d, e, words[14], 0x5a827999); + STEP(F1, a, b, c, d, e, words[15], 0x5a827999); + STEP(F1, a, b, c, d, e, words[16], 0x5a827999); + STEP(F1, a, b, c, d, e, words[17], 0x5a827999); + STEP(F1, a, b, c, d, e, words[18], 0x5a827999); + STEP(F1, a, b, c, d, e, words[19], 0x5a827999); + + STEP(F2, a, b, c, d, e, words[20], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[21], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[22], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[23], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[24], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[25], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[26], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[27], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[28], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[29], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[30], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[31], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[32], 
0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[33], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[34], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[35], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[36], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[37], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[38], 0x6ed9eba1); + STEP(F2, a, b, c, d, e, words[39], 0x6ed9eba1); + + STEP(F3, a, b, c, d, e, words[40], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[41], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[42], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[43], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[44], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[45], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[46], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[47], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[48], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[49], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[50], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[51], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[52], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[53], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[54], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[55], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[56], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[57], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[58], 0x8f1bbcdc); + STEP(F3, a, b, c, d, e, words[59], 0x8f1bbcdc); + + STEP(F2, a, b, c, d, e, words[60], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[61], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[62], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[63], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[64], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[65], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[66], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[67], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[68], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[69], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[70], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[71], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[72], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[73], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[74], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[75], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[76], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[77], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[78], 0xca62c1d6); + STEP(F2, a, b, c, d, e, words[79], 0xca62c1d6); + + a += saved_a; + b += saved_b; + c += saved_c; + d += saved_d; + e += saved_e; + + p += 64; + + } while (size -= 64); + + ctx->a = a; + ctx->b = b; + ctx->c = c; + ctx->d = d; + ctx->e = e; + + return p; +} diff -r 6fa96ea99037 -r 0479e5821ab2 external/njs_sha2.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/external/njs_sha2.c Wed Feb 14 21:34:02 2024 -0800 @@ -0,0 +1,320 @@ + +/* + * Copyright (C) Dmitry Volyntsev + * Copyright (C) NGINX, Inc. + * + * An internal SHA2 implementation. 
+ */ + + +#include +#include +#include +#include +#include "njs_hash.h" + + +static const u_char *njs_sha2_body(njs_hash_t *ctx, const u_char *data, + size_t size); + + +void +njs_sha2_init(njs_hash_t *ctx) +{ + ctx->a = 0x6a09e667; + ctx->b = 0xbb67ae85; + ctx->c = 0x3c6ef372; + ctx->d = 0xa54ff53a; + ctx->e = 0x510e527f; + ctx->f = 0x9b05688c; + ctx->g = 0x1f83d9ab; + ctx->h = 0x5be0cd19; + + ctx->bytes = 0; +} + + +void +njs_sha2_update(njs_hash_t *ctx, const void *data, size_t size) +{ + size_t used, free; + + used = (size_t) (ctx->bytes & 0x3f); + ctx->bytes += size; + + if (used) { + free = 64 - used; + + if (size < free) { + memcpy(&ctx->buffer[used], data, size); + return; + } + + memcpy(&ctx->buffer[used], data, free); + data = (u_char *) data + free; + size -= free; + (void) njs_sha2_body(ctx, ctx->buffer, 64); + } + + if (size >= 64) { + data = njs_sha2_body(ctx, data, size & ~(size_t) 0x3f); + size &= 0x3f; + } + + memcpy(ctx->buffer, data, size); +} + + +void +njs_sha2_final(u_char result[32], njs_hash_t *ctx) +{ + size_t used, free; + + used = (size_t) (ctx->bytes & 0x3f); + + ctx->buffer[used++] = 0x80; + + free = 64 - used; + + if (free < 8) { + njs_memzero(&ctx->buffer[used], free); + (void) njs_sha2_body(ctx, ctx->buffer, 64); + used = 0; + free = 64; + } + + njs_memzero(&ctx->buffer[used], free - 8); + + ctx->bytes <<= 3; + ctx->buffer[56] = (u_char) (ctx->bytes >> 56); + ctx->buffer[57] = (u_char) (ctx->bytes >> 48); + ctx->buffer[58] = (u_char) (ctx->bytes >> 40); + ctx->buffer[59] = (u_char) (ctx->bytes >> 32); + ctx->buffer[60] = (u_char) (ctx->bytes >> 24); + ctx->buffer[61] = (u_char) (ctx->bytes >> 16); + ctx->buffer[62] = (u_char) (ctx->bytes >> 8); + ctx->buffer[63] = (u_char) ctx->bytes; + + (void) njs_sha2_body(ctx, ctx->buffer, 64); + + result[0] = (u_char) (ctx->a >> 24); + result[1] = (u_char) (ctx->a >> 16); + result[2] = (u_char) (ctx->a >> 8); + result[3] = (u_char) ctx->a; + result[4] = (u_char) (ctx->b >> 24); + result[5] = (u_char) (ctx->b >> 16); + result[6] = (u_char) (ctx->b >> 8); + result[7] = (u_char) ctx->b; + result[8] = (u_char) (ctx->c >> 24); + result[9] = (u_char) (ctx->c >> 16); + result[10] = (u_char) (ctx->c >> 8); + result[11] = (u_char) ctx->c; + result[12] = (u_char) (ctx->d >> 24); + result[13] = (u_char) (ctx->d >> 16); + result[14] = (u_char) (ctx->d >> 8); + result[15] = (u_char) ctx->d; + result[16] = (u_char) (ctx->e >> 24); + result[17] = (u_char) (ctx->e >> 16); + result[18] = (u_char) (ctx->e >> 8); + result[19] = (u_char) ctx->e; + result[20] = (u_char) (ctx->f >> 24); + result[21] = (u_char) (ctx->f >> 16); + result[22] = (u_char) (ctx->f >> 8); + result[23] = (u_char) ctx->f; + result[24] = (u_char) (ctx->g >> 24); + result[25] = (u_char) (ctx->g >> 16); + result[26] = (u_char) (ctx->g >> 8); + result[27] = (u_char) ctx->g; + result[28] = (u_char) (ctx->h >> 24); + result[29] = (u_char) (ctx->h >> 16); + result[30] = (u_char) (ctx->h >> 8); + result[31] = (u_char) ctx->h; + + njs_explicit_memzero(ctx, sizeof(*ctx)); +} + + +/* + * Helper functions. 
+ */ + +#define ROTATE(bits, word) (((word) >> (bits)) | ((word) << (32 - (bits)))) + +#define S0(a) (ROTATE(2, a) ^ ROTATE(13, a) ^ ROTATE(22, a)) +#define S1(e) (ROTATE(6, e) ^ ROTATE(11, e) ^ ROTATE(25, e)) +#define CH(e, f, g) (((e) & (f)) ^ ((~(e)) & (g))) +#define MAJ(a, b, c) (((a) & (b)) ^ ((a) & (c)) ^ ((b) & (c))) + +#define STEP(a, b, c, d, e, f, g, h, w, k) \ + temp1 = (h) + S1(e) + CH(e, f, g) + (k) + (w); \ + temp2 = S0(a) + MAJ(a, b, c); \ + (h) = (g); \ + (g) = (f); \ + (f) = (e); \ + (e) = (d) + temp1; \ + (d) = (c); \ + (c) = (b); \ + (b) = (a); \ + (a) = temp1 + temp2; + + +/* + * GET() reads 4 input bytes in big-endian byte order and returns + * them as uint32_t. + */ + +#define GET(n) \ + ( ((uint32_t) p[n * 4 + 3]) \ + | ((uint32_t) p[n * 4 + 2] << 8) \ + | ((uint32_t) p[n * 4 + 1] << 16) \ + | ((uint32_t) p[n * 4] << 24)) + + +/* + * This processes one or more 64-byte data blocks, but does not update + * the bit counters. There are no alignment requirements. + */ + +static const u_char * +njs_sha2_body(njs_hash_t *ctx, const u_char *data, size_t size) +{ + uint32_t a, b, c, d, e, f, g, h, s0, s1, temp1, temp2; + uint32_t saved_a, saved_b, saved_c, saved_d, saved_e, saved_f, + saved_g, saved_h; + uint32_t words[64]; + njs_uint_t i; + const u_char *p; + + p = data; + + a = ctx->a; + b = ctx->b; + c = ctx->c; + d = ctx->d; + e = ctx->e; + f = ctx->f; + g = ctx->g; + h = ctx->h; + + do { + saved_a = a; + saved_b = b; + saved_c = c; + saved_d = d; + saved_e = e; + saved_f = f; + saved_g = g; + saved_h = h; + + /* Load data block into the words array */ + + for (i = 0; i < 16; i++) { + words[i] = GET(i); + } + + for (i = 16; i < 64; i++) { + s0 = ROTATE(7, words[i - 15]) + ^ ROTATE(18, words[i - 15]) + ^ (words[i - 15] >> 3); + + s1 = ROTATE(17, words[i - 2]) + ^ ROTATE(19, words[i - 2]) + ^ (words[i - 2] >> 10); + + words[i] = words[i - 16] + s0 + words[i - 7] + s1; + } + + /* Transformations */ + + STEP(a, b, c, d, e, f, g, h, words[0], 0x428a2f98); + STEP(a, b, c, d, e, f, g, h, words[1], 0x71374491); + STEP(a, b, c, d, e, f, g, h, words[2], 0xb5c0fbcf); + STEP(a, b, c, d, e, f, g, h, words[3], 0xe9b5dba5); + STEP(a, b, c, d, e, f, g, h, words[4], 0x3956c25b); + STEP(a, b, c, d, e, f, g, h, words[5], 0x59f111f1); + STEP(a, b, c, d, e, f, g, h, words[6], 0x923f82a4); + STEP(a, b, c, d, e, f, g, h, words[7], 0xab1c5ed5); + STEP(a, b, c, d, e, f, g, h, words[8], 0xd807aa98); + STEP(a, b, c, d, e, f, g, h, words[9], 0x12835b01); + STEP(a, b, c, d, e, f, g, h, words[10], 0x243185be); + STEP(a, b, c, d, e, f, g, h, words[11], 0x550c7dc3); + STEP(a, b, c, d, e, f, g, h, words[12], 0x72be5d74); + STEP(a, b, c, d, e, f, g, h, words[13], 0x80deb1fe); + STEP(a, b, c, d, e, f, g, h, words[14], 0x9bdc06a7); + STEP(a, b, c, d, e, f, g, h, words[15], 0xc19bf174); + + STEP(a, b, c, d, e, f, g, h, words[16], 0xe49b69c1); + STEP(a, b, c, d, e, f, g, h, words[17], 0xefbe4786); + STEP(a, b, c, d, e, f, g, h, words[18], 0x0fc19dc6); + STEP(a, b, c, d, e, f, g, h, words[19], 0x240ca1cc); + STEP(a, b, c, d, e, f, g, h, words[20], 0x2de92c6f); + STEP(a, b, c, d, e, f, g, h, words[21], 0x4a7484aa); + STEP(a, b, c, d, e, f, g, h, words[22], 0x5cb0a9dc); + STEP(a, b, c, d, e, f, g, h, words[23], 0x76f988da); + STEP(a, b, c, d, e, f, g, h, words[24], 0x983e5152); + STEP(a, b, c, d, e, f, g, h, words[25], 0xa831c66d); + STEP(a, b, c, d, e, f, g, h, words[26], 0xb00327c8); + STEP(a, b, c, d, e, f, g, h, words[27], 0xbf597fc7); + STEP(a, b, c, d, e, f, g, h, words[28], 0xc6e00bf3); + STEP(a, b, c, d, 
e, f, g, h, words[29], 0xd5a79147); + STEP(a, b, c, d, e, f, g, h, words[30], 0x06ca6351); + STEP(a, b, c, d, e, f, g, h, words[31], 0x14292967); + + STEP(a, b, c, d, e, f, g, h, words[32], 0x27b70a85); + STEP(a, b, c, d, e, f, g, h, words[33], 0x2e1b2138); + STEP(a, b, c, d, e, f, g, h, words[34], 0x4d2c6dfc); + STEP(a, b, c, d, e, f, g, h, words[35], 0x53380d13); + STEP(a, b, c, d, e, f, g, h, words[36], 0x650a7354); + STEP(a, b, c, d, e, f, g, h, words[37], 0x766a0abb); + STEP(a, b, c, d, e, f, g, h, words[38], 0x81c2c92e); + STEP(a, b, c, d, e, f, g, h, words[39], 0x92722c85); + STEP(a, b, c, d, e, f, g, h, words[40], 0xa2bfe8a1); + STEP(a, b, c, d, e, f, g, h, words[41], 0xa81a664b); + STEP(a, b, c, d, e, f, g, h, words[42], 0xc24b8b70); + STEP(a, b, c, d, e, f, g, h, words[43], 0xc76c51a3); + STEP(a, b, c, d, e, f, g, h, words[44], 0xd192e819); + STEP(a, b, c, d, e, f, g, h, words[45], 0xd6990624); + STEP(a, b, c, d, e, f, g, h, words[46], 0xf40e3585); + STEP(a, b, c, d, e, f, g, h, words[47], 0x106aa070); + + STEP(a, b, c, d, e, f, g, h, words[48], 0x19a4c116); + STEP(a, b, c, d, e, f, g, h, words[49], 0x1e376c08); + STEP(a, b, c, d, e, f, g, h, words[50], 0x2748774c); + STEP(a, b, c, d, e, f, g, h, words[51], 0x34b0bcb5); + STEP(a, b, c, d, e, f, g, h, words[52], 0x391c0cb3); + STEP(a, b, c, d, e, f, g, h, words[53], 0x4ed8aa4a); + STEP(a, b, c, d, e, f, g, h, words[54], 0x5b9cca4f); + STEP(a, b, c, d, e, f, g, h, words[55], 0x682e6ff3); + STEP(a, b, c, d, e, f, g, h, words[56], 0x748f82ee); + STEP(a, b, c, d, e, f, g, h, words[57], 0x78a5636f); + STEP(a, b, c, d, e, f, g, h, words[58], 0x84c87814); + STEP(a, b, c, d, e, f, g, h, words[59], 0x8cc70208); + STEP(a, b, c, d, e, f, g, h, words[60], 0x90befffa); + STEP(a, b, c, d, e, f, g, h, words[61], 0xa4506ceb); + STEP(a, b, c, d, e, f, g, h, words[62], 0xbef9a3f7); + STEP(a, b, c, d, e, f, g, h, words[63], 0xc67178f2); + + a += saved_a; + b += saved_b; + c += saved_c; + d += saved_d; + e += saved_e; + f += saved_f; + g += saved_g; + h += saved_h; + + p += 64; + + } while (size -= 64); + + ctx->a = a; + ctx->b = b; + ctx->c = c; + ctx->d = d; + ctx->e = e; + ctx->f = f; + ctx->g = g; + ctx->h = h; + + return p; +} diff -r 6fa96ea99037 -r 0479e5821ab2 src/njs_hash.h --- a/src/njs_hash.h Wed Feb 14 21:33:56 2024 -0800 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,32 +0,0 @@ - -/* - * Copyright (C) Dmitry Volyntsev - * Copyright (C) NGINX, Inc. - */ - - -#ifndef _NJS_HASH_H_INCLUDED_ -#define _NJS_HASH_H_INCLUDED_ - - -typedef struct { - uint64_t bytes; - uint32_t a, b, c, d, e, f, g, h; - u_char buffer[64]; -} njs_hash_t; - - From archimedes.gaviola at gmail.com Thu Feb 15 08:49:10 2024 From: archimedes.gaviola at gmail.com (Archimedes Gaviola) Date: Thu, 15 Feb 2024 16:49:10 +0800 Subject: announcing freenginx.org In-Reply-To: References: Message-ID: On Thu, Feb 15, 2024 at 2:03 AM Maxim Dounin wrote: > Hello! > > As you probably know, F5 closed Moscow office in 2022, and I no > longer work for F5 since then. Still, we’ve reached an agreement > that I will maintain my role in nginx development as a volunteer. > And for almost two years I was working on improving nginx and > making it better for everyone, for free. > > Unfortunately, some new non-technical management at F5 recently > decided that they know better how to run open source projects. In > particular, they decided to interfere with security policy nginx > uses for years, ignoring both the policy and developers’ position. 
> > That’s quite understandable: they own the project, and can do > anything with it, including doing marketing-motivated actions, > ignoring developers position and community. Still, this > contradicts our agreement. And, more importantly, I no longer able > to control which changes are made in nginx within F5, and no longer > see nginx as a free and open source project developed and > maintained for the public good. > > As such, starting from today, I will no longer participate in nginx > development as run by F5. Instead, I’m starting an alternative > project, which is going to be run by developers, and not corporate > entities: > > http://freenginx.org/ > > The goal is to keep nginx development free from arbitrary corporate > actions. Help and contributions are welcome. Hope it will be > beneficial for everyone. > > > -- > Maxim Dounin > http://freenginx.org/ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel Hi Maxim, Sorry to hear that. Is the license still the same for freenginx? Thanks, Archimedes -------------- next part -------------- An HTML attachment was scrubbed... URL: From antoine.bonavita at gmail.com Thu Feb 15 09:40:05 2024 From: antoine.bonavita at gmail.com (Antoine Bonavita) Date: Thu, 15 Feb 2024 10:40:05 +0100 Subject: announcing freenginx.org In-Reply-To: References: Message-ID: Maxim, Thanks for the amazing work and your dedication all those years. Will definitely follow freenginx and use it as my webserver of choice. A. On Thu, Feb 15, 2024 at 9:49 AM Archimedes Gaviola < archimedes.gaviola at gmail.com> wrote: > > > On Thu, Feb 15, 2024 at 2:03 AM Maxim Dounin wrote: > >> Hello! >> >> As you probably know, F5 closed Moscow office in 2022, and I no >> longer work for F5 since then. Still, we’ve reached an agreement >> that I will maintain my role in nginx development as a volunteer. >> And for almost two years I was working on improving nginx and >> making it better for everyone, for free. >> >> Unfortunately, some new non-technical management at F5 recently >> decided that they know better how to run open source projects. In >> particular, they decided to interfere with security policy nginx >> uses for years, ignoring both the policy and developers’ position. >> >> That’s quite understandable: they own the project, and can do >> anything with it, including doing marketing-motivated actions, >> ignoring developers position and community. Still, this >> contradicts our agreement. And, more importantly, I no longer able >> to control which changes are made in nginx within F5, and no longer >> see nginx as a free and open source project developed and >> maintained for the public good. >> >> As such, starting from today, I will no longer participate in nginx >> development as run by F5. Instead, I’m starting an alternative >> project, which is going to be run by developers, and not corporate >> entities: >> >> http://freenginx.org/ >> >> The goal is to keep nginx development free from arbitrary corporate >> actions. Help and contributions are welcome. Hope it will be >> beneficial for everyone. >> >> >> -- >> Maxim Dounin >> http://freenginx.org/ >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> https://mailman.nginx.org/mailman/listinfo/nginx-devel > > > Hi Maxim, > > Sorry to hear that. Is the license still the same for freenginx? 
> > Thanks, > Archimedes > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Feb 15 10:33:03 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Feb 2024 13:33:03 +0300 Subject: announcing freenginx.org In-Reply-To: References: Message-ID: Hello! On Thu, Feb 15, 2024 at 04:49:10PM +0800, Archimedes Gaviola wrote: > On Thu, Feb 15, 2024 at 2:03 AM Maxim Dounin wrote: > > > Hello! > > > > As you probably know, F5 closed Moscow office in 2022, and I no > > longer work for F5 since then. Still, we’ve reached an agreement > > that I will maintain my role in nginx development as a volunteer. > > And for almost two years I was working on improving nginx and > > making it better for everyone, for free. > > > > Unfortunately, some new non-technical management at F5 recently > > decided that they know better how to run open source projects. In > > particular, they decided to interfere with security policy nginx > > uses for years, ignoring both the policy and developers’ position. > > > > That’s quite understandable: they own the project, and can do > > anything with it, including doing marketing-motivated actions, > > ignoring developers position and community. Still, this > > contradicts our agreement. And, more importantly, I no longer able > > to control which changes are made in nginx within F5, and no longer > > see nginx as a free and open source project developed and > > maintained for the public good. > > > > As such, starting from today, I will no longer participate in nginx > > development as run by F5. Instead, I’m starting an alternative > > project, which is going to be run by developers, and not corporate > > entities: > > > > http://freenginx.org/ > > > > The goal is to keep nginx development free from arbitrary corporate > > actions. Help and contributions are welcome. Hope it will be > > beneficial for everyone. > > > > > > -- > > Maxim Dounin > > http://freenginx.org/ > > Hi Maxim, > > Sorry to hear that. Is the license still the same for freenginx? Yes, the license will remain the same. -- Maxim Dounin http://mdounin.ru/ From izorkin at gmail.com Fri Feb 16 11:19:57 2024 From: izorkin at gmail.com (izorkin at gmail.com) Date: Fri, 16 Feb 2024 14:19:57 +0300 Subject: [nginx] Update mime-types Message-ID: <1092736985.20240216141957@gmail.com> Hello. Patch to update current MIME types. Most of information is taken from IANA and Wikipedia. -- С уважением, Izorkin mailto:izorkin at gmail.com -------------- next part -------------- A non-text attachment was scrubbed... Name: mime_types_01.patch Type: application/octet-stream Size: 33279 bytes Desc: not available URL: From jordanc.carter at outlook.com Tue Feb 20 01:54:23 2024 From: jordanc.carter at outlook.com (J Carter) Date: Tue, 20 Feb 2024 01:54:23 +0000 Subject: [nginx] Update mime-types In-Reply-To: <1092736985.20240216141957@gmail.com> References: <1092736985.20240216141957@gmail.com> Message-ID: Hello, On Fri, 16 Feb 2024 14:19:57 +0300 izorkin at gmail.com wrote: > Hello. > > Patch to update current MIME types. > Most of information is taken from IANA and Wikipedia. > > It might be a good idea to provide a reason for each of these mime type changes, such as what real problem this solves for you or others. For example: + audio/mpeg mp1 mp2 mp3 m1a m2a mpa; Are you serving .mp1 and .mp2 files? 
I've never seen such a file in my life (nor many of the other extensions you've added in the series). Also it may be a good idea to link to past discussions you've had on the topic (regardless of language): https://mailman.nginx.org/pipermail/nginx-ru/2023-November/36Z6S37IZQQWYQXJGFKOMQXFL2XQUJM2.html From izorkin at gmail.com Tue Feb 20 07:17:41 2024 From: izorkin at gmail.com (izorkin at gmail.com) Date: Tue, 20 Feb 2024 10:17:41 +0300 Subject: [nginx] Update mime-types In-Reply-To: References: <1092736985.20240216141957@gmail.com> Message-ID: <1304797816.20240220101741@gmail.com> Hello, J. It is now generally recommended to use the MIME types from the mailcap package. At a minimum, it is used in the NixOS and Arch Linux distributions. But in the mailcap package: - too many different MIME types are specified (the nginx developers do not recommend using large lists of MIME types) - it uses new type values for javascript and xml, which are not supported in nginx - instead of the recommended type image/x-icon, image/vnd.microsoft.icon is used - https://en.wikipedia.org/wiki/ICO_(file_format)#MIME_type - some MIME types are missing I would like the developers to update the current MIME list and add new commonly used types. You wrote on 20 February 2024 at 4:54:23: > Hello, > It might be a good idea to provide a reason for each of these mime type > changes, such as what real problem this solves for you or others. > For example: > + audio/mpeg mp1 mp2 mp3 m1a m2a mpa; > Are you serving .mp1 and .mp2 files? I've never seen such a file in my > life (nor many of the other extensions you've added in the series). > Also it may be a good idea to link to past discussions you've had on > the topic (regardless of language): > https://mailman.nginx.org/pipermail/nginx-ru/2023-November/36Z6S37IZQQWYQXJGFKOMQXFL2XQUJM2.html > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel -- Best regards, Izorkin mailto:izorkin at gmail.com From arut at nginx.com Wed Feb 21 13:29:52 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 21 Feb 2024 17:29:52 +0400 Subject: [PATCH] Avoiding mixed socket families in PROXY protocol v1 (ticket #2594) In-Reply-To: References: <2f12c929527b2337c15e.1705920594@arut-laptop> <20240122154801.ycda4ie442ipzw6n@N00W24XTQX> Message-ID: <20240221132920.chmms5v3aekvmc2i@N00W24XTQX> Hi, On Wed, Jan 24, 2024 at 12:03:06AM +0300, Maxim Dounin wrote: > Hello! > > On Mon, Jan 22, 2024 at 07:48:01PM +0400, Roman Arutyunyan wrote: > > > Hi, > > > > On Mon, Jan 22, 2024 at 02:59:21PM +0300, Maxim Dounin wrote: > > > Hello! > > > > > > On Mon, Jan 22, 2024 at 02:49:54PM +0400, Roman Arutyunyan wrote: > > > > > > > # HG changeset patch > > > > # User Roman Arutyunyan > > > > # Date 1705916128 -14400 > > > > # Mon Jan 22 13:35:28 2024 +0400 > > > > # Node ID 2f12c929527b2337c15ef99d3a4dc97819b61fbd > > > > # Parent ee40e2b1d0833b46128a357fbc84c6e23be9be07 > > > > Avoiding mixed socket families in PROXY protocol v1 (ticket #2594). > > Also nitpicking: ticket #2010 might be a better choice. > > The #2594 is actually a duplicate (with a side issue noted that > using long unix socket path might result in a PROXY protocol > header without ports and CRLF) and should be closed as such. > > > > > > > > > When using realip module, remote and local addreses of a connection can belong > > > > to different address families.
This previously resulted in generating PROXY > > > > protocol headers like this: > > > > > > > > PROXY TCP4 127.0.0.1 unix:/tmp/nginx1.sock 55544 0 > > > > > > > > The PROXY protocol v1 specification does not allow mixed families. The change > > > > will generate the unknown PROXY protocol header in this case: > > > > > > > > PROXY UNKNOWN > > > > > > > > Also, the above mentioned format for unix socket address is not specified in > > > > PROXY protocol v1 and is a by-product of internal nginx representation of it. > > > > The change eliminates such addresses from PROXY protocol headers as well. > > > > > > Nitpicking: double space in "from PROXY". > > > > Yes, thanks. > > > > > This change will essentially disable use of PROXY protocol in such > > > configurations. While it is probably good enough from formal > > > point of view, and better that what we have now, this might still > > > be a surprise, especially when multiple address families are used > > > on the original proxy server, and the configuration works for some > > > of them, but not for others. > > > > > > Wouldn't it be better to remember if the PROXY protocol was used > > > to set the address, and use $proxy_protocol_server_addr / > > > $proxy_protocol_server_port in this case? > > > > > > Alternatively, we can use some dummy server address instead, so > > > the client address will be still sent. > > > > Another alternative is duplicating client address in this case, see patch. > > I don't think it is a good idea. Using some meaningful real > address might easily mislead users. I would rather use a clearly > dummy address instead, such as INADDR_ANY with port 0. > > Also, as suggested, using the server address as obtained via PROXY > protocol from the client might be a better solution as long as the > client address was set via PROXY protocol (regardless of whether > address families match or not), and what users expect from the > "proty_protocol on;" when chaining stream proxies in the first > place. Checking whether the address used in PROXY writer is in fact the address that was passed in the PROXY header, is complicated. This will either require setting a flag when PROXY address is set by realip, which is ugly. Another approach is checking if the client address written to a PROXY header matches the client address in the received PROXY header. However since currently PROXY protocol addresses are stored as text, and not all addresses have unique text repersentations, this approach would require refactoring all PROXY protocol code + realip modules to switch from text to sockaddr. I suggest that we follow the first plan (INADDR_ANY etc). > [...] Updated patch attached. -- Roman Arutyunyan -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1706083568 -14400 # Wed Jan 24 12:06:08 2024 +0400 # Node ID 49e4bdc883520924cbed992959c36c1099fdcfcf # Parent c5e01070a53b9f2e867f25481c6d4862aac68b17 Avoiding mixed socket families in PROXY protocol v1 (ticket #2010). When using realip module, remote and local addresses of a connection can belong to different address families. This previously resulted in generating PROXY protocol headers like this: PROXY TCP4 127.0.0.1 unix:/tmp/nginx1.sock 55544 0 The PROXY protocol v1 specification does not allow mixed families. 
The change substitutes server address with zero address in this case: PROXY TCP4 127.0.0.1 0.0.0.0 55544 0 As an alternative, "PROXY UNKNOWN" header could be used, which unlike this header does not contain any useful information about the client. Also, the above mentioned format for unix socket address is not specified in PROXY protocol v1 and is a by-product of internal nginx representation of it. The change eliminates such addresses from PROXY protocol headers as well. diff --git a/src/core/ngx_proxy_protocol.c b/src/core/ngx_proxy_protocol.c --- a/src/core/ngx_proxy_protocol.c +++ b/src/core/ngx_proxy_protocol.c @@ -279,7 +279,10 @@ ngx_proxy_protocol_read_port(u_char *p, u_char * ngx_proxy_protocol_write(ngx_connection_t *c, u_char *buf, u_char *last) { - ngx_uint_t port, lport; + socklen_t local_socklen; + ngx_uint_t port, lport; + struct sockaddr *local_sockaddr; + static ngx_sockaddr_t default_sockaddr; if (last - buf < NGX_PROXY_PROTOCOL_V1_MAX_HEADER) { ngx_log_error(NGX_LOG_ALERT, c->log, 0, @@ -312,11 +315,21 @@ ngx_proxy_protocol_write(ngx_connection_ *buf++ = ' '; - buf += ngx_sock_ntop(c->local_sockaddr, c->local_socklen, buf, last - buf, - 0); + if (c->sockaddr->sa_family == c->local_sockaddr->sa_family) { + local_sockaddr = c->local_sockaddr; + local_socklen = c->local_socklen; + + } else { + default_sockaddr.sockaddr.sa_family = c->sockaddr->sa_family; + + local_sockaddr = &default_sockaddr.sockaddr; + local_socklen = sizeof(ngx_sockaddr_t); + } + + buf += ngx_sock_ntop(local_sockaddr, local_socklen, buf, last - buf, 0); port = ngx_inet_get_port(c->sockaddr); - lport = ngx_inet_get_port(c->local_sockaddr); + lport = ngx_inet_get_port(local_sockaddr); return ngx_slprintf(buf, last, " %ui %ui" CRLF, port, lport); } From arut at nginx.com Wed Feb 21 13:37:51 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 21 Feb 2024 17:37:51 +0400 Subject: [PATCH 3 of 3] Stream: ngx_stream_pass_module In-Reply-To: <8C0B7CF6-7BE8-4B63-8BA7-9608C455D30A@nginx.com> References: <3cab85fe55272835674b.1699610841@arut-laptop> <8C0B7CF6-7BE8-4B63-8BA7-9608C455D30A@nginx.com> Message-ID: <20240221133751.hrnz43d77aq455ps@N00W24XTQX> Hi, On Tue, Feb 13, 2024 at 02:46:35PM +0400, Sergey Kandaurov wrote: > > > On 10 Nov 2023, at 14:07, Roman Arutyunyan wrote: > > > > # HG changeset patch > > # User Roman Arutyunyan > > # Date 1699543504 -14400 > > # Thu Nov 09 19:25:04 2023 +0400 > > # Node ID 3cab85fe55272835674b7f1c296796955256d019 > > # Parent 1d3464283405a4d8ac54caae9bf1815c723f04c5 > > Stream: ngx_stream_pass_module. > > > > The module allows to pass connections from Stream to other modules such as HTTP > > or Mail, as well as back to Stream. Previously, this was only possible with > > proxying. Connections with preread buffer read out from socket cannot be > > passed. > > > > The module allows to terminate SSL selectively based on SNI. > > > > stream { > > server { > > listen 8000 default_server; > > ssl_preread on; > > ... > > } > > > > server { > > listen 8000; > > server_name foo.example.com; > > pass 8001; # to HTTP > > } > > > > server { > > listen 8000; > > server_name bar.example.com; > > ... > > } > > } > > > > http { > > server { > > listen 8001 ssl; > > ... > > > > location / { > > root html; > > } > > } > > } > > > > diff --git a/auto/modules b/auto/modules > > --- a/auto/modules > > +++ b/auto/modules > > @@ -1166,6 +1166,16 @@ if [ $STREAM != NO ]; then > > . 
auto/module > > fi > > > > + if [ $STREAM_PASS = YES ]; then > > + ngx_module_name=ngx_stream_pass_module > > + ngx_module_deps= > > + ngx_module_srcs=src/stream/ngx_stream_pass_module.c > > + ngx_module_libs= > > + ngx_module_link=$STREAM_PASS > > + > > + . auto/module > > + fi > > + > > if [ $STREAM_SET = YES ]; then > > ngx_module_name=ngx_stream_set_module > > ngx_module_deps= > > diff --git a/auto/options b/auto/options > > --- a/auto/options > > +++ b/auto/options > > @@ -127,6 +127,7 @@ STREAM_GEOIP=NO > > STREAM_MAP=YES > > STREAM_SPLIT_CLIENTS=YES > > STREAM_RETURN=YES > > +STREAM_PASS=YES > > STREAM_SET=YES > > STREAM_UPSTREAM_HASH=YES > > STREAM_UPSTREAM_LEAST_CONN=YES > > @@ -337,6 +338,7 @@ use the \"--with-mail_ssl_module\" optio > > --without-stream_split_clients_module) > > STREAM_SPLIT_CLIENTS=NO ;; > > --without-stream_return_module) STREAM_RETURN=NO ;; > > + --without-stream_pass_module) STREAM_PASS=NO ;; > > --without-stream_set_module) STREAM_SET=NO ;; > > --without-stream_upstream_hash_module) > > STREAM_UPSTREAM_HASH=NO ;; > > @@ -556,6 +558,7 @@ cat << END > > --without-stream_split_clients_module > > disable ngx_stream_split_clients_module > > --without-stream_return_module disable ngx_stream_return_module > > + --without-stream_pass_module disable ngx_stream_pass_module > > --without-stream_set_module disable ngx_stream_set_module > > --without-stream_upstream_hash_module > > disable ngx_stream_upstream_hash_module > > diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c > > new file mode 100644 > > --- /dev/null > > +++ b/src/stream/ngx_stream_pass_module.c > > @@ -0,0 +1,245 @@ > > + > > +/* > > + * Copyright (C) Roman Arutyunyan > > + * Copyright (C) Nginx, Inc. > > + */ > > + > > + > > +#include > > +#include > > +#include > > + > > + > > +typedef struct { > > + ngx_addr_t *addr; > > + ngx_stream_complex_value_t *addr_value; > > +} ngx_stream_pass_srv_conf_t; > > + > > + > > +static void ngx_stream_pass_handler(ngx_stream_session_t *s); > > +static void *ngx_stream_pass_create_srv_conf(ngx_conf_t *cf); > > +static char *ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); > > + > > + > > +static ngx_command_t ngx_stream_pass_commands[] = { > > + > > + { ngx_string("pass"), > > + NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, > > + ngx_stream_pass, > > + NGX_STREAM_SRV_CONF_OFFSET, > > + 0, > > + NULL }, > > + > > + ngx_null_command > > +}; > > + > > + > > +static ngx_stream_module_t ngx_stream_pass_module_ctx = { > > + NULL, /* preconfiguration */ > > + NULL, /* postconfiguration */ > > + > > + NULL, /* create main configuration */ > > + NULL, /* init main configuration */ > > + > > + ngx_stream_pass_create_srv_conf, /* create server configuration */ > > + NULL /* merge server configuration */ > > +}; > > + > > + > > +ngx_module_t ngx_stream_pass_module = { > > + NGX_MODULE_V1, > > + &ngx_stream_pass_module_ctx, /* module conaddr */ > > + ngx_stream_pass_commands, /* module directives */ > > + NGX_STREAM_MODULE, /* module type */ > > + NULL, /* init master */ > > + NULL, /* init module */ > > + NULL, /* init process */ > > + NULL, /* init thread */ > > + NULL, /* exit thread */ > > + NULL, /* exit process */ > > + NULL, /* exit master */ > > + NGX_MODULE_V1_PADDING > > +}; > > + > > + > > +static void > > +ngx_stream_pass_handler(ngx_stream_session_t *s) > > +{ > > + ngx_url_t u; > > + ngx_str_t url; > > + ngx_addr_t *addr; > > + ngx_uint_t i; > > + ngx_listening_t *ls; > > + ngx_connection_t *c; > > + 
ngx_stream_pass_srv_conf_t *pscf; > > + > > + c = s->connection; > > + > > + c->log->action = "passing connection to another module"; > > + > > + if (c->buffer && c->buffer->pos != c->buffer->last) { > > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > > + "cannot pass connection with preread data"); > > + goto failed; > > + } > > + > > + pscf = ngx_stream_get_module_srv_conf(s, ngx_stream_pass_module); > > + > > + addr = pscf->addr; > > + > > + if (addr == NULL) { > > + if (ngx_stream_complex_value(s, pscf->addr_value, &url) != NGX_OK) { > > + goto failed; > > + } > > + > > + ngx_memzero(&u, sizeof(ngx_url_t)); > > + > > + u.url = url; > > + u.listen = 1; > > + u.no_resolve = 1; > > + > > + if (ngx_parse_url(s->connection->pool, &u) != NGX_OK) { > > + if (u.err) { > > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > > + "%s in pass \"%V\"", u.err, &u.url); > > + } > > + > > + goto failed; > > + } > > + > > + if (u.naddrs == 0) { > > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > > + "no addresses in pass \"%V\"", &u.url); > > + goto failed; > > + } > > + > > + addr = &u.addrs[0]; > > + } > > + > > + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, c->log, 0, > > + "stream pass addr: \"%V\"", &addr->name); > > + > > + ls = ngx_cycle->listening.elts; > > + > > + for (i = 0; i < ngx_cycle->listening.nelts; i++) { > > + if (ngx_cmp_sockaddr(ls[i].sockaddr, ls[i].socklen, > > + addr->sockaddr, addr->socklen, 1) > > + == NGX_OK) > > + { > > + c->listening = &ls[i]; > > The address configuration (addr_conf) is stored depending on the > protocol family of the listening socket, it's different for AF_INET6. > So, if the protocol family is switched when passing a connection, > it may happen that c->local_sockaddr->sa_family will keep a wrong > value, the listen handler will dereference addr_conf incorrectly. > > Consider the following example: > > server { > listen 127.0.0.1:8081; > pass [::1]:8091; > } > > server { > listen [::1]:8091; > ... > } > > When ls->handler is invoked, c->local_sockaddr is kept inherited > from the originally accepted connection, which is of AF_INET. > To fix this, c->local_sockaddr and c->local_socklen should be > updated according to the new listen socket configuration. Sure, thanks. > OTOH, c->sockaddr / c->socklen should be kept intact. > Note that this makes possible cross protocol family > configurations in e.g. realip and access modules; > from now on this will have to be taken into account. This is already possible with proxy_protocol+realip and is known to cause minor issues with third-party code that's too pedantic about families. Also I've just sent an updated patch which fixes PROXY protocol headers generated for mixed family addresses. 
> > + > > + c->data = NULL; > > + c->buffer = NULL; > > + > > + *c->log = c->listening->log; > > + c->log->handler = NULL; > > + c->log->data = NULL; > > + > > + c->listening->handler(c); > > + > > + return; > > + } > > + } > > + > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > + "listen not found for \"%V\"", &addr->name); > > + > > + ngx_stream_finalize_session(s, NGX_STREAM_OK); > > + > > + return; > > + > > +failed: > > + > > + ngx_stream_finalize_session(s, NGX_STREAM_INTERNAL_SERVER_ERROR); > > +} > > + > > + > > +static void * > > +ngx_stream_pass_create_srv_conf(ngx_conf_t *cf) > > +{ > > + ngx_stream_pass_srv_conf_t *conf; > > + > > + conf = ngx_pcalloc(cf->pool, sizeof(ngx_stream_pass_srv_conf_t)); > > + if (conf == NULL) { > > + return NULL; > > + } > > + > > + /* > > + * set by ngx_pcalloc(): > > + * > > + * conf->addr = NULL; > > + * conf->addr_value = NULL; > > + */ > > + > > + return conf; > > +} > > + > > + > > +static char * > > +ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > > +{ > > + ngx_stream_pass_srv_conf_t *pscf = conf; > > + > > + ngx_url_t u; > > + ngx_str_t *value, *url; > > + ngx_stream_complex_value_t cv; > > + ngx_stream_core_srv_conf_t *cscf; > > + ngx_stream_compile_complex_value_t ccv; > > + > > + if (pscf->addr || pscf->addr_value) { > > + return "is duplicate"; > > + } > > + > > + cscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_core_module); > > + > > + cscf->handler = ngx_stream_pass_handler; > > + > > + value = cf->args->elts; > > + > > + url = &value[1]; > > + > > + ngx_memzero(&ccv, sizeof(ngx_stream_compile_complex_value_t)); > > + > > + ccv.cf = cf; > > + ccv.value = url; > > + ccv.complex_value = &cv; > > + > > + if (ngx_stream_compile_complex_value(&ccv) != NGX_OK) { > > + return NGX_CONF_ERROR; > > + } > > + > > + if (cv.lengths) { > > + pscf->addr_value = ngx_palloc(cf->pool, > > + sizeof(ngx_stream_complex_value_t)); > > + if (pscf->addr_value == NULL) { > > + return NGX_CONF_ERROR; > > + } > > + > > + *pscf->addr_value = cv; > > + > > + return NGX_CONF_OK; > > + } > > + > > + ngx_memzero(&u, sizeof(ngx_url_t)); > > + > > + u.url = *url; > > + u.listen = 1; > > + > > + if (ngx_parse_url(cf->pool, &u) != NGX_OK) { > > + if (u.err) { > > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > + "%s in \"%V\" of the \"pass\" directive", > > + u.err, &u.url); > > + } > > + > > + return NGX_CONF_ERROR; > > + } > > + > > + if (u.naddrs == 0) { > > + return "has no addresses"; > > + } > > + > > + pscf->addr = &u.addrs[0]; > > + > > + return NGX_CONF_OK; > > +} Attached is an improved version with the following changes: - Removed 'listen = 1' flag when parsing "pass" parameter. Now it's treated like "proxy_pass" parameter. - Listen match reworked to be able to match wildcards. - Local_sockaddr is copied to the connection after match. - Fixes in log action, log messages, commit log etc. -- Roman Arutyunyan -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1708522562 -14400 # Wed Feb 21 17:36:02 2024 +0400 # Node ID 44da04c2d4db94ad4eefa84b299e07c5fa4a00b9 # Parent 4eb76c257fd07a69fc9e9386e845edcc9e2b1b08 Stream: ngx_stream_pass_module. The module allows to pass connections from Stream to other modules such as HTTP or Mail, as well as back to Stream. Previously, this was only possible with proxying. Connections with preread buffer read out from socket cannot be passed. The module allows selective SSL termination based on SNI. stream { server { listen 8000 default_server; ssl_preread on; ... 
} server { listen 8000; server_name foo.example.com; pass 127.0.0.1:8001; # to HTTP } server { listen 8000; server_name bar.example.com; ... } } http { server { listen 8001 ssl; ... location / { root html; } } } diff --git a/auto/modules b/auto/modules --- a/auto/modules +++ b/auto/modules @@ -1166,6 +1166,16 @@ if [ $STREAM != NO ]; then . auto/module fi + if [ $STREAM_PASS = YES ]; then + ngx_module_name=ngx_stream_pass_module + ngx_module_deps= + ngx_module_srcs=src/stream/ngx_stream_pass_module.c + ngx_module_libs= + ngx_module_link=$STREAM_PASS + + . auto/module + fi + if [ $STREAM_SET = YES ]; then ngx_module_name=ngx_stream_set_module ngx_module_deps= diff --git a/auto/options b/auto/options --- a/auto/options +++ b/auto/options @@ -127,6 +127,7 @@ STREAM_GEOIP=NO STREAM_MAP=YES STREAM_SPLIT_CLIENTS=YES STREAM_RETURN=YES +STREAM_PASS=YES STREAM_SET=YES STREAM_UPSTREAM_HASH=YES STREAM_UPSTREAM_LEAST_CONN=YES @@ -337,6 +338,7 @@ use the \"--with-mail_ssl_module\" optio --without-stream_split_clients_module) STREAM_SPLIT_CLIENTS=NO ;; --without-stream_return_module) STREAM_RETURN=NO ;; + --without-stream_pass_module) STREAM_PASS=NO ;; --without-stream_set_module) STREAM_SET=NO ;; --without-stream_upstream_hash_module) STREAM_UPSTREAM_HASH=NO ;; @@ -556,6 +558,7 @@ cat << END --without-stream_split_clients_module disable ngx_stream_split_clients_module --without-stream_return_module disable ngx_stream_return_module + --without-stream_pass_module disable ngx_stream_pass_module --without-stream_set_module disable ngx_stream_set_module --without-stream_upstream_hash_module disable ngx_stream_upstream_hash_module diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c new file mode 100644 --- /dev/null +++ b/src/stream/ngx_stream_pass_module.c @@ -0,0 +1,272 @@ + +/* + * Copyright (C) Roman Arutyunyan + * Copyright (C) Nginx, Inc. 
+ */ + + +#include +#include +#include + + +typedef struct { + ngx_addr_t *addr; + ngx_stream_complex_value_t *addr_value; +} ngx_stream_pass_srv_conf_t; + + +static void ngx_stream_pass_handler(ngx_stream_session_t *s); +static ngx_int_t ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr); +static void *ngx_stream_pass_create_srv_conf(ngx_conf_t *cf); +static char *ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); + + +static ngx_command_t ngx_stream_pass_commands[] = { + + { ngx_string("pass"), + NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_stream_pass, + NGX_STREAM_SRV_CONF_OFFSET, + 0, + NULL }, + + ngx_null_command +}; + + +static ngx_stream_module_t ngx_stream_pass_module_ctx = { + NULL, /* preconfiguration */ + NULL, /* postconfiguration */ + + NULL, /* create main configuration */ + NULL, /* init main configuration */ + + ngx_stream_pass_create_srv_conf, /* create server configuration */ + NULL /* merge server configuration */ +}; + + +ngx_module_t ngx_stream_pass_module = { + NGX_MODULE_V1, + &ngx_stream_pass_module_ctx, /* module context */ + ngx_stream_pass_commands, /* module directives */ + NGX_STREAM_MODULE, /* module type */ + NULL, /* init master */ + NULL, /* init module */ + NULL, /* init process */ + NULL, /* init thread */ + NULL, /* exit thread */ + NULL, /* exit process */ + NULL, /* exit master */ + NGX_MODULE_V1_PADDING +}; + + +static void +ngx_stream_pass_handler(ngx_stream_session_t *s) +{ + ngx_url_t u; + ngx_str_t url; + ngx_addr_t *addr; + ngx_uint_t i; + ngx_listening_t *ls; + struct sockaddr *sa; + ngx_connection_t *c; + ngx_stream_pass_srv_conf_t *pscf; + + c = s->connection; + + c->log->action = "passing connection to port"; + + if (c->buffer && c->buffer->pos != c->buffer->last) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "cannot pass connection with preread data"); + goto failed; + } + + pscf = ngx_stream_get_module_srv_conf(s, ngx_stream_pass_module); + + addr = pscf->addr; + + if (addr == NULL) { + if (ngx_stream_complex_value(s, pscf->addr_value, &url) != NGX_OK) { + goto failed; + } + + ngx_memzero(&u, sizeof(ngx_url_t)); + + u.url = url; + u.no_resolve = 1; + + if (ngx_parse_url(c->pool, &u) != NGX_OK) { + if (u.err) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "%s in pass \"%V\"", u.err, &u.url); + } + + goto failed; + } + + if (u.naddrs == 0) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "no addresses in pass \"%V\"", &u.url); + goto failed; + } + + addr = &u.addrs[0]; + } + + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, c->log, 0, + "stream pass addr: \"%V\"", &addr->name); + + ls = ngx_cycle->listening.elts; + + for (i = 0; i < ngx_cycle->listening.nelts; i++) { + + if (ngx_stream_pass_match(&ls[i], addr) != NGX_OK) { + continue; + } + + c->listening = &ls[i]; + + c->data = NULL; + c->buffer = NULL; + + *c->log = c->listening->log; + c->log->handler = NULL; + c->log->data = NULL; + + sa = ngx_palloc(c->pool, addr->socklen); + if (sa == NULL) { + goto failed; + } + + ngx_memcpy(sa, addr->sockaddr, addr->socklen); + c->local_sockaddr = sa; + c->local_socklen = addr->socklen; + + c->listening->handler(c); + + return; + } + + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "port not found for \"%V\"", &addr->name); + + ngx_stream_finalize_session(s, NGX_STREAM_OK); + + return; + +failed: + + ngx_stream_finalize_session(s, NGX_STREAM_INTERNAL_SERVER_ERROR); +} + + +static ngx_int_t +ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr) +{ + if (!ls->wildcard) { + return ngx_cmp_sockaddr(ls->sockaddr, ls->socklen, + addr->sockaddr, 
addr->socklen, 1); + } + + if (ls->sockaddr->sa_family == addr->sockaddr->sa_family + && ngx_inet_get_port(ls->sockaddr) == ngx_inet_get_port(addr->sockaddr)) + { + return NGX_OK; + } + + return NGX_DECLINED; +} + + +static void * +ngx_stream_pass_create_srv_conf(ngx_conf_t *cf) +{ + ngx_stream_pass_srv_conf_t *conf; + + conf = ngx_pcalloc(cf->pool, sizeof(ngx_stream_pass_srv_conf_t)); + if (conf == NULL) { + return NULL; + } + + /* + * set by ngx_pcalloc(): + * + * conf->addr = NULL; + * conf->addr_value = NULL; + */ + + return conf; +} + + +static char * +ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_stream_pass_srv_conf_t *pscf = conf; + + ngx_url_t u; + ngx_str_t *value, *url; + ngx_stream_complex_value_t cv; + ngx_stream_core_srv_conf_t *cscf; + ngx_stream_compile_complex_value_t ccv; + + if (pscf->addr || pscf->addr_value) { + return "is duplicate"; + } + + cscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_core_module); + + cscf->handler = ngx_stream_pass_handler; + + value = cf->args->elts; + + url = &value[1]; + + ngx_memzero(&ccv, sizeof(ngx_stream_compile_complex_value_t)); + + ccv.cf = cf; + ccv.value = url; + ccv.complex_value = &cv; + + if (ngx_stream_compile_complex_value(&ccv) != NGX_OK) { + return NGX_CONF_ERROR; + } + + if (cv.lengths) { + pscf->addr_value = ngx_palloc(cf->pool, + sizeof(ngx_stream_complex_value_t)); + if (pscf->addr_value == NULL) { + return NGX_CONF_ERROR; + } + + *pscf->addr_value = cv; + + return NGX_CONF_OK; + } + + ngx_memzero(&u, sizeof(ngx_url_t)); + + u.url = *url; + + if (ngx_parse_url(cf->pool, &u) != NGX_OK) { + if (u.err) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "%s in \"%V\" of the \"pass\" directive", + u.err, &u.url); + } + + return NGX_CONF_ERROR; + } + + if (u.naddrs == 0) { + return "has no addresses"; + } + + pscf->addr = &u.addrs[0]; + + return NGX_CONF_OK; +} From thresh at nginx.com Wed Feb 21 21:53:47 2024 From: thresh at nginx.com (=?iso-8859-1?q?Konstantin_Pavlov?=) Date: Wed, 21 Feb 2024 13:53:47 -0800 Subject: [PATCH 1 of 2] Linux packages: removed Ubuntu 23.04 'lunar' due to EOL Message-ID: <98a4f772621c4f075104.1708552427@qgcd7xg9r9.olympus.f5net.com> # HG changeset patch # User Konstantin Pavlov # Date 1708551797 28800 # Wed Feb 21 13:43:17 2024 -0800 # Node ID 98a4f772621c4f0751042ab0f7e1f2d4ba53556f # Parent e10905e43fa1d5abfdbc0bb6e9bd6e188aad6421 Linux packages: removed Ubuntu 23.04 'lunar' due to EOL. diff -r e10905e43fa1 -r 98a4f772621c xml/en/linux_packages.xml --- a/xml/en/linux_packages.xml Mon Feb 19 14:34:47 2024 +0000 +++ b/xml/en/linux_packages.xml Wed Feb 21 13:43:17 2024 -0800 @@ -7,7 +7,7 @@
+ rev="94">
@@ -88,11 +88,6 @@ versions: -23.04 “lunar” -x86_64, aarch64/arm64 - - - 23.10 “mantic” x86_64, aarch64/arm64 diff -r e10905e43fa1 -r 98a4f772621c xml/ru/linux_packages.xml --- a/xml/ru/linux_packages.xml Mon Feb 19 14:34:47 2024 +0000 +++ b/xml/ru/linux_packages.xml Wed Feb 21 13:43:17 2024 -0800 @@ -7,7 +7,7 @@
+ rev="94">
@@ -88,11 +88,6 @@ -23.04 “lunar” -x86_64, aarch64/arm64 - - - 23.10 “mantic” x86_64, aarch64/arm64 From thresh at nginx.com Wed Feb 21 21:53:48 2024 From: thresh at nginx.com (=?iso-8859-1?q?Konstantin_Pavlov?=) Date: Wed, 21 Feb 2024 13:53:48 -0800 Subject: [PATCH 2 of 2] Removed Maxim Dounin's PGP key In-Reply-To: <98a4f772621c4f075104.1708552427@qgcd7xg9r9.olympus.f5net.com> References: <98a4f772621c4f075104.1708552427@qgcd7xg9r9.olympus.f5net.com> Message-ID: <646ce0bcdac6817560f1.1708552428@qgcd7xg9r9.olympus.f5net.com> # HG changeset patch # User Konstantin Pavlov # Date 1708551944 28800 # Wed Feb 21 13:45:44 2024 -0800 # Node ID 646ce0bcdac6817560f1c39bbcdf7439cc0be73d # Parent 98a4f772621c4f0751042ab0f7e1f2d4ba53556f Removed Maxim Dounin's PGP key. diff -r 98a4f772621c -r 646ce0bcdac6 text/keys/mdounin.key --- a/text/keys/mdounin.key Wed Feb 21 13:43:17 2024 -0800 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,33 +0,0 @@ ------BEGIN PGP PUBLIC KEY BLOCK----- -Version: GnuPG v1.4.11 (FreeBSD) - -mQENBE7SKu8BCADQo6x4ZQfAcPlJMLmL8zBEBUS6GyKMMMDtrTh3Yaq481HB54oR -0cpKL05Ff9upjrIzLD5TJUCzYYM9GQOhguDUP8+ZU9JpSz3yO2TvH7WBbUZ8FADf -hblmmUBLNgOWgLo3W+FYhl3mz1GFS2Fvid6Tfn02L8CBAj7jxbjL1Qj/OA/WmLLc -m6BMTqI7IBlYW2vyIOIHasISGiAwZfp0ucMeXXvTtt14LGa8qXVcFnJTdwbf03AS -ljhYrQnKnpl3VpDAoQt8C68YCwjaNJW59hKqWB+XeIJ9CW98+EOAxLAFszSyGanp -rCqPd0numj9TIddjcRkTA/ZbmCWK+xjpVBGXABEBAAG0IU1heGltIERvdW5pbiA8 -bWRvdW5pbkBtZG91bmluLnJ1PokBOAQTAQIAIgUCTtIq7wIbAwYLCQgHAwIGFQgC -CQoLBBYCAwECHgECF4AACgkQUgqZk6HAUvj+iwf/b4FS6zVzJ5T0v1vcQGD4ZzXe -D5xMC4BJW414wVMU15rfX7aCdtoCYBNiApPxEd7SwiyxWRhRA9bikUq87JEgmnyV -0iYbHZvCvc1jOkx4WR7E45t1Mi29KBoPaFXA9X5adZkYcOQLDxa2Z8m6LGXnlF6N -tJkxQ8APrjZsdrbDvo3HxU9muPcq49ydzhgwfLwpUs11LYkwB0An9WRPuv3jporZ -/XgI6RfPMZ5NIx+FRRCjn6DnfHboY9rNF6NzrOReJRBhXCi6I+KkHHEnMoyg8XET -9lVkfHTOl81aIZqrAloX3/00TkYWyM2zO9oYpOg6eUFCX/Lw4MJZsTcT5EKVxIhG -BBARAgAGBQJO01Y/AAoJEOzw6QssFyCDVyQAn3qwTZlcZgyyzWu9Cs8gJ0CXREaS -AJ92QjGLT9DijTcbB+q9OS/nl16Z/IhGBBARAgAGBQJO02JDAAoJEKk3YTmlJMU+ -P64AnjCKEXFelSVMtgefJk3+vpyt3QX1AKCH9M3MbTWPeDUL+MpULlfdyfvjj7kB -DQRO0irvAQgA0LjCc8S6oZzjiap2MjRNhRFA5BYjXZRZBdKF2VP74avt2/RELq8G -W0n7JWmKn6vvrXabEGLyfkCngAhTq9tJ/K7LPx/bmlO5+jboO/1inH2BTtLiHjAX -vicXZk3oaZt2Sotx5mMI3yzpFQRVqZXsi0LpUTPJEh3oS8IdYRjslQh1A7P5hfCZ -wtzwb/hKm8upODe/ITUMuXeWfLuQj/uEU6wMzmfMHb+jlYMWtb+v98aJa2FODeKP -mWCXLa7bliXp1SSeBOEfIgEAmjM6QGlDx5sZhr2Ss2xSPRdZ8DqD7oiRVzmstX1Y -oxEzC0yXfaefC7SgM0nMnaTvYEOYJ9CH3wARAQABiQEfBBgBAgAJBQJO0irvAhsM -AAoJEFIKmZOhwFL4844H/jo8icCcS6eOWvnen7lg0FcCo1fIm4wW3tEmkQdchSHE -CJDq7pgTloN65pwB5tBoT47cyYNZA9eTfJVgRc74q5cexKOYrMC3KuAqWbwqXhkV -s0nkWxnOIidTHSXvBZfDFA4Idwte94Thrzf8Pn8UESudTiqrWoCBXk2UyVsl03gJ -blSJAeJGYPPeo+Yj6m63OWe2+/S2VTgmbPS/RObn0Aeg7yuff0n5+ytEt2KL51gO -QE2uIxTCawHr12PsllPkbqPk/PagIttfEJqn9b0CrqPC3HREePb2aMJ/Ctw/76CO -wn0mtXeIXLCTvBmznXfaMKllsqbsy2nCJ2P2uJjOntw= -=Tavt ------END PGP PUBLIC KEY BLOCK----- diff -r 98a4f772621c -r 646ce0bcdac6 xml/en/pgp_keys.xml --- a/xml/en/pgp_keys.xml Wed Feb 21 13:43:17 2024 -0800 +++ b/xml/en/pgp_keys.xml Wed Feb 21 13:45:44 2024 -0800 @@ -14,10 +14,6 @@ -Maxim Dounin’s -PGP public key - - Maxim Konovalov’s PGP public key From jordanc.carter at outlook.com Thu Feb 22 01:59:25 2024 From: jordanc.carter at outlook.com (J Carter) Date: Thu, 22 Feb 2024 01:59:25 +0000 Subject: [PATCH] Avoiding mixed socket families in PROXY protocol v1 (ticket #2594) In-Reply-To: <20240221132920.chmms5v3aekvmc2i@N00W24XTQX> References: <2f12c929527b2337c15e.1705920594@arut-laptop> <20240122154801.ycda4ie442ipzw6n@N00W24XTQX> 
<20240221132920.chmms5v3aekvmc2i@N00W24XTQX> Message-ID: Hello Roman, On Wed, 21 Feb 2024 17:29:52 +0400 Roman Arutyunyan wrote: > Hi, > [...] > Checking whether the address used in PROXY writer is in fact the address > that was passed in the PROXY header, is complicated. This will either require > setting a flag when PROXY address is set by realip, which is ugly. > Another approach is checking if the client address written to a PROXY header > matches the client address in the received PROXY header. However since > currently PROXY protocol addresses are stored as text, and not all addresses > have unique text representations, this approach would require refactoring all > PROXY protocol code + realip modules to switch from text to sockaddr. > > I suggest that we follow the first plan (INADDR_ANY etc). > > > [...] > > Updated patch attached. > > -- > Roman Arutyunyan > diff --git a/src/core/ngx_proxy_protocol.c b/src/core/ngx_proxy_protocol.c > --- a/src/core/ngx_proxy_protocol.c > +++ b/src/core/ngx_proxy_protocol.c > @@ -279,7 +279,10 @@ ngx_proxy_protocol_read_port(u_char *p, > u_char * > ngx_proxy_protocol_write(ngx_connection_t *c, u_char *buf, u_char *last) > { > - ngx_uint_t port, lport; > + socklen_t local_socklen; > + ngx_uint_t port, lport; > + struct sockaddr *local_sockaddr; > + static ngx_sockaddr_t default_sockaddr; I understand you are using the fact static variables are zero initialized - to be both INADDR_ANY and "IN6ADDR_ANY", however is this defined behavior for a union (specifically for ipv6 case) ? I was under the impression only the first declared member, along with padding bits were guaranteed to be zero'ed. https://stackoverflow.com/questions/54160137/what-constitutes-as-padding-in-a-union > > if (last - buf < NGX_PROXY_PROTOCOL_V1_MAX_HEADER) { > ngx_log_error(NGX_LOG_ALERT, c->log, 0, > @@ -312,11 +315,21 @@ ngx_proxy_protocol_write(ngx_connection_ > > *buf++ = ' '; > > - buf += ngx_sock_ntop(c->local_sockaddr, c->local_socklen, buf, last - buf, > - 0); > + if (c->sockaddr->sa_family == c->local_sockaddr->sa_family) { > + local_sockaddr = c->local_sockaddr; > + local_socklen = c->local_socklen; > + > + } else { > + default_sockaddr.sockaddr.sa_family = c->sockaddr->sa_family; > + > + local_sockaddr = &default_sockaddr.sockaddr; > + local_socklen = sizeof(ngx_sockaddr_t); > + } > + > + buf += ngx_sock_ntop(local_sockaddr, local_socklen, buf, last - buf, 0); > > port = ngx_inet_get_port(c->sockaddr); > - lport = ngx_inet_get_port(c->local_sockaddr); > + lport = ngx_inet_get_port(local_sockaddr); > > return ngx_slprintf(buf, last, " %ui %ui" CRLF, port, lport); > } > From pluknet at nginx.com Thu Feb 22 15:17:05 2024 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 22 Feb 2024 19:17:05 +0400 Subject: [PATCH 1 of 2] Linux packages: removed Ubuntu 23.04 'lunar' due to EOL In-Reply-To: <98a4f772621c4f075104.1708552427@qgcd7xg9r9.olympus.f5net.com> <646ce0bcdac6817560f1.1708552428@qgcd7xg9r9.olympus.f5net.com> Message-ID: On Wed, Feb 21, 2024 at 01:53:47PM -0800, Konstantin Pavlov wrote: > # HG changeset patch > # User Konstantin Pavlov > # Date 1708551797 28800 > # Wed Feb 21 13:43:17 2024 -0800 > # Node ID 98a4f772621c4f0751042ab0f7e1f2d4ba53556f > # Parent e10905e43fa1d5abfdbc0bb6e9bd6e188aad6421 > Linux packages: removed Ubuntu 23.04 'lunar' due to EOL.
> On Wed, Feb 21, 2024 at 01:53:48PM -0800, Konstantin Pavlov wrote: > # HG changeset patch > # User Konstantin Pavlov > # Date 1708551944 28800 > # Wed Feb 21 13:45:44 2024 -0800 > # Node ID 646ce0bcdac6817560f1c39bbcdf7439cc0be73d > # Parent 98a4f772621c4f0751042ab0f7e1f2d4ba53556f > Removed Maxim Dounin's PGP key. > Ok for me. From arut at nginx.com Thu Feb 22 15:17:26 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 22 Feb 2024 19:17:26 +0400 Subject: [PATCH] Avoiding mixed socket families in PROXY protocol v1 (ticket #2594) In-Reply-To: References: <2f12c929527b2337c15e.1705920594@arut-laptop> <20240122154801.ycda4ie442ipzw6n@N00W24XTQX> <20240221132920.chmms5v3aekvmc2i@N00W24XTQX> Message-ID: <20240222151726.3dzpvvanswdqhbkh@N00W24XTQX> Hi, On Thu, Feb 22, 2024 at 01:59:25AM +0000, J Carter wrote: > Hello Roman, > > On Wed, 21 Feb 2024 17:29:52 +0400 > Roman Arutyunyan wrote: > > > Hi, > > > > [...] > > > Checking whether the address used in PROXY writer is in fact the address > > that was passed in the PROXY header, is complicated. This will either require > > setting a flag when PROXY address is set by realip, which is ugly. > > Another approach is checking if the client address written to a PROXY header > > matches the client address in the received PROXY header. However since > > currently PROXY protocol addresses are stored as text, and not all addresses > > have unique text representations, this approach would require refactoring all > > PROXY protocol code + realip modules to switch from text to sockaddr. > > > > I suggest that we follow the first plan (INADDR_ANY etc). > > > > > [...] > > > > Updated patch attached. > > > > -- > > Roman Arutyunyan > > > diff --git a/src/core/ngx_proxy_protocol.c b/src/core/ngx_proxy_protocol.c > > --- a/src/core/ngx_proxy_protocol.c > > +++ b/src/core/ngx_proxy_protocol.c > > @@ -279,7 +279,10 @@ ngx_proxy_protocol_read_port(u_char *p, > > u_char * > > ngx_proxy_protocol_write(ngx_connection_t *c, u_char *buf, u_char *last) > > { > > - ngx_uint_t port, lport; > > + socklen_t local_socklen; > > + ngx_uint_t port, lport; > > + struct sockaddr *local_sockaddr; > > + static ngx_sockaddr_t default_sockaddr; > > I understand you are using the fact static variables are zero > initialized - to be both INADDR_ANY and "IN6ADDR_ANY", however is > this defined behavior for a union (specifically for ipv6 case) ? > > I was under the impression only the first declared member, along > with padding bits were guaranteed to be zero'ed. > > https://stackoverflow.com/questions/54160137/what-constitutes-as-padding-in-a-union It's not clear what exactly is meant by padding in that particular statement. It may as well be everything beyond the first member. However C99 does not require zeroing out the padding anyway. I can hardly believe there's a platform where the second union member may in fact be non-zero in this case. The entire union is allocated in the BSS segment and is not even present in the binary file. The reasons why the C standard has this grey area are clear. Zeroing out a chunk of memory may not be equivalent to initializing particular members to NULL, 0.0 etc on certain platforms. However nginx already heavily relies on ngx_memzero() for initializing most structures. For platforms where this works, there are no reasons why a static union would not be allocated in a BSS segment. However it's not hard to avoid the issue completely, just to be on the safe side. Thanks for noticing this.
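To make the zero-initialization question above concrete, here is a minimal standalone C sketch; it is not nginx code, and the union below only approximates ngx_sockaddr_t. It contrasts the static-union approach of the patch quoted above with the explicitly zeroed local structure that the updated patch below switches to. On common platforms both variants print a zero port and report the unspecified address, which matches the BSS argument; the second variant simply does not rely on it.

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>

/* rough stand-in for ngx_sockaddr_t; not the real nginx definition */
union test_sockaddr {
    struct sockaddr      sockaddr;
    struct sockaddr_in   sockaddr_in;
    struct sockaddr_in6  sockaddr_in6;
};

/* static storage duration: placed in BSS, starts out as zero bytes */
static union test_sockaddr  default_sockaddr;

int
main(void)
{
    struct sockaddr_in6  sin6;

    /* approach of the quoted patch: rely on the zeroed static union */
    default_sockaddr.sockaddr.sa_family = AF_INET6;

    printf("static union: port %u, unspecified address: %d\n",
           (unsigned) ntohs(default_sockaddr.sockaddr_in6.sin6_port),
           IN6_IS_ADDR_UNSPECIFIED(&default_sockaddr.sockaddr_in6.sin6_addr));

    /* approach of the updated patch below: explicitly zero a local struct */
    memset(&sin6, 0, sizeof(struct sockaddr_in6));
    sin6.sin6_family = AF_INET6;

    printf("zeroed local struct: port %u, unspecified address: %d\n",
           (unsigned) ntohs(sin6.sin6_port),
           IN6_IS_ADDR_UNSPECIFIED(&sin6.sin6_addr));

    return 0;
}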
> > if (last - buf < NGX_PROXY_PROTOCOL_V1_MAX_HEADER) { > > ngx_log_error(NGX_LOG_ALERT, c->log, 0, > > @@ -312,11 +315,21 @@ ngx_proxy_protocol_write(ngx_connection_ > > > > *buf++ = ' '; > > > > - buf += ngx_sock_ntop(c->local_sockaddr, c->local_socklen, buf, last - buf, > > - 0); > > + if (c->sockaddr->sa_family == c->local_sockaddr->sa_family) { > > + local_sockaddr = c->local_sockaddr; > > + local_socklen = c->local_socklen; > > + > > + } else { > > + default_sockaddr.sockaddr.sa_family = c->sockaddr->sa_family; > > + > > + local_sockaddr = &default_sockaddr.sockaddr; > > + local_socklen = sizeof(ngx_sockaddr_t); > > + } > > + > > + buf += ngx_sock_ntop(local_sockaddr, local_socklen, buf, last - buf, 0); > > > > port = ngx_inet_get_port(c->sockaddr); > > - lport = ngx_inet_get_port(c->local_sockaddr); > > + lport = ngx_inet_get_port(local_sockaddr); > > > > return ngx_slprintf(buf, last, " %ui %ui" CRLF, port, lport); > > } > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel -- Roman Arutyunyan -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1708522464 -14400 # Wed Feb 21 17:34:24 2024 +0400 # Node ID 2d9bb7b49d64576fa29a673133129f16de3cfbbe # Parent 44da04c2d4db94ad4eefa84b299e07c5fa4a00b9 Avoiding mixed socket families in PROXY protocol v1 (ticket #2010). When using realip module, remote and local addresses of a connection can belong to different address families. This previously resulted in generating PROXY protocol headers like this: PROXY TCP4 127.0.0.1 unix:/tmp/nginx1.sock 55544 0 The PROXY protocol v1 specification does not allow mixed families. The change substitutes server address with zero address in this case: PROXY TCP4 127.0.0.1 0.0.0.0 55544 0 As an alternative, "PROXY UNKNOWN" header could be used, which unlike this header does not contain any useful information about the client. Also, the above mentioned format for unix socket address is not specified in PROXY protocol v1 and is a by-product of internal nginx representation of it. The change eliminates such addresses from PROXY protocol headers as well. 
diff --git a/src/core/ngx_proxy_protocol.c b/src/core/ngx_proxy_protocol.c --- a/src/core/ngx_proxy_protocol.c +++ b/src/core/ngx_proxy_protocol.c @@ -279,7 +279,13 @@ ngx_proxy_protocol_read_port(u_char *p, u_char * ngx_proxy_protocol_write(ngx_connection_t *c, u_char *buf, u_char *last) { - ngx_uint_t port, lport; + socklen_t local_socklen; + ngx_uint_t port, lport; + struct sockaddr *local_sockaddr; + struct sockaddr_in sin; +#if (NGX_HAVE_INET6) + struct sockaddr_in6 sin6; +#endif if (last - buf < NGX_PROXY_PROTOCOL_V1_MAX_HEADER) { ngx_log_error(NGX_LOG_ALERT, c->log, 0, @@ -312,11 +318,35 @@ ngx_proxy_protocol_write(ngx_connection_ *buf++ = ' '; - buf += ngx_sock_ntop(c->local_sockaddr, c->local_socklen, buf, last - buf, - 0); + if (c->sockaddr->sa_family == c->local_sockaddr->sa_family) { + local_sockaddr = c->local_sockaddr; + local_socklen = c->local_socklen; + + } else { + switch (c->sockaddr->sa_family) { + +#if (NGX_HAVE_INET6) + case AF_INET6: + ngx_memzero(&sin6, sizeof(struct sockaddr_in6)); + sin6.sin6_family = AF_INET6; + local_sockaddr = (struct sockaddr *) &sin6; + local_socklen = sizeof(struct sockaddr_in6); + break; +#endif + + default: /* AF_INET */ + ngx_memzero(&sin, sizeof(struct sockaddr)); + sin.sin_family = AF_INET; + local_sockaddr = (struct sockaddr *) &sin; + local_socklen = sizeof(struct sockaddr_in); + break; + } + } + + buf += ngx_sock_ntop(local_sockaddr, local_socklen, buf, last - buf, 0); port = ngx_inet_get_port(c->sockaddr); - lport = ngx_inet_get_port(c->local_sockaddr); + lport = ngx_inet_get_port(local_sockaddr); return ngx_slprintf(buf, last, " %ui %ui" CRLF, port, lport); } From xeioex at nginx.com Fri Feb 23 01:39:33 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Fri, 23 Feb 2024 01:39:33 +0000 Subject: [njs] Fixed atob() with non-padded base64 strings. Message-ID: details: https://hg.nginx.org/njs/rev/272af619b821 branches: changeset: 2289:272af619b821 user: Dmitry Volyntsev date: Thu Feb 22 17:38:58 2024 -0800 description: Fixed atob() with non-padded base64 strings. This fixes #695 issue on Github. diffstat: src/njs_string.c | 9 ++++++++- src/test/njs_unit_test.c | 12 ++++++++++++ 2 files changed, 20 insertions(+), 1 deletions(-) diffs (41 lines): diff -r 0479e5821ab2 -r 272af619b821 src/njs_string.c --- a/src/njs_string.c Wed Feb 14 21:34:02 2024 -0800 +++ b/src/njs_string.c Thu Feb 22 17:38:58 2024 -0800 @@ -4298,7 +4298,14 @@ njs_string_atob(njs_vm_t *vm, njs_value_ } } - len = njs_base64_decoded_length(str.length, pad); + len = str.length; + + if (len % 4 != 0) { + pad = 4 - (len % 4); + len += pad; + } + + len = njs_base64_decoded_length(len, pad); njs_chb_init(&chain, vm->mem_pool); diff -r 0479e5821ab2 -r 272af619b821 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Feb 14 21:34:02 2024 -0800 +++ b/src/test/njs_unit_test.c Thu Feb 22 17:38:58 2024 -0800 @@ -10093,6 +10093,18 @@ static njs_unit_test_t njs_test[] = "].every(v => c(atob(v)).toString() == '8,52,86')"), njs_str("true")}, + { njs_str("atob('aGVsbG8=')"), + njs_str("hello") }, + + { njs_str("atob('aGVsbG8')"), + njs_str("hello") }, + + { njs_str("atob('TQ==')"), + njs_str("M") }, + + { njs_str("atob('TQ')"), + njs_str("M") }, + /* Functions. */ { njs_str("return"), From xeioex at nginx.com Fri Feb 23 05:19:47 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Fri, 23 Feb 2024 05:19:47 +0000 Subject: [njs] Shell: added QuickJS engine support. 
Message-ID: details: https://hg.nginx.org/njs/rev/cb3e068a511c branches: changeset: 2290:cb3e068a511c user: Dmitry Volyntsev date: Thu Feb 22 20:25:43 2024 -0800 description: Shell: added QuickJS engine support. diffstat: auto/expect | 22 +- auto/make | 23 +- auto/options | 2 + auto/quickjs | 55 + auto/summary | 4 + configure | 1 + external/njs_shell.c | 2225 ++++++++++++++++++++++++++++++++++++++++----- src/njs_builtin.c | 3 + src/test/njs_unit_test.c | 4 +- test/setup | 5 + test/shell_test.exp | 361 +------- test/shell_test_njs.exp | 418 ++++++++ test/test262 | 5 + 13 files changed, 2531 insertions(+), 597 deletions(-) diffs (truncated from 3866 to 1000 lines): diff -r 272af619b821 -r cb3e068a511c auto/expect --- a/auto/expect Thu Feb 22 17:38:58 2024 -0800 +++ b/auto/expect Thu Feb 22 20:25:43 2024 -0800 @@ -20,11 +20,31 @@ fi if [ $njs_found = yes -a $NJS_HAVE_READLINE = YES ]; then cat << END >> $NJS_MAKEFILE -shell_test: njs test/shell_test.exp +shell_test_njs: njs test/shell_test.exp PATH=$NJS_BUILD_DIR:\$(PATH) LANG=C.UTF-8 TERM=screen \ expect -f test/shell_test.exp + PATH=$NJS_BUILD_DIR:\$(PATH) LANG=C.UTF-8 TERM=screen \ + expect -f test/shell_test_njs.exp END +if [ $NJS_HAVE_QUICKJS = YES ]; then + cat << END >> $NJS_MAKEFILE + +shell_test: shell_test_njs shell_test_quickjs + +shell_test_quickjs: njs test/shell_test.exp + PATH=$NJS_BUILD_DIR:\$(PATH) LANG=C.UTF-8 TERM=screen NJS_ENGINE=QuickJS \ + expect -f test/shell_test.exp +END + +else + cat << END >> $NJS_MAKEFILE + +shell_test: shell_test_njs +END + +fi + else echo " - expect tests are disabled" diff -r 272af619b821 -r cb3e068a511c auto/make --- a/auto/make Thu Feb 22 17:38:58 2024 -0800 +++ b/auto/make Thu Feb 22 20:25:43 2024 -0800 @@ -241,8 +241,7 @@ lib_test: $NJS_BUILD_DIR/njs_auto_config $NJS_BUILD_DIR/lvlhsh_unit_test $NJS_BUILD_DIR/unicode_unit_test -test262: njs - +test262_njs: njs test/test262 --binary=$NJS_BUILD_DIR/njs unit_test: $NJS_BUILD_DIR/njs_auto_config.h \\ @@ -265,6 +264,26 @@ dist: && echo njs-\$(NJS_VER).tar.gz done END +if [ $NJS_HAVE_QUICKJS = YES ]; then + cat << END >> $NJS_MAKEFILE + +test262: njs test262_njs test262_quickjs + +test262_quickjs: njs + NJS_SKIP_LIST="test/js/promise_rejection_tracker_recursive.t.js \\ +test/js/async_exception_in_await.t.js" \\ + test/test262 --binary='$NJS_BUILD_DIR/njs -n QuickJS -m' +END + +else + cat << END >> $NJS_MAKEFILE + +test262: njs test262_njs +END + +fi + + njs_ts_deps=`echo $NJS_TS_SRCS \ | sed -e "s# *\([^ ][^ ]*\)#\1$njs_regex_cont#g"` diff -r 272af619b821 -r cb3e068a511c auto/options --- a/auto/options Thu Feb 22 17:38:58 2024 -0800 +++ b/auto/options Thu Feb 22 20:25:43 2024 -0800 @@ -14,6 +14,7 @@ NJS_DEBUG_GENERATOR=NO NJS_ADDRESS_SANITIZER=NO NJS_ADDR2LINE=NO +NJS_QUICKJS=YES NJS_OPENSSL=YES NJS_LIBXML2=YES NJS_ZLIB=YES @@ -47,6 +48,7 @@ do --debug-opcode=*) NJS_DEBUG_OPCODE="$value" ;; --debug-generator=*) NJS_DEBUG_GENERATOR="$value" ;; + --no-quickjs) NJS_QUICKJS=NO ;; --no-openssl) NJS_OPENSSL=NO ;; --no-libxml2) NJS_LIBXML2=NO ;; --no-zlib) NJS_ZLIB=NO ;; diff -r 272af619b821 -r cb3e068a511c auto/quickjs --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/auto/quickjs Thu Feb 22 20:25:43 2024 -0800 @@ -0,0 +1,55 @@ + +# Copyright (C) Dmitry Volyntsev +# Copyright (C) NGINX, Inc. 
+ + +NJS_QUICKJS_LIB= +NJS_HAVE_QUICKJS=NO + +if [ $NJS_QUICKJS = YES ]; then + njs_found=no + + njs_feature="QuickJS library" + njs_feature_name=NJS_HAVE_QUICKJS + njs_feature_run=yes + njs_feature_incs= + njs_feature_libs="" + njs_feature_test="#if defined(__GNUC__) && (__GNUC__ >= 8) + #pragma GCC diagnostic push + #pragma GCC diagnostic ignored \"-Wcast-function-type\" + #endif + + #include + + int main() { + JSRuntime *rt; + + rt = JS_NewRuntime(); + JS_FreeRuntime(rt); + return 0; + }" + . auto/feature + + if [ $njs_found = no ]; then + njs_feature="QuickJS library -lquickjs.lto" + njs_feature_incs="/usr/include/quickjs/" + njs_feature_libs="-L/usr/lib/quickjs/ -lquickjs.lto -lm -ldl -lpthread" + + . auto/feature + fi + + if [ $njs_found = no ]; then + njs_feature="QuickJS library -lquickjs" + njs_feature_libs="-L/usr/lib/quickjs/ -lquickjs -lm -ldl -lpthread" + + . auto/feature + fi + + if [ $njs_found = yes ]; then + NJS_HAVE_QUICKJS=YES + NJS_QUICKJS_LIB="$njs_feature_libs" + NJS_LIB_INCS="$NJS_LIB_INCS $njs_feature_incs" + NJS_LIB_AUX_LIBS="$NJS_LIB_AUX_LIBS $njs_feature_libs" + fi + +fi diff -r 272af619b821 -r cb3e068a511c auto/summary --- a/auto/summary Thu Feb 22 17:38:58 2024 -0800 +++ b/auto/summary Thu Feb 22 20:25:43 2024 -0800 @@ -18,6 +18,10 @@ if [ $NJS_HAVE_READLINE = YES ]; then echo " + using readline library: $NJS_READLINE_LIB" fi +if [ $NJS_HAVE_QUICKJS = YES ]; then + echo " + using QuickJS library: $NJS_QUICKJS_LIB" +fi + if [ $NJS_HAVE_OPENSSL = YES ]; then echo " + using OpenSSL library: $NJS_OPENSSL_LIB" fi diff -r 272af619b821 -r cb3e068a511c configure --- a/configure Thu Feb 22 17:38:58 2024 -0800 +++ b/configure Thu Feb 22 20:25:43 2024 -0800 @@ -50,6 +50,7 @@ NJS_LIB_AUX_LIBS= . auto/explicit_bzero . auto/pcre . auto/readline +. auto/quickjs . auto/openssl . auto/libxml2 . 
auto/zlib diff -r 272af619b821 -r cb3e068a511c external/njs_shell.c --- a/external/njs_shell.c Thu Feb 22 17:38:58 2024 -0800 +++ b/external/njs_shell.c Thu Feb 22 20:25:43 2024 -0800 @@ -11,6 +11,21 @@ #include #include +#if (NJS_HAVE_QUICKJS) +#if defined(__GNUC__) && (__GNUC__ >= 8) +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-Wcast-function-type" +#endif + +#include + +#if defined(__GNUC__) && (__GNUC__ >= 8) +#pragma GCC diagnostic pop +#endif +#define NJS_QUICKJS_VERSION "Unknown version" +#include +#endif + #if (!defined NJS_FUZZER_TARGET && defined NJS_HAVE_READLINE) #include @@ -38,8 +53,10 @@ typedef struct { uint8_t version; uint8_t ast; uint8_t unhandled_rejection; + uint8_t suppress_stdout; uint8_t opcode_debug; uint8_t generator_debug; + uint8_t can_block; int exit_code; int stack_size; @@ -49,6 +66,11 @@ typedef struct { njs_str_t *paths; char **argv; njs_uint_t argc; + + enum { + NJS_ENGINE_NJS = 0, + NJS_ENGINE_QUICKJS = 1, + } engine; } njs_opts_t; @@ -75,8 +97,19 @@ typedef struct { typedef struct { NJS_RBTREE_NODE (node); - njs_function_t *function; - njs_value_t *args; + union { + struct { + njs_function_t *function; + njs_value_t *args; + } njs; +#if (NJS_HAVE_QUICKJS) + struct { + JSValue function; + JSValue *args; + } qjs; +#endif + } u; + njs_uint_t nargs; uint32_t id; @@ -92,8 +125,18 @@ typedef struct { typedef struct { - void *promise; - njs_opaque_value_t message; + union { + struct { + njs_opaque_value_t promise; + njs_opaque_value_t message; + } njs; +#if (NJS_HAVE_QUICKJS) + struct { + JSValue promise; + JSValue message; + } qjs; +#endif + } u; } njs_rejected_promise_t; @@ -105,8 +148,39 @@ typedef struct { } njs_module_info_t; +typedef struct njs_engine_s njs_engine_t; + + +struct njs_engine_s { + union { + struct { + njs_vm_t *vm; + + njs_opaque_value_t value; + njs_completion_t completion; + } njs; +#if (NJS_HAVE_QUICKJS) + struct { + JSRuntime *rt; + JSContext *ctx; + JSValue value; + } qjs; +#endif + } u; + + njs_int_t (*eval)(njs_engine_t *engine, njs_str_t *script); + njs_int_t (*execute_pending_job)(njs_engine_t *engine); + njs_int_t (*unhandled_rejection)(njs_engine_t *engine); + njs_int_t (*process_events)(njs_engine_t *engine); + njs_int_t (*destroy)(njs_engine_t *engine); + njs_int_t (*output)(njs_engine_t *engine, njs_int_t ret); + + unsigned type; + njs_mp_t *pool; +}; + typedef struct { - njs_vm_t *vm; + njs_engine_t *engine; uint32_t event_id; njs_rbtree_t events; /* njs_ev_t * */ @@ -119,25 +193,57 @@ typedef struct { njs_arr_t *rejected_promises; njs_bool_t suppress_stdout; - - njs_completion_t completion; + njs_bool_t interactive; + njs_bool_t module; + char **argv; + njs_uint_t argc; + +#if (NJS_HAVE_QUICKJS) + JSValue process; + + njs_queue_t agents; + njs_queue_t reports; + pthread_mutex_t agent_mutex; + pthread_cond_t agent_cond; + pthread_mutex_t report_mutex; +#endif } njs_console_t; +#if (NJS_HAVE_QUICKJS) +typedef struct { + njs_queue_link_t link; + pthread_t tid; + njs_console_t *console; + char *script; + JSValue broadcast_func; + njs_bool_t broadcast_pending; + JSValue broadcast_sab; + uint8_t *broadcast_sab_buf; + size_t broadcast_sab_size; + int32_t broadcast_val; +} njs_262agent_t; + + +typedef struct { + njs_queue_link_t link; + char *str; +} njs_agent_report_t; +#endif + + static njs_int_t njs_main(njs_opts_t *opts); -static njs_int_t njs_console_init(njs_vm_t *vm, njs_console_t *console); -static void njs_console_output(njs_vm_t *vm, njs_value_t *value, - njs_int_t ret); +static njs_int_t 
njs_console_init(njs_opts_t *opts, njs_console_t *console); static njs_int_t njs_externals_init(njs_vm_t *vm); -static njs_vm_t *njs_create_vm(njs_opts_t *opts); -static void njs_process_output(njs_vm_t *vm, njs_value_t *value, njs_int_t ret); +static njs_engine_t *njs_create_engine(njs_opts_t *opts); static njs_int_t njs_process_file(njs_opts_t *opts); -static njs_int_t njs_process_script(njs_vm_t *vm, void *runtime, - const njs_str_t *script); +static njs_int_t njs_process_script(njs_engine_t *engine, + njs_console_t *console, njs_str_t *script); #ifndef NJS_FUZZER_TARGET static njs_int_t njs_options_parse(njs_opts_t *opts, int argc, char **argv); +static njs_int_t njs_options_parse_engine(njs_opts_t *opts, const char *engine); static njs_int_t njs_options_add_path(njs_opts_t *opts, char *path, size_t len); static void njs_options_free(njs_opts_t *opts); @@ -166,6 +272,9 @@ static void njs_console_log(njs_log_leve static void njs_console_logger(njs_log_level_t level, const u_char *start, size_t length); +static njs_int_t njs_console_time(njs_console_t *console, njs_str_t *name); +static void njs_console_time_end(njs_console_t *console, njs_str_t *name, + uint64_t time); static intptr_t njs_event_rbtree_compare(njs_rbtree_node_t *node1, njs_rbtree_node_t *node2); static uint64_t njs_time(void); @@ -317,8 +426,8 @@ static njs_console_t njs_console; static njs_int_t njs_main(njs_opts_t *opts) { - njs_vm_t *vm; - njs_int_t ret; + njs_int_t ret; + njs_engine_t *engine; njs_mm_denormals(opts->denormals); @@ -339,6 +448,12 @@ njs_main(njs_opts_t *opts) } } + ret = njs_console_init(opts, &njs_console); + if (njs_slow_path(ret != NJS_OK)) { + njs_stderror("njs_console_init() failed\n"); + return NJS_ERROR; + } + #if (!defined NJS_FUZZER_TARGET && defined NJS_HAVE_READLINE) if (opts->interactive) { @@ -349,13 +464,13 @@ njs_main(njs_opts_t *opts) #endif if (opts->command.length != 0) { - vm = njs_create_vm(opts); - if (vm == NULL) { + engine = njs_create_engine(opts); + if (engine == NULL) { return NJS_ERROR; } - ret = njs_process_script(vm, njs_vm_external_ptr(vm), &opts->command); - njs_vm_destroy(vm); + ret = njs_process_script(engine, &njs_console, &opts->command); + engine->destroy(engine); } else { ret = njs_process_file(opts); @@ -426,6 +541,10 @@ njs_options_parse(njs_opts_t *opts, int " -g enable generator debug.\n" #endif " -j set the maximum stack size in bytes.\n" + " -m load as ES6 module (script is default).\n" +#ifdef NJS_HAVE_QUICKJS + " -n njs|QuickJS set JS engine (njs is default)\n" +#endif #ifdef NJS_DEBUG_OPCODE " -o enable opcode debug.\n" #endif @@ -433,15 +552,14 @@ njs_options_parse(njs_opts_t *opts, int " -q disable interactive introduction prompt.\n" " -r ignore unhandled promise rejection.\n" " -s sandbox mode.\n" - " -t script|module source code type (script is default).\n" " -v print njs version and exit.\n" " -u disable \"unsafe\" mode.\n" " script.js | - run code from a file or stdin.\n"; - ret = NJS_DONE; - opts->denormals = 1; + opts->can_block = 1; opts->exit_code = EXIT_FAILURE; + opts->engine = NJS_ENGINE_NJS; opts->unhandled_rejection = 1; p = getenv("NJS_EXIT_CODE"); @@ -449,6 +567,24 @@ njs_options_parse(njs_opts_t *opts, int opts->exit_code = atoi(p); } + p = getenv("NJS_CAN_BLOCK"); + if (p != NULL) { + opts->can_block = atoi(p); + } + + p = getenv("NJS_LOAD_AS_MODULE"); + if (p != NULL) { + opts->module = 1; + } + + p = getenv("NJS_ENGINE"); + if (p != NULL) { + ret = njs_options_parse_engine(opts, p); + if (ret != NJS_OK) { + return NJS_ERROR; + } + } + 
start = getenv("NJS_PATH"); if (start != NULL) { for ( ;; ) { @@ -486,7 +622,7 @@ njs_options_parse(njs_opts_t *opts, int case '?': case 'h': njs_printf("%*s", njs_length(help), help); - return ret; + return NJS_DONE; case 'a': opts->ast = 1; @@ -541,6 +677,23 @@ njs_options_parse(njs_opts_t *opts, int njs_stderror("option \"-j\" requires argument\n"); return NJS_ERROR; + case 'm': + opts->module = 1; + break; + + case 'n': + if (++i < argc) { + ret = njs_options_parse_engine(opts, argv[i]); + if (ret != NJS_OK) { + return NJS_ERROR; + } + + break; + } + + njs_stderror("option \"-n\" requires argument\n"); + return NJS_ERROR; + #ifdef NJS_DEBUG_OPCODE case 'o': opts->opcode_debug = 1; @@ -573,22 +726,6 @@ njs_options_parse(njs_opts_t *opts, int opts->sandbox = 1; break; - case 't': - if (++i < argc) { - if (strcmp(argv[i], "module") == 0) { - opts->module = 1; - - } else if (strcmp(argv[i], "script") != 0) { - njs_stderror("option \"-t\" unexpected source type: %s\n", - argv[i]); - return NJS_ERROR; - } - - break; - } - - njs_stderror("option \"-t\" requires source type\n"); - return NJS_ERROR; case 'v': case 'V': opts->version = 1; @@ -608,6 +745,40 @@ njs_options_parse(njs_opts_t *opts, int done: +#ifdef NJS_HAVE_QUICKJS + if (opts->engine == NJS_ENGINE_QUICKJS) { + if (opts->ast) { + njs_stderror("option \"-a\" is not supported for quickjs\n"); + return NJS_ERROR; + } + + if (opts->disassemble) { + njs_stderror("option \"-d\" is not supported for quickjs\n"); + return NJS_ERROR; + } + + if (opts->generator_debug) { + njs_stderror("option \"-g\" is not supported for quickjs\n"); + return NJS_ERROR; + } + + if (opts->opcode_debug) { + njs_stderror("option \"-o\" is not supported for quickjs\n"); + return NJS_ERROR; + } + + if (opts->sandbox) { + njs_stderror("option \"-s\" is not supported for quickjs\n"); + return NJS_ERROR; + } + + if (opts->safe) { + njs_stderror("option \"-u\" is not supported for quickjs\n"); + return NJS_ERROR; + } + } +#endif + opts->argc = njs_max(argc - i + 1, 2); opts->argv = malloc(sizeof(char*) * opts->argc); if (opts->argv == NULL) { @@ -626,6 +797,26 @@ done: static njs_int_t +njs_options_parse_engine(njs_opts_t *opts, const char *engine) +{ + if (strncasecmp(engine, "njs", 3) == 0) { + opts->engine = NJS_ENGINE_NJS; + +#ifdef NJS_HAVE_QUICKJS + } else if (strncasecmp(engine, "QuickJS", 7) == 0) { + opts->engine = NJS_ENGINE_QUICKJS; +#endif + + } else { + njs_stderror("unknown engine \"%s\"\n", engine); + return NJS_ERROR; + } + + return NJS_OK; +} + + +static njs_int_t njs_options_add_path(njs_opts_t *opts, char *path, size_t len) { njs_str_t *paths; @@ -675,10 +866,7 @@ LLVMFuzzerTestOneInput(const uint8_t* da opts.file = (char *) "fuzzer"; opts.command.start = (u_char *) data; opts.command.length = size; - - njs_memzero(&njs_console, sizeof(njs_console_t)); - - njs_console.suppress_stdout = 1; + opts.suppress_stdout = 1; return njs_main(&opts); } @@ -686,23 +874,31 @@ LLVMFuzzerTestOneInput(const uint8_t* da #endif static njs_int_t -njs_console_init(njs_vm_t *vm, njs_console_t *console) +njs_console_init(njs_opts_t *opts, njs_console_t *console) { - console->vm = vm; - - console->event_id = 0; + njs_memzero(console, sizeof(njs_console_t)); + njs_rbtree_init(&console->events, njs_event_rbtree_compare); njs_queue_init(&console->posted_events); njs_queue_init(&console->labels); - njs_memzero(&console->cwd, sizeof(njs_str_t)); - - console->rejected_promises = NULL; - - console->completion.completions = njs_vm_completions(vm, NULL); - if 
(console->completion.completions == NULL) { - return NJS_ERROR; + console->interactive = opts->interactive; + console->suppress_stdout = opts->suppress_stdout; + console->module = opts->module; + console->argv = opts->argv; + console->argc = opts->argc; + +#if (NJS_HAVE_QUICKJS) + if (opts->engine == NJS_ENGINE_QUICKJS) { + njs_queue_init(&console->agents); + njs_queue_init(&console->reports); + pthread_mutex_init(&console->report_mutex, NULL); + pthread_mutex_init(&console->agent_mutex, NULL); + pthread_cond_init(&console->agent_cond, NULL); + + console->process = JS_UNDEFINED; } +#endif return NJS_OK; } @@ -741,7 +937,7 @@ njs_externals_init(njs_vm_t *vm) static const njs_str_t set_immediate = njs_str("setImmediate"); static const njs_str_t clear_timeout = njs_str("clearTimeout"); - console = njs_vm_options(vm)->external; + console = njs_vm_external_ptr(vm); njs_console_proto_id = njs_vm_external_prototype(vm, njs_ext_console, njs_nitems(njs_ext_console)); @@ -803,11 +999,6 @@ njs_externals_init(njs_vm_t *vm) return NJS_ERROR; } - ret = njs_console_init(vm, console); - if (njs_slow_path(ret != NJS_OK)) { - return NJS_ERROR; - } - return NJS_OK; } @@ -830,7 +1021,9 @@ njs_rejection_tracker(njs_vm_t *vm, njs_ promise_obj = njs_value_ptr(promise); for (i = 0; i < length; i++) { - if (rejected_promise[i].promise == promise_obj) { + if (njs_value_ptr(njs_value_arg(&rejected_promise[i].u.njs.promise)) + == promise_obj) + { njs_arr_remove(console->rejected_promises, &rejected_promise[i]); @@ -842,7 +1035,7 @@ njs_rejection_tracker(njs_vm_t *vm, njs_ } if (console->rejected_promises == NULL) { - console->rejected_promises = njs_arr_create(njs_vm_memory_pool(vm), 4, + console->rejected_promises = njs_arr_create(console->engine->pool, 4, sizeof(njs_rejected_promise_t)); if (njs_slow_path(console->rejected_promises == NULL)) { return; @@ -854,8 +1047,8 @@ njs_rejection_tracker(njs_vm_t *vm, njs_ return; } - rejected_promise->promise = njs_value_ptr(promise); - njs_value_assign(&rejected_promise->message, reason); + njs_value_assign(&rejected_promise->u.njs.promise, promise); + njs_value_assign(&rejected_promise->u.njs.message, reason); } @@ -968,7 +1161,7 @@ njs_module_read(njs_mp_t *mp, int fd, nj text->length = sb.st_size; - text->start = njs_mp_alloc(mp, text->length); + text->start = njs_mp_alloc(mp, text->length + 1); if (text->start == NULL) { goto fail; } @@ -979,6 +1172,8 @@ njs_module_read(njs_mp_t *mp, int fd, nj goto fail; } + text->start[text->length] = '\0'; + return NJS_OK; fail: @@ -1034,13 +1229,13 @@ current_dir: static njs_int_t -njs_console_set_cwd(njs_vm_t *vm, njs_console_t *console, njs_str_t *file) +njs_console_set_cwd(njs_console_t *console, njs_str_t *file) { njs_str_t cwd; njs_file_dirname(file, &cwd); - console->cwd.start = njs_mp_alloc(njs_vm_memory_pool(vm), cwd.length); + console->cwd.start = njs_mp_alloc(console->engine->pool, cwd.length); if (njs_slow_path(console->cwd.start == NULL)) { return NJS_ERROR; } @@ -1086,7 +1281,7 @@ njs_module_loader(njs_vm_t *vm, njs_exte prev_cwd = console->cwd; - ret = njs_console_set_cwd(vm, console, &info.file); + ret = njs_console_set_cwd(console, &info.file); if (njs_slow_path(ret != NJS_OK)) { njs_vm_internal_error(vm, "while setting cwd for \"%V\" module", &info.file); @@ -1107,8 +1302,8 @@ njs_module_loader(njs_vm_t *vm, njs_exte } -static njs_vm_t * -njs_create_vm(njs_opts_t *opts) +static njs_int_t +njs_engine_njs_init(njs_engine_t *engine, njs_opts_t *opts) { njs_vm_t *vm; njs_int_t ret; @@ -1147,7 +1342,12 @@ 
njs_create_vm(njs_opts_t *opts) vm = njs_vm_create(&vm_options); if (vm == NULL) { njs_stderror("failed to create vm\n"); - return NULL; + return NJS_ERROR; + } + + engine->u.njs.completion.completions = njs_vm_completions(vm, NULL); + if (engine->u.njs.completion.completions == NULL) { + return NJS_ERROR; } if (opts->unhandled_rejection) { @@ -1155,30 +1355,77 @@ njs_create_vm(njs_opts_t *opts) njs_vm_external_ptr(vm)); } - ret = njs_console_set_cwd(vm, njs_vm_external_ptr(vm), &vm_options.file); + ret = njs_console_set_cwd(njs_vm_external_ptr(vm), &vm_options.file); if (njs_slow_path(ret != NJS_OK)) { njs_stderror("failed to set cwd\n"); - return NULL; + return NJS_ERROR; } njs_vm_set_module_loader(vm, njs_module_loader, opts); - return vm; + engine->u.njs.vm = vm; + + return NJS_OK; +} + + +static njs_int_t +njs_engine_njs_destroy(njs_engine_t *engine) +{ + njs_vm_destroy(engine->u.njs.vm); + njs_mp_destroy(engine->pool); + + return NJS_OK; } -static void -njs_console_output(njs_vm_t *vm, njs_value_t *value, njs_int_t ret) +static njs_int_t +njs_engine_njs_eval(njs_engine_t *engine, njs_str_t *script) { - njs_str_t out; + u_char *start, *end; + njs_int_t ret; + + start = script->start; + end = start + script->length; + + ret = njs_vm_compile(engine->u.njs.vm, &start, end); + + if (ret == NJS_OK && start == end) { + return njs_vm_start(engine->u.njs.vm, + njs_value_arg(&engine->u.njs.value)); + } + + return NJS_ERROR; +} + + +static njs_int_t +njs_engine_njs_execute_pending_job(njs_engine_t *engine) +{ + return njs_vm_execute_pending_job(engine->u.njs.vm); +} + + +static njs_int_t +njs_engine_njs_output(njs_engine_t *engine, njs_int_t ret) +{ + njs_vm_t *vm; + njs_str_t out; + njs_console_t *console; + + vm = engine->u.njs.vm; + console = njs_vm_external_ptr(vm); if (ret == NJS_OK) { - if (njs_vm_value_dump(vm, &out, value, 0, 1) != NJS_OK) { - njs_stderror("Shell:failed to get retval from VM\n"); - return; - } - - if (njs_vm_options(vm)->interactive) { + if (console->interactive) { + if (njs_vm_value_dump(vm, &out, njs_value_arg(&engine->u.njs.value), + 0, 1) + != NJS_OK) + { + njs_stderror("Shell:failed to get retval from VM\n"); + return NJS_ERROR; + } + njs_print(out.start, out.length); njs_print("\n", 1); } @@ -1187,11 +1434,13 @@ njs_console_output(njs_vm_t *vm, njs_val njs_vm_exception_string(vm, &out); njs_stderror("Thrown:\n%V\n", &out); } + + return NJS_OK; } static njs_int_t -njs_process_events(void *runtime) +njs_engine_njs_process_events(njs_engine_t *engine) { njs_ev_t *ev; njs_vm_t *vm; @@ -1201,14 +1450,8 @@ njs_process_events(void *runtime) njs_queue_link_t *link; njs_opaque_value_t retval; - if (runtime == NULL) { - njs_stderror("njs_process_events(): no runtime\n"); - return NJS_ERROR; - } - - console = runtime; - vm = console->vm; - + vm = engine->u.njs.vm; + console = njs_vm_external_ptr(vm); events = &console->posted_events; for ( ;; ) { @@ -1221,17 +1464,14 @@ njs_process_events(void *runtime) ev = njs_queue_link_data(link, njs_ev_t, link); njs_queue_remove(&ev->link); - ev->link.prev = NULL; - ev->link.next = NULL; - njs_rbtree_delete(&console->events, &ev->node); - ret = njs_vm_invoke(vm, ev->function, ev->args, ev->nargs, + ret = njs_vm_invoke(vm, ev->u.njs.function, ev->u.njs.args, ev->nargs, njs_value_arg(&retval)); if (ret == NJS_ERROR) { - njs_process_output(vm, njs_value_arg(&retval), ret); - - if (!njs_vm_options(vm)->interactive) { + njs_engine_njs_output(engine, ret); + + if (!console->interactive) { return NJS_ERROR; } } @@ -1246,14 +1486,16 @@ 
njs_process_events(void *runtime) static njs_int_t -njs_unhandled_rejection(void *runtime) +njs_engine_njs_unhandled_rejection(njs_engine_t *engine) { + njs_vm_t *vm; njs_int_t ret; njs_str_t message; njs_console_t *console; njs_rejected_promise_t *rejected_promise; - console = runtime; + vm = engine->u.njs.vm; + console = njs_vm_external_ptr(vm); if (console->rejected_promises == NULL || console->rejected_promises->items == 0) @@ -1263,14 +1505,13 @@ njs_unhandled_rejection(void *runtime) rejected_promise = console->rejected_promises->start; - ret = njs_vm_value_to_string(console->vm, &message, - njs_value_arg(&rejected_promise->message)); + ret = njs_vm_value_to_string(vm, &message, + njs_value_arg(&rejected_promise->u.njs.message)); if (njs_slow_path(ret != NJS_OK)) { return -1; } - njs_vm_error(console->vm, "unhandled promise rejection: %V", - &message); + njs_vm_error(vm, "unhandled promise rejection: %V", &message); njs_arr_destroy(console->rejected_promises); console->rejected_promises = NULL; @@ -1278,6 +1519,1483 @@ njs_unhandled_rejection(void *runtime) return 1; } +#ifdef NJS_HAVE_QUICKJS + +static JSValue +njs_qjs_console_log(JSContext *ctx, JSValueConst this_val, int argc, + JSValueConst *argv, int magic) +{ + int i; + size_t len; + const char *str; + + for (i = 0; i < argc; i++) { + str = JS_ToCStringLen(ctx, &len, argv[i]); + if (!str) { + return JS_EXCEPTION; + } + + njs_console_logger(magic, (const u_char*) str, len); + JS_FreeCString(ctx, str); + } + + return JS_UNDEFINED; +} + + +static JSValue +njs_qjs_console_time(JSContext *ctx, JSValueConst this_val, int argc, + JSValueConst *argv) +{ + njs_str_t name; + const char *str; + njs_console_t *console; + From yar at nginx.com Fri Feb 23 10:32:29 2024 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Fri, 23 Feb 2024 10:32:29 +0000 Subject: [PATCH] Documented opensourcing of the OTel module In-Reply-To: <6FBEDBE8-90CA-4B3F-AE56-4796D3283B7C@nginx.com> References: <00807e94be3622a79d77.1706017747@ORK-ML-00007151> <6FBEDBE8-90CA-4B3F-AE56-4796D3283B7C@nginx.com> Message-ID: Hi, Thank you for the review, committed: http://hg.nginx.org/nginx.org/rev/48c688d80004 > On 6 Feb 2024, at 11:44, Yaroslav Zhuravlev wrote: > > Hi Maxim, > > Thank you for your comments, fixed, new version ready. > >>> >>> -The ngx_otel_module module (1.23.4) provides >>> +The ngx_otel_module module (1.23.4) is nginx-authored >> >> Quoting from >> https://mailman.nginx.org/pipermail/nginx-devel/2023-October/4AGH5XVKNP6UDFE32PZIXYO7JQ4RE37P.html: >> >> : Note that "nginx-authored" here looks misleading, as no nginx core >> : developers work on this module. > > Fixed, thanks > >> >>> +third-party module >>> +that provides >>> OpenTelemetry >>> distributed tracing support. >>> The module supports >>> @@ -23,12 +25,20 @@ >>> >>> >>> >>> +The module is open source since 1.25.2. >>> +Download and install instructions are available >>> +here. >>> +The module is also available as a prebuilt >>> +nginx-module-otel dynamic module >>> +package (1.25.4). >>> + >>> + >>> + >>> >>> This module is available as part of our >>> commercial subscription >>> -in nginx-plus-module-otel package. >>> -After installation, the module can be loaded >>> -dynamically. >>> +(the >>> +nginx-plus-module-otel package). >> >> I don't see reasons to provide additional links here. Rather, the >> note probably can be removed altogether, or changed to something >> like "In previuos versions, this module is available...". > > Removed, thanks > > [...] 
> > New version: > > # HG changeset patch > # User Yaroslav Zhuravlev > # Date 1704815768 0 > # Tue Jan 09 15:56:08 2024 +0000 > # Node ID 014598746fcb5dc953b15a6ea0de5410a7ecae6a > # Parent e6b785b7e3082fcde152b59b460448a33ec7df64 > Documented opensourcing of the OTel module. > > diff --git a/xml/en/docs/index.xml b/xml/en/docs/index.xml > --- a/xml/en/docs/index.xml > +++ b/xml/en/docs/index.xml > @@ -8,7 +8,7 @@ >
link="/en/docs/" > lang="en" > - rev="49" > + rev="50" > toc="no"> > @@ -681,6 +681,12 @@ > ngx_mgmt_module > > + > + > + > + > + > + > > > ngx_otel_module > diff --git a/xml/en/docs/ngx_otel_module.xml b/xml/en/docs/ngx_otel_module.xml > --- a/xml/en/docs/ngx_otel_module.xml > +++ b/xml/en/docs/ngx_otel_module.xml > @@ -9,12 +9,14 @@ > link="/en/docs/ngx_otel_module.html" > lang="en" > - rev="1"> > + rev="2"> >
> > -The ngx_otel_module module (1.23.4) provides > +The ngx_otel_module module (1.23.4) is a > +third-party module > +that provides > OpenTelemetry > distributed tracing support. > The module supports > @@ -23,13 +25,11 @@ > > > - > -This module is available as part of our > -commercial subscription > -in nginx-plus-module-otel package. > -After installation, the module can be loaded > -dynamically. > - > +Download and install instructions are available > +here. > +The module is also available as a prebuilt > +nginx-module-otel dynamic module > +package (1.25.3). > >
> diff --git a/xml/ru/docs/index.xml b/xml/ru/docs/index.xml > --- a/xml/ru/docs/index.xml > +++ b/xml/ru/docs/index.xml > @@ -8,7 +8,7 @@ >
link="/ru/docs/" > lang="ru" > - rev="49" > + rev="50" > toc="no"> > @@ -687,9 +687,15 @@ > ngx_mgmt_module [en] > > + > + > + > + > + > + > > > -ngx_otel_module [en] > +ngx_otel_module [en] > > > From jt26wzz at gmail.com Sun Feb 25 07:53:23 2024 From: jt26wzz at gmail.com (=?UTF-8?B?5LiK5Yu+5ouz?=) Date: Sun, 25 Feb 2024 15:53:23 +0800 Subject: Inquiry Regarding Handling of QUIC Connections During Nginx Reload Message-ID: Hello, I hope this email finds you well. I am writing to inquire about the status of an issue I have encountered regarding the handling of QUIC connections when Nginx is reloaded. Recently, I delved into the Nginx eBPF implementation, specifically examining how QUIC connection packets are handled, especially during Nginx reloads. My focus was on ensuring that existing QUIC connection packets are correctly routed to the appropriate worker after a reload, and the Nginx eBPF prog have done this job perfectly. However, I observed that during a reload, new QUIC connections might be directed to the old worker. The underlying problem stems from the fact that new QUIC connections fail to match the eBPF reuseport socket map. The kernel default logic then routes UDP packets based on the hash UDP 4-tuple in the reuseport group socket array. Since the old worker's listen FDs persist in the reuseport group socket array (reuse->socks), there is a possibility that the old worker may still be tasked with handling new QUIC connections. Given that the old worker should not process new requests, it results in the old worker not responding to the QUIC new connection. Consequently, clients have to endure the handshake timeout and retry the connection, potentially encountering the old worker again, leading to an ineffective cycle. In my investigation, I came across a thread in the nginx-devel mailing list [https://www.mail-archive.com/nginx-devel at nginx.org/msg10627.html], where it was mentioned that there would be some work addressing this issue. Considering my limited experience with eBPF, I propose a potential solution. The eBPF program could maintain another eBPF map containing only the listen sockets of the new worker. When the eBPF program calls `bpf_sk_select_reuseport` and receives `-ENOENT`, it could utilize this new eBPF map with the hash UDP 4-tuple to route the new QUIC connection to the new worker. This approach aims to circumvent the kernel logic routing the packet to the shutting down worker since removing the old worker's listen socket from the reuseport group socket array seems unfeasible. Not sure about whether this solution is a good idea and I also wonder if there are other solutions for this. I would appreciate any insights or updates you could provide on this matter. Thank you for your time and consideration. Best regards, Zhenzhong -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Mon Feb 26 11:49:30 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 26 Feb 2024 15:49:30 +0400 Subject: Inquiry Regarding Handling of QUIC Connections During Nginx Reload In-Reply-To: References: Message-ID: <20240226114930.izp2quxwsp2usnvg@N00W24XTQX> Hi, On Sun, Feb 25, 2024 at 03:53:23PM +0800, 上勾拳 wrote: > Hello, > > I hope this email finds you well. I am writing to inquire about the status > of an issue I have encountered regarding the handling of QUIC connections > when Nginx is reloaded. 
> > Recently, I delved into the Nginx eBPF implementation, specifically > examining how QUIC connection packets are handled, especially during Nginx > reloads. My focus was on ensuring that existing QUIC connection packets are > correctly routed to the appropriate worker after a reload, and the Nginx > eBPF prog have done this job perfectly. > > However, I observed that during a reload, new QUIC connections might be > directed to the old worker. The underlying problem stems from the fact that > new QUIC connections fail to match the eBPF reuseport socket map. The > kernel default logic then routes UDP packets based on the hash UDP 4-tuple > in the reuseport group socket array. Since the old worker's listen FDs > persist in the reuseport group socket array (reuse->socks), there is a > possibility that the old worker may still be tasked with handling new QUIC > connections. > > Given that the old worker should not process new requests, it results in > the old worker not responding to the QUIC new connection. Consequently, > clients have to endure the handshake timeout and retry the connection, > potentially encountering the old worker again, leading to an ineffective > cycle. > > In my investigation, I came across a thread in the nginx-devel mailing list > [https://www.mail-archive.com/nginx-devel at nginx.org/msg10627.html], where > it was mentioned that there would be some work addressing this issue. > > Considering my limited experience with eBPF, I propose a potential > solution. The eBPF program could maintain another eBPF map containing only > the listen sockets of the new worker. When the eBPF program calls > `bpf_sk_select_reuseport` and receives `-ENOENT`, it could utilize this new > eBPF map with the hash UDP 4-tuple to route the new QUIC connection to the > new worker. This approach aims to circumvent the kernel logic routing the > packet to the shutting down worker since removing the old worker's listen > socket from the reuseport group socket array seems unfeasible. Not sure > about whether this solution is a good idea and I also wonder if there are > other solutions for this. I would appreciate any insights or updates you > could provide on this matter. Thank you for your time and consideration. It is true QUIC in nginx does not handle reloads well. This is a known issue and we are working on improving it. I made an effort a while back to address QUIC reloads in nginx: https://mailman.nginx.org/pipermail/nginx-devel/2023-January/thread.html#16073 Here patch #3 implements ePBF-based solution and patch #4 implements client sockets-based solution. The client sockets require extra worker process privileges to bind to listen port, which is a serious problem for a typical nginx configuration. The ePBF solution does not seem to have any problems, but we still need more feedback to proceed with this. If you apply all 4 patches, make sure you disable client sockets using --without-quic_client_sockets. Otherwise just apply the first 3 patches. Here's a relevant trac ticket: https://trac.nginx.org/nginx/ticket/2528 -- Roman Arutyunyan From jt26wzz at gmail.com Mon Feb 26 14:10:51 2024 From: jt26wzz at gmail.com (=?UTF-8?B?5LiK5Yu+5ouz?=) Date: Mon, 26 Feb 2024 22:10:51 +0800 Subject: nginx-devel Digest, Vol 162, Issue 26 In-Reply-To: References: Message-ID: Hi Roman, Thanks a bunch for your quick response, it really made a difference for me. I went through Patch 3, and it's pretty cool! 
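To make the fallback idea from my first mail a bit more concrete, below is a rough
sketch of the kind of program I was thinking of; the map names, key layout and the
DCID handling are only placeholders of mine, not code taken from Patch 3 or from
nginx itself.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "BSD";

struct {
    __uint(type, BPF_MAP_TYPE_SOCKHASH);
    __uint(max_entries, 1024);
    __type(key, __u64);                /* connection id, as used today */
    __type(value, __u64);
} quic_sockmap SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
    __uint(max_entries, 64);
    __type(key, __u32);                /* one slot per new worker */
    __type(value, __u64);
} quic_new_workers SEC(".maps");       /* assumed to be repopulated on each
                                          reload with only the new
                                          generation's listen sockets */

SEC("sk_reuseport")
int quic_select_worker(struct sk_reuseport_md *ctx)
{
    __u32  slot;
    __u64  key = 0;                    /* real code must parse the QUIC
                                          header here and use the DCID */

    /* packets of existing connections: route by connection id, as today */
    if (bpf_sk_select_reuseport(ctx, &quic_sockmap, &key, 0) == 0) {
        return SK_PASS;
    }

    /* new connection (-ENOENT): hash the 4-tuple onto a new worker */
    slot = ctx->hash % 64;             /* 64 == max_entries above */

    if (bpf_sk_select_reuseport(ctx, &quic_new_workers, &slot, 0) == 0) {
        return SK_PASS;
    }

    /* nothing selected: let the kernel fall back to its default choice */
    return SK_PASS;
}

The only point is that a miss on the connection-id map is retried against a map
holding just the new workers' sockets, and the kernel's default selection over the
whole reuseport group (which still contains the old workers) is reached only if
that map is empty.
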
And about Patch 4, which also addresses the reload route issue, I would like to share an experience from utilizing this solution in a production environment. Unfortunately, I encountered a significant challenge that surpasses the race condition between bind and connect. Specifically, this solution led to a notable performance degradation in the kernel's UDP packet lookup under high concurrency scenarios. The excessive number of client sockets caused the UDP hash table lookup performance degrade into a linked list, because the udp hashtable is based on target ip and target port hash. To address this lookup performance issue, we implemented a proprietary kernel patch that introduces a 4-tuple hash table for UDP socket lookup. Although effective, it appears that the eBPF solution is more versatile and universal. Big thanks again for your attention! Best Regards, Zhenzhong 于2024年2月26日周一 20:00写道: > Send nginx-devel mailing list submissions to > nginx-devel at nginx.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://mailman.nginx.org/mailman/listinfo/nginx-devel > or, via email, send a message with subject or body 'help' to > nginx-devel-request at nginx.org > > You can reach the person managing the list at > nginx-devel-owner at nginx.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of nginx-devel digest..." > > > Today's Topics: > > 1. Re: Inquiry Regarding Handling of QUIC Connections During > Nginx Reload (Roman Arutyunyan) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 26 Feb 2024 15:49:30 +0400 > From: Roman Arutyunyan > To: nginx-devel at nginx.org > Subject: Re: Inquiry Regarding Handling of QUIC Connections During > Nginx Reload > Message-ID: <20240226114930.izp2quxwsp2usnvg at N00W24XTQX> > Content-Type: text/plain; charset=utf-8 > > Hi, > > On Sun, Feb 25, 2024 at 03:53:23PM +0800, 上勾拳 wrote: > > Hello, > > > > I hope this email finds you well. I am writing to inquire about the > status > > of an issue I have encountered regarding the handling of QUIC connections > > when Nginx is reloaded. > > > > Recently, I delved into the Nginx eBPF implementation, specifically > > examining how QUIC connection packets are handled, especially during > Nginx > > reloads. My focus was on ensuring that existing QUIC connection packets > are > > correctly routed to the appropriate worker after a reload, and the Nginx > > eBPF prog have done this job perfectly. > > > > However, I observed that during a reload, new QUIC connections might be > > directed to the old worker. The underlying problem stems from the fact > that > > new QUIC connections fail to match the eBPF reuseport socket map. The > > kernel default logic then routes UDP packets based on the hash UDP > 4-tuple > > in the reuseport group socket array. Since the old worker's listen FDs > > persist in the reuseport group socket array (reuse->socks), there is a > > possibility that the old worker may still be tasked with handling new > QUIC > > connections. > > > > Given that the old worker should not process new requests, it results in > > the old worker not responding to the QUIC new connection. Consequently, > > clients have to endure the handshake timeout and retry the connection, > > potentially encountering the old worker again, leading to an ineffective > > cycle. 
> > > > In my investigation, I came across a thread in the nginx-devel mailing > list > > [https://www.mail-archive.com/nginx-devel at nginx.org/msg10627.html], > where > > it was mentioned that there would be some work addressing this issue. > > > > Considering my limited experience with eBPF, I propose a potential > > solution. The eBPF program could maintain another eBPF map containing > only > > the listen sockets of the new worker. When the eBPF program calls > > `bpf_sk_select_reuseport` and receives `-ENOENT`, it could utilize this > new > > eBPF map with the hash UDP 4-tuple to route the new QUIC connection to > the > > new worker. This approach aims to circumvent the kernel logic routing the > > packet to the shutting down worker since removing the old worker's listen > > socket from the reuseport group socket array seems unfeasible. Not sure > > about whether this solution is a good idea and I also wonder if there are > > other solutions for this. I would appreciate any insights or updates you > > could provide on this matter. Thank you for your time and consideration. > > It is true QUIC in nginx does not handle reloads well. This is a known > issue > and we are working on improving it. I made an effort a while back to > address > QUIC reloads in nginx: > > > https://mailman.nginx.org/pipermail/nginx-devel/2023-January/thread.html#16073 > > Here patch #3 implements ePBF-based solution and patch #4 implements > client sockets-based solution. The client sockets require extra worker > process > privileges to bind to listen port, which is a serious problem for a typical > nginx configuration. The ePBF solution does not seem to have any problems, > but we still need more feedback to proceed with this. If you apply all 4 > patches, make sure you disable client sockets using > --without-quic_client_sockets. Otherwise just apply the first 3 patches. > > Here's a relevant trac ticket: > > https://trac.nginx.org/nginx/ticket/2528 > > -- > Roman Arutyunyan > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel > > > ------------------------------ > > End of nginx-devel Digest, Vol 162, Issue 26 > ******************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Tue Feb 27 07:52:48 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 27 Feb 2024 11:52:48 +0400 Subject: nginx-devel Digest, Vol 162, Issue 26 In-Reply-To: References: Message-ID: <20240227075248.duv6tuqpmihfjj6z@N00W24XTQX> Hi, On Mon, Feb 26, 2024 at 10:10:51PM +0800, 上勾拳 wrote: > Hi Roman, > > Thanks a bunch for your quick response, it really made a difference for me. > > I went through Patch 3, and it's pretty cool! And about Patch 4, which also > addresses the reload route issue, I would like to share an experience from > utilizing this solution in a production environment. Unfortunately, I > encountered a significant challenge that surpasses the race condition > between bind and connect. Specifically, this solution led to a notable > performance degradation in the kernel's UDP packet lookup under high > concurrency scenarios. The excessive number of client sockets caused the > UDP hash table lookup performance degrade into a linked list, because the > udp hashtable is based on target ip and target port hash. Thanks for the feedback. 
Apparently current UDP stack is just not made for client sockets. > To address this > lookup performance issue, we implemented a proprietary kernel patch that > introduces a 4-tuple hash table for UDP socket lookup. Sounds great. Also I wish there was a patch that would eliminate the race condition as well. This would be something like accept() for UDP with atomic bind+connect and would not require extra privileges for bind(). > Although effective, > it appears that the eBPF solution is more versatile and universal. eBPF comes with its own issues as well, you now need privileges for it. > Big thanks again for your attention! > > Best Regards, > Zhenzhong > > 于2024年2月26日周一 20:00写道: > > > Send nginx-devel mailing list submissions to > > nginx-devel at nginx.org > > > > To subscribe or unsubscribe via the World Wide Web, visit > > https://mailman.nginx.org/mailman/listinfo/nginx-devel > > or, via email, send a message with subject or body 'help' to > > nginx-devel-request at nginx.org > > > > You can reach the person managing the list at > > nginx-devel-owner at nginx.org > > > > When replying, please edit your Subject line so it is more specific > > than "Re: Contents of nginx-devel digest..." > > > > > > Today's Topics: > > > > 1. Re: Inquiry Regarding Handling of QUIC Connections During > > Nginx Reload (Roman Arutyunyan) > > > > > > ---------------------------------------------------------------------- > > > > Message: 1 > > Date: Mon, 26 Feb 2024 15:49:30 +0400 > > From: Roman Arutyunyan > > To: nginx-devel at nginx.org > > Subject: Re: Inquiry Regarding Handling of QUIC Connections During > > Nginx Reload > > Message-ID: <20240226114930.izp2quxwsp2usnvg at N00W24XTQX> > > Content-Type: text/plain; charset=utf-8 > > > > Hi, > > > > On Sun, Feb 25, 2024 at 03:53:23PM +0800, 上勾拳 wrote: > > > Hello, > > > > > > I hope this email finds you well. I am writing to inquire about the > > status > > > of an issue I have encountered regarding the handling of QUIC connections > > > when Nginx is reloaded. > > > > > > Recently, I delved into the Nginx eBPF implementation, specifically > > > examining how QUIC connection packets are handled, especially during > > Nginx > > > reloads. My focus was on ensuring that existing QUIC connection packets > > are > > > correctly routed to the appropriate worker after a reload, and the Nginx > > > eBPF prog have done this job perfectly. > > > > > > However, I observed that during a reload, new QUIC connections might be > > > directed to the old worker. The underlying problem stems from the fact > > that > > > new QUIC connections fail to match the eBPF reuseport socket map. The > > > kernel default logic then routes UDP packets based on the hash UDP > > 4-tuple > > > in the reuseport group socket array. Since the old worker's listen FDs > > > persist in the reuseport group socket array (reuse->socks), there is a > > > possibility that the old worker may still be tasked with handling new > > QUIC > > > connections. > > > > > > Given that the old worker should not process new requests, it results in > > > the old worker not responding to the QUIC new connection. Consequently, > > > clients have to endure the handshake timeout and retry the connection, > > > potentially encountering the old worker again, leading to an ineffective > > > cycle. 
> > > > > > In my investigation, I came across a thread in the nginx-devel mailing > > list > > > [https://www.mail-archive.com/nginx-devel at nginx.org/msg10627.html], > > where > > > it was mentioned that there would be some work addressing this issue. > > > > > > Considering my limited experience with eBPF, I propose a potential > > > solution. The eBPF program could maintain another eBPF map containing > > only > > > the listen sockets of the new worker. When the eBPF program calls > > > `bpf_sk_select_reuseport` and receives `-ENOENT`, it could utilize this > > new > > > eBPF map with the hash UDP 4-tuple to route the new QUIC connection to > > the > > > new worker. This approach aims to circumvent the kernel logic routing the > > > packet to the shutting down worker since removing the old worker's listen > > > socket from the reuseport group socket array seems unfeasible. Not sure > > > about whether this solution is a good idea and I also wonder if there are > > > other solutions for this. I would appreciate any insights or updates you > > > could provide on this matter. Thank you for your time and consideration. > > > > It is true QUIC in nginx does not handle reloads well. This is a known > > issue > > and we are working on improving it. I made an effort a while back to > > address > > QUIC reloads in nginx: > > > > > > https://mailman.nginx.org/pipermail/nginx-devel/2023-January/thread.html#16073 > > > > Here patch #3 implements ePBF-based solution and patch #4 implements > > client sockets-based solution. The client sockets require extra worker > > process > > privileges to bind to listen port, which is a serious problem for a typical > > nginx configuration. The ePBF solution does not seem to have any problems, > > but we still need more feedback to proceed with this. If you apply all 4 > > patches, make sure you disable client sockets using > > --without-quic_client_sockets. Otherwise just apply the first 3 patches. > > > > Here's a relevant trac ticket: > > > > https://trac.nginx.org/nginx/ticket/2528 > > > > -- > > Roman Arutyunyan > > > > ------------------------------ > > > > Subject: Digest Footer > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > ------------------------------ > > > > End of nginx-devel Digest, Vol 162, Issue 26 > > ******************************************** > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel -- Roman Arutyunyan From piotr at aviatrix.com Wed Feb 28 01:20:35 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:20:35 +0000 Subject: [PATCH] HTTP: stop emitting server version by default Message-ID: # HG changeset patch # User Piotr Sikora # Date 1708977611 0 # Mon Feb 26 20:00:11 2024 +0000 # Branch patch001 # Node ID a8a592b9b62eff7bca03e8b46669f59d2da689ed # Parent 89bff782528a91ad123b63b624f798e6fd9c8e68 HTTP: stop emitting server version by default. This information is only useful to attackers. The previous behavior can be restored using "server_tokens on". 
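For reference, only the compiled-in default changes; an explicit "server_tokens"
setting is still honored, because the merge macro fills in the default only when
the directive was never set. A simplified, standalone illustration (the macro body
mirrors src/core/ngx_conf_file.h; the values are hypothetical):

#include <stdio.h>

#define NGX_CONF_UNSET_UINT  ((unsigned) -1)

#define ngx_conf_merge_uint_value(conf, prev, default)                      \
    if (conf == NGX_CONF_UNSET_UINT) {                                      \
        conf = (prev == NGX_CONF_UNSET_UINT) ? default : prev;              \
    }

int
main(void)
{
    unsigned  set_on = 1;                    /* "server_tokens on;" in conf */
    unsigned  not_set = NGX_CONF_UNSET_UINT; /* directive absent */
    unsigned  prev = NGX_CONF_UNSET_UINT;    /* nothing inherited */

    /* 0 here is a stand-in for NGX_HTTP_SERVER_TOKENS_OFF */
    ngx_conf_merge_uint_value(set_on, prev, 0);
    ngx_conf_merge_uint_value(not_set, prev, 0);

    printf("%u %u\n", set_on, not_set);      /* prints "1 0" */

    return 0;
}
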
Signed-off-by: Piotr Sikora diff -r 89bff782528a -r a8a592b9b62e src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Wed Feb 14 20:03:00 2024 +0400 +++ b/src/http/ngx_http_core_module.c Mon Feb 26 20:00:11 2024 +0000 @@ -3899,7 +3899,7 @@ ngx_conf_merge_value(conf->etag, prev->etag, 1); ngx_conf_merge_uint_value(conf->server_tokens, prev->server_tokens, - NGX_HTTP_SERVER_TOKENS_ON); + NGX_HTTP_SERVER_TOKENS_OFF); ngx_conf_merge_ptr_value(conf->open_file_cache, prev->open_file_cache, NULL); From piotr at aviatrix.com Wed Feb 28 01:21:09 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:21:09 +0000 Subject: [PATCH] Core: free connections and read/write events at shutdown Message-ID: # HG changeset patch # User Piotr Sikora # Date 1708977616 0 # Mon Feb 26 20:00:16 2024 +0000 # Branch patch002 # Node ID f8d9fb94eab212f6e640b7a68ed111562e3157d5 # Parent a8a592b9b62eff7bca03e8b46669f59d2da689ed Core: free connections and read/write events at shutdown. Found with LeakSanitizer. Signed-off-by: Piotr Sikora diff -r a8a592b9b62e -r f8d9fb94eab2 src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c Mon Feb 26 20:00:11 2024 +0000 +++ b/src/os/unix/ngx_process_cycle.c Mon Feb 26 20:00:16 2024 +0000 @@ -940,6 +940,7 @@ ngx_worker_process_exit(ngx_cycle_t *cycle) { ngx_uint_t i; + ngx_event_t *rev, *wev; ngx_connection_t *c; for (i = 0; cycle->modules[i]; i++) { @@ -989,8 +990,16 @@ ngx_exit_cycle.files_n = ngx_cycle->files_n; ngx_cycle = &ngx_exit_cycle; + c = cycle->connections; + rev = cycle->read_events; + wev = cycle->write_events; + ngx_destroy_pool(cycle->pool); + ngx_free(c); + ngx_free(rev); + ngx_free(wev); + ngx_log_error(NGX_LOG_NOTICE, ngx_cycle->log, 0, "exit"); exit(0); From piotr at aviatrix.com Wed Feb 28 01:21:15 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:21:15 +0000 Subject: [PATCH] Upstream: cleanup at shutdown Message-ID: <8edb4003177dac56301a.1709083275@ip-172-31-36-66.ec2.internal> # HG changeset patch # User Piotr Sikora # Date 1708977618 0 # Mon Feb 26 20:00:18 2024 +0000 # Branch patch003 # Node ID 8edb4003177dac56301aed7f86f8d2a564b47552 # Parent f8d9fb94eab212f6e640b7a68ed111562e3157d5 Upstream: cleanup at shutdown. Add "free_upstream" callback called on worker exit to free any per-upstream objects allocated from the heap. Found with LeakSanitizer. 
Signed-off-by: Piotr Sikora diff -r f8d9fb94eab2 -r 8edb4003177d src/http/modules/ngx_http_upstream_random_module.c --- a/src/http/modules/ngx_http_upstream_random_module.c Mon Feb 26 20:00:16 2024 +0000 +++ b/src/http/modules/ngx_http_upstream_random_module.c Mon Feb 26 20:00:18 2024 +0000 @@ -114,6 +114,35 @@ } +static void +ngx_http_upstream_free_random(ngx_http_upstream_srv_conf_t *us) +{ +#if (NGX_HTTP_UPSTREAM_ZONE) + + ngx_http_upstream_rr_peers_t *peers; + ngx_http_upstream_random_srv_conf_t *rcf; + + peers = us->peer.data; + + if (peers->shpool) { + + rcf = ngx_http_conf_upstream_srv_conf(us, + ngx_http_upstream_random_module); + + if (rcf->ranges) { + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, + "free ranges: %p", rcf->ranges); + ngx_free(rcf->ranges); + rcf->ranges = NULL; + } + } + +#endif + + ngx_http_upstream_free_round_robin(us); +} + + static ngx_int_t ngx_http_upstream_update_random(ngx_pool_t *pool, ngx_http_upstream_srv_conf_t *us) @@ -465,6 +494,7 @@ } uscf->peer.init_upstream = ngx_http_upstream_init_random; + uscf->peer.free_upstream = ngx_http_upstream_free_random; uscf->flags = NGX_HTTP_UPSTREAM_CREATE |NGX_HTTP_UPSTREAM_WEIGHT diff -r f8d9fb94eab2 -r 8edb4003177d src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Mon Feb 26 20:00:16 2024 +0000 +++ b/src/http/ngx_http_upstream.c Mon Feb 26 20:00:18 2024 +0000 @@ -189,6 +189,8 @@ ngx_http_upstream_t *u, ngx_connection_t *c); #endif +static void ngx_http_upstream_worker_cleanup(ngx_cycle_t *cycle); + static ngx_http_upstream_header_t ngx_http_upstream_headers_in[] = { @@ -368,7 +370,7 @@ NULL, /* init process */ NULL, /* init thread */ NULL, /* exit thread */ - NULL, /* exit process */ + ngx_http_upstream_worker_cleanup, /* exit process */ NULL, /* exit master */ NGX_MODULE_V1_PADDING }; @@ -6829,3 +6831,29 @@ return NGX_CONF_OK; } + + +static void +ngx_http_upstream_worker_cleanup(ngx_cycle_t *cycle) +{ + ngx_uint_t i; + ngx_http_upstream_free_pt free; + ngx_http_upstream_srv_conf_t **uscfp; + ngx_http_upstream_main_conf_t *umcf; + + umcf = ngx_http_cycle_get_module_main_conf(cycle, ngx_http_upstream_module); + + if (umcf) { + + uscfp = umcf->upstreams.elts; + + for (i = 0; i < umcf->upstreams.nelts; i++) { + + free = uscfp[i]->peer.free_upstream + ? 
uscfp[i]->peer.free_upstream + : ngx_http_upstream_free_round_robin; + + free(uscfp[i]); + } + } +} diff -r f8d9fb94eab2 -r 8edb4003177d src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h Mon Feb 26 20:00:16 2024 +0000 +++ b/src/http/ngx_http_upstream.h Mon Feb 26 20:00:18 2024 +0000 @@ -82,11 +82,13 @@ ngx_http_upstream_srv_conf_t *us); typedef ngx_int_t (*ngx_http_upstream_init_peer_pt)(ngx_http_request_t *r, ngx_http_upstream_srv_conf_t *us); +typedef void (*ngx_http_upstream_free_pt)(ngx_http_upstream_srv_conf_t *us); typedef struct { ngx_http_upstream_init_pt init_upstream; ngx_http_upstream_init_peer_pt init; + ngx_http_upstream_free_pt free_upstream; void *data; } ngx_http_upstream_peer_t; diff -r f8d9fb94eab2 -r 8edb4003177d src/http/ngx_http_upstream_round_robin.c --- a/src/http/ngx_http_upstream_round_robin.c Mon Feb 26 20:00:16 2024 +0000 +++ b/src/http/ngx_http_upstream_round_robin.c Mon Feb 26 20:00:18 2024 +0000 @@ -851,3 +851,34 @@ } #endif + + +void +ngx_http_upstream_free_round_robin(ngx_http_upstream_srv_conf_t *us) +{ +#if (NGX_HTTP_SSL) + + ngx_uint_t i; + ngx_http_upstream_rr_peer_t *peer; + ngx_http_upstream_rr_peers_t *peers; + + peers = us->peer.data; + +#if (NGX_HTTP_UPSTREAM_ZONE) + if (peers->shpool) { + return; + } +#endif + + for (peer = peers->peer, i = 0; peer; peer = peer->next, i++) { + + if (peer->ssl_session) { + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, + "free session: %p", peer->ssl_session); + ngx_ssl_free_session(peer->ssl_session); + peer->ssl_session = NULL; + } + } + +#endif +} diff -r f8d9fb94eab2 -r 8edb4003177d src/http/ngx_http_upstream_round_robin.h --- a/src/http/ngx_http_upstream_round_robin.h Mon Feb 26 20:00:16 2024 +0000 +++ b/src/http/ngx_http_upstream_round_robin.h Mon Feb 26 20:00:18 2024 +0000 @@ -144,6 +144,7 @@ void *data); void ngx_http_upstream_free_round_robin_peer(ngx_peer_connection_t *pc, void *data, ngx_uint_t state); +void ngx_http_upstream_free_round_robin(ngx_http_upstream_srv_conf_t *us); #if (NGX_HTTP_SSL) ngx_int_t diff -r f8d9fb94eab2 -r 8edb4003177d src/stream/ngx_stream_upstream.c --- a/src/stream/ngx_stream_upstream.c Mon Feb 26 20:00:16 2024 +0000 +++ b/src/stream/ngx_stream_upstream.c Mon Feb 26 20:00:18 2024 +0000 @@ -25,6 +25,8 @@ static void *ngx_stream_upstream_create_main_conf(ngx_conf_t *cf); static char *ngx_stream_upstream_init_main_conf(ngx_conf_t *cf, void *conf); +static void ngx_stream_upstream_worker_cleanup(ngx_cycle_t *cycle); + static ngx_command_t ngx_stream_upstream_commands[] = { @@ -68,7 +70,7 @@ NULL, /* init process */ NULL, /* init thread */ NULL, /* exit thread */ - NULL, /* exit process */ + ngx_stream_upstream_worker_cleanup, /* exit process */ NULL, /* exit master */ NGX_MODULE_V1_PADDING }; @@ -713,3 +715,30 @@ return NGX_CONF_OK; } + + +static void +ngx_stream_upstream_worker_cleanup(ngx_cycle_t *cycle) +{ + ngx_uint_t i; + ngx_stream_upstream_free_pt free; + ngx_stream_upstream_srv_conf_t **uscfp; + ngx_stream_upstream_main_conf_t *umcf; + + umcf = ngx_stream_cycle_get_module_main_conf(cycle, + ngx_stream_upstream_module); + + if (umcf) { + + uscfp = umcf->upstreams.elts; + + for (i = 0; i < umcf->upstreams.nelts; i++) { + + free = uscfp[i]->peer.free_upstream + ? 
uscfp[i]->peer.free_upstream + : ngx_stream_upstream_free_round_robin; + + free(uscfp[i]); + } + } +} diff -r f8d9fb94eab2 -r 8edb4003177d src/stream/ngx_stream_upstream.h --- a/src/stream/ngx_stream_upstream.h Mon Feb 26 20:00:16 2024 +0000 +++ b/src/stream/ngx_stream_upstream.h Mon Feb 26 20:00:18 2024 +0000 @@ -40,11 +40,13 @@ ngx_stream_upstream_srv_conf_t *us); typedef ngx_int_t (*ngx_stream_upstream_init_peer_pt)(ngx_stream_session_t *s, ngx_stream_upstream_srv_conf_t *us); +typedef void (*ngx_stream_upstream_free_pt)(ngx_stream_upstream_srv_conf_t *us); typedef struct { ngx_stream_upstream_init_pt init_upstream; ngx_stream_upstream_init_peer_pt init; + ngx_stream_upstream_free_pt free_upstream; void *data; } ngx_stream_upstream_peer_t; diff -r f8d9fb94eab2 -r 8edb4003177d src/stream/ngx_stream_upstream_random_module.c --- a/src/stream/ngx_stream_upstream_random_module.c Mon Feb 26 20:00:16 2024 +0000 +++ b/src/stream/ngx_stream_upstream_random_module.c Mon Feb 26 20:00:18 2024 +0000 @@ -112,6 +112,35 @@ } +static void +ngx_stream_upstream_free_random(ngx_stream_upstream_srv_conf_t *us) +{ +#if (NGX_STREAM_UPSTREAM_ZONE) + + ngx_stream_upstream_rr_peers_t *peers; + ngx_stream_upstream_random_srv_conf_t *rcf; + + peers = us->peer.data; + + if (peers->shpool) { + + rcf = ngx_stream_conf_upstream_srv_conf(us, + ngx_stream_upstream_random_module); + + if (rcf->ranges) { + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, ngx_cycle->log, 0, + "free ranges: %p", rcf->ranges); + ngx_free(rcf->ranges); + rcf->ranges = NULL; + } + } + +#endif + + ngx_stream_upstream_free_round_robin(us); +} + + static ngx_int_t ngx_stream_upstream_update_random(ngx_pool_t *pool, ngx_stream_upstream_srv_conf_t *us) @@ -465,6 +494,7 @@ } uscf->peer.init_upstream = ngx_stream_upstream_init_random; + uscf->peer.free_upstream = ngx_stream_upstream_free_random; uscf->flags = NGX_STREAM_UPSTREAM_CREATE |NGX_STREAM_UPSTREAM_WEIGHT diff -r f8d9fb94eab2 -r 8edb4003177d src/stream/ngx_stream_upstream_round_robin.c --- a/src/stream/ngx_stream_upstream_round_robin.c Mon Feb 26 20:00:16 2024 +0000 +++ b/src/stream/ngx_stream_upstream_round_robin.c Mon Feb 26 20:00:18 2024 +0000 @@ -883,3 +883,34 @@ } #endif + + +void +ngx_stream_upstream_free_round_robin(ngx_stream_upstream_srv_conf_t *us) +{ +#if (NGX_STREAM_SSL) + + ngx_uint_t i; + ngx_stream_upstream_rr_peer_t *peer; + ngx_stream_upstream_rr_peers_t *peers; + + peers = us->peer.data; + +#if (NGX_STREAM_UPSTREAM_ZONE) + if (peers->shpool) { + return; + } +#endif + + for (peer = peers->peer, i = 0; peer; peer = peer->next, i++) { + + if (peer->ssl_session) { + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, ngx_cycle->log, 0, + "free session: %p", peer->ssl_session); + ngx_ssl_free_session(peer->ssl_session); + peer->ssl_session = NULL; + } + } + +#endif +} diff -r f8d9fb94eab2 -r 8edb4003177d src/stream/ngx_stream_upstream_round_robin.h --- a/src/stream/ngx_stream_upstream_round_robin.h Mon Feb 26 20:00:16 2024 +0000 +++ b/src/stream/ngx_stream_upstream_round_robin.h Mon Feb 26 20:00:18 2024 +0000 @@ -142,6 +142,7 @@ void *data); void ngx_stream_upstream_free_round_robin_peer(ngx_peer_connection_t *pc, void *data, ngx_uint_t state); +void ngx_stream_upstream_free_round_robin(ngx_stream_upstream_srv_conf_t *us); #endif /* _NGX_STREAM_UPSTREAM_ROUND_ROBIN_H_INCLUDED_ */ From piotr at aviatrix.com Wed Feb 28 01:21:32 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:21:32 +0000 Subject: [PATCH] Correctly initialize ngx_str_t Message-ID: 
<52936793ac076072c354.1709083292@ip-172-31-36-66.ec2.internal> # HG changeset patch # User Piotr Sikora # Date 1708977619 0 # Mon Feb 26 20:00:19 2024 +0000 # Branch patch004 # Node ID 52936793ac076072c3544aa4e27f973d2f8fecda # Parent 8edb4003177dac56301aed7f86f8d2a564b47552 Correctly initialize ngx_str_t. Previously, only the "len" field was set, which resulted in an uninitialized "data" field accessed elsewhere in the code. Note that "r->uri" is initialized to an empty string to avoid changing the existing value for "$uri" in case of invalid URI. Found with MemorySanitizer. Signed-off-by: Piotr Sikora diff -r 8edb4003177d -r 52936793ac07 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/event/ngx_event_openssl.c Mon Feb 26 20:00:19 2024 +0000 @@ -5064,7 +5064,7 @@ n = SSL_get0_raw_cipherlist(c->ssl->connection, &ciphers); if (n <= 0) { - s->len = 0; + ngx_str_null(s); return NGX_OK; } @@ -5116,7 +5116,7 @@ if (SSL_get_shared_ciphers(c->ssl->connection, (char *) buf, 4096) == NULL) { - s->len = 0; + ngx_str_null(s); return NGX_OK; } @@ -5165,7 +5165,7 @@ #endif - s->len = 0; + ngx_str_null(s); return NGX_OK; } @@ -5182,7 +5182,7 @@ n = SSL_get1_curves(c->ssl->connection, NULL); if (n <= 0) { - s->len = 0; + ngx_str_null(s); return NGX_OK; } @@ -5233,7 +5233,7 @@ #else - s->len = 0; + ngx_str_null(s); #endif @@ -5250,7 +5250,7 @@ sess = SSL_get0_session(c->ssl->connection); if (sess == NULL) { - s->len = 0; + ngx_str_null(s); return NGX_OK; } @@ -5285,7 +5285,7 @@ ngx_int_t ngx_ssl_get_early_data(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s) { - s->len = 0; + ngx_str_null(s); #ifdef SSL_ERROR_EARLY_DATA_REJECTED @@ -5335,7 +5335,7 @@ #endif - s->len = 0; + ngx_str_null(s); return NGX_OK; } @@ -5365,7 +5365,7 @@ #endif - s->len = 0; + ngx_str_null(s); return NGX_OK; } @@ -5377,10 +5377,9 @@ BIO *bio; X509 *cert; - s->len = 0; - cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } @@ -5433,7 +5432,7 @@ } if (cert.len == 0) { - s->len = 0; + ngx_str_null(s); return NGX_OK; } @@ -5476,7 +5475,7 @@ } if (cert.len == 0) { - s->len = 0; + ngx_str_null(s); return NGX_OK; } @@ -5501,10 +5500,9 @@ X509 *cert; X509_NAME *name; - s->len = 0; - cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } @@ -5555,10 +5553,9 @@ X509 *cert; X509_NAME *name; - s->len = 0; - cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } @@ -5611,10 +5608,9 @@ X509 *cert; X509_NAME *name; - s->len = 0; - cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } @@ -5659,10 +5655,9 @@ X509 *cert; X509_NAME *name; - s->len = 0; - cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } @@ -5705,10 +5700,9 @@ X509 *cert; BIO *bio; - s->len = 0; - cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } @@ -5745,10 +5739,9 @@ unsigned int len; u_char buf[EVP_MAX_MD_SIZE]; - s->len = 0; - cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } @@ -5818,10 +5811,9 @@ X509 *cert; size_t len; - s->len = 0; - cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } @@ -5863,10 +5855,9 @@ X509 *cert; size_t len; - s->len = 0; - cert = 
SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } @@ -5907,10 +5898,9 @@ X509 *cert; time_t now, end; - s->len = 0; - cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { + ngx_str_null(s); return NGX_OK; } diff -r 8edb4003177d -r 52936793ac07 src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/event/quic/ngx_event_quic_streams.c Mon Feb 26 20:00:19 2024 +0000 @@ -719,8 +719,7 @@ addr_text.len = c->addr_text.len; } else { - addr_text.len = 0; - addr_text.data = NULL; + ngx_str_null(&addr_text); } reusable = c->reusable; diff -r 8edb4003177d -r 52936793ac07 src/http/modules/ngx_http_auth_request_module.c --- a/src/http/modules/ngx_http_auth_request_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/ngx_http_auth_request_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -373,9 +373,7 @@ value = cf->args->elts; if (ngx_strcmp(value[1].data, "off") == 0) { - arcf->uri.len = 0; - arcf->uri.data = (u_char *) ""; - + ngx_str_set(&arcf->uri, ""); return NGX_CONF_OK; } diff -r 8edb4003177d -r 52936793ac07 src/http/modules/ngx_http_autoindex_module.c --- a/src/http/modules/ngx_http_autoindex_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/ngx_http_autoindex_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -816,7 +816,7 @@ ngx_uint_t i; if (ngx_http_arg(r, (u_char *) "callback", 8, callback) != NGX_OK) { - callback->len = 0; + ngx_str_null(callback); return NGX_OK; } diff -r 8edb4003177d -r 52936793ac07 src/http/modules/ngx_http_charset_filter_module.c --- a/src/http/modules/ngx_http_charset_filter_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/ngx_http_charset_filter_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -437,7 +437,7 @@ charset = lcf->source_charset; if (charset == NGX_HTTP_CHARSET_OFF) { - name->len = 0; + ngx_str_null(name); return charset; } @@ -502,7 +502,7 @@ * use this charset instead of the next page charset */ - r->headers_out.charset.len = 0; + ngx_str_null(&r->headers_out.charset); return; } diff -r 8edb4003177d -r 52936793ac07 src/http/modules/ngx_http_limit_conn_module.c --- a/src/http/modules/ngx_http_limit_conn_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/ngx_http_limit_conn_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -587,7 +587,7 @@ } size = 0; - name.len = 0; + ngx_str_null(&name); for (i = 2; i < cf->args->nelts; i++) { diff -r 8edb4003177d -r 52936793ac07 src/http/modules/ngx_http_limit_req_module.c --- a/src/http/modules/ngx_http_limit_req_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/ngx_http_limit_req_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -862,7 +862,7 @@ size = 0; rate = 1; scale = 1; - name.len = 0; + ngx_str_null(&name); for (i = 2; i < cf->args->nelts; i++) { diff -r 8edb4003177d -r 52936793ac07 src/http/modules/ngx_http_not_modified_filter_module.c --- a/src/http/modules/ngx_http_not_modified_filter_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/ngx_http_not_modified_filter_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -92,8 +92,8 @@ /* not modified */ r->headers_out.status = NGX_HTTP_NOT_MODIFIED; - r->headers_out.status_line.len = 0; - r->headers_out.content_type.len = 0; + ngx_str_null(&r->headers_out.status_line); + ngx_str_null(&r->headers_out.content_type); ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); diff -r 8edb4003177d -r 52936793ac07 src/http/modules/ngx_http_proxy_module.c --- 
a/src/http/modules/ngx_http_proxy_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/ngx_http_proxy_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -4223,7 +4223,7 @@ return NGX_CONF_ERROR; } - plcf->location.len = 0; + ngx_str_null(&plcf->location); } plcf->url = *url; diff -r 8edb4003177d -r 52936793ac07 src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/ngx_http_range_filter_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -232,7 +232,7 @@ ngx_http_set_ctx(r, ctx, ngx_http_range_body_filter_module); r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; - r->headers_out.status_line.len = 0; + ngx_str_null(&r->headers_out.status_line); if (ctx->ranges.nelts == 1) { return ngx_http_range_singlepart_header(r, ctx); @@ -551,7 +551,7 @@ r->headers_out.content_type_len = r->headers_out.content_type.len; - r->headers_out.charset.len = 0; + ngx_str_null(&r->headers_out.charset); /* the size of the last boundary CRLF "--0123456789--" CRLF */ diff -r 8edb4003177d -r 52936793ac07 src/http/modules/ngx_http_slice_filter_module.c --- a/src/http/modules/ngx_http_slice_filter_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/ngx_http_slice_filter_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -174,7 +174,7 @@ ctx->active = 1; r->headers_out.status = NGX_HTTP_OK; - r->headers_out.status_line.len = 0; + ngx_str_null(&r->headers_out.status_line); r->headers_out.content_length_n = cr.complete_length; r->headers_out.content_offset = cr.start; r->headers_out.content_range->hash = 0; diff -r 8edb4003177d -r 52936793ac07 src/http/modules/perl/ngx_http_perl_module.c --- a/src/http/modules/perl/ngx_http_perl_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/modules/perl/ngx_http_perl_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -240,11 +240,11 @@ uri = ctx->redirect_uri; } else { - uri.len = 0; + ngx_str_null(&uri); } - ctx->filename.data = NULL; - ctx->redirect_uri.len = 0; + ngx_str_null(&ctx->filename); + ngx_str_null(&ctx->redirect_uri); if (rc == NGX_ERROR) { ngx_http_finalize_request(r, rc); @@ -366,8 +366,8 @@ } ctx->variable = saved; - ctx->filename.data = NULL; - ctx->redirect_uri.len = 0; + ngx_str_null(&ctx->filename); + ngx_str_null(&ctx->redirect_uri); ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "perl variable done"); @@ -469,8 +469,8 @@ } - ctx->filename.data = NULL; - ctx->redirect_uri.len = 0; + ngx_str_null(&ctx->filename); + ngx_str_null(&ctx->redirect_uri); ctx->ssi = NULL; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "perl ssi done"); @@ -793,7 +793,7 @@ return NGX_ERROR; } - ctx->redirect_uri.len = 0; + ngx_str_null(&ctx->redirect_uri); if (ctx->header_sent) { return NGX_ERROR; diff -r 8edb4003177d -r 52936793ac07 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/ngx_http_core_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -1843,7 +1843,7 @@ if (r->err_status) { r->headers_out.status = r->err_status; - r->headers_out.status_line.len = 0; + ngx_str_null(&r->headers_out.status_line); } return ngx_http_top_header_filter(r); diff -r 8edb4003177d -r 52936793ac07 src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/ngx_http_file_cache.c Mon Feb 26 20:00:19 2024 +0000 @@ -1290,7 +1290,7 @@ ngx_shmtx_unlock(&cache->shpool->mutex); c->secondary = 1; - c->file.name.len = 0; + ngx_str_null(&c->file.name); c->body_start = c->buffer_size; 
ngx_memcpy(c->key, c->variant, NGX_HTTP_CACHE_KEY_LEN); @@ -1397,7 +1397,7 @@ ngx_shmtx_unlock(&cache->shpool->mutex); - c->file.name.len = 0; + ngx_str_null(&c->file.name); c->update_variant = 1; ngx_memcpy(c->key, c->main, NGX_HTTP_CACHE_KEY_LEN); @@ -2414,7 +2414,7 @@ manager_sleep = 50; manager_threshold = 200; - name.len = 0; + ngx_str_null(&name); size = 0; max_size = NGX_MAX_OFF_T_VALUE; min_free = 0; diff -r 8edb4003177d -r 52936793ac07 src/http/ngx_http_parse.c --- a/src/http/ngx_http_parse.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/ngx_http_parse.c Mon Feb 26 20:00:19 2024 +0000 @@ -2133,7 +2133,7 @@ args->data = p; } else { - args->len = 0; + ngx_str_null(args); } } diff -r 8edb4003177d -r 52936793ac07 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/ngx_http_request.c Mon Feb 26 20:00:19 2024 +0000 @@ -1268,7 +1268,7 @@ cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module); if (ngx_http_parse_complex_uri(r, cscf->merge_slashes) != NGX_OK) { - r->uri.len = 0; + ngx_str_set(&r->uri, ""); ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, "client sent invalid request"); @@ -3774,7 +3774,7 @@ ctx = log->data; ctx->request = NULL; - r->request_line.len = 0; + ngx_str_null(&r->request_line); r->connection->destroyed = 1; diff -r 8edb4003177d -r 52936793ac07 src/http/ngx_http_script.c --- a/src/http/ngx_http_script.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/ngx_http_script.c Mon Feb 26 20:00:19 2024 +0000 @@ -469,7 +469,7 @@ for (i = 0; i < sc->source->len; /* void */ ) { - name.len = 0; + ngx_str_null(&name); if (sc->source->data[i] == '$') { @@ -1268,7 +1268,7 @@ e->buf.len = e->pos - e->buf.data; if (!code->add_args) { - r->args.len = 0; + ngx_str_null(&r->args); } } diff -r 8edb4003177d -r 52936793ac07 src/http/ngx_http_special_response.c --- a/src/http/ngx_http_special_response.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/ngx_http_special_response.c Mon Feb 26 20:00:19 2024 +0000 @@ -449,7 +449,7 @@ } } - r->headers_out.content_type.len = 0; + ngx_str_null(&r->headers_out.content_type); clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); diff -r 8edb4003177d -r 52936793ac07 src/http/v3/ngx_http_v3_parse.c --- a/src/http/v3/ngx_http_v3_parse.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/http/v3/ngx_http_v3_parse.c Mon Feb 26 20:00:19 2024 +0000 @@ -1515,7 +1515,7 @@ st->literal.length = st->pint.value; if (st->literal.length == 0) { - st->value.len = 0; + ngx_str_null(&st->value); goto done; } @@ -1634,7 +1634,7 @@ st->literal.length = st->pint.value; if (st->literal.length == 0) { - st->value.len = 0; + ngx_str_null(&st->value); goto done; } diff -r 8edb4003177d -r 52936793ac07 src/mail/ngx_mail_imap_handler.c --- a/src/mail/ngx_mail_imap_handler.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/mail/ngx_mail_imap_handler.c Mon Feb 26 20:00:19 2024 +0000 @@ -149,7 +149,7 @@ } tag = 1; - s->text.len = 0; + ngx_str_null(&s->text); ngx_str_set(&s->out, imap_ok); if (rc == NGX_OK) { @@ -287,7 +287,7 @@ s->buffer->last = s->buffer->start; } - s->tag.len = 0; + ngx_str_null(&s->tag); } } diff -r 8edb4003177d -r 52936793ac07 src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/mail/ngx_mail_proxy_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -178,7 +178,7 @@ s->proxy->proxy_protocol = pcf->proxy_protocol; - s->out.len = 0; + ngx_str_null(&s->out); switch (s->protocol) { diff -r 8edb4003177d -r 52936793ac07 src/stream/ngx_stream_limit_conn_module.c --- 
a/src/stream/ngx_stream_limit_conn_module.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/stream/ngx_stream_limit_conn_module.c Mon Feb 26 20:00:19 2024 +0000 @@ -566,7 +566,7 @@ } size = 0; - name.len = 0; + ngx_str_null(&name); for (i = 2; i < cf->args->nelts; i++) { diff -r 8edb4003177d -r 52936793ac07 src/stream/ngx_stream_script.c --- a/src/stream/ngx_stream_script.c Mon Feb 26 20:00:18 2024 +0000 +++ b/src/stream/ngx_stream_script.c Mon Feb 26 20:00:19 2024 +0000 @@ -373,7 +373,7 @@ for (i = 0; i < sc->source->len; /* void */ ) { - name.len = 0; + ngx_str_null(&name); if (sc->source->data[i] == '$') { From piotr at aviatrix.com Wed Feb 28 01:21:40 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:21:40 +0000 Subject: [PATCH] Geo: fix uninitialized memory access Message-ID: # HG changeset patch # User Piotr Sikora # Date 1708977621 0 # Mon Feb 26 20:00:21 2024 +0000 # Branch patch005 # Node ID fe6f8a72d42970df176ea53f4f0aea16947ba5b8 # Parent 52936793ac076072c3544aa4e27f973d2f8fecda Geo: fix uninitialized memory access. Found with MemorySanitizer. Signed-off-by: Piotr Sikora diff -r 52936793ac07 -r fe6f8a72d429 src/http/modules/ngx_http_geo_module.c --- a/src/http/modules/ngx_http_geo_module.c Mon Feb 26 20:00:19 2024 +0000 +++ b/src/http/modules/ngx_http_geo_module.c Mon Feb 26 20:00:21 2024 +0000 @@ -1259,7 +1259,7 @@ return gvvn->value; } - val = ngx_palloc(ctx->pool, sizeof(ngx_http_variable_value_t)); + val = ngx_pcalloc(ctx->pool, sizeof(ngx_http_variable_value_t)); if (val == NULL) { return NULL; } diff -r 52936793ac07 -r fe6f8a72d429 src/stream/ngx_stream_geo_module.c --- a/src/stream/ngx_stream_geo_module.c Mon Feb 26 20:00:19 2024 +0000 +++ b/src/stream/ngx_stream_geo_module.c Mon Feb 26 20:00:21 2024 +0000 @@ -1209,7 +1209,7 @@ return gvvn->value; } - val = ngx_palloc(ctx->pool, sizeof(ngx_stream_variable_value_t)); + val = ngx_pcalloc(ctx->pool, sizeof(ngx_stream_variable_value_t)); if (val == NULL) { return NULL; } From piotr at aviatrix.com Wed Feb 28 01:21:49 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:21:49 +0000 Subject: [PATCH] Core: fix conversion of IPv4-mapped IPv6 addresses Message-ID: <5584232259d28489efba.1709083309@ip-172-31-36-66.ec2.internal> # HG changeset patch # User Piotr Sikora # Date 1708977626 0 # Mon Feb 26 20:00:26 2024 +0000 # Branch patch007 # Node ID 5584232259d28489efba149f2f5ae730691ff0d4 # Parent 03e5549976765912818120e11f6b08410a2af6a9 Core: fix conversion of IPv4-mapped IPv6 addresses. Found with UndefinedBehaviorSanitizer (shift). 
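For context, here is a minimal standalone illustration of the promotion problem
(not part of the patch; it assumes the usual 32-bit int and uses a local stand-in
for the system in_addr_t):

#include <stdio.h>
#include <stdint.h>

typedef uint32_t  in_addr_t;                  /* stand-in for <netinet/in.h> */

int
main(void)
{
    unsigned char  p[4] = { 192, 0, 2, 1 };   /* first byte >= 0x80 */
    in_addr_t      inaddr;

    /*
     * Without a cast, p[0] is promoted to (signed) int, so "p[0] << 24"
     * shifts a one into the sign bit whenever the byte is >= 0x80,
     * which is undefined behaviour.  Casting first keeps the whole
     * computation in unsigned arithmetic:
     */

    inaddr  = (in_addr_t) p[0] << 24;
    inaddr += (in_addr_t) p[1] << 16;
    inaddr += (in_addr_t) p[2] << 8;
    inaddr += (in_addr_t) p[3];

    printf("%08x\n", (unsigned) inaddr);      /* c0000201 */

    return 0;
}

With the casts every operand already has type in_addr_t, so the shifts behave
the same way the patched code does.
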
Signed-off-by: Piotr Sikora diff -r 03e554997676 -r 5584232259d2 src/core/ngx_inet.c --- a/src/core/ngx_inet.c Mon Feb 26 20:00:23 2024 +0000 +++ b/src/core/ngx_inet.c Mon Feb 26 20:00:26 2024 +0000 @@ -507,10 +507,10 @@ p = inaddr6->s6_addr; - inaddr = p[12] << 24; - inaddr += p[13] << 16; - inaddr += p[14] << 8; - inaddr += p[15]; + inaddr = (in_addr_t) p[12] << 24; + inaddr += (in_addr_t) p[13] << 16; + inaddr += (in_addr_t) p[14] << 8; + inaddr += (in_addr_t) p[15]; inaddr = htonl(inaddr); } diff -r 03e554997676 -r 5584232259d2 src/http/modules/ngx_http_access_module.c --- a/src/http/modules/ngx_http_access_module.c Mon Feb 26 20:00:23 2024 +0000 +++ b/src/http/modules/ngx_http_access_module.c Mon Feb 26 20:00:26 2024 +0000 @@ -148,10 +148,10 @@ p = sin6->sin6_addr.s6_addr; if (alcf->rules && IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) { - addr = p[12] << 24; - addr += p[13] << 16; - addr += p[14] << 8; - addr += p[15]; + addr = (in_addr_t) p[12] << 24; + addr += (in_addr_t) p[13] << 16; + addr += (in_addr_t) p[14] << 8; + addr += (in_addr_t) p[15]; return ngx_http_access_inet(r, alcf, htonl(addr)); } diff -r 03e554997676 -r 5584232259d2 src/http/modules/ngx_http_geo_module.c --- a/src/http/modules/ngx_http_geo_module.c Mon Feb 26 20:00:23 2024 +0000 +++ b/src/http/modules/ngx_http_geo_module.c Mon Feb 26 20:00:26 2024 +0000 @@ -199,10 +199,10 @@ p = inaddr6->s6_addr; if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { - inaddr = p[12] << 24; - inaddr += p[13] << 16; - inaddr += p[14] << 8; - inaddr += p[15]; + inaddr = (in_addr_t) p[12] << 24; + inaddr += (in_addr_t) p[13] << 16; + inaddr += (in_addr_t) p[14] << 8; + inaddr += (in_addr_t) p[15]; vv = (ngx_http_variable_value_t *) ngx_radix32tree_find(ctx->u.trees.tree, inaddr); @@ -272,10 +272,10 @@ if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { p = inaddr6->s6_addr; - inaddr = p[12] << 24; - inaddr += p[13] << 16; - inaddr += p[14] << 8; - inaddr += p[15]; + inaddr = (in_addr_t) p[12] << 24; + inaddr += (in_addr_t) p[13] << 16; + inaddr += (in_addr_t) p[14] << 8; + inaddr += (in_addr_t) p[15]; } else { inaddr = INADDR_NONE; diff -r 03e554997676 -r 5584232259d2 src/http/modules/ngx_http_geoip_module.c --- a/src/http/modules/ngx_http_geoip_module.c Mon Feb 26 20:00:23 2024 +0000 +++ b/src/http/modules/ngx_http_geoip_module.c Mon Feb 26 20:00:26 2024 +0000 @@ -266,10 +266,10 @@ if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { p = inaddr6->s6_addr; - inaddr = p[12] << 24; - inaddr += p[13] << 16; - inaddr += p[14] << 8; - inaddr += p[15]; + inaddr = (in_addr_t) p[12] << 24; + inaddr += (in_addr_t) p[13] << 16; + inaddr += (in_addr_t) p[14] << 8; + inaddr += (in_addr_t) p[15]; return inaddr; } diff -r 03e554997676 -r 5584232259d2 src/stream/ngx_stream_access_module.c --- a/src/stream/ngx_stream_access_module.c Mon Feb 26 20:00:23 2024 +0000 +++ b/src/stream/ngx_stream_access_module.c Mon Feb 26 20:00:26 2024 +0000 @@ -144,10 +144,10 @@ p = sin6->sin6_addr.s6_addr; if (ascf->rules && IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) { - addr = p[12] << 24; - addr += p[13] << 16; - addr += p[14] << 8; - addr += p[15]; + addr = (in_addr_t) p[12] << 24; + addr += (in_addr_t) p[13] << 16; + addr += (in_addr_t) p[14] << 8; + addr += (in_addr_t) p[15]; return ngx_stream_access_inet(s, ascf, htonl(addr)); } diff -r 03e554997676 -r 5584232259d2 src/stream/ngx_stream_geo_module.c --- a/src/stream/ngx_stream_geo_module.c Mon Feb 26 20:00:23 2024 +0000 +++ b/src/stream/ngx_stream_geo_module.c Mon Feb 26 20:00:26 2024 +0000 @@ -190,10 +190,10 @@ p = inaddr6->s6_addr; if (IN6_IS_ADDR_V4MAPPED(inaddr6)) 
{ - inaddr = p[12] << 24; - inaddr += p[13] << 16; - inaddr += p[14] << 8; - inaddr += p[15]; + inaddr = (in_addr_t) p[12] << 24; + inaddr += (in_addr_t) p[13] << 16; + inaddr += (in_addr_t) p[14] << 8; + inaddr += (in_addr_t) p[15]; vv = (ngx_stream_variable_value_t *) ngx_radix32tree_find(ctx->u.trees.tree, inaddr); @@ -263,10 +263,10 @@ if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { p = inaddr6->s6_addr; - inaddr = p[12] << 24; - inaddr += p[13] << 16; - inaddr += p[14] << 8; - inaddr += p[15]; + inaddr = (in_addr_t) p[12] << 24; + inaddr += (in_addr_t) p[13] << 16; + inaddr += (in_addr_t) p[14] << 8; + inaddr += (in_addr_t) p[15]; } else { inaddr = INADDR_NONE; diff -r 03e554997676 -r 5584232259d2 src/stream/ngx_stream_geoip_module.c --- a/src/stream/ngx_stream_geoip_module.c Mon Feb 26 20:00:23 2024 +0000 +++ b/src/stream/ngx_stream_geoip_module.c Mon Feb 26 20:00:26 2024 +0000 @@ -236,10 +236,10 @@ if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { p = inaddr6->s6_addr; - inaddr = p[12] << 24; - inaddr += p[13] << 16; - inaddr += p[14] << 8; - inaddr += p[15]; + inaddr = (in_addr_t) p[12] << 24; + inaddr += (in_addr_t) p[13] << 16; + inaddr += (in_addr_t) p[14] << 8; + inaddr += (in_addr_t) p[15]; return inaddr; } From piotr at aviatrix.com Wed Feb 28 01:21:57 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:21:57 +0000 Subject: [PATCH] Rewrite: fix "return" directive without response text Message-ID: <3cde11b747c08c69889e.1709083317@ip-172-31-36-66.ec2.internal> # HG changeset patch # User Piotr Sikora # Date 1708977628 0 # Mon Feb 26 20:00:28 2024 +0000 # Branch patch008 # Node ID 3cde11b747c08c69889edc014a700317fe4d1d88 # Parent 5584232259d28489efba149f2f5ae730691ff0d4 Rewrite: fix "return" directive without response text. Previously, the response text wasn't initialized and the rewrite module was sending response body set to NULL. Found with UndefinedBehaviorSanitizer (pointer-overflow). Signed-off-by: Piotr Sikora diff -r 5584232259d2 -r 3cde11b747c0 src/http/modules/ngx_http_rewrite_module.c --- a/src/http/modules/ngx_http_rewrite_module.c Mon Feb 26 20:00:26 2024 +0000 +++ b/src/http/modules/ngx_http_rewrite_module.c Mon Feb 26 20:00:28 2024 +0000 @@ -489,6 +489,7 @@ } if (cf->args->nelts == 2) { + ngx_str_set(&ret->text.value, ""); return NGX_CONF_OK; } From piotr at aviatrix.com Wed Feb 28 01:22:15 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:22:15 +0000 Subject: [PATCH 2 of 2] SSL: add $ssl_curve when using AWS-LC In-Reply-To: <5e923992006199748e79.1709083334@ip-172-31-36-66.ec2.internal> References: <5e923992006199748e79.1709083334@ip-172-31-36-66.ec2.internal> Message-ID: # HG changeset patch # User Piotr Sikora # Date 1708977632 0 # Mon Feb 26 20:00:32 2024 +0000 # Branch patch009 # Node ID dfffc67d286b788204f60701ef4179566d933a1b # Parent 5e923992006199748e79b08b1e65c4ef41f07495 SSL: add $ssl_curve when using AWS-LC. Signed-off-by: Piotr Sikora diff -r 5e9239920061 -r dfffc67d286b src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Mon Feb 26 20:00:30 2024 +0000 +++ b/src/event/ngx_event_openssl.c Mon Feb 26 20:00:32 2024 +0000 @@ -5163,6 +5163,72 @@ return NGX_OK; } +#elif defined(OPENSSL_IS_AWSLC) + + uint16_t curve_id; + + curve_id = SSL_get_curve_id(c->ssl->connection); + + /* + * Hardcoded table with ANSI / SECG curve names (e.g. "prime256v1"), + * which is the same format that OpenSSL returns for $ssl_curve. 
+ * + * Without this table, we'd need to make 3 additional library calls + * to convert from curve_id to ANSI / SECG curve name: + * + * nist_name = SSL_get_curve_name(curve_id); + * nid = EC_curve_nist2nid(nist_name); + * ansi_name = OBJ_nid2sn(nid); + */ + + switch (curve_id) { + +#ifdef SSL_CURVE_SECP224R1 + case SSL_CURVE_SECP224R1: + ngx_str_set(s, "secp224r1"); + return NGX_OK; +#endif + +#ifdef SSL_CURVE_SECP256R1 + case SSL_CURVE_SECP256R1: + ngx_str_set(s, "prime256v1"); + return NGX_OK; +#endif + +#ifdef SSL_CURVE_SECP384R1 + case SSL_CURVE_SECP384R1: + ngx_str_set(s, "secp384r1"); + return NGX_OK; +#endif + +#ifdef SSL_CURVE_SECP521R1 + case SSL_CURVE_SECP521R1: + ngx_str_set(s, "secp521r1"); + return NGX_OK; +#endif + +#ifdef SSL_CURVE_X25519 + case SSL_CURVE_X25519: + ngx_str_set(s, "x25519"); + return NGX_OK; +#endif + + case 0: + break; + + default: + s->len = sizeof("0x0000") - 1; + + s->data = ngx_pnalloc(pool, s->len); + if (s->data == NULL) { + return NGX_ERROR; + } + + ngx_sprintf(s->data, "0x%04xd", curve_id); + + return NGX_OK; + } + #endif ngx_str_null(s); From piotr at aviatrix.com Wed Feb 28 01:22:14 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:22:14 +0000 Subject: [PATCH 1 of 2] SSL: add support for AWS-LC Message-ID: <5e923992006199748e79.1709083334@ip-172-31-36-66.ec2.internal> # HG changeset patch # User Piotr Sikora # Date 1708977630 0 # Mon Feb 26 20:00:30 2024 +0000 # Branch patch009 # Node ID 5e923992006199748e79b08b1e65c4ef41f07495 # Parent 3cde11b747c08c69889edc014a700317fe4d1d88 SSL: add support for AWS-LC. AWS-LC is a fork of BoringSSL with some performance improvements, useful features (OCSP and multiple certificates), and support for more platforms. Signed-off-by: Piotr Sikora diff -r 3cde11b747c0 -r 5e9239920061 src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Mon Feb 26 20:00:28 2024 +0000 +++ b/src/event/ngx_event_openssl.h Mon Feb 26 20:00:30 2024 +0000 @@ -25,7 +25,7 @@ #endif #include #if (NGX_QUIC) -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) #include #include #else diff -r 3cde11b747c0 -r 5e9239920061 src/event/quic/ngx_event_quic.c --- a/src/event/quic/ngx_event_quic.c Mon Feb 26 20:00:28 2024 +0000 +++ b/src/event/quic/ngx_event_quic.c Mon Feb 26 20:00:30 2024 +0000 @@ -962,7 +962,7 @@ return NGX_DECLINED; } -#if !defined (OPENSSL_IS_BORINGSSL) +#if !defined(OPENSSL_IS_BORINGSSL) && !defined(OPENSSL_IS_AWSLC) /* OpenSSL provides read keys for an application level before it's ready */ if (pkt->level == ssl_encryption_application && !c->ssl->handshaked) { diff -r 3cde11b747c0 -r 5e9239920061 src/event/quic/ngx_event_quic_protection.c --- a/src/event/quic/ngx_event_quic_protection.c Mon Feb 26 20:00:28 2024 +0000 +++ b/src/event/quic/ngx_event_quic_protection.c Mon Feb 26 20:00:30 2024 +0000 @@ -30,7 +30,7 @@ static ngx_int_t ngx_quic_crypto_open(ngx_quic_secret_t *s, ngx_str_t *out, u_char *nonce, ngx_str_t *in, ngx_str_t *ad, ngx_log_t *log); -#ifndef OPENSSL_IS_BORINGSSL +#if !defined(OPENSSL_IS_BORINGSSL) && !defined(OPENSSL_IS_AWSLC) static ngx_int_t ngx_quic_crypto_common(ngx_quic_secret_t *s, ngx_str_t *out, u_char *nonce, ngx_str_t *in, ngx_str_t *ad, ngx_log_t *log); #endif @@ -55,7 +55,7 @@ switch (id) { case TLS1_3_CK_AES_128_GCM_SHA256: -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) ciphers->c = EVP_aead_aes_128_gcm(); #else ciphers->c = EVP_aes_128_gcm(); @@ -66,7 +66,7 @@ break; case 
TLS1_3_CK_AES_256_GCM_SHA384: -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) ciphers->c = EVP_aead_aes_256_gcm(); #else ciphers->c = EVP_aes_256_gcm(); @@ -77,12 +77,12 @@ break; case TLS1_3_CK_CHACHA20_POLY1305_SHA256: -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) ciphers->c = EVP_aead_chacha20_poly1305(); #else ciphers->c = EVP_chacha20_poly1305(); #endif -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) ciphers->hp = (const EVP_CIPHER *) EVP_aead_chacha20_poly1305(); #else ciphers->hp = EVP_chacha20(); @@ -91,7 +91,7 @@ len = 32; break; -#ifndef OPENSSL_IS_BORINGSSL +#if !defined(OPENSSL_IS_BORINGSSL) && !defined(OPENSSL_IS_AWSLC) case TLS1_3_CK_AES_128_CCM_SHA256: ciphers->c = EVP_aes_128_ccm(); ciphers->hp = EVP_aes_128_ctr(); @@ -259,7 +259,7 @@ ngx_hkdf_expand(u_char *out_key, size_t out_len, const EVP_MD *digest, const uint8_t *prk, size_t prk_len, const u_char *info, size_t info_len) { -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) if (HKDF_expand(out_key, out_len, digest, prk, prk_len, info, info_len) == 0) @@ -321,7 +321,7 @@ const u_char *secret, size_t secret_len, const u_char *salt, size_t salt_len) { -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) if (HKDF_extract(out_key, out_len, digest, secret, secret_len, salt, salt_len) @@ -384,7 +384,7 @@ ngx_quic_md_t *key, ngx_int_t enc, ngx_log_t *log) { -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) EVP_AEAD_CTX *ctx; ctx = EVP_AEAD_CTX_new(cipher, key->data, key->len, @@ -444,7 +444,7 @@ ngx_quic_crypto_open(ngx_quic_secret_t *s, ngx_str_t *out, u_char *nonce, ngx_str_t *in, ngx_str_t *ad, ngx_log_t *log) { -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) if (EVP_AEAD_CTX_open(s->ctx, out->data, &out->len, out->len, nonce, s->iv.len, in->data, in->len, ad->data, ad->len) != 1) @@ -464,7 +464,7 @@ ngx_quic_crypto_seal(ngx_quic_secret_t *s, ngx_str_t *out, u_char *nonce, ngx_str_t *in, ngx_str_t *ad, ngx_log_t *log) { -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) if (EVP_AEAD_CTX_seal(s->ctx, out->data, &out->len, out->len, nonce, s->iv.len, in->data, in->len, ad->data, ad->len) != 1) @@ -480,7 +480,7 @@ } -#ifndef OPENSSL_IS_BORINGSSL +#if !defined(OPENSSL_IS_BORINGSSL) && !defined(OPENSSL_IS_AWSLC) static ngx_int_t ngx_quic_crypto_common(ngx_quic_secret_t *s, ngx_str_t *out, u_char *nonce, @@ -559,7 +559,7 @@ ngx_quic_crypto_cleanup(ngx_quic_secret_t *s) { if (s->ctx) { -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) EVP_AEAD_CTX_free(s->ctx); #else EVP_CIPHER_CTX_free(s->ctx); @@ -575,7 +575,7 @@ { EVP_CIPHER_CTX *ctx; -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) if (cipher == (EVP_CIPHER *) EVP_aead_chacha20_poly1305()) { /* no EVP interface */ s->hp_ctx = NULL; @@ -610,7 +610,7 @@ ctx = s->hp_ctx; -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) uint32_t cnt; if (ctx == NULL) { diff -r 3cde11b747c0 -r 5e9239920061 src/event/quic/ngx_event_quic_protection.h --- a/src/event/quic/ngx_event_quic_protection.h Mon Feb 26 20:00:28 2024 +0000 +++ b/src/event/quic/ngx_event_quic_protection.h Mon Feb 26 20:00:30 2024 +0000 @@ -24,7 +24,7 @@ 
#define NGX_QUIC_MAX_MD_SIZE 48 -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) #define ngx_quic_cipher_t EVP_AEAD #define ngx_quic_crypto_ctx_t EVP_AEAD_CTX #else diff -r 3cde11b747c0 -r 5e9239920061 src/event/quic/ngx_event_quic_ssl.c --- a/src/event/quic/ngx_event_quic_ssl.c Mon Feb 26 20:00:28 2024 +0000 +++ b/src/event/quic/ngx_event_quic_ssl.c Mon Feb 26 20:00:30 2024 +0000 @@ -11,6 +11,7 @@ #if defined OPENSSL_IS_BORINGSSL \ + || defined OPENSSL_IS_AWSLC \ || defined LIBRESSL_VERSION_NUMBER \ || NGX_QUIC_OPENSSL_COMPAT #define NGX_QUIC_BORINGSSL_API 1 @@ -578,7 +579,7 @@ return NGX_ERROR; } -#ifdef OPENSSL_IS_BORINGSSL +#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC) if (SSL_set_quic_early_data_context(ssl_conn, p, clen) == 0) { ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic SSL_set_quic_early_data_context() failed"); From piotr at aviatrix.com Wed Feb 28 01:23:08 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:23:08 +0000 Subject: [PATCH] Core: fix build without libcrypt Message-ID: # HG changeset patch # User Piotr Sikora # Date 1708977637 0 # Mon Feb 26 20:00:37 2024 +0000 # Branch patch013 # Node ID cdc173477ea99fd6c952a85e5cd11db66452076a # Parent 04e3155b3b9651fee708898aaf82ac35532806ee Core: fix build without libcrypt. libcrypt is no longer part of glibc, so it might not be available. Signed-off-by: Piotr Sikora diff -r 04e3155b3b96 -r cdc173477ea9 auto/unix --- a/auto/unix Mon Feb 26 20:00:35 2024 +0000 +++ b/auto/unix Mon Feb 26 20:00:37 2024 +0000 @@ -150,7 +150,7 @@ ngx_feature="crypt()" -ngx_feature_name= +ngx_feature_name="NGX_HAVE_CRYPT" ngx_feature_run=no ngx_feature_incs= ngx_feature_path= @@ -162,7 +162,7 @@ if [ $ngx_found = no ]; then ngx_feature="crypt() in libcrypt" - ngx_feature_name= + ngx_feature_name="NGX_HAVE_CRYPT" ngx_feature_run=no ngx_feature_incs= ngx_feature_path= diff -r 04e3155b3b96 -r cdc173477ea9 src/os/unix/ngx_linux_config.h --- a/src/os/unix/ngx_linux_config.h Mon Feb 26 20:00:35 2024 +0000 +++ b/src/os/unix/ngx_linux_config.h Mon Feb 26 20:00:37 2024 +0000 @@ -52,7 +52,6 @@ #include /* memalign() */ #include /* IOV_MAX */ #include -#include #include /* uname() */ #include @@ -61,6 +60,11 @@ #include +#if (NGX_HAVE_CRYPT_H) +#include +#endif + + #if (NGX_HAVE_POSIX_SEM) #include #endif diff -r 04e3155b3b96 -r cdc173477ea9 src/os/unix/ngx_user.c --- a/src/os/unix/ngx_user.c Mon Feb 26 20:00:35 2024 +0000 +++ b/src/os/unix/ngx_user.c Mon Feb 26 20:00:37 2024 +0000 @@ -44,7 +44,7 @@ return NGX_ERROR; } -#else +#elif (NGX_HAVE_CRYPT) ngx_int_t ngx_libc_crypt(ngx_pool_t *pool, u_char *key, u_char *salt, u_char **encrypted) @@ -76,6 +76,14 @@ return NGX_ERROR; } +#else + +ngx_int_t +ngx_libc_crypt(ngx_pool_t *pool, u_char *key, u_char *salt, u_char **encrypted) +{ + return NGX_ERROR; +} + #endif #endif /* NGX_CRYPT */ From piotr at aviatrix.com Wed Feb 28 01:23:19 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:23:19 +0000 Subject: [PATCH] Configure: link libcrypt when a feature using it is detected Message-ID: <570e97dddeeddb79c715.1709083399@ip-172-31-36-66.ec2.internal> # HG changeset patch # User Piotr Sikora # Date 1708977638 0 # Mon Feb 26 20:00:38 2024 +0000 # Branch patch014 # Node ID 570e97dddeeddb79c71587aa8a10150b64404beb # Parent cdc173477ea99fd6c952a85e5cd11db66452076a Configure: link libcrypt when a feature using it is detected. Previously, this worked only because libcrypt was added in a separate test for crypt() in auto/unix. 
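For reference, the crypt_r() feature test that now also sets CRYPT_LIB (see the auto/os/linux hunk below) boils down to compiling and linking a probe roughly like the following; this is a hedged sketch, the real test body is generated by auto/feature and the file name is made up:

/* probe.c: cc -o probe probe.c -lcrypt */

#define _GNU_SOURCE
#include <crypt.h>

int
main(void)
{
    struct crypt_data  cd = { 0 };

    crypt_r("key", "salt", &cd);

    return 0;
}

If this probe links only with -lcrypt, the library also has to appear on nginx's own link line, which is what setting CRYPT_LIB here arranges.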
Signed-off-by: Piotr Sikora diff -r cdc173477ea9 -r 570e97dddeed auto/os/linux --- a/auto/os/linux Mon Feb 26 20:00:37 2024 +0000 +++ b/auto/os/linux Mon Feb 26 20:00:38 2024 +0000 @@ -228,6 +228,10 @@ crypt_r(\"key\", \"salt\", &cd);" . auto/feature +if [ $ngx_found = yes ]; then + CRYPT_LIB="-lcrypt" +fi + ngx_include="sys/vfs.h"; . auto/include From piotr at aviatrix.com Wed Feb 28 01:24:06 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:24:06 +0000 Subject: [PATCH] macOS: detect cache line size at runtime Message-ID: # HG changeset patch # User Piotr Sikora # Date 1708977640 0 # Mon Feb 26 20:00:40 2024 +0000 # Branch patch015 # Node ID f58bc1041ebca635517b919d58b49923bf24f76d # Parent 570e97dddeeddb79c71587aa8a10150b64404beb macOS: detect cache line size at runtime. Notably, Apple Silicon CPUs have 128 byte cache line size, which is twice the default configured for generic aarch64. Signed-off-by: Piotr Sikora diff -r 570e97dddeed -r f58bc1041ebc src/os/unix/ngx_darwin_init.c --- a/src/os/unix/ngx_darwin_init.c Mon Feb 26 20:00:38 2024 +0000 +++ b/src/os/unix/ngx_darwin_init.c Mon Feb 26 20:00:40 2024 +0000 @@ -11,6 +11,7 @@ char ngx_darwin_kern_ostype[16]; char ngx_darwin_kern_osrelease[128]; +int64_t ngx_darwin_hw_cachelinesize; int ngx_darwin_hw_ncpu; int ngx_darwin_kern_ipc_somaxconn; u_long ngx_darwin_net_inet_tcp_sendspace; @@ -44,6 +45,10 @@ sysctl_t sysctls[] = { + { "hw.cachelinesize", + &ngx_darwin_hw_cachelinesize, + sizeof(ngx_darwin_hw_cachelinesize), 0 }, + { "hw.ncpu", &ngx_darwin_hw_ncpu, sizeof(ngx_darwin_hw_ncpu), 0 }, @@ -155,6 +160,7 @@ return NGX_ERROR; } + ngx_cacheline_size = ngx_darwin_hw_cachelinesize; ngx_ncpu = ngx_darwin_hw_ncpu; if (ngx_darwin_kern_ipc_somaxconn > 32767) { diff -r 570e97dddeed -r f58bc1041ebc src/os/unix/ngx_posix_init.c --- a/src/os/unix/ngx_posix_init.c Mon Feb 26 20:00:38 2024 +0000 +++ b/src/os/unix/ngx_posix_init.c Mon Feb 26 20:00:40 2024 +0000 @@ -51,7 +51,10 @@ } ngx_pagesize = getpagesize(); - ngx_cacheline_size = NGX_CPU_CACHE_LINE; + + if (ngx_cacheline_size == 0) { + ngx_cacheline_size = NGX_CPU_CACHE_LINE; + } for (n = ngx_pagesize; n >>= 1; ngx_pagesize_shift++) { /* void */ } From piotr at aviatrix.com Wed Feb 28 01:24:18 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:24:18 +0000 Subject: [PATCH] Configure: set cache line sizes for more architectures Message-ID: # HG changeset patch # User Piotr Sikora # Date 1708977642 0 # Mon Feb 26 20:00:42 2024 +0000 # Branch patch016 # Node ID bb99cbe3a343ae581d2369b990aee66e69679ca2 # Parent f58bc1041ebca635517b919d58b49923bf24f76d Configure: set cache line sizes for more architectures. Signed-off-by: Piotr Sikora diff -r f58bc1041ebc -r bb99cbe3a343 auto/os/conf --- a/auto/os/conf Mon Feb 26 20:00:40 2024 +0000 +++ b/auto/os/conf Mon Feb 26 20:00:42 2024 +0000 @@ -115,6 +115,21 @@ NGX_MACH_CACHE_LINE=64 ;; + ppc64 | ppc64le) + have=NGX_ALIGNMENT value=16 . auto/define + NGX_MACH_CACHE_LINE=128 + ;; + + riscv64) + have=NGX_ALIGNMENT value=16 . auto/define + NGX_MACH_CACHE_LINE=64 + ;; + + s390x) + have=NGX_ALIGNMENT value=16 . auto/define + NGX_MACH_CACHE_LINE=256 + ;; + *) have=NGX_ALIGNMENT value=16 . 
auto/define NGX_MACH_CACHE_LINE=32 From piotr at aviatrix.com Wed Feb 28 01:24:26 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:24:26 +0000 Subject: [PATCH] Configure: add support for Homebrew on Apple Silicon Message-ID: # HG changeset patch # User Piotr Sikora # Date 1708977643 0 # Mon Feb 26 20:00:43 2024 +0000 # Branch patch017 # Node ID dd95daa55cf6131a7e845edd6ad3b429bcef6f98 # Parent bb99cbe3a343ae581d2369b990aee66e69679ca2 Configure: add support for Homebrew on Apple Silicon. Signed-off-by: Piotr Sikora diff -r bb99cbe3a343 -r dd95daa55cf6 auto/lib/geoip/conf --- a/auto/lib/geoip/conf Mon Feb 26 20:00:42 2024 +0000 +++ b/auto/lib/geoip/conf Mon Feb 26 20:00:43 2024 +0000 @@ -64,6 +64,23 @@ fi +if [ $ngx_found = no ]; then + + # Homebrew on Apple Silicon + + ngx_feature="GeoIP library in /opt/homebrew/" + ngx_feature_path="/opt/homebrew/include" + + if [ $NGX_RPATH = YES ]; then + ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib -lGeoIP" + else + ngx_feature_libs="-L/opt/homebrew/lib -lGeoIP" + fi + + . auto/feature +fi + + if [ $ngx_found = yes ]; then CORE_INCS="$CORE_INCS $ngx_feature_path" diff -r bb99cbe3a343 -r dd95daa55cf6 auto/lib/google-perftools/conf --- a/auto/lib/google-perftools/conf Mon Feb 26 20:00:42 2024 +0000 +++ b/auto/lib/google-perftools/conf Mon Feb 26 20:00:43 2024 +0000 @@ -46,6 +46,22 @@ fi +if [ $ngx_found = no ]; then + + # Homebrew on Apple Silicon + + ngx_feature="Google perftools in /opt/homebrew/" + + if [ $NGX_RPATH = YES ]; then + ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib -lprofiler" + else + ngx_feature_libs="-L/opt/homebrew/lib -lprofiler" + fi + + . auto/feature +fi + + if [ $ngx_found = yes ]; then CORE_LIBS="$CORE_LIBS $ngx_feature_libs" diff -r bb99cbe3a343 -r dd95daa55cf6 auto/lib/libgd/conf --- a/auto/lib/libgd/conf Mon Feb 26 20:00:42 2024 +0000 +++ b/auto/lib/libgd/conf Mon Feb 26 20:00:43 2024 +0000 @@ -65,6 +65,23 @@ fi +if [ $ngx_found = no ]; then + + # Homebrew on Apple Silicon + + ngx_feature="GD library in /opt/homebrew/" + ngx_feature_path="/opt/homebrew/include" + + if [ $NGX_RPATH = YES ]; then + ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib -lgd" + else + ngx_feature_libs="-L/opt/homebrew/lib -lgd" + fi + + . auto/feature +fi + + if [ $ngx_found = yes ]; then CORE_INCS="$CORE_INCS $ngx_feature_path" diff -r bb99cbe3a343 -r dd95daa55cf6 auto/lib/openssl/conf --- a/auto/lib/openssl/conf Mon Feb 26 20:00:42 2024 +0000 +++ b/auto/lib/openssl/conf Mon Feb 26 20:00:43 2024 +0000 @@ -122,6 +122,24 @@ . auto/feature fi + if [ $ngx_found = no ]; then + + # Homebrew on Apple Silicon + + ngx_feature="OpenSSL library in /opt/homebrew/" + ngx_feature_path="/opt/homebrew/include" + + if [ $NGX_RPATH = YES ]; then + ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib -lssl -lcrypto" + else + ngx_feature_libs="-L/opt/homebrew/lib -lssl -lcrypto" + fi + + ngx_feature_libs="$ngx_feature_libs $NGX_LIBDL $NGX_LIBPTHREAD" + + . auto/feature + fi + if [ $ngx_found = yes ]; then have=NGX_SSL . auto/have CORE_INCS="$CORE_INCS $ngx_feature_path" diff -r bb99cbe3a343 -r dd95daa55cf6 auto/lib/pcre/conf --- a/auto/lib/pcre/conf Mon Feb 26 20:00:42 2024 +0000 +++ b/auto/lib/pcre/conf Mon Feb 26 20:00:43 2024 +0000 @@ -182,6 +182,22 @@ . 
auto/feature fi + if [ $ngx_found = no ]; then + + # Homebrew on Apple Silicon + + ngx_feature="PCRE library in /opt/homebrew/" + ngx_feature_path="/opt/homebrew/include" + + if [ $NGX_RPATH = YES ]; then + ngx_feature_libs="-R/opt/homebrew/lib -L/opt/homebrew/lib -lpcre" + else + ngx_feature_libs="-L/opt/homebrew/lib -lpcre" + fi + + . auto/feature + fi + if [ $ngx_found = yes ]; then CORE_INCS="$CORE_INCS $ngx_feature_path" CORE_LIBS="$CORE_LIBS $ngx_feature_libs" From piotr at aviatrix.com Wed Feb 28 01:25:07 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:25:07 +0000 Subject: [PATCH] Win32: include missing Message-ID: <9b57470dc49f8d8d10ab.1709083507@ip-172-31-36-66.ec2.internal> # HG changeset patch # User Piotr Sikora # Date 1708977633 0 # Mon Feb 26 20:00:33 2024 +0000 # Branch patch011 # Node ID 9b57470dc49f8d8d10abe30a5df628732d7618dc # Parent 480071fe7251829912a4f42301e8fc85da2d1905 Win32: include missing . Signed-off-by: Piotr Sikora diff -r 480071fe7251 -r 9b57470dc49f src/os/win32/ngx_win32_config.h --- a/src/os/win32/ngx_win32_config.h Mon Feb 26 20:00:32 2024 +0000 +++ b/src/os/win32/ngx_win32_config.h Mon Feb 26 20:00:33 2024 +0000 @@ -62,6 +62,7 @@ #include #endif #include +#include #include #ifdef __WATCOMC__ From piotr at aviatrix.com Wed Feb 28 01:25:23 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:25:23 +0000 Subject: [PATCH] Configure: allow cross-compiling to Windows using Clang Message-ID: <77eab4d83413b053d968.1709083523@ip-172-31-36-66.ec2.internal> # HG changeset patch # User Piotr Sikora # Date 1708977648 0 # Mon Feb 26 20:00:48 2024 +0000 # Branch patch019 # Node ID 77eab4d83413b053d9681611d243335a95ee5567 # Parent ea1ab31c166c52372b40429a1cccece9ec9e003b Configure: allow cross-compiling to Windows using Clang. Signed-off-by: Piotr Sikora diff -r ea1ab31c166c -r 77eab4d83413 auto/os/win32 --- a/auto/os/win32 Mon Feb 26 20:00:46 2024 +0000 +++ b/auto/os/win32 Mon Feb 26 20:00:48 2024 +0000 @@ -18,7 +18,7 @@ case "$NGX_CC_NAME" in - gcc) + clang | gcc) CORE_LIBS="$CORE_LIBS -ladvapi32 -lws2_32" MAIN_LINK="$MAIN_LINK -Wl,--export-all-symbols" MAIN_LINK="$MAIN_LINK -Wl,--out-implib=$NGX_OBJS/libnginx.a" From piotr at aviatrix.com Wed Feb 28 01:25:15 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:25:15 +0000 Subject: [PATCH] Win32: fix unique file index calculations Message-ID: <04e3155b3b9651fee708.1709083515@ip-172-31-36-66.ec2.internal> # HG changeset patch # User Piotr Sikora # Date 1708977635 0 # Mon Feb 26 20:00:35 2024 +0000 # Branch patch012 # Node ID 04e3155b3b9651fee708898aaf82ac35532806ee # Parent 9b57470dc49f8d8d10abe30a5df628732d7618dc Win32: fix unique file index calculations. The old code was breaking strict aliasing rules. 
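To see why the old macro was a problem, a minimal stand-alone sketch (hypothetical stand-in types; the real code uses the Win32 file information structure and ngx_file_uniq_t):

#include <stdint.h>

/* stand-in for the two adjacent 32-bit file index fields */
typedef struct {
    uint32_t  nFileIndexHigh;
    uint32_t  nFileIndexLow;
} file_info_t;

/* old approach: reads two uint32_t members through an unrelated
 * uint64_t lvalue, a strict-aliasing violation that also bakes in
 * field order and byte order */
#define file_uniq_bad(fi)   (*(uint64_t *) &(fi)->nFileIndexHigh)

/* patched approach: the value is built arithmetically, so it is
 * well defined regardless of layout or endianness */
#define file_uniq(fi)                                                        \
    (((uint64_t) (fi)->nFileIndexHigh << 32) | (fi)->nFileIndexLow)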
Signed-off-by: Piotr Sikora diff -r 9b57470dc49f -r 04e3155b3b96 src/os/win32/ngx_files.h --- a/src/os/win32/ngx_files.h Mon Feb 26 20:00:33 2024 +0000 +++ b/src/os/win32/ngx_files.h Mon Feb 26 20:00:35 2024 +0000 @@ -154,7 +154,8 @@ (((off_t) (fi)->nFileSizeHigh << 32) | (fi)->nFileSizeLow) #define ngx_file_fs_size(fi) ngx_file_size(fi) -#define ngx_file_uniq(fi) (*(ngx_file_uniq_t *) &(fi)->nFileIndexHigh) +#define ngx_file_uniq(fi) \ + (((ngx_file_uniq_t) (fi)->nFileIndexHigh << 32) | (fi)->nFileIndexLow) /* 116444736000000000 is commented in src/os/win32/ngx_time.c */ From piotr at aviatrix.com Wed Feb 28 01:25:30 2024 From: piotr at aviatrix.com (Piotr Sikora) Date: Wed, 28 Feb 2024 01:25:30 +0000 Subject: [PATCH] Configure: fix "make install" when cross-compiling to Windows Message-ID: # HG changeset patch # User Piotr Sikora # Date 1708977646 0 # Mon Feb 26 20:00:46 2024 +0000 # Branch patch018 # Node ID ea1ab31c166c52372b40429a1cccece9ec9e003b # Parent dd95daa55cf6131a7e845edd6ad3b429bcef6f98 Configure: fix "make install" when cross-compiling to Windows. Signed-off-by: Piotr Sikora diff -r dd95daa55cf6 -r ea1ab31c166c auto/install --- a/auto/install Mon Feb 26 20:00:43 2024 +0000 +++ b/auto/install Mon Feb 26 20:00:46 2024 +0000 @@ -112,7 +112,7 @@ test ! -f '\$(DESTDIR)$NGX_SBIN_PATH' \\ || mv '\$(DESTDIR)$NGX_SBIN_PATH' \\ '\$(DESTDIR)$NGX_SBIN_PATH.old' - cp $NGX_OBJS/nginx '\$(DESTDIR)$NGX_SBIN_PATH' + cp $NGX_OBJS/nginx$ngx_binext '\$(DESTDIR)$NGX_SBIN_PATH' test -d '\$(DESTDIR)$NGX_CONF_PREFIX' \\ || mkdir -p '\$(DESTDIR)$NGX_CONF_PREFIX' From pluknet at nginx.com Wed Feb 28 10:15:40 2024 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 28 Feb 2024 14:15:40 +0400 Subject: [PATCH 3 of 3] Stream: ngx_stream_pass_module In-Reply-To: <20240221133751.hrnz43d77aq455ps@N00W24XTQX> References: <3cab85fe55272835674b.1699610841@arut-laptop> <8C0B7CF6-7BE8-4B63-8BA7-9608C455D30A@nginx.com> <20240221133751.hrnz43d77aq455ps@N00W24XTQX> Message-ID: On Wed, Feb 21, 2024 at 05:37:51PM +0400, Roman Arutyunyan wrote: > Hi, > > On Tue, Feb 13, 2024 at 02:46:35PM +0400, Sergey Kandaurov wrote: > > > > > On 10 Nov 2023, at 14:07, Roman Arutyunyan wrote: > > > > > > # HG changeset patch > > > # User Roman Arutyunyan > > > # Date 1699543504 -14400 > > > # Thu Nov 09 19:25:04 2023 +0400 > > > # Node ID 3cab85fe55272835674b7f1c296796955256d019 > > > # Parent 1d3464283405a4d8ac54caae9bf1815c723f04c5 > > > Stream: ngx_stream_pass_module. > > > > > > The module allows to pass connections from Stream to other modules such as HTTP > > > or Mail, as well as back to Stream. Previously, this was only possible with > > > proxying. Connections with preread buffer read out from socket cannot be > > > passed. > > > > > > The module allows to terminate SSL selectively based on SNI. > > > > > > stream { > > > server { > > > listen 8000 default_server; > > > ssl_preread on; > > > ... > > > } > > > > > > server { > > > listen 8000; > > > server_name foo.example.com; > > > pass 8001; # to HTTP > > > } > > > > > > server { > > > listen 8000; > > > server_name bar.example.com; > > > ... > > > } > > > } > > > > > > http { > > > server { > > > listen 8001 ssl; > > > ... > > > > > > location / { > > > root html; > > > } > > > } > > > } > > > > > > diff --git a/auto/modules b/auto/modules > > > --- a/auto/modules > > > +++ b/auto/modules > > > @@ -1166,6 +1166,16 @@ if [ $STREAM != NO ]; then > > > . 
auto/module > > > fi > > > > > > + if [ $STREAM_PASS = YES ]; then > > > + ngx_module_name=ngx_stream_pass_module > > > + ngx_module_deps= > > > + ngx_module_srcs=src/stream/ngx_stream_pass_module.c > > > + ngx_module_libs= > > > + ngx_module_link=$STREAM_PASS > > > + > > > + . auto/module > > > + fi > > > + > > > if [ $STREAM_SET = YES ]; then > > > ngx_module_name=ngx_stream_set_module > > > ngx_module_deps= > > > diff --git a/auto/options b/auto/options > > > --- a/auto/options > > > +++ b/auto/options > > > @@ -127,6 +127,7 @@ STREAM_GEOIP=NO > > > STREAM_MAP=YES > > > STREAM_SPLIT_CLIENTS=YES > > > STREAM_RETURN=YES > > > +STREAM_PASS=YES > > > STREAM_SET=YES > > > STREAM_UPSTREAM_HASH=YES > > > STREAM_UPSTREAM_LEAST_CONN=YES > > > @@ -337,6 +338,7 @@ use the \"--with-mail_ssl_module\" optio > > > --without-stream_split_clients_module) > > > STREAM_SPLIT_CLIENTS=NO ;; > > > --without-stream_return_module) STREAM_RETURN=NO ;; > > > + --without-stream_pass_module) STREAM_PASS=NO ;; > > > --without-stream_set_module) STREAM_SET=NO ;; > > > --without-stream_upstream_hash_module) > > > STREAM_UPSTREAM_HASH=NO ;; > > > @@ -556,6 +558,7 @@ cat << END > > > --without-stream_split_clients_module > > > disable ngx_stream_split_clients_module > > > --without-stream_return_module disable ngx_stream_return_module > > > + --without-stream_pass_module disable ngx_stream_pass_module > > > --without-stream_set_module disable ngx_stream_set_module > > > --without-stream_upstream_hash_module > > > disable ngx_stream_upstream_hash_module > > > diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c > > > new file mode 100644 > > > --- /dev/null > > > +++ b/src/stream/ngx_stream_pass_module.c > > > @@ -0,0 +1,245 @@ > > > + > > > +/* > > > + * Copyright (C) Roman Arutyunyan > > > + * Copyright (C) Nginx, Inc. 
> > > + */ > > > + > > > + > > > +#include > > > +#include > > > +#include > > > + > > > + > > > +typedef struct { > > > + ngx_addr_t *addr; > > > + ngx_stream_complex_value_t *addr_value; > > > +} ngx_stream_pass_srv_conf_t; > > > + > > > + > > > +static void ngx_stream_pass_handler(ngx_stream_session_t *s); > > > +static void *ngx_stream_pass_create_srv_conf(ngx_conf_t *cf); > > > +static char *ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); > > > + > > > + > > > +static ngx_command_t ngx_stream_pass_commands[] = { > > > + > > > + { ngx_string("pass"), > > > + NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, > > > + ngx_stream_pass, > > > + NGX_STREAM_SRV_CONF_OFFSET, > > > + 0, > > > + NULL }, > > > + > > > + ngx_null_command > > > +}; > > > + > > > + > > > +static ngx_stream_module_t ngx_stream_pass_module_ctx = { > > > + NULL, /* preconfiguration */ > > > + NULL, /* postconfiguration */ > > > + > > > + NULL, /* create main configuration */ > > > + NULL, /* init main configuration */ > > > + > > > + ngx_stream_pass_create_srv_conf, /* create server configuration */ > > > + NULL /* merge server configuration */ > > > +}; > > > + > > > + > > > +ngx_module_t ngx_stream_pass_module = { > > > + NGX_MODULE_V1, > > > + &ngx_stream_pass_module_ctx, /* module conaddr */ > > > + ngx_stream_pass_commands, /* module directives */ > > > + NGX_STREAM_MODULE, /* module type */ > > > + NULL, /* init master */ > > > + NULL, /* init module */ > > > + NULL, /* init process */ > > > + NULL, /* init thread */ > > > + NULL, /* exit thread */ > > > + NULL, /* exit process */ > > > + NULL, /* exit master */ > > > + NGX_MODULE_V1_PADDING > > > +}; > > > + > > > + > > > +static void > > > +ngx_stream_pass_handler(ngx_stream_session_t *s) > > > +{ > > > + ngx_url_t u; > > > + ngx_str_t url; > > > + ngx_addr_t *addr; > > > + ngx_uint_t i; > > > + ngx_listening_t *ls; > > > + ngx_connection_t *c; > > > + ngx_stream_pass_srv_conf_t *pscf; > > > + > > > + c = s->connection; > > > + > > > + c->log->action = "passing connection to another module"; > > > + > > > + if (c->buffer && c->buffer->pos != c->buffer->last) { > > > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > > > + "cannot pass connection with preread data"); > > > + goto failed; > > > + } > > > + > > > + pscf = ngx_stream_get_module_srv_conf(s, ngx_stream_pass_module); > > > + > > > + addr = pscf->addr; > > > + > > > + if (addr == NULL) { > > > + if (ngx_stream_complex_value(s, pscf->addr_value, &url) != NGX_OK) { > > > + goto failed; > > > + } > > > + > > > + ngx_memzero(&u, sizeof(ngx_url_t)); > > > + > > > + u.url = url; > > > + u.listen = 1; > > > + u.no_resolve = 1; > > > + > > > + if (ngx_parse_url(s->connection->pool, &u) != NGX_OK) { > > > + if (u.err) { > > > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > > > + "%s in pass \"%V\"", u.err, &u.url); > > > + } > > > + > > > + goto failed; > > > + } > > > + > > > + if (u.naddrs == 0) { > > > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > > > + "no addresses in pass \"%V\"", &u.url); > > > + goto failed; > > > + } > > > + > > > + addr = &u.addrs[0]; > > > + } > > > + > > > + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, c->log, 0, > > > + "stream pass addr: \"%V\"", &addr->name); > > > + > > > + ls = ngx_cycle->listening.elts; > > > + > > > + for (i = 0; i < ngx_cycle->listening.nelts; i++) { > > > + if (ngx_cmp_sockaddr(ls[i].sockaddr, ls[i].socklen, > > > + addr->sockaddr, addr->socklen, 1) > > > + == NGX_OK) > > > + { > > > + c->listening = &ls[i]; > > > > The address configuration 
(addr_conf) is stored depending on the > > protocol family of the listening socket, it's different for AF_INET6. > > So, if the protocol family is switched when passing a connection, > > it may happen that c->local_sockaddr->sa_family will keep a wrong > > value, the listen handler will dereference addr_conf incorrectly. > > > > Consider the following example: > > > > server { > > listen 127.0.0.1:8081; > > pass [::1]:8091; > > } > > > > server { > > listen [::1]:8091; > > ... > > } > > > > When ls->handler is invoked, c->local_sockaddr is kept inherited > > from the originally accepted connection, which is of AF_INET. > > To fix this, c->local_sockaddr and c->local_socklen should be > > updated according to the new listen socket configuration. > > Sure, thanks. > > > OTOH, c->sockaddr / c->socklen should be kept intact. > > Note that this makes possible cross protocol family > > configurations in e.g. realip and access modules; > > from now on this will have to be taken into account. > > This is already possible with proxy_protocol+realip and is known to cause minor > issues with third-party code that's too pedantic about families. > > Also I've just sent an updated patch which fixes PROXY protocol headers > generated for mixed family addresses. > > > > + > > > + c->data = NULL; > > > + c->buffer = NULL; > > > + > > > + *c->log = c->listening->log; > > > + c->log->handler = NULL; > > > + c->log->data = NULL; > > > + > > > + c->listening->handler(c); > > > + > > > + return; > > > + } > > > + } > > > + > > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > > + "listen not found for \"%V\"", &addr->name); > > > + > > > + ngx_stream_finalize_session(s, NGX_STREAM_OK); > > > + > > > + return; > > > + > > > +failed: > > > + > > > + ngx_stream_finalize_session(s, NGX_STREAM_INTERNAL_SERVER_ERROR); > > > +} > > > + > > > + > > > +static void * > > > +ngx_stream_pass_create_srv_conf(ngx_conf_t *cf) > > > +{ > > > + ngx_stream_pass_srv_conf_t *conf; > > > + > > > + conf = ngx_pcalloc(cf->pool, sizeof(ngx_stream_pass_srv_conf_t)); > > > + if (conf == NULL) { > > > + return NULL; > > > + } > > > + > > > + /* > > > + * set by ngx_pcalloc(): > > > + * > > > + * conf->addr = NULL; > > > + * conf->addr_value = NULL; > > > + */ > > > + > > > + return conf; > > > +} > > > + > > > + > > > +static char * > > > +ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > > > +{ > > > + ngx_stream_pass_srv_conf_t *pscf = conf; > > > + > > > + ngx_url_t u; > > > + ngx_str_t *value, *url; > > > + ngx_stream_complex_value_t cv; > > > + ngx_stream_core_srv_conf_t *cscf; > > > + ngx_stream_compile_complex_value_t ccv; > > > + > > > + if (pscf->addr || pscf->addr_value) { > > > + return "is duplicate"; > > > + } > > > + > > > + cscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_core_module); > > > + > > > + cscf->handler = ngx_stream_pass_handler; > > > + > > > + value = cf->args->elts; > > > + > > > + url = &value[1]; > > > + > > > + ngx_memzero(&ccv, sizeof(ngx_stream_compile_complex_value_t)); > > > + > > > + ccv.cf = cf; > > > + ccv.value = url; > > > + ccv.complex_value = &cv; > > > + > > > + if (ngx_stream_compile_complex_value(&ccv) != NGX_OK) { > > > + return NGX_CONF_ERROR; > > > + } > > > + > > > + if (cv.lengths) { > > > + pscf->addr_value = ngx_palloc(cf->pool, > > > + sizeof(ngx_stream_complex_value_t)); > > > + if (pscf->addr_value == NULL) { > > > + return NGX_CONF_ERROR; > > > + } > > > + > > > + *pscf->addr_value = cv; > > > + > > > + return NGX_CONF_OK; > > > + } > > > + > > > + 
ngx_memzero(&u, sizeof(ngx_url_t)); > > > + > > > + u.url = *url; > > > + u.listen = 1; > > > + > > > + if (ngx_parse_url(cf->pool, &u) != NGX_OK) { > > > + if (u.err) { > > > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > > + "%s in \"%V\" of the \"pass\" directive", > > > + u.err, &u.url); > > > + } > > > + > > > + return NGX_CONF_ERROR; > > > + } > > > + > > > + if (u.naddrs == 0) { > > > + return "has no addresses"; > > > + } > > > + > > > + pscf->addr = &u.addrs[0]; > > > + > > > + return NGX_CONF_OK; > > > +} > > Attached is an improved version with the following changes: > > - Removed 'listen = 1' flag when parsing "pass" parameter. > Now it's treated like "proxy_pass" parameter. > - Listen match reworked to be able to match wildcards. > - Local_sockaddr is copied to the connection after match. > - Fixes in log action, log messages, commit log etc. > > -- > Roman Arutyunyan > # HG changeset patch > # User Roman Arutyunyan > # Date 1708522562 -14400 > # Wed Feb 21 17:36:02 2024 +0400 > # Node ID 44da04c2d4db94ad4eefa84b299e07c5fa4a00b9 > # Parent 4eb76c257fd07a69fc9e9386e845edcc9e2b1b08 > Stream: ngx_stream_pass_module. > > The module allows to pass connections from Stream to other modules such as HTTP > or Mail, as well as back to Stream. Previously, this was only possible with > proxying. Connections with preread buffer read out from socket cannot be > passed. > > The module allows selective SSL termination based on SNI. > > stream { > server { > listen 8000 default_server; > ssl_preread on; > ... > } > > server { > listen 8000; > server_name foo.example.com; > pass 127.0.0.1:8001; # to HTTP > } > > server { > listen 8000; > server_name bar.example.com; > ... > } > } > > http { > server { > listen 8001 ssl; > ... > > location / { > root html; > } > } > } > > diff --git a/auto/modules b/auto/modules > --- a/auto/modules > +++ b/auto/modules > @@ -1166,6 +1166,16 @@ if [ $STREAM != NO ]; then > . auto/module > fi > > + if [ $STREAM_PASS = YES ]; then > + ngx_module_name=ngx_stream_pass_module > + ngx_module_deps= > + ngx_module_srcs=src/stream/ngx_stream_pass_module.c > + ngx_module_libs= > + ngx_module_link=$STREAM_PASS > + > + . 
auto/module > + fi > + > if [ $STREAM_SET = YES ]; then > ngx_module_name=ngx_stream_set_module > ngx_module_deps= > diff --git a/auto/options b/auto/options > --- a/auto/options > +++ b/auto/options > @@ -127,6 +127,7 @@ STREAM_GEOIP=NO > STREAM_MAP=YES > STREAM_SPLIT_CLIENTS=YES > STREAM_RETURN=YES > +STREAM_PASS=YES > STREAM_SET=YES > STREAM_UPSTREAM_HASH=YES > STREAM_UPSTREAM_LEAST_CONN=YES > @@ -337,6 +338,7 @@ use the \"--with-mail_ssl_module\" optio > --without-stream_split_clients_module) > STREAM_SPLIT_CLIENTS=NO ;; > --without-stream_return_module) STREAM_RETURN=NO ;; > + --without-stream_pass_module) STREAM_PASS=NO ;; > --without-stream_set_module) STREAM_SET=NO ;; > --without-stream_upstream_hash_module) > STREAM_UPSTREAM_HASH=NO ;; > @@ -556,6 +558,7 @@ cat << END > --without-stream_split_clients_module > disable ngx_stream_split_clients_module > --without-stream_return_module disable ngx_stream_return_module > + --without-stream_pass_module disable ngx_stream_pass_module > --without-stream_set_module disable ngx_stream_set_module > --without-stream_upstream_hash_module > disable ngx_stream_upstream_hash_module > diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c > new file mode 100644 > --- /dev/null > +++ b/src/stream/ngx_stream_pass_module.c > @@ -0,0 +1,272 @@ > + > +/* > + * Copyright (C) Roman Arutyunyan > + * Copyright (C) Nginx, Inc. > + */ > + > + > +#include > +#include > +#include > + > + > +typedef struct { > + ngx_addr_t *addr; > + ngx_stream_complex_value_t *addr_value; > +} ngx_stream_pass_srv_conf_t; > + > + > +static void ngx_stream_pass_handler(ngx_stream_session_t *s); > +static ngx_int_t ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr); > +static void *ngx_stream_pass_create_srv_conf(ngx_conf_t *cf); > +static char *ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); > + > + > +static ngx_command_t ngx_stream_pass_commands[] = { > + > + { ngx_string("pass"), > + NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, > + ngx_stream_pass, > + NGX_STREAM_SRV_CONF_OFFSET, > + 0, > + NULL }, > + > + ngx_null_command > +}; > + > + > +static ngx_stream_module_t ngx_stream_pass_module_ctx = { > + NULL, /* preconfiguration */ > + NULL, /* postconfiguration */ > + > + NULL, /* create main configuration */ > + NULL, /* init main configuration */ > + > + ngx_stream_pass_create_srv_conf, /* create server configuration */ > + NULL /* merge server configuration */ > +}; > + > + > +ngx_module_t ngx_stream_pass_module = { > + NGX_MODULE_V1, > + &ngx_stream_pass_module_ctx, /* module context */ > + ngx_stream_pass_commands, /* module directives */ > + NGX_STREAM_MODULE, /* module type */ > + NULL, /* init master */ > + NULL, /* init module */ > + NULL, /* init process */ > + NULL, /* init thread */ > + NULL, /* exit thread */ > + NULL, /* exit process */ > + NULL, /* exit master */ > + NGX_MODULE_V1_PADDING > +}; > + > + > +static void > +ngx_stream_pass_handler(ngx_stream_session_t *s) > +{ > + ngx_url_t u; > + ngx_str_t url; > + ngx_addr_t *addr; > + ngx_uint_t i; > + ngx_listening_t *ls; > + struct sockaddr *sa; > + ngx_connection_t *c; > + ngx_stream_pass_srv_conf_t *pscf; > + > + c = s->connection; > + > + c->log->action = "passing connection to port"; > + > + if (c->buffer && c->buffer->pos != c->buffer->last) { > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > + "cannot pass connection with preread data"); > + goto failed; > + } > + > + pscf = ngx_stream_get_module_srv_conf(s, ngx_stream_pass_module); > + > + addr = pscf->addr; > + 
> + if (addr == NULL) { > + if (ngx_stream_complex_value(s, pscf->addr_value, &url) != NGX_OK) { > + goto failed; > + } > + > + ngx_memzero(&u, sizeof(ngx_url_t)); > + > + u.url = url; > + u.no_resolve = 1; This makes configurations with variables of limited use. > + > + if (ngx_parse_url(c->pool, &u) != NGX_OK) { > + if (u.err) { > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > + "%s in pass \"%V\"", u.err, &u.url); > + } > + > + goto failed; > + } > + > + if (u.naddrs == 0) { > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > + "no addresses in pass \"%V\"", &u.url); > + goto failed; > + } > + > + addr = &u.addrs[0]; > + } > + > + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, c->log, 0, > + "stream pass addr: \"%V\"", &addr->name); > + > + ls = ngx_cycle->listening.elts; > + > + for (i = 0; i < ngx_cycle->listening.nelts; i++) { > + > + if (ngx_stream_pass_match(&ls[i], addr) != NGX_OK) { > + continue; > + } > + > + c->listening = &ls[i]; > + > + c->data = NULL; > + c->buffer = NULL; > + > + *c->log = c->listening->log; > + c->log->handler = NULL; > + c->log->data = NULL; > + > + sa = ngx_palloc(c->pool, addr->socklen); > + if (sa == NULL) { > + goto failed; > + } Is there a reason to (re-)allocate memory for c->local_sockaddr ? Either way, "addr" is stored in some pool, allocated in ngx_parse_url() through ngx_inet_add_addr(). It should be safe to reference it there. > + > + ngx_memcpy(sa, addr->sockaddr, addr->socklen); > + c->local_sockaddr = sa; > + c->local_socklen = addr->socklen; > + > + c->listening->handler(c); > + > + return; > + } > + > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > + "port not found for \"%V\"", &addr->name); > + > + ngx_stream_finalize_session(s, NGX_STREAM_OK); > + > + return; > + > +failed: > + > + ngx_stream_finalize_session(s, NGX_STREAM_INTERNAL_SERVER_ERROR); > +} > + > + > +static ngx_int_t > +ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr) > +{ > + if (!ls->wildcard) { > + return ngx_cmp_sockaddr(ls->sockaddr, ls->socklen, > + addr->sockaddr, addr->socklen, 1); > + } > + > + if (ls->sockaddr->sa_family == addr->sockaddr->sa_family > + && ngx_inet_get_port(ls->sockaddr) == ngx_inet_get_port(addr->sockaddr)) > + { > + return NGX_OK; > + } > + > + return NGX_DECLINED; > +} > + > + > +static void * > +ngx_stream_pass_create_srv_conf(ngx_conf_t *cf) > +{ > + ngx_stream_pass_srv_conf_t *conf; > + > + conf = ngx_pcalloc(cf->pool, sizeof(ngx_stream_pass_srv_conf_t)); > + if (conf == NULL) { > + return NULL; > + } > + > + /* > + * set by ngx_pcalloc(): > + * > + * conf->addr = NULL; > + * conf->addr_value = NULL; > + */ > + > + return conf; > +} > + > + > +static char * > +ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > +{ > + ngx_stream_pass_srv_conf_t *pscf = conf; > + > + ngx_url_t u; > + ngx_str_t *value, *url; > + ngx_stream_complex_value_t cv; > + ngx_stream_core_srv_conf_t *cscf; > + ngx_stream_compile_complex_value_t ccv; > + > + if (pscf->addr || pscf->addr_value) { > + return "is duplicate"; > + } > + > + cscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_core_module); > + > + cscf->handler = ngx_stream_pass_handler; > + > + value = cf->args->elts; > + > + url = &value[1]; > + > + ngx_memzero(&ccv, sizeof(ngx_stream_compile_complex_value_t)); > + > + ccv.cf = cf; > + ccv.value = url; > + ccv.complex_value = &cv; > + > + if (ngx_stream_compile_complex_value(&ccv) != NGX_OK) { > + return NGX_CONF_ERROR; > + } > + > + if (cv.lengths) { > + pscf->addr_value = ngx_palloc(cf->pool, > + sizeof(ngx_stream_complex_value_t)); > + if 
(pscf->addr_value == NULL) { > + return NGX_CONF_ERROR; > + } > + > + *pscf->addr_value = cv; > + > + return NGX_CONF_OK; > + } > + > + ngx_memzero(&u, sizeof(ngx_url_t)); > + > + u.url = *url; > + > + if (ngx_parse_url(cf->pool, &u) != NGX_OK) { > + if (u.err) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "%s in \"%V\" of the \"pass\" directive", > + u.err, &u.url); > + } > + > + return NGX_CONF_ERROR; > + } Although you've changed the commit example from "pass 8081" to "pass 127.0.0.1:8001", the former syntax is still allowed. This may be misleading: with the current code, unlike "u.listen = 1", this means "8081" will be tested as an address (without a port) written in a decimal format, as finally resolved by getaddrinfo(3). So, using "pass 8081" corresponds to 0x1f91, or "0.0.31.145". Further, since it has no port, it will never match listen addresses. I'd check and forbid this explicitly: if (u.no_port) { return "has no port"; } > + > + if (u.naddrs == 0) { > + return "has no addresses"; > + } It seems that this check can never happen if neither "u.no_resolve" nor "u.listen" set, such as in here. In the worst case, when the address couldn't be parsed as a literal, and ngx_parse_url() falls back to ngx_inet_resolve_host() to resolve as a name, which is either NXDOMAIN or has none of A/AAAA records, then ngx_parse_url() will return an error with the "host not found" diagnostics. It looks like another left-over from "u.listen = 1". > + > + pscf->addr = &u.addrs[0]; > + > + return NGX_CONF_OK; > +} From arut at nginx.com Wed Feb 28 14:22:34 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 28 Feb 2024 18:22:34 +0400 Subject: [PATCH 3 of 3] Stream: ngx_stream_pass_module In-Reply-To: References: <3cab85fe55272835674b.1699610841@arut-laptop> <8C0B7CF6-7BE8-4B63-8BA7-9608C455D30A@nginx.com> <20240221133751.hrnz43d77aq455ps@N00W24XTQX> Message-ID: <20240228142234.y2q23j25tcq55oq5@N00W24XTQX> Hi, On Wed, Feb 28, 2024 at 02:15:40PM +0400, Sergey Kandaurov wrote: > On Wed, Feb 21, 2024 at 05:37:51PM +0400, Roman Arutyunyan wrote: > > Hi, > > > > On Tue, Feb 13, 2024 at 02:46:35PM +0400, Sergey Kandaurov wrote: > > > > > > > On 10 Nov 2023, at 14:07, Roman Arutyunyan wrote: > > > > > > > > # HG changeset patch > > > > # User Roman Arutyunyan > > > > # Date 1699543504 -14400 > > > > # Thu Nov 09 19:25:04 2023 +0400 > > > > # Node ID 3cab85fe55272835674b7f1c296796955256d019 > > > > # Parent 1d3464283405a4d8ac54caae9bf1815c723f04c5 > > > > Stream: ngx_stream_pass_module. > > > > > > > > The module allows to pass connections from Stream to other modules such as HTTP > > > > or Mail, as well as back to Stream. Previously, this was only possible with > > > > proxying. Connections with preread buffer read out from socket cannot be > > > > passed. > > > > > > > > The module allows to terminate SSL selectively based on SNI. > > > > > > > > stream { > > > > server { > > > > listen 8000 default_server; > > > > ssl_preread on; > > > > ... > > > > } > > > > > > > > server { > > > > listen 8000; > > > > server_name foo.example.com; > > > > pass 8001; # to HTTP > > > > } > > > > > > > > server { > > > > listen 8000; > > > > server_name bar.example.com; > > > > ... > > > > } > > > > } > > > > > > > > http { > > > > server { > > > > listen 8001 ssl; > > > > ... 
> > > > > > > > location / { > > > > root html; > > > > } > > > > } > > > > } > > > > > > > > diff --git a/auto/modules b/auto/modules > > > > --- a/auto/modules > > > > +++ b/auto/modules > > > > @@ -1166,6 +1166,16 @@ if [ $STREAM != NO ]; then > > > > . auto/module > > > > fi > > > > > > > > + if [ $STREAM_PASS = YES ]; then > > > > + ngx_module_name=ngx_stream_pass_module > > > > + ngx_module_deps= > > > > + ngx_module_srcs=src/stream/ngx_stream_pass_module.c > > > > + ngx_module_libs= > > > > + ngx_module_link=$STREAM_PASS > > > > + > > > > + . auto/module > > > > + fi > > > > + > > > > if [ $STREAM_SET = YES ]; then > > > > ngx_module_name=ngx_stream_set_module > > > > ngx_module_deps= > > > > diff --git a/auto/options b/auto/options > > > > --- a/auto/options > > > > +++ b/auto/options > > > > @@ -127,6 +127,7 @@ STREAM_GEOIP=NO > > > > STREAM_MAP=YES > > > > STREAM_SPLIT_CLIENTS=YES > > > > STREAM_RETURN=YES > > > > +STREAM_PASS=YES > > > > STREAM_SET=YES > > > > STREAM_UPSTREAM_HASH=YES > > > > STREAM_UPSTREAM_LEAST_CONN=YES > > > > @@ -337,6 +338,7 @@ use the \"--with-mail_ssl_module\" optio > > > > --without-stream_split_clients_module) > > > > STREAM_SPLIT_CLIENTS=NO ;; > > > > --without-stream_return_module) STREAM_RETURN=NO ;; > > > > + --without-stream_pass_module) STREAM_PASS=NO ;; > > > > --without-stream_set_module) STREAM_SET=NO ;; > > > > --without-stream_upstream_hash_module) > > > > STREAM_UPSTREAM_HASH=NO ;; > > > > @@ -556,6 +558,7 @@ cat << END > > > > --without-stream_split_clients_module > > > > disable ngx_stream_split_clients_module > > > > --without-stream_return_module disable ngx_stream_return_module > > > > + --without-stream_pass_module disable ngx_stream_pass_module > > > > --without-stream_set_module disable ngx_stream_set_module > > > > --without-stream_upstream_hash_module > > > > disable ngx_stream_upstream_hash_module > > > > diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c > > > > new file mode 100644 > > > > --- /dev/null > > > > +++ b/src/stream/ngx_stream_pass_module.c > > > > @@ -0,0 +1,245 @@ > > > > + > > > > +/* > > > > + * Copyright (C) Roman Arutyunyan > > > > + * Copyright (C) Nginx, Inc. 
> > > > + */ > > > > + > > > > + > > > > +#include > > > > +#include > > > > +#include > > > > + > > > > + > > > > +typedef struct { > > > > + ngx_addr_t *addr; > > > > + ngx_stream_complex_value_t *addr_value; > > > > +} ngx_stream_pass_srv_conf_t; > > > > + > > > > + > > > > +static void ngx_stream_pass_handler(ngx_stream_session_t *s); > > > > +static void *ngx_stream_pass_create_srv_conf(ngx_conf_t *cf); > > > > +static char *ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); > > > > + > > > > + > > > > +static ngx_command_t ngx_stream_pass_commands[] = { > > > > + > > > > + { ngx_string("pass"), > > > > + NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, > > > > + ngx_stream_pass, > > > > + NGX_STREAM_SRV_CONF_OFFSET, > > > > + 0, > > > > + NULL }, > > > > + > > > > + ngx_null_command > > > > +}; > > > > + > > > > + > > > > +static ngx_stream_module_t ngx_stream_pass_module_ctx = { > > > > + NULL, /* preconfiguration */ > > > > + NULL, /* postconfiguration */ > > > > + > > > > + NULL, /* create main configuration */ > > > > + NULL, /* init main configuration */ > > > > + > > > > + ngx_stream_pass_create_srv_conf, /* create server configuration */ > > > > + NULL /* merge server configuration */ > > > > +}; > > > > + > > > > + > > > > +ngx_module_t ngx_stream_pass_module = { > > > > + NGX_MODULE_V1, > > > > + &ngx_stream_pass_module_ctx, /* module conaddr */ > > > > + ngx_stream_pass_commands, /* module directives */ > > > > + NGX_STREAM_MODULE, /* module type */ > > > > + NULL, /* init master */ > > > > + NULL, /* init module */ > > > > + NULL, /* init process */ > > > > + NULL, /* init thread */ > > > > + NULL, /* exit thread */ > > > > + NULL, /* exit process */ > > > > + NULL, /* exit master */ > > > > + NGX_MODULE_V1_PADDING > > > > +}; > > > > + > > > > + > > > > +static void > > > > +ngx_stream_pass_handler(ngx_stream_session_t *s) > > > > +{ > > > > + ngx_url_t u; > > > > + ngx_str_t url; > > > > + ngx_addr_t *addr; > > > > + ngx_uint_t i; > > > > + ngx_listening_t *ls; > > > > + ngx_connection_t *c; > > > > + ngx_stream_pass_srv_conf_t *pscf; > > > > + > > > > + c = s->connection; > > > > + > > > > + c->log->action = "passing connection to another module"; > > > > + > > > > + if (c->buffer && c->buffer->pos != c->buffer->last) { > > > > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > > > > + "cannot pass connection with preread data"); > > > > + goto failed; > > > > + } > > > > + > > > > + pscf = ngx_stream_get_module_srv_conf(s, ngx_stream_pass_module); > > > > + > > > > + addr = pscf->addr; > > > > + > > > > + if (addr == NULL) { > > > > + if (ngx_stream_complex_value(s, pscf->addr_value, &url) != NGX_OK) { > > > > + goto failed; > > > > + } > > > > + > > > > + ngx_memzero(&u, sizeof(ngx_url_t)); > > > > + > > > > + u.url = url; > > > > + u.listen = 1; > > > > + u.no_resolve = 1; > > > > + > > > > + if (ngx_parse_url(s->connection->pool, &u) != NGX_OK) { > > > > + if (u.err) { > > > > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > > > > + "%s in pass \"%V\"", u.err, &u.url); > > > > + } > > > > + > > > > + goto failed; > > > > + } > > > > + > > > > + if (u.naddrs == 0) { > > > > + ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, > > > > + "no addresses in pass \"%V\"", &u.url); > > > > + goto failed; > > > > + } > > > > + > > > > + addr = &u.addrs[0]; > > > > + } > > > > + > > > > + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, c->log, 0, > > > > + "stream pass addr: \"%V\"", &addr->name); > > > > + > > > > + ls = ngx_cycle->listening.elts; > > > > + > > > > + for (i = 
0; i < ngx_cycle->listening.nelts; i++) { > > > > + if (ngx_cmp_sockaddr(ls[i].sockaddr, ls[i].socklen, > > > > + addr->sockaddr, addr->socklen, 1) > > > > + == NGX_OK) > > > > + { > > > > + c->listening = &ls[i]; > > > > > > The address configuration (addr_conf) is stored depending on the > > > protocol family of the listening socket, it's different for AF_INET6. > > > So, if the protocol family is switched when passing a connection, > > > it may happen that c->local_sockaddr->sa_family will keep a wrong > > > value, the listen handler will dereference addr_conf incorrectly. > > > > > > Consider the following example: > > > > > > server { > > > listen 127.0.0.1:8081; > > > pass [::1]:8091; > > > } > > > > > > server { > > > listen [::1]:8091; > > > ... > > > } > > > > > > When ls->handler is invoked, c->local_sockaddr is kept inherited > > > from the originally accepted connection, which is of AF_INET. > > > To fix this, c->local_sockaddr and c->local_socklen should be > > > updated according to the new listen socket configuration. > > > > Sure, thanks. > > > > > OTOH, c->sockaddr / c->socklen should be kept intact. > > > Note that this makes possible cross protocol family > > > configurations in e.g. realip and access modules; > > > from now on this will have to be taken into account. > > > > This is already possible with proxy_protocol+realip and is known to cause minor > > issues with third-party code that's too pedantic about families. > > > > Also I've just sent an updated patch which fixes PROXY protocol headers > > generated for mixed family addresses. > > > > > > + > > > > + c->data = NULL; > > > > + c->buffer = NULL; > > > > + > > > > + *c->log = c->listening->log; > > > > + c->log->handler = NULL; > > > > + c->log->data = NULL; > > > > + > > > > + c->listening->handler(c); > > > > + > > > > + return; > > > > + } > > > > + } > > > > + > > > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > > > + "listen not found for \"%V\"", &addr->name); > > > > + > > > > + ngx_stream_finalize_session(s, NGX_STREAM_OK); > > > > + > > > > + return; > > > > + > > > > +failed: > > > > + > > > > + ngx_stream_finalize_session(s, NGX_STREAM_INTERNAL_SERVER_ERROR); > > > > +} > > > > + > > > > + > > > > +static void * > > > > +ngx_stream_pass_create_srv_conf(ngx_conf_t *cf) > > > > +{ > > > > + ngx_stream_pass_srv_conf_t *conf; > > > > + > > > > + conf = ngx_pcalloc(cf->pool, sizeof(ngx_stream_pass_srv_conf_t)); > > > > + if (conf == NULL) { > > > > + return NULL; > > > > + } > > > > + > > > > + /* > > > > + * set by ngx_pcalloc(): > > > > + * > > > > + * conf->addr = NULL; > > > > + * conf->addr_value = NULL; > > > > + */ > > > > + > > > > + return conf; > > > > +} > > > > + > > > > + > > > > +static char * > > > > +ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > > > > +{ > > > > + ngx_stream_pass_srv_conf_t *pscf = conf; > > > > + > > > > + ngx_url_t u; > > > > + ngx_str_t *value, *url; > > > > + ngx_stream_complex_value_t cv; > > > > + ngx_stream_core_srv_conf_t *cscf; > > > > + ngx_stream_compile_complex_value_t ccv; > > > > + > > > > + if (pscf->addr || pscf->addr_value) { > > > > + return "is duplicate"; > > > > + } > > > > + > > > > + cscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_core_module); > > > > + > > > > + cscf->handler = ngx_stream_pass_handler; > > > > + > > > > + value = cf->args->elts; > > > > + > > > > + url = &value[1]; > > > > + > > > > + ngx_memzero(&ccv, sizeof(ngx_stream_compile_complex_value_t)); > > > > + > > > > + ccv.cf = cf; > > > > + ccv.value = 
url; > > > > + ccv.complex_value = &cv; > > > > + > > > > + if (ngx_stream_compile_complex_value(&ccv) != NGX_OK) { > > > > + return NGX_CONF_ERROR; > > > > + } > > > > + > > > > + if (cv.lengths) { > > > > + pscf->addr_value = ngx_palloc(cf->pool, > > > > + sizeof(ngx_stream_complex_value_t)); > > > > + if (pscf->addr_value == NULL) { > > > > + return NGX_CONF_ERROR; > > > > + } > > > > + > > > > + *pscf->addr_value = cv; > > > > + > > > > + return NGX_CONF_OK; > > > > + } > > > > + > > > > + ngx_memzero(&u, sizeof(ngx_url_t)); > > > > + > > > > + u.url = *url; > > > > + u.listen = 1; > > > > + > > > > + if (ngx_parse_url(cf->pool, &u) != NGX_OK) { > > > > + if (u.err) { > > > > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > > > + "%s in \"%V\" of the \"pass\" directive", > > > > + u.err, &u.url); > > > > + } > > > > + > > > > + return NGX_CONF_ERROR; > > > > + } > > > > + > > > > + if (u.naddrs == 0) { > > > > + return "has no addresses"; > > > > + } > > > > + > > > > + pscf->addr = &u.addrs[0]; > > > > + > > > > + return NGX_CONF_OK; > > > > +} > > > > Attached is an improved version with the following changes: > > > > - Removed 'listen = 1' flag when parsing "pass" parameter. > > Now it's treated like "proxy_pass" parameter. > > - Listen match reworked to be able to match wildcards. > > - Local_sockaddr is copied to the connection after match. > > - Fixes in log action, log messages, commit log etc. > > > > -- > > Roman Arutyunyan > > > # HG changeset patch > > # User Roman Arutyunyan > > # Date 1708522562 -14400 > > # Wed Feb 21 17:36:02 2024 +0400 > > # Node ID 44da04c2d4db94ad4eefa84b299e07c5fa4a00b9 > > # Parent 4eb76c257fd07a69fc9e9386e845edcc9e2b1b08 > > Stream: ngx_stream_pass_module. > > > > The module allows to pass connections from Stream to other modules such as HTTP > > or Mail, as well as back to Stream. Previously, this was only possible with > > proxying. Connections with preread buffer read out from socket cannot be > > passed. > > > > The module allows selective SSL termination based on SNI. > > > > stream { > > server { > > listen 8000 default_server; > > ssl_preread on; > > ... > > } > > > > server { > > listen 8000; > > server_name foo.example.com; > > pass 127.0.0.1:8001; # to HTTP > > } > > > > server { > > listen 8000; > > server_name bar.example.com; > > ... > > } > > } > > > > http { > > server { > > listen 8001 ssl; > > ... > > > > location / { > > root html; > > } > > } > > } > > > > diff --git a/auto/modules b/auto/modules > > --- a/auto/modules > > +++ b/auto/modules > > @@ -1166,6 +1166,16 @@ if [ $STREAM != NO ]; then > > . auto/module > > fi > > > > + if [ $STREAM_PASS = YES ]; then > > + ngx_module_name=ngx_stream_pass_module > > + ngx_module_deps= > > + ngx_module_srcs=src/stream/ngx_stream_pass_module.c > > + ngx_module_libs= > > + ngx_module_link=$STREAM_PASS > > + > > + . 
auto/module > > + fi > > + > > if [ $STREAM_SET = YES ]; then > > ngx_module_name=ngx_stream_set_module > > ngx_module_deps= > > diff --git a/auto/options b/auto/options > > --- a/auto/options > > +++ b/auto/options > > @@ -127,6 +127,7 @@ STREAM_GEOIP=NO > > STREAM_MAP=YES > > STREAM_SPLIT_CLIENTS=YES > > STREAM_RETURN=YES > > +STREAM_PASS=YES > > STREAM_SET=YES > > STREAM_UPSTREAM_HASH=YES > > STREAM_UPSTREAM_LEAST_CONN=YES > > @@ -337,6 +338,7 @@ use the \"--with-mail_ssl_module\" optio > > --without-stream_split_clients_module) > > STREAM_SPLIT_CLIENTS=NO ;; > > --without-stream_return_module) STREAM_RETURN=NO ;; > > + --without-stream_pass_module) STREAM_PASS=NO ;; > > --without-stream_set_module) STREAM_SET=NO ;; > > --without-stream_upstream_hash_module) > > STREAM_UPSTREAM_HASH=NO ;; > > @@ -556,6 +558,7 @@ cat << END > > --without-stream_split_clients_module > > disable ngx_stream_split_clients_module > > --without-stream_return_module disable ngx_stream_return_module > > + --without-stream_pass_module disable ngx_stream_pass_module > > --without-stream_set_module disable ngx_stream_set_module > > --without-stream_upstream_hash_module > > disable ngx_stream_upstream_hash_module > > diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c > > new file mode 100644 > > --- /dev/null > > +++ b/src/stream/ngx_stream_pass_module.c > > @@ -0,0 +1,272 @@ > > + > > +/* > > + * Copyright (C) Roman Arutyunyan > > + * Copyright (C) Nginx, Inc. > > + */ > > + > > + > > +#include > > +#include > > +#include > > + > > + > > +typedef struct { > > + ngx_addr_t *addr; > > + ngx_stream_complex_value_t *addr_value; > > +} ngx_stream_pass_srv_conf_t; > > + > > + > > +static void ngx_stream_pass_handler(ngx_stream_session_t *s); > > +static ngx_int_t ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr); > > +static void *ngx_stream_pass_create_srv_conf(ngx_conf_t *cf); > > +static char *ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); > > + > > + > > +static ngx_command_t ngx_stream_pass_commands[] = { > > + > > + { ngx_string("pass"), > > + NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, > > + ngx_stream_pass, > > + NGX_STREAM_SRV_CONF_OFFSET, > > + 0, > > + NULL }, > > + > > + ngx_null_command > > +}; > > + > > + > > +static ngx_stream_module_t ngx_stream_pass_module_ctx = { > > + NULL, /* preconfiguration */ > > + NULL, /* postconfiguration */ > > + > > + NULL, /* create main configuration */ > > + NULL, /* init main configuration */ > > + > > + ngx_stream_pass_create_srv_conf, /* create server configuration */ > > + NULL /* merge server configuration */ > > +}; > > + > > + > > +ngx_module_t ngx_stream_pass_module = { > > + NGX_MODULE_V1, > > + &ngx_stream_pass_module_ctx, /* module context */ > > + ngx_stream_pass_commands, /* module directives */ > > + NGX_STREAM_MODULE, /* module type */ > > + NULL, /* init master */ > > + NULL, /* init module */ > > + NULL, /* init process */ > > + NULL, /* init thread */ > > + NULL, /* exit thread */ > > + NULL, /* exit process */ > > + NULL, /* exit master */ > > + NGX_MODULE_V1_PADDING > > +}; > > + > > + > > +static void > > +ngx_stream_pass_handler(ngx_stream_session_t *s) > > +{ > > + ngx_url_t u; > > + ngx_str_t url; > > + ngx_addr_t *addr; > > + ngx_uint_t i; > > + ngx_listening_t *ls; > > + struct sockaddr *sa; > > + ngx_connection_t *c; > > + ngx_stream_pass_srv_conf_t *pscf; > > + > > + c = s->connection; > > + > > + c->log->action = "passing connection to port"; > > + > > + if (c->buffer && c->buffer->pos != 
c->buffer->last) { > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > + "cannot pass connection with preread data"); > > + goto failed; > > + } > > + > > + pscf = ngx_stream_get_module_srv_conf(s, ngx_stream_pass_module); > > + > > + addr = pscf->addr; > > + > > + if (addr == NULL) { > > + if (ngx_stream_complex_value(s, pscf->addr_value, &url) != NGX_OK) { > > + goto failed; > > + } > > + > > + ngx_memzero(&u, sizeof(ngx_url_t)); > > + > > + u.url = url; > > + u.no_resolve = 1; > > This makes configurations with variables of limited use. The functionality is indeed different for static and dynamic addresses. Currently it's the best we can do without introducing more complexity. We can add dynamic resolving here like in the upstream module and eliminate the confusion. However a better way to eliminate it is to disable resolve completely for both static and dynamic configurations. It seems to me, resolving names in the pass module makes little sense. All addresses are local anyway. > > + if (ngx_parse_url(c->pool, &u) != NGX_OK) { > > + if (u.err) { > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > + "%s in pass \"%V\"", u.err, &u.url); > > + } > > + > > + goto failed; > > + } > > + > > + if (u.naddrs == 0) { > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > + "no addresses in pass \"%V\"", &u.url); > > + goto failed; > > + } > > + > > + addr = &u.addrs[0]; > > + } > > + > > + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, c->log, 0, > > + "stream pass addr: \"%V\"", &addr->name); > > + > > + ls = ngx_cycle->listening.elts; > > + > > + for (i = 0; i < ngx_cycle->listening.nelts; i++) { > > + > > + if (ngx_stream_pass_match(&ls[i], addr) != NGX_OK) { > > + continue; > > + } > > + > > + c->listening = &ls[i]; > > + > > + c->data = NULL; > > + c->buffer = NULL; > > + > > + *c->log = c->listening->log; > > + c->log->handler = NULL; > > + c->log->data = NULL; > > + > > + sa = ngx_palloc(c->pool, addr->socklen); > > + if (sa == NULL) { > > + goto failed; > > + } > > Is there a reason to (re-)allocate memory for c->local_sockaddr ? > > Either way, "addr" is stored in some pool, allocated in ngx_parse_url() > through ngx_inet_add_addr(). It should be safe to reference it there. Sure, removed the allocation. 
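For reference, the simplification agreed on here is to reference the pool-allocated address directly instead of copying it; a minimal sketch of the resulting code (the actual change is in the diff attached further below):

    /* in the dynamic case u.addrs[] is allocated from c->pool by
     * ngx_parse_url() via ngx_inet_add_addr(), so it lives as long
     * as the connection and can be referenced without a copy */

    c->local_sockaddr = addr->sockaddr;
    c->local_socklen = addr->socklen;
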
> > + ngx_memcpy(sa, addr->sockaddr, addr->socklen); > > + c->local_sockaddr = sa; > > + c->local_socklen = addr->socklen; > > + > > + c->listening->handler(c); > > + > > + return; > > + } > > + > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > + "port not found for \"%V\"", &addr->name); > > + > > + ngx_stream_finalize_session(s, NGX_STREAM_OK); > > + > > + return; > > + > > +failed: > > + > > + ngx_stream_finalize_session(s, NGX_STREAM_INTERNAL_SERVER_ERROR); > > +} > > + > > + > > +static ngx_int_t > > +ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr) > > +{ > > + if (!ls->wildcard) { > > + return ngx_cmp_sockaddr(ls->sockaddr, ls->socklen, > > + addr->sockaddr, addr->socklen, 1); > > + } > > + > > + if (ls->sockaddr->sa_family == addr->sockaddr->sa_family > > + && ngx_inet_get_port(ls->sockaddr) == ngx_inet_get_port(addr->sockaddr)) > > + { > > + return NGX_OK; > > + } > > + > > + return NGX_DECLINED; > > +} > > + > > + > > +static void * > > +ngx_stream_pass_create_srv_conf(ngx_conf_t *cf) > > +{ > > + ngx_stream_pass_srv_conf_t *conf; > > + > > + conf = ngx_pcalloc(cf->pool, sizeof(ngx_stream_pass_srv_conf_t)); > > + if (conf == NULL) { > > + return NULL; > > + } > > + > > + /* > > + * set by ngx_pcalloc(): > > + * > > + * conf->addr = NULL; > > + * conf->addr_value = NULL; > > + */ > > + > > + return conf; > > +} > > + > > + > > +static char * > > +ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > > +{ > > + ngx_stream_pass_srv_conf_t *pscf = conf; > > + > > + ngx_url_t u; > > + ngx_str_t *value, *url; > > + ngx_stream_complex_value_t cv; > > + ngx_stream_core_srv_conf_t *cscf; > > + ngx_stream_compile_complex_value_t ccv; > > + > > + if (pscf->addr || pscf->addr_value) { > > + return "is duplicate"; > > + } > > + > > + cscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_core_module); > > + > > + cscf->handler = ngx_stream_pass_handler; > > + > > + value = cf->args->elts; > > + > > + url = &value[1]; > > + > > + ngx_memzero(&ccv, sizeof(ngx_stream_compile_complex_value_t)); > > + > > + ccv.cf = cf; > > + ccv.value = url; > > + ccv.complex_value = &cv; > > + > > + if (ngx_stream_compile_complex_value(&ccv) != NGX_OK) { > > + return NGX_CONF_ERROR; > > + } > > + > > + if (cv.lengths) { > > + pscf->addr_value = ngx_palloc(cf->pool, > > + sizeof(ngx_stream_complex_value_t)); > > + if (pscf->addr_value == NULL) { > > + return NGX_CONF_ERROR; > > + } > > + > > + *pscf->addr_value = cv; > > + > > + return NGX_CONF_OK; > > + } > > + > > + ngx_memzero(&u, sizeof(ngx_url_t)); > > + > > + u.url = *url; > > + > > + if (ngx_parse_url(cf->pool, &u) != NGX_OK) { > > + if (u.err) { > > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > + "%s in \"%V\" of the \"pass\" directive", > > + u.err, &u.url); > > + } > > + > > + return NGX_CONF_ERROR; > > + } > > Although you've changed the commit example from "pass 8081" to > "pass 127.0.0.1:8001", the former syntax is still allowed. > > This may be misleading: with the current code, unlike "u.listen = 1", > this means "8081" will be tested as an address (without a port) > written in a decimal format, as finally resolved by getaddrinfo(3). > So, using "pass 8081" corresponds to 0x1f91, or "0.0.31.145". > Further, since it has no port, it will never match listen addresses. Similarly we can do this in the http proxy module: "proxy_pass http://8081". Of course there's a default port there, but it does not change the fact that listen-like syntax is allowed in proxy_pass, and produces misleading results as well. 
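To spell out the arithmetic behind the "pass 8081" example above: a bare decimal number is accepted as a single 32-bit host value (the usual inet_aton()-style parsing, assuming that is what getaddrinfo(3) falls back to here), so

    8081 == 0x00001f91  ->  bytes 0x00 0x00 0x1f 0x91  ->  0.0.31.145

and, since no port was given, the resulting address can never match a listen socket.
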
> I'd check and forbid this explicitly: > > if (u.no_port) { > return "has no port"; > } Makes sense, thanks. > > + > > + if (u.naddrs == 0) { > > + return "has no addresses"; > > + } > > It seems that this check can never happen if neither "u.no_resolve" > nor "u.listen" set, such as in here. > In the worst case, when the address couldn't be parsed as a literal, > and ngx_parse_url() falls back to ngx_inet_resolve_host() to resolve > as a name, which is either NXDOMAIN or has none of A/AAAA records, > then ngx_parse_url() will return an error with the "host not found" > diagnostics. It looks like another left-over from "u.listen = 1". If we add "u.no_resolve = 1", as discussed above, this condition will make sense again. > > + pscf->addr = &u.addrs[0]; > > + > > + return NGX_CONF_OK; > > +} > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel Diff attached. -- Roman Arutyunyan -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1709125245 -14400 # Wed Feb 28 17:00:45 2024 +0400 # Node ID f533e218c56a1d14be02c0b81409bcc12bed3562 # Parent 44da04c2d4db94ad4eefa84b299e07c5fa4a00b9 imported patch stream-pass-fix1 diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c --- a/src/stream/ngx_stream_pass_module.c +++ b/src/stream/ngx_stream_pass_module.c @@ -71,7 +71,6 @@ ngx_stream_pass_handler(ngx_stream_sessi ngx_addr_t *addr; ngx_uint_t i; ngx_listening_t *ls; - struct sockaddr *sa; ngx_connection_t *c; ngx_stream_pass_srv_conf_t *pscf; @@ -114,6 +113,12 @@ ngx_stream_pass_handler(ngx_stream_sessi goto failed; } + if (u.no_port) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "no port in pass \"%V\"", &u.url); + goto failed; + } + addr = &u.addrs[0]; } @@ -137,13 +142,7 @@ ngx_stream_pass_handler(ngx_stream_sessi c->log->handler = NULL; c->log->data = NULL; - sa = ngx_palloc(c->pool, addr->socklen); - if (sa == NULL) { - goto failed; - } - - ngx_memcpy(sa, addr->sockaddr, addr->socklen); - c->local_sockaddr = sa; + c->local_sockaddr = addr->sockaddr; c->local_socklen = addr->socklen; c->listening->handler(c); @@ -251,6 +250,7 @@ ngx_stream_pass(ngx_conf_t *cf, ngx_comm ngx_memzero(&u, sizeof(ngx_url_t)); u.url = *url; + u.no_resolve = 1; if (ngx_parse_url(cf->pool, &u) != NGX_OK) { if (u.err) { @@ -266,6 +266,10 @@ ngx_stream_pass(ngx_conf_t *cf, ngx_comm return "has no addresses"; } + if (u.no_port) { + return "has no port"; + } + pscf->addr = &u.addrs[0]; return NGX_CONF_OK; From pluknet at nginx.com Thu Feb 29 11:38:55 2024 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 29 Feb 2024 15:38:55 +0400 Subject: [PATCH 3 of 3] Stream: ngx_stream_pass_module In-Reply-To: <20240228142234.y2q23j25tcq55oq5@N00W24XTQX> References: <3cab85fe55272835674b.1699610841@arut-laptop> <8C0B7CF6-7BE8-4B63-8BA7-9608C455D30A@nginx.com> <20240221133751.hrnz43d77aq455ps@N00W24XTQX> <20240228142234.y2q23j25tcq55oq5@N00W24XTQX> Message-ID: On Wed, Feb 28, 2024 at 06:22:34PM +0400, Roman Arutyunyan wrote: > Hi, > > On Wed, Feb 28, 2024 at 02:15:40PM +0400, Sergey Kandaurov wrote: > > On Wed, Feb 21, 2024 at 05:37:51PM +0400, Roman Arutyunyan wrote: > > > Hi, > > > > > > Attached is an improved version with the following changes: > > > > > > - Removed 'listen = 1' flag when parsing "pass" parameter. > > > Now it's treated like "proxy_pass" parameter. > > > - Listen match reworked to be able to match wildcards. 
> > > - Local_sockaddr is copied to the connection after match. > > > - Fixes in log action, log messages, commit log etc. > > > > > > -- > > > Roman Arutyunyan > > > > > # HG changeset patch > > > # User Roman Arutyunyan > > > # Date 1708522562 -14400 > > > # Wed Feb 21 17:36:02 2024 +0400 > > > # Node ID 44da04c2d4db94ad4eefa84b299e07c5fa4a00b9 > > > # Parent 4eb76c257fd07a69fc9e9386e845edcc9e2b1b08 > > > Stream: ngx_stream_pass_module. > > > > > > The module allows to pass connections from Stream to other modules such as HTTP > > > or Mail, as well as back to Stream. Previously, this was only possible with > > > proxying. Connections with preread buffer read out from socket cannot be > > > passed. > > > > > > The module allows selective SSL termination based on SNI. > > > > > > stream { > > > server { > > > listen 8000 default_server; > > > ssl_preread on; > > > ... > > > } > > > > > > server { > > > listen 8000; > > > server_name foo.example.com; > > > pass 127.0.0.1:8001; # to HTTP > > > } > > > > > > server { > > > listen 8000; > > > server_name bar.example.com; > > > ... > > > } > > > } > > > > > > http { > > > server { > > > listen 8001 ssl; > > > ... > > > > > > location / { > > > root html; > > > } > > > } > > > } > > > > > > diff --git a/auto/modules b/auto/modules > > > --- a/auto/modules > > > +++ b/auto/modules > > > @@ -1166,6 +1166,16 @@ if [ $STREAM != NO ]; then > > > . auto/module > > > fi > > > > > > + if [ $STREAM_PASS = YES ]; then > > > + ngx_module_name=ngx_stream_pass_module > > > + ngx_module_deps= > > > + ngx_module_srcs=src/stream/ngx_stream_pass_module.c > > > + ngx_module_libs= > > > + ngx_module_link=$STREAM_PASS > > > + > > > + . auto/module > > > + fi > > > + > > > if [ $STREAM_SET = YES ]; then > > > ngx_module_name=ngx_stream_set_module > > > ngx_module_deps= > > > diff --git a/auto/options b/auto/options > > > --- a/auto/options > > > +++ b/auto/options > > > @@ -127,6 +127,7 @@ STREAM_GEOIP=NO > > > STREAM_MAP=YES > > > STREAM_SPLIT_CLIENTS=YES > > > STREAM_RETURN=YES > > > +STREAM_PASS=YES > > > STREAM_SET=YES > > > STREAM_UPSTREAM_HASH=YES > > > STREAM_UPSTREAM_LEAST_CONN=YES > > > @@ -337,6 +338,7 @@ use the \"--with-mail_ssl_module\" optio > > > --without-stream_split_clients_module) > > > STREAM_SPLIT_CLIENTS=NO ;; > > > --without-stream_return_module) STREAM_RETURN=NO ;; > > > + --without-stream_pass_module) STREAM_PASS=NO ;; > > > --without-stream_set_module) STREAM_SET=NO ;; > > > --without-stream_upstream_hash_module) > > > STREAM_UPSTREAM_HASH=NO ;; > > > @@ -556,6 +558,7 @@ cat << END > > > --without-stream_split_clients_module > > > disable ngx_stream_split_clients_module > > > --without-stream_return_module disable ngx_stream_return_module > > > + --without-stream_pass_module disable ngx_stream_pass_module > > > --without-stream_set_module disable ngx_stream_set_module > > > --without-stream_upstream_hash_module > > > disable ngx_stream_upstream_hash_module > > > diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c > > > new file mode 100644 > > > --- /dev/null > > > +++ b/src/stream/ngx_stream_pass_module.c > > > @@ -0,0 +1,272 @@ > > > + > > > +/* > > > + * Copyright (C) Roman Arutyunyan > > > + * Copyright (C) Nginx, Inc. 
> > > + */ > > > + > > > + > > > +#include > > > +#include > > > +#include > > > + > > > + > > > +typedef struct { > > > + ngx_addr_t *addr; > > > + ngx_stream_complex_value_t *addr_value; > > > +} ngx_stream_pass_srv_conf_t; > > > + > > > + > > > +static void ngx_stream_pass_handler(ngx_stream_session_t *s); > > > +static ngx_int_t ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr); > > > +static void *ngx_stream_pass_create_srv_conf(ngx_conf_t *cf); > > > +static char *ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); > > > + > > > + > > > +static ngx_command_t ngx_stream_pass_commands[] = { > > > + > > > + { ngx_string("pass"), > > > + NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, > > > + ngx_stream_pass, > > > + NGX_STREAM_SRV_CONF_OFFSET, > > > + 0, > > > + NULL }, > > > + > > > + ngx_null_command > > > +}; > > > + > > > + > > > +static ngx_stream_module_t ngx_stream_pass_module_ctx = { > > > + NULL, /* preconfiguration */ > > > + NULL, /* postconfiguration */ > > > + > > > + NULL, /* create main configuration */ > > > + NULL, /* init main configuration */ > > > + > > > + ngx_stream_pass_create_srv_conf, /* create server configuration */ > > > + NULL /* merge server configuration */ > > > +}; > > > + > > > + > > > +ngx_module_t ngx_stream_pass_module = { > > > + NGX_MODULE_V1, > > > + &ngx_stream_pass_module_ctx, /* module context */ > > > + ngx_stream_pass_commands, /* module directives */ > > > + NGX_STREAM_MODULE, /* module type */ > > > + NULL, /* init master */ > > > + NULL, /* init module */ > > > + NULL, /* init process */ > > > + NULL, /* init thread */ > > > + NULL, /* exit thread */ > > > + NULL, /* exit process */ > > > + NULL, /* exit master */ > > > + NGX_MODULE_V1_PADDING > > > +}; > > > + > > > + > > > +static void > > > +ngx_stream_pass_handler(ngx_stream_session_t *s) > > > +{ > > > + ngx_url_t u; > > > + ngx_str_t url; > > > + ngx_addr_t *addr; > > > + ngx_uint_t i; > > > + ngx_listening_t *ls; > > > + struct sockaddr *sa; > > > + ngx_connection_t *c; > > > + ngx_stream_pass_srv_conf_t *pscf; > > > + > > > + c = s->connection; > > > + > > > + c->log->action = "passing connection to port"; > > > + > > > + if (c->buffer && c->buffer->pos != c->buffer->last) { > > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > > + "cannot pass connection with preread data"); > > > + goto failed; > > > + } > > > + > > > + pscf = ngx_stream_get_module_srv_conf(s, ngx_stream_pass_module); > > > + > > > + addr = pscf->addr; > > > + > > > + if (addr == NULL) { > > > + if (ngx_stream_complex_value(s, pscf->addr_value, &url) != NGX_OK) { > > > + goto failed; > > > + } > > > + > > > + ngx_memzero(&u, sizeof(ngx_url_t)); > > > + > > > + u.url = url; > > > + u.no_resolve = 1; > > > > This makes configurations with variables of limited use. > > The functionality is indeed different for static and dynamic addresses. > Currently it's the best we can do without introducing more complexity. > We can add dynamic resolving here like in the upstream module and eliminate the > confusion. However a better way to eliminate it is to disable resolve > completely for both static and dynamic configurations. It seems to me, > resolving names in the pass module makes little sense. All addresses are > local anyway. Agree. 
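As context for the static/dynamic distinction discussed here: the directive accepts either a literal address, parsed once with ngx_parse_url() at configuration time, or a value containing variables, compiled as a complex value and parsed per connection in the handler. A sketch of the two forms ($backend_addr below is a hypothetical variable, e.g. set by map):

    pass 127.0.0.1:8001;    # static: parsed at configuration time (pscf->addr)
    pass $backend_addr;     # dynamic: evaluated and parsed per connection

With resolution disabled in both branches, as proposed above, only literal IP:port values would be accepted.
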
> > > > + if (ngx_parse_url(c->pool, &u) != NGX_OK) { > > > + if (u.err) { > > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > > + "%s in pass \"%V\"", u.err, &u.url); > > > + } > > > + > > > + goto failed; > > > + } > > > + > > > + if (u.naddrs == 0) { > > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > > + "no addresses in pass \"%V\"", &u.url); > > > + goto failed; > > > + } > > > + > > > + addr = &u.addrs[0]; > > > + } > > > + > > > + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, c->log, 0, > > > + "stream pass addr: \"%V\"", &addr->name); > > > + > > > + ls = ngx_cycle->listening.elts; > > > + > > > + for (i = 0; i < ngx_cycle->listening.nelts; i++) { > > > + > > > + if (ngx_stream_pass_match(&ls[i], addr) != NGX_OK) { > > > + continue; > > > + } > > > + > > > + c->listening = &ls[i]; > > > + > > > + c->data = NULL; > > > + c->buffer = NULL; > > > + > > > + *c->log = c->listening->log; > > > + c->log->handler = NULL; > > > + c->log->data = NULL; > > > + > > > + sa = ngx_palloc(c->pool, addr->socklen); > > > + if (sa == NULL) { > > > + goto failed; > > > + } > > > > Is there a reason to (re-)allocate memory for c->local_sockaddr ? > > > > Either way, "addr" is stored in some pool, allocated in ngx_parse_url() > > through ngx_inet_add_addr(). It should be safe to reference it there. > > Sure, removed the allocation. > > > > + ngx_memcpy(sa, addr->sockaddr, addr->socklen); > > > + c->local_sockaddr = sa; > > > + c->local_socklen = addr->socklen; > > > + > > > + c->listening->handler(c); > > > + > > > + return; > > > + } > > > + > > > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > > > + "port not found for \"%V\"", &addr->name); > > > + > > > + ngx_stream_finalize_session(s, NGX_STREAM_OK); > > > + > > > + return; > > > + > > > +failed: > > > + > > > + ngx_stream_finalize_session(s, NGX_STREAM_INTERNAL_SERVER_ERROR); > > > +} > > > + > > > + > > > +static ngx_int_t > > > +ngx_stream_pass_match(ngx_listening_t *ls, ngx_addr_t *addr) > > > +{ > > > + if (!ls->wildcard) { > > > + return ngx_cmp_sockaddr(ls->sockaddr, ls->socklen, > > > + addr->sockaddr, addr->socklen, 1); > > > + } > > > + > > > + if (ls->sockaddr->sa_family == addr->sockaddr->sa_family > > > + && ngx_inet_get_port(ls->sockaddr) == ngx_inet_get_port(addr->sockaddr)) > > > + { > > > + return NGX_OK; > > > + } > > > + > > > + return NGX_DECLINED; > > > +} > > > + > > > + > > > +static void * > > > +ngx_stream_pass_create_srv_conf(ngx_conf_t *cf) > > > +{ > > > + ngx_stream_pass_srv_conf_t *conf; > > > + > > > + conf = ngx_pcalloc(cf->pool, sizeof(ngx_stream_pass_srv_conf_t)); > > > + if (conf == NULL) { > > > + return NULL; > > > + } > > > + > > > + /* > > > + * set by ngx_pcalloc(): > > > + * > > > + * conf->addr = NULL; > > > + * conf->addr_value = NULL; > > > + */ > > > + > > > + return conf; > > > +} > > > + > > > + > > > +static char * > > > +ngx_stream_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > > > +{ > > > + ngx_stream_pass_srv_conf_t *pscf = conf; > > > + > > > + ngx_url_t u; > > > + ngx_str_t *value, *url; > > > + ngx_stream_complex_value_t cv; > > > + ngx_stream_core_srv_conf_t *cscf; > > > + ngx_stream_compile_complex_value_t ccv; > > > + > > > + if (pscf->addr || pscf->addr_value) { > > > + return "is duplicate"; > > > + } > > > + > > > + cscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_core_module); > > > + > > > + cscf->handler = ngx_stream_pass_handler; > > > + > > > + value = cf->args->elts; > > > + > > > + url = &value[1]; > > > + > > > + ngx_memzero(&ccv, sizeof(ngx_stream_compile_complex_value_t)); > > > 
+ > > > + ccv.cf = cf; > > > + ccv.value = url; > > > + ccv.complex_value = &cv; > > > + > > > + if (ngx_stream_compile_complex_value(&ccv) != NGX_OK) { > > > + return NGX_CONF_ERROR; > > > + } > > > + > > > + if (cv.lengths) { > > > + pscf->addr_value = ngx_palloc(cf->pool, > > > + sizeof(ngx_stream_complex_value_t)); > > > + if (pscf->addr_value == NULL) { > > > + return NGX_CONF_ERROR; > > > + } > > > + > > > + *pscf->addr_value = cv; > > > + > > > + return NGX_CONF_OK; > > > + } > > > + > > > + ngx_memzero(&u, sizeof(ngx_url_t)); > > > + > > > + u.url = *url; > > > + > > > + if (ngx_parse_url(cf->pool, &u) != NGX_OK) { > > > + if (u.err) { > > > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > > + "%s in \"%V\" of the \"pass\" directive", > > > + u.err, &u.url); > > > + } > > > + > > > + return NGX_CONF_ERROR; > > > + } > > > > Although you've changed the commit example from "pass 8081" to > > "pass 127.0.0.1:8001", the former syntax is still allowed. > > > > This may be misleading: with the current code, unlike "u.listen = 1", > > this means "8081" will be tested as an address (without a port) > > written in a decimal format, as finally resolved by getaddrinfo(3). > > So, using "pass 8081" corresponds to 0x1f91, or "0.0.31.145". > > Further, since it has no port, it will never match listen addresses. > > Similarly we can do this in the http proxy module: "proxy_pass http://8081". > Of course there's a default port there, but it does not change the fact > that listen-like syntax is allowed in proxy_pass, and produces misleading > results as well. > > > I'd check and forbid this explicitly: > > > > if (u.no_port) { > > return "has no port"; > > } > > Makes sense, thanks. > > > > + > > > + if (u.naddrs == 0) { > > > + return "has no addresses"; > > > + } > > > > It seems that this check can never happen if neither "u.no_resolve" > > nor "u.listen" set, such as in here. > > In the worst case, when the address couldn't be parsed as a literal, > > and ngx_parse_url() falls back to ngx_inet_resolve_host() to resolve > > as a name, which is either NXDOMAIN or has none of A/AAAA records, > > then ngx_parse_url() will return an error with the "host not found" > > diagnostics. It looks like another left-over from "u.listen = 1". > > If we add "u.no_resolve = 1", as discussed above, this condition will make > sense again. Ok. > > > > + pscf->addr = &u.addrs[0]; > > > + > > > + return NGX_CONF_OK; > > > +} > > Diff attached. 
> > -- > Roman Arutyunyan > # HG changeset patch > # User Roman Arutyunyan > # Date 1709125245 -14400 > # Wed Feb 28 17:00:45 2024 +0400 > # Node ID f533e218c56a1d14be02c0b81409bcc12bed3562 > # Parent 44da04c2d4db94ad4eefa84b299e07c5fa4a00b9 > imported patch stream-pass-fix1 > > diff --git a/src/stream/ngx_stream_pass_module.c b/src/stream/ngx_stream_pass_module.c > --- a/src/stream/ngx_stream_pass_module.c > +++ b/src/stream/ngx_stream_pass_module.c > @@ -71,7 +71,6 @@ ngx_stream_pass_handler(ngx_stream_sessi > ngx_addr_t *addr; > ngx_uint_t i; > ngx_listening_t *ls; > - struct sockaddr *sa; > ngx_connection_t *c; > ngx_stream_pass_srv_conf_t *pscf; > > @@ -114,6 +113,12 @@ ngx_stream_pass_handler(ngx_stream_sessi > goto failed; > } > > + if (u.no_port) { > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > + "no port in pass \"%V\"", &u.url); > + goto failed; > + } > + > addr = &u.addrs[0]; > } > > @@ -137,13 +142,7 @@ ngx_stream_pass_handler(ngx_stream_sessi > c->log->handler = NULL; > c->log->data = NULL; > > - sa = ngx_palloc(c->pool, addr->socklen); > - if (sa == NULL) { > - goto failed; > - } > - > - ngx_memcpy(sa, addr->sockaddr, addr->socklen); > - c->local_sockaddr = sa; > + c->local_sockaddr = addr->sockaddr; > c->local_socklen = addr->socklen; > > c->listening->handler(c); > @@ -251,6 +250,7 @@ ngx_stream_pass(ngx_conf_t *cf, ngx_comm > ngx_memzero(&u, sizeof(ngx_url_t)); > > u.url = *url; > + u.no_resolve = 1; > > if (ngx_parse_url(cf->pool, &u) != NGX_OK) { > if (u.err) { > @@ -266,6 +266,10 @@ ngx_stream_pass(ngx_conf_t *cf, ngx_comm > return "has no addresses"; > } > > + if (u.no_port) { > + return "has no port"; > + } > + > pscf->addr = &u.addrs[0]; > > return NGX_CONF_OK; Looks good to me. From osa at freebsd.org.ru Thu Feb 29 14:03:50 2024 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 29 Feb 2024 17:03:50 +0300 Subject: [PATCH] HTTP: stop emitting server version by default In-Reply-To: References: Message-ID: Hi Piotr, thank you for the patch. On Wed, Feb 28, 2024 at 01:20:35AM +0000, Piotr Sikora via nginx-devel wrote: [...] > HTTP: stop emitting server version by default. > This information is only useful to attackers. > The previous behavior can be restored using "server_tokens on". [...] I don't think this is a good idea to change the default behaviour for the directive we have for a long-long time. It's always possible to set `server_tokens off;' in the configuration file. Also, this change is required a corresponding change in the documentation on the nginx.org website. Thank you. -- Sergey A. Osokin From xeioex at nginx.com Thu Feb 29 17:00:31 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 29 Feb 2024 17:00:31 +0000 Subject: [njs] Simplified working with global value. Message-ID: details: https://hg.nginx.org/njs/rev/e73d4947372d branches: changeset: 2291:e73d4947372d user: Dmitry Volyntsev date: Tue Feb 27 23:24:55 2024 -0800 description: Simplified working with global value. 
diffstat: src/njs_builtin.c | 6 ++---- src/njs_function.c | 2 +- src/njs_vm.c | 4 +--- 3 files changed, 4 insertions(+), 8 deletions(-) diffs (63 lines): diff -r cb3e068a511c -r e73d4947372d src/njs_builtin.c --- a/src/njs_builtin.c Thu Feb 22 20:25:43 2024 -0800 +++ b/src/njs_builtin.c Tue Feb 27 23:24:55 2024 -0800 @@ -258,8 +258,6 @@ njs_builtin_objects_create(njs_vm_t *vm) vm->global_object = shared->objects[0]; vm->global_object.shared = 0; - njs_set_object(&vm->global_value, &vm->global_object); - string_object = &shared->string_object; njs_lvlhsh_init(&string_object->hash); string_object->shared_hash = shared->string_instance_hash; @@ -442,7 +440,7 @@ njs_builtin_completions(njs_vm_t *vm) ctx.type = NJS_BUILTIN_TRAVERSE_KEYS; njs_lvlhsh_init(&ctx.keys); - ret = njs_object_traverse(vm, &vm->global_object, &ctx, + ret = njs_object_traverse(vm, njs_object(&vm->global_value), &ctx, njs_builtin_traverse); if (njs_slow_path(ret != NJS_OK)) { return NULL; @@ -753,7 +751,7 @@ njs_builtin_match_native_function(njs_vm ctx.match = njs_str_value(""); - ret = njs_object_traverse(vm, &vm->global_object, &ctx, + ret = njs_object_traverse(vm, njs_object(&vm->global_value), &ctx, njs_builtin_traverse); if (ret == NJS_DONE) { diff -r cb3e068a511c -r e73d4947372d src/njs_function.c --- a/src/njs_function.c Thu Feb 22 20:25:43 2024 -0800 +++ b/src/njs_function.c Tue Feb 27 23:24:55 2024 -0800 @@ -435,7 +435,7 @@ njs_function_lambda_frame(njs_vm_t *vm, if (njs_slow_path(function->global_this && njs_is_null_or_undefined(this))) { - njs_set_object(native_frame->local[0], &vm->global_object); + njs_value_assign(native_frame->local[0], &vm->global_value); } /* Copy arguments. */ diff -r cb3e068a511c -r e73d4947372d src/njs_vm.c --- a/src/njs_vm.c Thu Feb 22 20:25:43 2024 -0800 +++ b/src/njs_vm.c Tue Feb 27 23:24:55 2024 -0800 @@ -425,8 +425,6 @@ njs_vm_clone(njs_vm_t *vm, njs_external_ nvm->levels[NJS_LEVEL_GLOBAL] = global; - njs_set_object(&nvm->global_value, &nvm->global_object); - /* globalThis and this */ njs_scope_value_set(nvm, njs_scope_global_this_index(), &nvm->global_value); @@ -826,7 +824,7 @@ njs_vm_value(njs_vm_t *vm, const njs_str start = path->start; end = start + path->length; - njs_set_object(&value, &vm->global_object); + njs_value_assign(&value, &vm->global_value); for ( ;; ) { p = njs_strlchr(start, end, '.'); From xeioex at nginx.com Thu Feb 29 17:00:33 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 29 Feb 2024 17:00:33 +0000 Subject: [njs] Shell: fixed memory pool issues introduced in cb3e068a511c. Message-ID: details: https://hg.nginx.org/njs/rev/46699330f4f2 branches: changeset: 2292:46699330f4f2 user: Dmitry Volyntsev date: Tue Feb 27 23:25:05 2024 -0800 description: Shell: fixed memory pool issues introduced in cb3e068a511c. 
diffstat: external/njs_shell.c | 15 +++++++++------ 1 files changed, 9 insertions(+), 6 deletions(-) diffs (52 lines): diff -r e73d4947372d -r 46699330f4f2 external/njs_shell.c --- a/external/njs_shell.c Tue Feb 27 23:24:55 2024 -0800 +++ b/external/njs_shell.c Tue Feb 27 23:25:05 2024 -0800 @@ -1270,7 +1270,7 @@ njs_module_loader(njs_vm_t *vm, njs_exte return NULL; } - ret = njs_module_read(njs_vm_memory_pool(vm), info.fd, &text); + ret = njs_module_read(console->engine->pool, info.fd, &text); (void) close(info.fd); @@ -1293,10 +1293,10 @@ njs_module_loader(njs_vm_t *vm, njs_exte module = njs_vm_compile_module(vm, &info.file, &start, &text.start[text.length]); - njs_mp_free(njs_vm_memory_pool(vm), console->cwd.start); + njs_mp_free(console->engine->pool, console->cwd.start); console->cwd = prev_cwd; - njs_mp_free(njs_vm_memory_pool(vm), text.start); + njs_mp_free(console->engine->pool, text.start); return module; } @@ -3716,7 +3716,7 @@ njs_clear_timeout(njs_vm_t *vm, njs_valu njs_queue_remove(&ev->link); njs_rbtree_delete(&console->events, (njs_rbtree_part_t *) rb); - njs_mp_free(console->engine->pool, ev); + njs_mp_free(njs_vm_memory_pool(vm), ev); njs_value_undefined_set(retval); @@ -3780,12 +3780,15 @@ njs_console_time(njs_console_t *console, link = njs_queue_next(link); } - label = njs_mp_alloc(console->engine->pool, sizeof(njs_timelabel_t)); + label = njs_mp_alloc(console->engine->pool, + sizeof(njs_timelabel_t) + name->length); if (njs_slow_path(label == NULL)) { return NJS_ERROR; } - label->name = *name; + label->name.start = (u_char *) label + sizeof(njs_timelabel_t); + memcpy(label->name.start, name->start, name->length); + label->name.length = name->length; label->time = njs_time(); njs_queue_insert_tail(&console->labels, &label->link); From xeioex at nginx.com Thu Feb 29 17:00:35 2024 From: xeioex at nginx.com (=?utf-8?q?Dmitry_Volyntsev?=) Date: Thu, 29 Feb 2024 17:00:35 +0000 Subject: [njs] Removed duplicate expect tests introduced in cb3e068a511c. Message-ID: details: https://hg.nginx.org/njs/rev/49417e2749e0 branches: changeset: 2293:49417e2749e0 user: Dmitry Volyntsev date: Tue Feb 27 23:25:11 2024 -0800 description: Removed duplicate expect tests introduced in cb3e068a511c. 
diffstat: test/shell_test_njs.exp | 32 -------------------------------- 1 files changed, 0 insertions(+), 32 deletions(-) diffs (42 lines): diff -r 46699330f4f2 -r 49417e2749e0 test/shell_test_njs.exp --- a/test/shell_test_njs.exp Tue Feb 27 23:25:05 2024 -0800 +++ b/test/shell_test_njs.exp Tue Feb 27 23:25:11 2024 -0800 @@ -212,38 +212,6 @@ njs_test { "TypeError: cannot get property \"a\" of undefined"} } -# console.time* functions -njs_test { - {"console.time()\r\n" - "console.time()\r\nundefined\r\n>> "} - {"console.timeEnd()\r\n" - "console.timeEnd()\r\ndefault: *.*ms\r\nundefined\r\n>> "} - {"console.time(undefined)\r\n" - "console.time(undefined)\r\nundefined\r\n>> "} - {"console.timeEnd(undefined)\r\n" - "console.timeEnd(undefined)\r\ndefault: *.*ms\r\nundefined\r\n>> "} - {"console.time('abc')\r\n" - "console.time('abc')\r\nundefined\r\n>> "} - {"console.time('abc')\r\n" - "console.time('abc')\r\nTimer \"abc\" already exists.\r\nundefined\r\n>> "} - {"console.timeEnd('abc')\r\n" - "console.timeEnd('abc')\r\nabc: *.*ms\r\nundefined\r\n>> "} - {"console.time(true)\r\n" - "console.time(true)\r\nundefined\r\n>> "} - {"console.timeEnd(true)\r\n" - "console.timeEnd(true)\r\ntrue: *.*ms\r\nundefined\r\n>> "} - {"console.time(42)\r\n" - "console.time(42)\r\nundefined\r\n>> "} - {"console.timeEnd(42)\r\n" - "console.timeEnd(42)\r\n42: *.*ms\r\nundefined\r\n>> "} - {"console.timeEnd()\r\n" - "console.timeEnd()\r\nTimer \"default\" doesn’t exist."} - {"console.timeEnd('abc')\r\n" - "console.timeEnd('abc')\r\nTimer \"abc\" doesn’t exist."} - {"console.time('abc')\r\n" - "console.time('abc')\r\nundefined\r\n>> "} -} - njs_test { {"console.ll()\r\n" "console.ll()\r\nThrown:\r\nTypeError: (intermediate value)\\\[\"ll\"] is not a function"}