From thdbsdox12 at gmail.com Tue Dec 1 10:57:54 2020
From: thdbsdox12 at gmail.com (=?UTF-8?B?7ISx7IaM7Jyk?=)
Date: Tue, 1 Dec 2020 19:57:54 +0900
Subject: [PATCH 8 of 8] new io_uring event module
Message-ID: 

Thank you so much for your feedback. This was my first attempt at writing code for an open source project, and I learned many things from the nginx code and from your review. Thank you for reviewing my patchset.

I will also look at the AIO functionality and try to build an external module.

SoYun

On Mon, Nov 30, 2020 at 11:39 PM Vladimir Homutov wrote:
> First, thank you for sharing the patchset!
>
> We are always looking at new features that appear in kernels and may be
> useful in nginx. There are a lot of shiny features, but it is a long
> way for them to mature and be adopted in nginx. Currently we are not
> considering adding such functionality.
>
> The io_uring interface looks like a promising candidate for supporting
> nginx's AIO functionality on Linux. You may want to start by looking at
> the nginx.org/r/aio directive and related functionality. The task is
> quite complex (to some degree due to the poor interfaces available),
> but we hope it has an elegant solution.
>
> Note also that we prefer to use system calls directly, without introducing
> dependencies on such things as liburing (and, for sure, the method
> of integration is definitely not cloning a copy of it into nginx).
>
> You may also want to consider building your modules externally and
> minimizing changes to the nginx core. While patching nginx is often seen
> as a simple and quick solution, we would appreciate attempts to
> integrate external code using some generic approach/interface.
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: From iippolitov at nginx.com Tue Dec 1 15:01:21 2020 From: iippolitov at nginx.com (Igor Ippolitov) Date: Tue, 01 Dec 2020 15:01:21 +0000 Subject: [njs] Version 0.5.0. Message-ID: details: https://hg.nginx.org/njs/rev/69f07c615162 branches: changeset: 1579:69f07c615162 user: Dmitry Volyntsev date: Tue Dec 01 12:32:31 2020 +0000 description: Version 0.5.0. diffstat: CHANGES | 77 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 77 insertions(+), 0 deletions(-) diffs (84 lines): diff -r 5e29ce36383e -r 69f07c615162 CHANGES --- a/CHANGES Wed Nov 25 00:12:04 2020 +0100 +++ b/CHANGES Tue Dec 01 12:32:31 2020 +0000 @@ -1,3 +1,80 @@ + +Changes with njs 0.5.0 01 Dec 2020 + + nginx modules: + + *) Feature: introduced global "ngx" object. + The following methods were added: + ngx.log(level, msg) + + The following properties were added: + ngx.INFO, + ngx.WARN, + ngx.ERR. + + *) Feature: added support for Buffer object where string + is expected. + + *) Feature: added Buffer version of existing properties. + The following properties were added: + r.requestBuffer (r.requestBody), + r.responseBuffer (r.responseBody), + r.rawVariables (r.variables), + s.rawVariables (s.variables). + + The following events were added in stream module: + upstream (upload), + downstream (download). + + *) Improvement: added aliases to existing properties. + The following properties were added: + r.requestText (r.requestBody), + r.responseText (r.responseBody). + + *) Improvement: throwing an exception in r.internalRedirect() + for a subrequest. + + *) Bugfix: fixed promise r.subrequest() with error_page redirect. + + *) Bugfix: fixed promise events handling. + + Core: + + *) Feature: added TypeScript definitions for built-in + modules. + Thanks to Jakub Jirutka. + + *) Feature: tracking unhandled promise rejection. + + *) Feature: added initial iterator support. + Thanks to Artem S. Povalyukhin. + + *) Improvement: TypeScript definitions are refactored. 
+ Thanks to Jakub Jirutka.
+
+ *) Improvement: added forgotten support for
+ Object.prototype.valueOf() in Buffer.from().
+
+ *) Bugfix: fixed heap-use-after-free in JSON.parse().
+
+ *) Bugfix: fixed heap-use-after-free in JSON.stringify().
+
+ *) Bugfix: fixed JSON.stringify() for arrays resizable via
+ getters.
+
+ *) Bugfix: fixed heap-buffer-overflow for
+ RegExp.prototype[Symbol.replace].
+
+ *) Bugfix: fixed returned value for Buffer.prototype.write*
+ functions.
+
+ *) Bugfix: fixed querystring.stringify().
+ Thanks to Artem S. Povalyukhin.
+
+ *) Bugfix: fixed the catch handler for
+ Promise.prototype.finally().
+
+ *) Bugfix: fixed querystring.parse().

Changes with njs 0.4.4 29 Sep 2020

From iippolitov at nginx.com Tue Dec 1 15:01:23 2020
From: iippolitov at nginx.com (Igor Ippolitov)
Date: Tue, 01 Dec 2020 15:01:23 +0000
Subject: [njs] Added tag 0.5.0 for changeset 69f07c615162
Message-ID: 

details: https://hg.nginx.org/njs/rev/e7d2b2d7f8bd
branches:
changeset: 1580:e7d2b2d7f8bd
user: Dmitry Volyntsev
date: Tue Dec 01 12:59:48 2020 +0000
description: Added tag 0.5.0 for changeset 69f07c615162

diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-)

diffs (8 lines):

diff -r 69f07c615162 -r e7d2b2d7f8bd .hgtags
--- a/.hgtags Tue Dec 01 12:32:31 2020 +0000
+++ b/.hgtags Tue Dec 01 12:59:48 2020 +0000
@@ -38,3 +38,4 @@ 9400790bf53843001f94c77b47bc99b05518af78
 b409e86fd02a6f2cb3d741a41b6562471e1b66ef 0.4.2
 1ada1061a040e5cd5ec55744bfa916dfc6744e4c 0.4.3
 fdfd580b0dd617a884ed9287d98341ebef03ee9f 0.4.4
+69f07c6151628880bf7d5ac28bd8287ce96d8a36 0.5.0

From kasei at kasei.im Fri Dec 4 04:15:53 2020
From: kasei at kasei.im (kasei at kasei.im)
Date: Fri, 04 Dec 2020 12:15:53 +0800
Subject: [PATCH] os/unix: don't stop old workers if reload failed
Message-ID: <85e647df56dbe4e1f87106620b848b8f36de4fc8.camel@kasei.im>

Hello,

We found that sometimes nginx may fail to start new worker processes during reconfiguration (the number of processes exceeds
NGX_MAX_PROCESSES, for example). In that case, the master process still sends the QUIT signal to the old workers, which leads to a situation where no worker processes accept connections on the listening sockets.

We also found that there is a return code in os/win32's ngx_start_worker_processes function: it skips quitting the old workers if no worker processes were spawned. So I wrote this patch following the win32 implementation and tested it. Could you please check this patch to see if there is anything missed? Thanks very much.

# HG changeset patch
# User Kasei Wang 
# Date 1607052987 -28800
#      Fri Dec 04 11:36:27 2020 +0800
# Node ID 0711cc5f95b5fc2aad0b51328b382b5cfe08f8bb
# Parent  90cc7194e993f8d722347e9f46a00f65dffc3935
os/unix: don't stop old workers if start new worker processes failed during reconfiguring.

diff -r 90cc7194e993 -r 0711cc5f95b5 src/os/unix/ngx_process_cycle.c
--- a/src/os/unix/ngx_process_cycle.c	Fri Nov 27 00:01:20 2020 +0300
+++ b/src/os/unix/ngx_process_cycle.c	Fri Dec 04 11:36:27 2020 +0800
@@ -11,7 +11,7 @@
 #include 

-static void ngx_start_worker_processes(ngx_cycle_t *cycle, ngx_int_t n,
+static ngx_int_t ngx_start_worker_processes(ngx_cycle_t *cycle, ngx_int_t n,
     ngx_int_t type);
 static void ngx_start_cache_manager_processes(ngx_cycle_t *cycle,
     ngx_uint_t respawn);
@@ -127,8 +127,13 @@

     ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module);

-    ngx_start_worker_processes(cycle, ccf->worker_processes,
-                               NGX_PROCESS_RESPAWN);
+    if (ngx_start_worker_processes(cycle, ccf->worker_processes,
+                                   NGX_PROCESS_RESPAWN) == 0)
+    {
+        ngx_log_error(NGX_LOG_EMERG, cycle->log, 0,
+                      "start worker processes failed");
+        exit(2);
+    }
     ngx_start_cache_manager_processes(cycle, 0);

     ngx_new_binary = 0;
@@ -231,8 +236,14 @@
             ngx_cycle = cycle;
             ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx,
                                                    ngx_core_module);
-            ngx_start_worker_processes(cycle, ccf->worker_processes,
-                                       NGX_PROCESS_JUST_RESPAWN);
+            /* TODO: close reuseport listening sockets not handled by
worker */
+            if (ngx_start_worker_processes(cycle, ccf->worker_processes,
+                                           NGX_PROCESS_JUST_RESPAWN) == 0)
+            {
+                ngx_log_error(NGX_LOG_WARN, cycle->log, 0,
+                              "start worker processes failed during reconfiguring");
+                continue;
+            }
             ngx_start_cache_manager_processes(cycle, 1);

             /* allow new processes to start */
@@ -332,7 +343,7 @@
 }

-static void
+static ngx_int_t
 ngx_start_worker_processes(ngx_cycle_t *cycle, ngx_int_t n, ngx_int_t type)
 {
     ngx_int_t  i;
@@ -346,8 +357,12 @@

     for (i = 0; i < n; i++) {

-        ngx_spawn_process(cycle, ngx_worker_process_cycle,
-                          (void *) (intptr_t) i, "worker process", type);
+        if (ngx_spawn_process(cycle, ngx_worker_process_cycle,
+                              (void *) (intptr_t) i, "worker process", type)
+            == NGX_INVALID_PID)
+        {
+            break;
+        }

         ch.pid = ngx_processes[ngx_process_slot].pid;
         ch.slot = ngx_process_slot;
@@ -355,6 +370,8 @@

         ngx_pass_open_channel(cycle, &ch);
     }
+
+    return i;
 }

From kasei at kasei.im Fri Dec 4 04:29:51 2020
From: kasei at kasei.im (kasei at kasei.im)
Date: Fri, 04 Dec 2020 12:29:51 +0800
Subject: [PATCH] os/unix: don't stop old workers if reload failed
In-Reply-To: <85e647df56dbe4e1f87106620b848b8f36de4fc8.camel@kasei.im>
References: <85e647df56dbe4e1f87106620b848b8f36de4fc8.camel@kasei.im>
Message-ID: 

*Sorry for the format issue, resending it.

Hello,

We found that sometimes nginx may fail to start new worker processes during reconfiguration (the number of processes exceeds NGX_MAX_PROCESSES, for example). In that case, the master process still sends the QUIT signal to the old workers, which leads to a situation where no worker processes accept connections on the listening sockets.

We also found that there is a return code in os/win32's ngx_start_worker_processes function: it skips quitting the old workers if no worker processes were spawned. So I wrote this patch following the win32 implementation and tested it. Could you please check this patch to see if there is anything missed? Thanks very much.
# HG changeset patch # User Kasei Wang # Date 1607052987 -28800 # Fri Dec 04 11:36:27 2020 +0800 # Node ID 0711cc5f95b5fc2aad0b51328b382b5cfe08f8bb # Parent 90cc7194e993f8d722347e9f46a00f65dffc3935 os/unix: don't stop old workers if start new worker processes failed during reconfiguring. diff -r 90cc7194e993 -r 0711cc5f95b5 src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c Fri Nov 27 00:01:20 2020 +0300 +++ b/src/os/unix/ngx_process_cycle.c Fri Dec 04 11:36:27 2020 +0800 @@ -11,7 +11,7 @@ #include -static void ngx_start_worker_processes(ngx_cycle_t *cycle, ngx_int_t n, +static ngx_int_t ngx_start_worker_processes(ngx_cycle_t *cycle, ngx_int_t n, ngx_int_t type); static void ngx_start_cache_manager_processes(ngx_cycle_t *cycle, ngx_uint_t respawn); @@ -127,8 +127,13 @@ ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module); - ngx_start_worker_processes(cycle, ccf->worker_processes, - NGX_PROCESS_RESPAWN); + if (ngx_start_worker_processes(cycle, ccf->worker_processes, + NGX_PROCESS_RESPAWN) == 0) + { + ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, + "start worker processes failed"); + exit(2); + } ngx_start_cache_manager_processes(cycle, 0); ngx_new_binary = 0; @@ -231,8 +236,14 @@ ngx_cycle = cycle; ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module); - ngx_start_worker_processes(cycle, ccf->worker_processes, - NGX_PROCESS_JUST_RESPAWN); + /* TODO: close reuseport listening sockets not handled by worker */ + if (ngx_start_worker_processes(cycle, ccf->worker_processes, + NGX_PROCESS_JUST_RESPAWN) == 0) + { + ngx_log_error(NGX_LOG_WARN, cycle->log, 0, + "start worker processes failed during reconfiguring"); + continue; + } ngx_start_cache_manager_processes(cycle, 1); /* allow new processes to start */ @@ -332,7 +343,7 @@ } -static void +static ngx_int_t ngx_start_worker_processes(ngx_cycle_t *cycle, ngx_int_t n, ngx_int_t type) { ngx_int_t i; @@ -346,8 +357,12 @@ for (i = 0; i < n; i++) { - 
ngx_spawn_process(cycle, ngx_worker_process_cycle, - (void *) (intptr_t) i, "worker process", type); + if (ngx_spawn_process(cycle, ngx_worker_process_cycle, + (void *) (intptr_t) i, "worker process", type) + == NGX_INVALID_PID) + { + break; + } ch.pid = ngx_processes[ngx_process_slot].pid; ch.slot = ngx_process_slot; @@ -355,6 +370,8 @@ ngx_pass_open_channel(cycle, &ch); } + + return i; } From ru at nginx.com Mon Dec 7 22:44:17 2020 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 07 Dec 2020 22:44:17 +0000 Subject: [nginx] SSL: fixed SSL shutdown on lingering close. Message-ID: details: https://hg.nginx.org/nginx/rev/7efae6b4cfb0 branches: changeset: 7751:7efae6b4cfb0 user: Ruslan Ermilov date: Tue Dec 08 01:43:36 2020 +0300 description: SSL: fixed SSL shutdown on lingering close. Ensure c->recv is properly reset to ngx_recv if SSL_shutdown() blocks on writing. The bug had appeared in 554c6ae25ffc. diffstat: src/event/ngx_event_openssl.c | 4 ++++ src/http/ngx_http_request.c | 2 -- src/http/v2/ngx_http_v2.c | 2 -- 3 files changed, 4 insertions(+), 4 deletions(-) diffs (59 lines): diff -r 90cc7194e993 -r 7efae6b4cfb0 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Fri Nov 27 00:01:20 2020 +0300 +++ b/src/event/ngx_event_openssl.c Tue Dec 08 01:43:36 2020 +0300 @@ -2880,6 +2880,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) SSL_free(c->ssl->connection); c->ssl = NULL; + c->recv = ngx_recv; return NGX_OK; } @@ -2925,6 +2926,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) if (n == 1) { SSL_free(c->ssl->connection); c->ssl = NULL; + c->recv = ngx_recv; return NGX_OK; } @@ -2967,6 +2969,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) { SSL_free(c->ssl->connection); c->ssl = NULL; + c->recv = ngx_recv; return NGX_OK; } @@ -2977,6 +2980,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) SSL_free(c->ssl->connection); c->ssl = NULL; + c->recv = ngx_recv; return NGX_ERROR; } diff -r 90cc7194e993 -r 7efae6b4cfb0 
src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Fri Nov 27 00:01:20 2020 +0300 +++ b/src/http/ngx_http_request.c Tue Dec 08 01:43:36 2020 +0300 @@ -3397,8 +3397,6 @@ ngx_http_set_lingering_close(ngx_connect c->ssl->handler = ngx_http_set_lingering_close; return; } - - c->recv = ngx_recv; } #endif diff -r 90cc7194e993 -r 7efae6b4cfb0 src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Fri Nov 27 00:01:20 2020 +0300 +++ b/src/http/v2/ngx_http_v2.c Tue Dec 08 01:43:36 2020 +0300 @@ -739,8 +739,6 @@ ngx_http_v2_lingering_close(ngx_connecti c->ssl->handler = ngx_http_v2_lingering_close; return; } - - c->recv = ngx_recv; } #endif From cnewton at netflix.com Thu Dec 10 16:04:44 2020 From: cnewton at netflix.com (Chris Newton) Date: Thu, 10 Dec 2020 16:04:44 +0000 Subject: filesystem entries that are neither 'file' nor 'dir' can result in double ngx_close_file() if processed as FLV or MP4 Message-ID: Hello It has been noticed that when 'of' as returned by ngx_open_cached_file() is not is_file, but otherwise valid and also not is_dir, then both the ngx_http_flv_handler() and ngx_http_mp4_handler() functions will call ngx_close_file() immediately. However, the ngx_pool_cleanup_file() will still be called, leading to a duplicate ngx_close_file() being performed. It seems that these calls to ngx_close_file() should just be removed; eg., with the following. 
--- a/src/http/modules/ngx_http_mp4_module.c
+++ b/src/http/modules/ngx_http_mp4_module.c
@@ -522,10 +522,8 @@ ngx_http_mp4_handler(ngx_http_request_t *r)

     if (!of.is_file) {
-        if (ngx_close_file(of.fd) == NGX_FILE_ERROR) {
-            ngx_log_error(NGX_LOG_ALERT, log, ngx_errno,
-                          ngx_close_file_n " \"%s\" failed", path.data);
-        }
+        ngx_log_debug2(NGX_LOG_DEBUG_HTTP, log, 0,
+                       "%s: %V is not a file", __func__, &path);
         return NGX_DECLINED;
     }

--- a/src/http/modules/ngx_http_flv_module.c
+++ b/src/http/modules/ngx_http_flv_module.c
@@ -157,10 +157,8 @@ ngx_http_flv_handler(ngx_http_request_t *r)

     if (!of.is_file) {
-        if (ngx_close_file(of.fd) == NGX_FILE_ERROR) {
-            ngx_log_error(NGX_LOG_ALERT, log, ngx_errno,
-                          ngx_close_file_n " \"%s\" failed", path.data);
-        }
+        ngx_log_debug2(NGX_LOG_DEBUG_HTTP, log, 0,
+                       "%s: %V is not a file", __func__, &path);
         return NGX_DECLINED;
     }

TIA

Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cnewton at netflix.com Thu Dec 10 16:17:01 2020
From: cnewton at netflix.com (Chris Newton)
Date: Thu, 10 Dec 2020 16:17:01 +0000
Subject: proposed solution for ticket 686 (With some condition, ngx_palloc() function will alloc a illegal memory address)
Message-ID: 

Ticket 686 is marked as 'wontfix' as the constraints on the size of the memory pool are documented.

I'd like to suggest that the constraints be enforced by the code to prevent issues; e.g.,

--- a/src/core/ngx_palloc.c
+++ b/src/core/ngx_palloc.c
@@ -20,6 +20,12 @@ ngx_create_pool(size_t size, ngx_log_t *log)
 {
     ngx_pool_t  *p;

+    if (size < NGX_MIN_POOL_SIZE)
+        size = NGX_MIN_POOL_SIZE;
+
+    if (size % NGX_POOL_ALIGNMENT != 0)
+        size = ngx_align(size, NGX_POOL_ALIGNMENT);
+
     p = ngx_memalign(NGX_POOL_ALIGNMENT, size, log);
     if (p == NULL) {
         return NULL;

TIA

Chris
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Thu Dec 10 17:13:18 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Dec 2020 17:13:18 +0000 Subject: [nginx] Fixed parsing of absolute URIs with empty path (ticket #2079). Message-ID: details: https://hg.nginx.org/nginx/rev/8989fbd2f89a branches: changeset: 7752:8989fbd2f89a user: Maxim Dounin date: Thu Dec 10 20:09:30 2020 +0300 description: Fixed parsing of absolute URIs with empty path (ticket #2079). When the request line contains request-target in the absolute-URI form, it can contain path-empty instead of a single slash (see RFC 7230, RFC 3986). Previously, the ngx_http_parse_request_line() function only accepted empty path when there was no query string. With this change, non-empty query is also correctly handled. That is, request line "GET http://example.com?foo HTTP/1.1" is accepted and results in $uri "/" and $args "foo". Note that $request_uri remains "?foo", similarly to how spaces in URIs are handled. Providing "/?foo", similarly to how "/" is provided for "GET http://example.com HTTP/1.1", requires allocation. 
diffstat: src/http/ngx_http_parse.c | 17 +++++++++++++++++ src/http/ngx_http_request.c | 8 ++++++-- src/http/ngx_http_request.h | 3 +++ 3 files changed, 26 insertions(+), 2 deletions(-) diffs (79 lines): diff -r 7efae6b4cfb0 -r 8989fbd2f89a src/http/ngx_http_parse.c --- a/src/http/ngx_http_parse.c Tue Dec 08 01:43:36 2020 +0300 +++ b/src/http/ngx_http_parse.c Thu Dec 10 20:09:30 2020 +0300 @@ -380,6 +380,12 @@ ngx_http_parse_request_line(ngx_http_req r->uri_start = p; state = sw_after_slash_in_uri; break; + case '?': + r->uri_start = p; + r->args_start = p + 1; + r->empty_path_in_uri = 1; + state = sw_uri; + break; case ' ': /* * use single "/" from request line to preserve pointers, @@ -446,6 +452,13 @@ ngx_http_parse_request_line(ngx_http_req r->uri_start = p; state = sw_after_slash_in_uri; break; + case '?': + r->port_end = p; + r->uri_start = p; + r->args_start = p + 1; + r->empty_path_in_uri = 1; + state = sw_uri; + break; case ' ': r->port_end = p; /* @@ -1287,6 +1300,10 @@ ngx_http_parse_complex_uri(ngx_http_requ r->uri_ext = NULL; r->args_start = NULL; + if (r->empty_path_in_uri) { + *u++ = '/'; + } + ch = *p++; while (p <= r->uri_end) { diff -r 7efae6b4cfb0 -r 8989fbd2f89a src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Tue Dec 08 01:43:36 2020 +0300 +++ b/src/http/ngx_http_request.c Thu Dec 10 20:09:30 2020 +0300 @@ -1224,7 +1224,11 @@ ngx_http_process_request_uri(ngx_http_re r->uri.len = r->uri_end - r->uri_start; } - if (r->complex_uri || r->quoted_uri) { + if (r->complex_uri || r->quoted_uri || r->empty_path_in_uri) { + + if (r->empty_path_in_uri) { + r->uri.len++; + } r->uri.data = ngx_pnalloc(r->pool, r->uri.len + 1); if (r->uri.data == NULL) { @@ -1250,7 +1254,7 @@ ngx_http_process_request_uri(ngx_http_re r->unparsed_uri.len = r->uri_end - r->uri_start; r->unparsed_uri.data = r->uri_start; - r->valid_unparsed_uri = r->space_in_uri ? 0 : 1; + r->valid_unparsed_uri = (r->space_in_uri || r->empty_path_in_uri) ? 
0 : 1; if (r->uri_ext) { if (r->args_start) { diff -r 7efae6b4cfb0 -r 8989fbd2f89a src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h Tue Dec 08 01:43:36 2020 +0300 +++ b/src/http/ngx_http_request.h Thu Dec 10 20:09:30 2020 +0300 @@ -470,6 +470,9 @@ struct ngx_http_request_s { /* URI with " " */ unsigned space_in_uri:1; + /* URI with empty path */ + unsigned empty_path_in_uri:1; + unsigned invalid_header:1; unsigned add_uri_to_alias:1; From mdounin at mdounin.ru Thu Dec 10 17:13:21 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Dec 2020 17:13:21 +0000 Subject: [nginx] Removed extra allocation for r->uri. Message-ID: details: https://hg.nginx.org/nginx/rev/2fec22332ff4 branches: changeset: 7753:2fec22332ff4 user: Maxim Dounin date: Thu Dec 10 20:09:39 2020 +0300 description: Removed extra allocation for r->uri. The ngx_http_parse_complex_uri() function cannot make URI longer and does not null-terminate URI, so there is no need to allocate an extra byte. This allocation appears to be a leftover from changes in 461:a88a3e4e158f (0.1.5), where null-termination of r->uri and many other strings was removed. 
diffstat: src/http/ngx_http_request.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 8989fbd2f89a -r 2fec22332ff4 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Thu Dec 10 20:09:30 2020 +0300 +++ b/src/http/ngx_http_request.c Thu Dec 10 20:09:39 2020 +0300 @@ -1230,7 +1230,7 @@ ngx_http_process_request_uri(ngx_http_re r->uri.len++; } - r->uri.data = ngx_pnalloc(r->pool, r->uri.len + 1); + r->uri.data = ngx_pnalloc(r->pool, r->uri.len); if (r->uri.data == NULL) { ngx_http_close_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return NGX_ERROR; From mdounin at mdounin.ru Thu Dec 10 17:24:16 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Dec 2020 20:24:16 +0300 Subject: filesystem entries that are neither 'file' nor 'dir' can result in double ngx_close_file() if processed as FLV or MP4 In-Reply-To: References: Message-ID: <20201210172416.GD1147@mdounin.ru> Hello! On Thu, Dec 10, 2020 at 04:04:44PM +0000, Chris Newton wrote: > It has been noticed that when 'of' as returned by ngx_open_cached_file() is > not is_file, but otherwise valid and also not is_dir, then both the > ngx_http_flv_handler() and ngx_http_mp4_handler() functions will call > ngx_close_file() immediately. However, the ngx_pool_cleanup_file() will > still be called, leading to a duplicate ngx_close_file() being performed. > > It seems that these calls to ngx_close_file() should just be removed; eg., > with the following. Thanks for the report. This was an omission in the flv module during introduction of open_file_cache in 1454:f497ed7682a7. And later it was copied to the mp4 module. Indeed, removing the ngx_close_file() close call is the most simple solution, and that's what 1454:f497ed7682a7 does in the static module. 
# HG changeset patch # User Maxim Dounin # Date 1607620894 -10800 # Thu Dec 10 20:21:34 2020 +0300 # Node ID 09b25d66cf7e8fe1dc1c521867387ee828c7245e # Parent 2fec22332ff45b220b59e72266c5d0a622f21d15 Fixed double close of non-regular files in flv and mp4. With introduction of open_file_cache in 1454:f497ed7682a7, opening a file with ngx_open_cached_file() automatically adds a cleanup handler to close the file. As such, calling ngx_close_file() directly for non-regular files is no longer needed and will result in duplicate close() call. In 1454:f497ed7682a7 ngx_close_file() call for non-regular files was removed in the static module, but wasn't in the flv module. And the resulting incorrect code was later copied to the mp4 module. Fix is to remove the ngx_close_file() call from both modules. Reported by Chris Newton. diff --git a/src/http/modules/ngx_http_flv_module.c b/src/http/modules/ngx_http_flv_module.c --- a/src/http/modules/ngx_http_flv_module.c +++ b/src/http/modules/ngx_http_flv_module.c @@ -156,12 +156,6 @@ ngx_http_flv_handler(ngx_http_request_t } if (!of.is_file) { - - if (ngx_close_file(of.fd) == NGX_FILE_ERROR) { - ngx_log_error(NGX_LOG_ALERT, log, ngx_errno, - ngx_close_file_n " \"%s\" failed", path.data); - } - return NGX_DECLINED; } diff --git a/src/http/modules/ngx_http_mp4_module.c b/src/http/modules/ngx_http_mp4_module.c --- a/src/http/modules/ngx_http_mp4_module.c +++ b/src/http/modules/ngx_http_mp4_module.c @@ -521,12 +521,6 @@ ngx_http_mp4_handler(ngx_http_request_t } if (!of.is_file) { - - if (ngx_close_file(of.fd) == NGX_FILE_ERROR) { - ngx_log_error(NGX_LOG_ALERT, log, ngx_errno, - ngx_close_file_n " \"%s\" failed", path.data); - } - return NGX_DECLINED; } -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Dec 10 17:36:26 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Dec 2020 20:36:26 +0300 Subject: proposed solution for ticket 686 (With some condition, ngx_palloc() function will alloc a illegal memory 
address) In-Reply-To: References: Message-ID: <20201210173626.GE1147@mdounin.ru>

Hello!

On Thu, Dec 10, 2020 at 04:17:01PM +0000, Chris Newton wrote:

> Ticket 686 is marked as 'wontfix' as the constraints on the size of the
> memory pool are documented.
>
> I'd like to suggest that the constraints are enforced by the code to
> prevent issues. eg.,

Thanks for your suggestion.

This was considered previously, and the answer is no, as such enforcement introduces generally unneeded run-time code. Further, in your particular variant it masks bugs in the calling code instead of encouraging the authors to fix them.

Instead, consider introducing appropriate checking during configuration parsing if you provide your own pools with configurable sizes (and provide appropriate compiled-in sizes if there are any).

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Thu Dec 10 19:47:36 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 10 Dec 2020 22:47:36 +0300
Subject: [PATCH] os/unix: don't stop old workers if reload failed
In-Reply-To: 
References: <85e647df56dbe4e1f87106620b848b8f36de4fc8.camel@kasei.im>
Message-ID: <20201210194736.GG1147@mdounin.ru>

Hello!

On Fri, Dec 04, 2020 at 12:29:51PM +0800, kasei at kasei.im wrote:

> *Sorry for the format issue, resending it.
> Hello, We found sometimes nginx might failed to start new worker processes
> during reconfiguring, (the number of processes exceeds NGX_MAX_PROCESSES for
> example). In that case, the master process still send QUIT signal to old
> workers, leads to the situation that there is no worker processes accept socket
> from listening sockets.
> And we found actually there is a return code in os/win32's
> ngx_start_worker_processes funcation. It skip quiting workers if there is no
> worker processes be spawned. So I written this patch followed win32's
> implementation and tested it. Cloud you please check this patch to see if there
> is anything missed? Thanks very much.

Thanks for the patch.
The win32 implementation works this way because spawning new processes on Windows might easily fail for multiple reasons, as there is no fork(). On the other hand, this is not expected to ever happen on Unix systems, except maybe when hitting various limits, either nginx's own NGX_MAX_PROCESSES limit or system limits. That is, on Unix systems starting worker processes is more or less only expected to fail in case of severe misconfiguration.

Further, I'm not sure that the current win32 behaviour is a safe one, as the resulting state is inconsistent: the worker processes do not match the master configuration. This probably doesn't matter for win32, as starting worker processes is racy there anyway, but it might make things worse on Unix systems.

And, as your own TODO mentions, the approach "do not stop old worker processes if no new worker processes were started" won't prevent degraded service if some worker processes were not able to start and listening sockets with the reuseport option are used.

Could you please clarify - are you trying to solve some problem you are facing in practice? If yes, could you please provide some more details?

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Sat Dec 12 00:32:45 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 12 Dec 2020 00:32:45 +0000
Subject: [nginx] Fixed double close of non-regular files in flv and mp4.
Message-ID: 

details: https://hg.nginx.org/nginx/rev/7a55311b0dc3
branches:
changeset: 7754:7a55311b0dc3
user: Maxim Dounin
date: Fri Dec 11 13:42:07 2020 +0300
description: Fixed double close of non-regular files in flv and mp4.

With introduction of open_file_cache in 1454:f497ed7682a7, opening a file with ngx_open_cached_file() automatically adds a cleanup handler to close the file. As such, calling ngx_close_file() directly for non-regular files is no longer needed and will result in duplicate close() call.
In 1454:f497ed7682a7 ngx_close_file() call for non-regular files was removed in the static module, but wasn't in the flv module. And the resulting incorrect code was later copied to the mp4 module. Fix is to remove the ngx_close_file() call from both modules. Reported by Chris Newton. diffstat: src/http/modules/ngx_http_flv_module.c | 6 ------ src/http/modules/ngx_http_mp4_module.c | 6 ------ 2 files changed, 0 insertions(+), 12 deletions(-) diffs (32 lines): diff -r 2fec22332ff4 -r 7a55311b0dc3 src/http/modules/ngx_http_flv_module.c --- a/src/http/modules/ngx_http_flv_module.c Thu Dec 10 20:09:39 2020 +0300 +++ b/src/http/modules/ngx_http_flv_module.c Fri Dec 11 13:42:07 2020 +0300 @@ -156,12 +156,6 @@ ngx_http_flv_handler(ngx_http_request_t } if (!of.is_file) { - - if (ngx_close_file(of.fd) == NGX_FILE_ERROR) { - ngx_log_error(NGX_LOG_ALERT, log, ngx_errno, - ngx_close_file_n " \"%s\" failed", path.data); - } - return NGX_DECLINED; } diff -r 2fec22332ff4 -r 7a55311b0dc3 src/http/modules/ngx_http_mp4_module.c --- a/src/http/modules/ngx_http_mp4_module.c Thu Dec 10 20:09:39 2020 +0300 +++ b/src/http/modules/ngx_http_mp4_module.c Fri Dec 11 13:42:07 2020 +0300 @@ -521,12 +521,6 @@ ngx_http_mp4_handler(ngx_http_request_t } if (!of.is_file) { - - if (ngx_close_file(of.fd) == NGX_FILE_ERROR) { - ngx_log_error(NGX_LOG_ALERT, log, ngx_errno, - ngx_close_file_n " \"%s\" failed", path.data); - } - return NGX_DECLINED; } From mdounin at mdounin.ru Sat Dec 12 00:35:15 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 12 Dec 2020 03:35:15 +0300 Subject: filesystem entries that are neither 'file' nor 'dir' can result in double ngx_close_file() if processed as FLV or MP4 In-Reply-To: <20201210172416.GD1147@mdounin.ru> References: <20201210172416.GD1147@mdounin.ru> Message-ID: <20201212003515.GM1147@mdounin.ru> Hello! On Thu, Dec 10, 2020 at 08:24:16PM +0300, Maxim Dounin wrote: > Hello! 
> > On Thu, Dec 10, 2020 at 04:04:44PM +0000, Chris Newton wrote: > > > It has been noticed that when 'of' as returned by ngx_open_cached_file() is > > not is_file, but otherwise valid and also not is_dir, then both the > > ngx_http_flv_handler() and ngx_http_mp4_handler() functions will call > > ngx_close_file() immediately. However, the ngx_pool_cleanup_file() will > > still be called, leading to a duplicate ngx_close_file() being performed. > > > > It seems that these calls to ngx_close_file() should just be removed; eg., > > with the following. > > Thanks for the report. This was an omission in the flv module > during introduction of open_file_cache in 1454:f497ed7682a7. And > later it was copied to the mp4 module. > > Indeed, removing the ngx_close_file() close call is the most > simple solution, and that's what 1454:f497ed7682a7 does in the > static module. > > # HG changeset patch > # User Maxim Dounin > # Date 1607620894 -10800 > # Thu Dec 10 20:21:34 2020 +0300 > # Node ID 09b25d66cf7e8fe1dc1c521867387ee828c7245e > # Parent 2fec22332ff45b220b59e72266c5d0a622f21d15 > Fixed double close of non-regular files in flv and mp4. > > With introduction of open_file_cache in 1454:f497ed7682a7, opening a file > with ngx_open_cached_file() automatically adds a cleanup handler to close > the file. As such, calling ngx_close_file() directly for non-regular files > is no longer needed and will result in duplicate close() call. > > In 1454:f497ed7682a7 ngx_close_file() call for non-regular files was removed > in the static module, but wasn't in the flv module. And the resulting > incorrect code was later copied to the mp4 module. Fix is to remove the > ngx_close_file() call from both modules. > > Reported by Chris Newton. 
> > diff --git a/src/http/modules/ngx_http_flv_module.c b/src/http/modules/ngx_http_flv_module.c > --- a/src/http/modules/ngx_http_flv_module.c > +++ b/src/http/modules/ngx_http_flv_module.c > @@ -156,12 +156,6 @@ ngx_http_flv_handler(ngx_http_request_t > } > > if (!of.is_file) { > - > - if (ngx_close_file(of.fd) == NGX_FILE_ERROR) { > - ngx_log_error(NGX_LOG_ALERT, log, ngx_errno, > - ngx_close_file_n " \"%s\" failed", path.data); > - } > - > return NGX_DECLINED; > } > > diff --git a/src/http/modules/ngx_http_mp4_module.c b/src/http/modules/ngx_http_mp4_module.c > --- a/src/http/modules/ngx_http_mp4_module.c > +++ b/src/http/modules/ngx_http_mp4_module.c > @@ -521,12 +521,6 @@ ngx_http_mp4_handler(ngx_http_request_t > } > > if (!of.is_file) { > - > - if (ngx_close_file(of.fd) == NGX_FILE_ERROR) { > - ngx_log_error(NGX_LOG_ALERT, log, ngx_errno, > - ngx_close_file_n " \"%s\" failed", path.data); > - } > - > return NGX_DECLINED; > } > Committed after an internal review: http://hg.nginx.org/nginx/rev/7a55311b0dc3 Thanks. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Dec 15 14:46:55 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Dec 2020 14:46:55 +0000 Subject: [nginx] Updated OpenSSL used for win32 builds. Message-ID: details: https://hg.nginx.org/nginx/rev/7d6ba2a00e2f branches: changeset: 7755:7d6ba2a00e2f user: Maxim Dounin date: Tue Dec 15 16:49:24 2020 +0300 description: Updated OpenSSL used for win32 builds. 
diffstat: misc/GNUmakefile | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 7a55311b0dc3 -r 7d6ba2a00e2f misc/GNUmakefile --- a/misc/GNUmakefile Fri Dec 11 13:42:07 2020 +0300 +++ b/misc/GNUmakefile Tue Dec 15 16:49:24 2020 +0300 @@ -6,7 +6,7 @@ TEMP = tmp CC = cl OBJS = objs.msvc8 -OPENSSL = openssl-1.1.1h +OPENSSL = openssl-1.1.1i ZLIB = zlib-1.2.11 PCRE = pcre-8.44 From mdounin at mdounin.ru Tue Dec 15 14:46:58 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Dec 2020 14:46:58 +0000 Subject: [nginx] nginx-1.19.6-RELEASE Message-ID: details: https://hg.nginx.org/nginx/rev/f618488eb769 branches: changeset: 7756:f618488eb769 user: Maxim Dounin date: Tue Dec 15 17:41:39 2020 +0300 description: nginx-1.19.6-RELEASE diffstat: docs/xml/nginx/changes.xml | 49 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 49 insertions(+), 0 deletions(-) diffs (59 lines): diff -r 7d6ba2a00e2f -r f618488eb769 docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml Tue Dec 15 16:49:24 2020 +0300 +++ b/docs/xml/nginx/changes.xml Tue Dec 15 17:41:39 2020 +0300 @@ -5,6 +5,55 @@ + + + + +?????? "no live upstreams", +???? server ? ????? upstream ??? ??????? ??? down. + + +"no live upstreams" errors +if a "server" inside "upstream" block was marked as "down". + + + + + +??? ????????????? HTTPS ? ??????? ???????? ??? ????????? segmentation fault; +?????? ????????? ? 1.19.5. + + +a segmentation fault might occur in a worker process if HTTPS was used; +the bug had appeared in 1.19.5. + + + + + +nginx ????????? ?????? 400 ?? ??????? ???? +"GET http://example.com?args HTTP/1.0". + + +nginx returned the 400 response on requests like +"GET http://example.com?args HTTP/1.0". + + + + + +? ??????? ngx_http_flv_module ? ngx_http_mp4_module.
+Спасибо Chris Newton. +
+ +in the ngx_http_flv_module and ngx_http_mp4_module.
+Thanks to Chris Newton. +
+
+ +
+ + From mdounin at mdounin.ru Tue Dec 15 14:47:01 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Dec 2020 14:47:01 +0000 Subject: [nginx] release-1.19.6 tag Message-ID: details: https://hg.nginx.org/nginx/rev/82228f955153 branches: changeset: 7757:82228f955153 user: Maxim Dounin date: Tue Dec 15 17:41:39 2020 +0300 description: release-1.19.6 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r f618488eb769 -r 82228f955153 .hgtags --- a/.hgtags Tue Dec 15 17:41:39 2020 +0300 +++ b/.hgtags Tue Dec 15 17:41:39 2020 +0300 @@ -455,3 +455,4 @@ a7b46539f507e6c64efa0efda69ad60b6f4ffbce 3cbc2602325f0ac08917a4397d76f5155c34b7b1 release-1.19.3 dc0cc425fa63a80315f6efb68697cadb6626cdf2 release-1.19.4 8e5b068f761cd512d10c9671fbde0b568c1fd08b release-1.19.5 +f618488eb769e0ed74ef0d93cd118d2ad79ef94d release-1.19.6 From hawkxiang.cpp at gmail.com Wed Dec 16 06:41:49 2020 From: hawkxiang.cpp at gmail.com (=?UTF-8?B?5byg57+U?=) Date: Wed, 16 Dec 2020 14:41:49 +0800 Subject: accept4() support SOCK_CLOEXEC flag Message-ID: # HG changeset patch # User Zhang Xiang # Date 1608099124 -28800 # Wed Dec 16 14:12:04 2020 +0800 # Node ID a685d9c04acdb4ec71fd9f176415917c217af630 # Parent 82228f955153527fba12211f52bf102c90f38dfb Mail: accept4() support SOCK_CLOEXEC flag The close-on-exec flag on the new FD can be set via SOCK_CLOEXEC diff -r 82228f955153 -r a685d9c04acd auto/unix --- a/auto/unix Tue Dec 15 17:41:39 2020 +0300 +++ b/auto/unix Wed Dec 16 14:12:04 2020 +0800 @@ -510,7 +510,7 @@ ngx_feature_incs="#include " ngx_feature_path= ngx_feature_libs= -ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK)" +ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK | SOCK_CLOEXEC)" . 
auto/feature if [ $NGX_FILE_AIO = YES ]; then diff -r 82228f955153 -r a685d9c04acd src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c Tue Dec 15 17:41:39 2020 +0300 +++ b/src/event/ngx_event_accept.c Wed Dec 16 14:12:04 2020 +0800 @@ -57,7 +57,7 @@ #if (NGX_HAVE_ACCEPT4) if (use_accept4) { - s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK); + s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK | SOCK_CLOEXEC); } else { s = accept(lc->fd, &sa.sockaddr, &socklen); } -------------- next part -------------- An HTML attachment was scrubbed... URL: From hawkxiang.cpp at gmail.com Wed Dec 16 07:05:44 2020 From: hawkxiang.cpp at gmail.com (=?UTF-8?B?5byg57+U?=) Date: Wed, 16 Dec 2020 15:05:44 +0800 Subject: [NGINX] accept4() support SOCK_CLOEXEC flag Message-ID: # HG changeset patch # User Zhang Xiang # Date 1608099124 -28800 # Wed Dec 16 14:12:04 2020 +0800 # Node ID a685d9c04acdb4ec71fd9f176415917c217af630 # Parent 82228f955153527fba12211f52bf102c90f38dfb Event: accept4() support SOCK_CLOEXEC flag The close-on-exec flag on the new FD can be set via SOCK_CLOEXEC diff -r 82228f955153 -r a685d9c04acd auto/unix --- a/auto/unix Tue Dec 15 17:41:39 2020 +0300 +++ b/auto/unix Wed Dec 16 14:12:04 2020 +0800 @@ -510,7 +510,7 @@ ngx_feature_incs="#include " ngx_feature_path= ngx_feature_libs= -ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK)" +ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK | SOCK_CLOEXEC)" . 
auto/feature if [ $NGX_FILE_AIO = YES ]; then diff -r 82228f955153 -r a685d9c04acd src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c Tue Dec 15 17:41:39 2020 +0300 +++ b/src/event/ngx_event_accept.c Wed Dec 16 14:12:04 2020 +0800 @@ -57,7 +57,7 @@ #if (NGX_HAVE_ACCEPT4) if (use_accept4) { - s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK); + s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK | SOCK_CLOEXEC); } else { s = accept(lc->fd, &sa.sockaddr, &socklen); } -------------- next part -------------- An HTML attachment was scrubbed... URL: From vl at nginx.com Wed Dec 16 07:12:53 2020 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 16 Dec 2020 10:12:53 +0300 Subject: [NGINX] accept4() support SOCK_CLOEXEC flag In-Reply-To: References: Message-ID: 16.12.2020 10:05, ?? ?????: > # HG changeset patch > # User Zhang Xiang > > # Date 1608099124 -28800 > # ? ? ?Wed Dec 16 14:12:04 2020 +0800 > # Node ID a685d9c04acdb4ec71fd9f176415917c217af630 > # Parent ?82228f955153527fba12211f52bf102c90f38dfb > Event:?accept4() support SOCK_CLOEXEC flag > > The close-on-exec flag on the new FD can be set via SOCK_CLOEXEC > > diff -r 82228f955153 -r a685d9c04acd auto/unix > --- a/auto/unix Tue Dec 15 17:41:39 2020 +0300 > +++ b/auto/unix Wed Dec 16 14:12:04 2020 +0800 > @@ -510,7 +510,7 @@ > ?ngx_feature_incs="#include " > ?ngx_feature_path= > ?ngx_feature_libs= > -ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK)" > +ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK | SOCK_CLOEXEC)" > ?. auto/feature > > ?if [ $NGX_FILE_AIO = YES ]; then > diff -r 82228f955153 -r a685d9c04acd src/event/ngx_event_accept.c > --- a/src/event/ngx_event_accept.c ? ? ?Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/event/ngx_event_accept.c ? ? ?Wed Dec 16 14:12:04 2020 +0800 > @@ -57,7 +57,7 @@ > > ?#if (NGX_HAVE_ACCEPT4) > ? ? ? ? ?if (use_accept4) { > - ? ? ? ? ? ?s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK); > + ? ? ? ? ? 
?s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK | > SOCK_CLOEXEC); > ? ? ? ? ?} else { > ? ? ? ? ? ? ?s = accept(lc->fd, &sa.sockaddr, &socklen); > ? ? ? ? ?} > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > Thank you, but we don't need such sockets behaviour - we use exactly opposite to pass sockets to new process during binary upgrade (which is basically fork + exec) From vl at nginx.com Wed Dec 16 13:14:53 2020 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 16 Dec 2020 16:14:53 +0300 Subject: accept4() support SOCK_CLOEXEC flag In-Reply-To: References: Message-ID: <9be955c6-1570-d32b-5828-5b8257f20184@nginx.com> 16.12.2020 09:41, ?? ?????: > # HG changeset patch > # User Zhang Xiang > > # Date 1608099124 -28800 > # ? ? ?Wed Dec 16 14:12:04 2020 +0800 > # Node ID a685d9c04acdb4ec71fd9f176415917c217af630 > # Parent ?82228f955153527fba12211f52bf102c90f38dfb > Mail:?accept4() support SOCK_CLOEXEC flag > > The close-on-exec flag on the new FD can be set via SOCK_CLOEXEC > > diff -r 82228f955153 -r a685d9c04acd auto/unix > --- a/auto/unix Tue Dec 15 17:41:39 2020 +0300 > +++ b/auto/unix Wed Dec 16 14:12:04 2020 +0800 > @@ -510,7 +510,7 @@ > ?ngx_feature_incs="#include " > ?ngx_feature_path= > ?ngx_feature_libs= > -ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK)" > +ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK | SOCK_CLOEXEC)" > ?. auto/feature > > ?if [ $NGX_FILE_AIO = YES ]; then > diff -r 82228f955153 -r a685d9c04acd src/event/ngx_event_accept.c > --- a/src/event/ngx_event_accept.c ? ? ?Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/event/ngx_event_accept.c ? ? ?Wed Dec 16 14:12:04 2020 +0800 > @@ -57,7 +57,7 @@ > > ?#if (NGX_HAVE_ACCEPT4) > ? ? ? ? ?if (use_accept4) { > - ? ? ? ? ? ?s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK); > + ? ? ? ? ? 
?s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK | > SOCK_CLOEXEC); > ? ? ? ? ?} else { > ? ? ? ? ? ? ?s = accept(lc->fd, &sa.sockaddr, &socklen); > ? ? ? ? ?} > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > just realized you are talking about accepted sockets. Why do you think it is useful? Normally we don't fork workers and don't exec anything. From ranier.vf at gmail.com Wed Dec 16 14:01:54 2020 From: ranier.vf at gmail.com (Ranier Vilela) Date: Wed, 16 Dec 2020 11:01:54 -0300 Subject: accept4() support SOCK_CLOEXEC flag In-Reply-To: <9be955c6-1570-d32b-5828-5b8257f20184@nginx.com> References: <9be955c6-1570-d32b-5828-5b8257f20184@nginx.com> Message-ID: Em qua., 16 de dez. de 2020 ?s 10:14, Vladimir Homutov escreveu: > 16.12.2020 09:41, ?? ?????: > > # HG changeset patch > > # User Zhang Xiang > > > > # Date 1608099124 -28800 > > # Wed Dec 16 14:12:04 2020 +0800 > > # Node ID a685d9c04acdb4ec71fd9f176415917c217af630 > > # Parent 82228f955153527fba12211f52bf102c90f38dfb > > Mail: accept4() support SOCK_CLOEXEC flag > > > > The close-on-exec flag on the new FD can be set via SOCK_CLOEXEC > > > > diff -r 82228f955153 -r a685d9c04acd auto/unix > > --- a/auto/unix Tue Dec 15 17:41:39 2020 +0300 > > +++ b/auto/unix Wed Dec 16 14:12:04 2020 +0800 > > @@ -510,7 +510,7 @@ > > ngx_feature_incs="#include " > > ngx_feature_path= > > ngx_feature_libs= > > -ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK)" > > +ngx_feature_test="accept4(0, NULL, NULL, SOCK_NONBLOCK | SOCK_CLOEXEC)" > > . 
auto/feature > > > > if [ $NGX_FILE_AIO = YES ]; then > > diff -r 82228f955153 -r a685d9c04acd src/event/ngx_event_accept.c > > --- a/src/event/ngx_event_accept.c Tue Dec 15 17:41:39 2020 +0300 > > +++ b/src/event/ngx_event_accept.c Wed Dec 16 14:12:04 2020 +0800 > > @@ -57,7 +57,7 @@ > > > > #if (NGX_HAVE_ACCEPT4) > > if (use_accept4) { > > - s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK); > > + s = accept4(lc->fd, &sa.sockaddr, &socklen, SOCK_NONBLOCK | > > SOCK_CLOEXEC); > > } else { > > s = accept(lc->fd, &sa.sockaddr, &socklen); > > } > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > just realized you are talking about accepted sockets. > Why do you think it is useful? Normally we don't fork workers and don't > exec anything. > Sorry for meddling. While we wait for a response, I would like to include just for help. *O_CLOEXEC *(since Linux 2.6.23) Enable the close-on-exec flag for the new file descriptor. Specifying this flag permits a program to avoid additional fcntl(2) *F_SETFD *operations to set the *FD_CLOEXEC *flag. Note that the use of this flag is essential in some multithreaded programs, because using a separate fcntl(2) *F_SETFD *operation to set the *FD_CLOEXEC *flag does not suffice to avoid race conditions where one thread opens a file descriptor and attempts to set its close-on-exec flag using fcntl(2) at the same time as another thread does a fork(2) plus execve(2) . Depending on the order of execution, the race may lead to the file descriptor returned by *open*() being unintentionally leaked to the program executed by the child process created by fork(2) . (This kind of race is in principle possible for any system call that creates a file descriptor whose close-on-exec flag should be set, and various other Linux system calls provide an equivalent of the *O_CLOEXEC *flag to deal with this problem.) 
Ranier Vilela -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Wed Dec 16 20:28:44 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 16 Dec 2020 20:28:44 +0000 Subject: [njs] Version bump. Message-ID: details: https://hg.nginx.org/njs/rev/4f5feafc1afc branches: changeset: 1581:4f5feafc1afc user: Dmitry Volyntsev date: Wed Dec 16 20:27:27 2020 +0000 description: Version bump. diffstat: src/njs.h | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r e7d2b2d7f8bd -r 4f5feafc1afc src/njs.h --- a/src/njs.h Tue Dec 01 12:59:48 2020 +0000 +++ b/src/njs.h Wed Dec 16 20:27:27 2020 +0000 @@ -11,7 +11,7 @@ #include -#define NJS_VERSION "0.5.0" +#define NJS_VERSION "0.5.1" #include /* STDOUT_FILENO, STDERR_FILENO */ From xeioex at nginx.com Wed Dec 16 20:28:46 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 16 Dec 2020 20:28:46 +0000 Subject: [njs] Avoiding modification of vm->retval in njs_promise_alloc(). Message-ID: details: https://hg.nginx.org/njs/rev/1f862b9dec16 branches: changeset: 1582:1f862b9dec16 user: Dmitry Volyntsev date: Wed Dec 16 20:27:31 2020 +0000 description: Avoiding modification of vm->retval in njs_promise_alloc(). Alloc functions are not expected to modify existing values. 
diffstat: src/njs_promise.c | 1 - 1 files changed, 0 insertions(+), 1 deletions(-) diffs (11 lines): diff -r 4f5feafc1afc -r 1f862b9dec16 src/njs_promise.c --- a/src/njs_promise.c Wed Dec 16 20:27:27 2020 +0000 +++ b/src/njs_promise.c Wed Dec 16 20:27:31 2020 +0000 @@ -107,7 +107,6 @@ njs_promise_alloc(njs_vm_t *vm) njs_queue_init(&data->fulfill_queue); njs_queue_init(&data->reject_queue); - njs_set_promise(&vm->retval, promise); njs_set_data(&promise->value, data, 0); return promise; From xeioex at nginx.com Wed Dec 16 20:28:48 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 16 Dec 2020 20:28:48 +0000 Subject: [njs] Avoiding modification of vm->retval in njs_add_event(). Message-ID: details: https://hg.nginx.org/njs/rev/d8e94445f59b branches: changeset: 1583:d8e94445f59b user: Dmitry Volyntsev date: Wed Dec 16 20:27:43 2020 +0000 description: Avoiding modification of vm->retval in njs_add_event(). diffstat: src/njs_event.c | 2 -- src/njs_timer.c | 6 +++++- 2 files changed, 5 insertions(+), 3 deletions(-) diffs (28 lines): diff -r 1f862b9dec16 -r d8e94445f59b src/njs_event.c --- a/src/njs_event.c Wed Dec 16 20:27:31 2020 +0000 +++ b/src/njs_event.c Wed Dec 16 20:27:43 2020 +0000 @@ -62,8 +62,6 @@ njs_add_event(njs_vm_t *vm, njs_event_t return NJS_ERROR; } - njs_set_number(&vm->retval, vm->event_id - 1); - return NJS_OK; } diff -r 1f862b9dec16 -r d8e94445f59b src/njs_timer.c --- a/src/njs_timer.c Wed Dec 16 20:27:31 2020 +0000 +++ b/src/njs_timer.c Wed Dec 16 20:27:43 2020 +0000 @@ -68,7 +68,11 @@ njs_set_timer(njs_vm_t *vm, njs_value_t return NJS_ERROR; } - return njs_add_event(vm, event); + if (njs_add_event(vm, event) == NJS_OK) { + njs_set_number(&vm->retval, vm->event_id - 1); + } + + return NJS_OK; memory_error: From kasei at kasei.im Thu Dec 17 07:20:19 2020 From: kasei at kasei.im (kasei at kasei.im) Date: Thu, 17 Dec 2020 15:20:19 +0800 Subject: [PATCH] os/unix: don't stop old workers if reload failed In-Reply-To: 
<20201210194736.GG1147@mdounin.ru> References: <85e647df56dbe4e1f87106620b848b8f36de4fc8.camel@kasei.im> <20201210194736.GG1147@mdounin.ru> Message-ID: <6ec7021b040c3160acbfd382c281e0a78cfee948.camel@kasei.im> ? 2020-12-10???? 22:47 +0300?Maxim Dounin??? > Hello! > > On Fri, Dec 04, 2020 at 12:29:51PM +0800, kasei at kasei.im?wrote: > > > *Sorry for the format issue, resending it. > > ? Hello, We found sometimes nginx might failed to start new worker > > processes > > during reconfiguring, (the number of processes exceeds > > NGX_MAX_PROCESSES for > > example). In that case, the master process still send QUIT signal > > to old > > workers, leads to the situation that there is no worker processes > > accept socket > > from listening sockets. > > ? And we found actually there is a return code in os/win32's > > ngx_start_worker_processes funcation. It skip quiting workers if > > there is no > > worker processes be spawned. So I written this patch followed > > win32's > > implementation and tested it. Cloud you please check this patch to > > see if there > > is anything missed? Thanks very much. > > Thanks for the patch. > > The win32 implementation works this way because spawning new > processes on Windows might easily fail for multiple reasons as > there is no fork().? On the other hand, this is not expected to > ever happen on Unix systems, except may be hitting variouos > limits, either nginx own NGX_MAX_PROCESSES limit or system limits.? > That is, on Unix systems starting worker processes more or less is > only expected to fail in case of severe misconfiguration. > > Further, I'm not sure that current win32 behaviour is a safe one, > as resulting state is inconsistent: worker processes does not > match master configuration.? This probably doesn't mater for win32 > as starting worker processes is anyway racy, but might make things > worse on Unix systems. 
> > And, as your own TODO mentions, the approach "do not stop old > worker processes if no new worker processes were started" won't > prevent degraded service if some worker processes were not able to > start and listening sockets with the reuseport option are used. > > Could you please clarify - are you trying to solve some problem > you are facing in practice?? If yes, could you please provide some > more details? > Hello, thanks for your reply! Yes, I am trying to solve a problem in a production environment. In short, sometimes we reload nginx at a high frequency, like every 2-3 minutes, and then some long-lived connections may hold many shutting-down workers. If the number of existing workers exceeds NGX_MAX_PROCESSES, nginx can't spawn new workers. Yes, I understand it's an abuse of nginx, and the frequency of reloading should be reduced. But I'm thinking it might be better to let nginx keep working with the old config (as if the reload had failed) instead of stopping accepting new connections. From mdounin at mdounin.ru Thu Dec 17 13:27:48 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 Dec 2020 16:27:48 +0300 Subject: [PATCH] os/unix: don't stop old workers if reload failed In-Reply-To: <6ec7021b040c3160acbfd382c281e0a78cfee948.camel@kasei.im> References: <85e647df56dbe4e1f87106620b848b8f36de4fc8.camel@kasei.im> <20201210194736.GG1147@mdounin.ru> <6ec7021b040c3160acbfd382c281e0a78cfee948.camel@kasei.im> Message-ID: <20201217132748.GB1147@mdounin.ru> Hello! On Thu, Dec 17, 2020 at 03:20:19PM +0800, kasei at kasei.im wrote: > ? 2020-12-10???? 22:47 +0300?Maxim Dounin??? > > Hello! > > > > On Fri, Dec 04, 2020 at 12:29:51PM +0800, kasei at kasei.im?wrote: > > > > > *Sorry for the format issue, resending it. > > > ? Hello, We found sometimes nginx might failed to start new worker > > > processes > > > during reconfiguring, (the number of processes exceeds > > > NGX_MAX_PROCESSES for > > > example).
In that case, the master process still send QUIT signal > > > to old > > > workers, leads to the situation that there is no worker processes > > > accept socket > > > from listening sockets. > > > ? And we found actually there is a return code in os/win32's > > > ngx_start_worker_processes funcation. It skip quiting workers if > > > there is no > > > worker processes be spawned. So I written this patch followed > > > win32's > > > implementation and tested it. Cloud you please check this patch to > > > see if there > > > is anything missed? Thanks very much. > > > > Thanks for the patch. > > > > The win32 implementation works this way because spawning new > > processes on Windows might easily fail for multiple reasons as > > there is no fork().? On the other hand, this is not expected to > > ever happen on Unix systems, except may be hitting variouos > > limits, either nginx own NGX_MAX_PROCESSES limit or system limits.? > > That is, on Unix systems starting worker processes more or less is > > only expected to fail in case of severe misconfiguration. > > > > Further, I'm not sure that current win32 behaviour is a safe one, > > as resulting state is inconsistent: worker processes does not > > match master configuration.? This probably doesn't mater for win32 > > as starting worker processes is anyway racy, but might make things > > worse on Unix systems. > > > > And, as your own TODO mentions, the approach "do not stop old > > worker processes if no new worker processes were started" won't > > prevent degraded service if some worker processes were not able to > > start and listening sockets with the reuseport option are used. > > > > Could you please clarify - are you trying to solve some problem > > you are facing in practice?? If yes, could you please provide some > > more details? > > > Hello, thanks for your reply!? > > Yes, I am tring slove a problem in production environment. 
In short, > sometimes we reload nginx in a high frquency like 2-3 minutes, then > some long connections may holds many shutting down workers. If the > number of existing workers exceeds NGX_MAX_PROCESSES, nginx can't spawn > new workers. Yes, I understand it's an abuse of nginx, and the > frequency of reloading should be reduced. But I'm thinking it might > better to let nginx works with old config (equals reloading failed) > instead of stop accpeting new connections. Ok, thanks for the details. As previously suggested, it looks more like a misconfiguration to me. And if we want to have nginx itself prevent it from happening, a better solution might be to complain and fail earlier, before applying the new configuration. For example, nginx might refuse to reload if there are more worker processes configured than free process slots available, similarly to how it currently refuses to upgrade if the previous binary upgrade isn't complete yet. I'm not sure it's worth the effort though. In your particular case it may make sense to avoid reloading nginx if some number of previous reloads weren't complete yet. E.g., you may want to modify your reload scripts (or procedures, if reloading is manual) to avoid reloading nginx if there are, say, more than 100 worker processes still running. Alternatively, you may want to configure worker_shutdown_timeout (http://nginx.org/r/worker_shutdown_timeout) to a reasonable value. It will make sure that old worker processes are terminated in a reasonable time even if there are active connections. -- Maxim Dounin http://mdounin.ru/ From goodlord at gmail.com Mon Dec 21 13:46:23 2020 From: goodlord at gmail.com (Surinder Sund) Date: Mon, 21 Dec 2020 19:16:23 +0530 Subject: NGINX-QUIC: OPENSSL_internal:NO_SUPPORTED_VERSIONS_ENABLED In-Reply-To: References: Message-ID: I'm trying to get NGINX QUIC to work on a fresh install of Ubuntu 20.04.
But I'm getting this error: **1 SSL_do_handshake() failed (SSL: error:10000118:SSL routines:OPENSSL_internal:NO_SUPPORTED_VERSIONS_ENABLED)* Looks like some issue with the way Boringssl is set up, or being used by Nginx? HOW I BUILT BORINGSSL cd boringssl; mkdir build ; cd build ; cmake -GNinja .. ninja NGINX DETAILS *~/nginx-quic# nginx -V* nginx version: nginx/1.19.6 built by gcc 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04) built with OpenSSL 1.1.1 (compatible; BoringSSL) (running with BoringSSL) TLS SNI support enabled configure arguments: --with-debug --with-http_v3_module --with-cc-opt=-I../boringssl/include --with-ld-opt='-L../boringssl/build/ssl -L../boringssl/build/crypto' --with-http_quic_module --with-stream_quic_module --with-http_image_filter_module --with-http_sub_module --with-stream --add-module=/usr/local/src/ngx_brotli --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid HOW I BUILT NGINX QUIC: cd ~/nginx-quic ; ./auto/configure --with-debug --with-http_v3_module \ --with-cc-opt="-I../boringssl/include" \ --with-ld-opt="-L../boringssl/build/ssl \ -L../boringssl/build/crypto" \ --with-http_quic_module --with-stream_quic_module --with-http_image_filter_module --with-http_sub_module --with-stream --add-module=/usr/local/src/ngx_brotli --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid MY NGINX BUILD CONFIGURATION SUMMARY: Configuration summary + using system PCRE library + using system OpenSSL library + using system zlib library nginx path prefix: "/etc/nginx" nginx binary file: "/usr/sbin/nginx" nginx modules path: "/usr/lib/nginx/modules" nginx configuration prefix: "/etc/nginx" nginx configuration file: "/etc/nginx/nginx.conf" nginx pid file: "/var/run/nginx.pid" nginx error log 
file: "/var/log/nginx/error.log" nginx http access log file: "/etc/nginx/logs/access.log" nginx http client request body temporary files: "client_body_temp" nginx http proxy temporary files: "proxy_temp" nginx http fastcgi temporary files: "fastcgi_temp" nginx http uwsgi temporary files: "uwsgi_temp" nginx http scgi temporary files: "scgi_temp" MY SITE CONFIGURATION listen 80; listen [::]:80; listen 443 ssl http2 fastopen=150; listen [::]:443 ipv6only=on ssl fastopen=150; include snippets/ssl-params.conf; server_name blah.blah; root /var/wordpress; index index.html index.htm index.php; access_log /var/log/nginx/xx.log; error_log /var/log/nginx/xx-error_log; ssl_early_data on; listen 443 http3 reuseport; listen [::]:443 http3 reuseport; add_header Alt-Svc '$http3=":8443"; ma=86400'; *in nginx.conf I've added this:* ssl_protocols TLSv1.3; #disabled 1.1 & 1.2 UDP is open on port 441, I've double checked this from the outside. So it's not a port issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From goodlord at gmail.com Mon Dec 21 14:41:34 2020 From: goodlord at gmail.com (Surinder Sund) Date: Mon, 21 Dec 2020 20:11:34 +0530 Subject: NGINX-QUIC: OPENSSL_internal:NO_SUPPORTED_VERSIONS_ENABLED In-Reply-To: References: Message-ID: forgot to add that this affects only http3 requests [I've tested from more than one machine and multiple clients, including cURL and FF] http2 request work fine with no change in configuration. On Mon, Dec 21, 2020 at 7:16 PM Surinder Sund wrote: > I'm trying to get NGINX QUIC to work on a fresh install of Ubuntu 20.04. > > But I'm getting this error: > > **1 SSL_do_handshake() failed (SSL: error:10000118:SSL > routines:OPENSSL_internal:NO_SUPPORTED_VERSIONS_ENABLED)* > > Looks like some issue with the way Boringssl is set up, or being used by > Nginx? > > > HOW I BUILT BORINGSSL > > cd boringssl; mkdir build ; cd build ; cmake -GNinja .. 
> ninja > > NGINX DETAILS > > *~/nginx-quic# nginx -V* > > nginx version: nginx/1.19.6 > built by gcc 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04) > built with OpenSSL 1.1.1 (compatible; BoringSSL) (running with BoringSSL) > TLS SNI support enabled > configure arguments: --with-debug --with-http_v3_module > --with-cc-opt=-I../boringssl/include > --with-ld-opt='-L../boringssl/build/ssl -L../boringssl/build/crypto' > --with-http_quic_module --with-stream_quic_module > --with-http_image_filter_module --with-http_sub_module --with-stream > --add-module=/usr/local/src/ngx_brotli --prefix=/etc/nginx > --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --pid-path=/var/run/nginx.pid > > > HOW I BUILT NGINX QUIC: > > cd ~/nginx-quic ; > ./auto/configure --with-debug --with-http_v3_module \ > --with-cc-opt="-I../boringssl/include" \ > --with-ld-opt="-L../boringssl/build/ssl \ > -L../boringssl/build/crypto" \ > --with-http_quic_module --with-stream_quic_module > --with-http_image_filter_module --with-http_sub_module --with-stream > --add-module=/usr/local/src/ngx_brotli --prefix=/etc/nginx > --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid > > > MY NGINX BUILD CONFIGURATION SUMMARY: > > Configuration summary > + using system PCRE library > + using system OpenSSL library > + using system zlib library > > nginx path prefix: "/etc/nginx" > nginx binary file: "/usr/sbin/nginx" > nginx modules path: "/usr/lib/nginx/modules" > nginx configuration prefix: "/etc/nginx" > nginx configuration file: "/etc/nginx/nginx.conf" > nginx pid file: "/var/run/nginx.pid" > nginx error log file: "/var/log/nginx/error.log" > nginx http access log file: "/etc/nginx/logs/access.log" > nginx http client request body temporary files: "client_body_temp" > nginx http proxy temporary files: 
"proxy_temp" > nginx http fastcgi temporary files: "fastcgi_temp" > nginx http uwsgi temporary files: "uwsgi_temp" > nginx http scgi temporary files: "scgi_temp" > > > > > MY SITE CONFIGURATION > > > listen 80; > listen [::]:80; > listen 443 ssl http2 fastopen=150; > listen [::]:443 ipv6only=on ssl fastopen=150; > include snippets/ssl-params.conf; > server_name blah.blah; > root /var/wordpress; > index index.html index.htm index.php; > access_log /var/log/nginx/xx.log; > error_log /var/log/nginx/xx-error_log; > ssl_early_data on; > listen 443 http3 reuseport; > listen [::]:443 http3 reuseport; > add_header Alt-Svc '$http3=":8443"; ma=86400'; > > > *in nginx.conf I've added this:* > > ssl_protocols TLSv1.3; #disabled 1.1 & 1.2 > > > UDP is open on port 441, I've double checked this from the outside. So > it's not a port issue. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From triptothefuture.cs at gmail.com Mon Dec 21 14:54:54 2020 From: triptothefuture.cs at gmail.com (M L) Date: Mon, 21 Dec 2020 20:54:54 +0600 Subject: Request Counter Clarification Message-ID: Dear NGINX community, I am developing an NGINX module which would check the contents of the request and if the key components found, would block it. Currently, it seems to be working correctly, but I would like to clarify some parts and make sure that I am not hard-coding anything. So, the question is mainly about the request counter. During the execution of the request handler (which is registered on the HTTP_REWRITE_PHASE), the request counter is kept as it is. But once the handler finishes the request processing, the counter is changed to 1. But changing the counter to 1 does not seem like a right decision, as many other modules more often decrease it in the post_handler or call the "finalize request" function. However, the use of "finalize" cannot be implemented, as neither connection, nor request should not be finalized after the handler execution. 
Instead, the request needs to be handed over to the other phase handlers
(return NGX_DECLINED). As for decrementing the counter in the post_handler
of ngx_http_read_client_request_body, under heavy load that results in
segfaults. Finally, leaving the counter unchanged throughout the process
leads to memory leaks. Therefore, the value assignment described above was
implemented, but perhaps there are better ways of handling the request
counter? And why can a change in the request counter cause a segfault in
the first place?

With best regards,
doughnut
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From jonnybarnes at gmail.com Mon Dec 21 16:53:57 2020
From: jonnybarnes at gmail.com (Jonny Barnes)
Date: Mon, 21 Dec 2020 16:53:57 +0000
Subject: NGINX-QUIC: OPENSSL_internal:NO_SUPPORTED_VERSIONS_ENABLED
In-Reply-To: 
References: 
Message-ID: 
I think your Alt-Svc header should be pointing to port 443, not 8443

On Mon, 21 Dec 2020 at 14:41, Surinder Sund wrote:

> forgot to add that this affects only http3 requests [I've tested from more
> than one machine and multiple clients, including cURL and FF]
>
> http2 requests work fine with no change in configuration.
>
> On Mon, Dec 21, 2020 at 7:16 PM Surinder Sund wrote:
>
>> I'm trying to get NGINX QUIC to work on a fresh install of Ubuntu 20.04.
>>
>> But I'm getting this error:
>>
>> **1 SSL_do_handshake() failed (SSL: error:10000118:SSL
>> routines:OPENSSL_internal:NO_SUPPORTED_VERSIONS_ENABLED)*
>>
>> Looks like some issue with the way BoringSSL is set up, or being used by
>> Nginx?
>>
>> HOW I BUILT BORINGSSL
>>
>> cd boringssl; mkdir build ; cd build ; cmake -GNinja ..
>> ninja
>>
>> [...]
>>
>> UDP is open on port 441, I've double checked this from the outside. So
>> it's not a port issue.
>>
_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From goodlord at gmail.com Tue Dec 22 13:08:25 2020
From: goodlord at gmail.com (Surinder Sund)
Date: Tue, 22 Dec 2020 18:38:25 +0530
Subject: NGINX-QUIC: OPENSSL_internal:NO_SUPPORTED_VERSIONS_ENABLED
In-Reply-To: 
References: 
Message-ID: 
Thank You Johny.

I fixed that (In fact, I'd fixed it in the trial machine earlier, but when
I restored a backup, it came back in).

Unfortunately, the error still remains.

Pls see the picture below. I can confirm that the traffic is hitting
443/UDP, but nothing is being returned.

https://drive.google.com/file/d/1knHKb_jUcjdY71wCz-w1TG4QupxH9CN3/view?usp=sharing

[image: image.png]

Looks like no cigar for me yet.
On Mon, Dec 21, 2020 at 10:24 PM Jonny Barnes wrote:

> I think your Alt-Svc header should be pointing to port 443, not 8443
>
> On Mon, 21 Dec 2020 at 14:41, Surinder Sund wrote:
>
>> forgot to add that this affects only http3 requests [I've tested from
>> more than one machine and multiple clients, including cURL and FF]
>>
>> http2 requests work fine with no change in configuration.
>>
>> On Mon, Dec 21, 2020 at 7:16 PM Surinder Sund wrote:
>>
>>> I'm trying to get NGINX QUIC to work on a fresh install of Ubuntu 20.04.
>>>
>>> But I'm getting this error:
>>>
>>> **1 SSL_do_handshake() failed (SSL: error:10000118:SSL
>>> routines:OPENSSL_internal:NO_SUPPORTED_VERSIONS_ENABLED)*
>>>
>>> [...]
>>>
>>> UDP is open on port 441, I've double checked this from the outside. So
>>> it's not a port issue.
>>>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 34331 bytes
Desc: not available
URL: 
From alexander.borisov at nginx.com Tue Dec 22 15:37:41 2020
From: alexander.borisov at nginx.com (Alexander Borisov)
Date: Tue, 22 Dec 2020 15:37:41 +0000
Subject: [njs] Fixed encoding matching for base64url in String.bytesFrom().
Message-ID: 
details:   https://hg.nginx.org/njs/rev/92c0493b2aff
branches:  
changeset: 1584:92c0493b2aff
user:      Disconnect3d 
date:      Tue Dec 22 13:27:01 2020 +0100
description:
Fixed encoding matching for base64url in String.bytesFrom().

This closes #363 PR on GitHub.
diffstat:

 src/njs_string.c         |  2 +-
 src/test/njs_unit_test.c |  3 +++
 2 files changed, 4 insertions(+), 1 deletions(-)

diffs (25 lines):

diff -r d8e94445f59b -r 92c0493b2aff src/njs_string.c
--- a/src/njs_string.c	Wed Dec 16 20:27:43 2020 +0000
+++ b/src/njs_string.c	Tue Dec 22 13:27:01 2020 +0100
@@ -1753,7 +1753,7 @@ njs_string_bytes_from_string(njs_vm_t *v
     } else if (enc.length == 6 && memcmp(enc.start, "base64", 6) == 0) {
         return njs_string_decode_base64(vm, &vm->retval, &str);

-    } else if (enc.length == 9 && memcmp(enc.start, "base64url", 6) == 0) {
+    } else if (enc.length == 9 && memcmp(enc.start, "base64url", 9) == 0) {
         return njs_string_decode_base64url(vm, &vm->retval, &str);
     }

diff -r d8e94445f59b -r 92c0493b2aff src/test/njs_unit_test.c
--- a/src/test/njs_unit_test.c	Wed Dec 16 20:27:43 2020 +0000
+++ b/src/test/njs_unit_test.c	Tue Dec 22 13:27:01 2020 +0100
@@ -9016,6 +9016,9 @@ static njs_unit_test_t  njs_test[] =
     { njs_str("String.bytesFrom('QUJDRA#', 'base64url')"),
       njs_str("ABCD") },

+    { njs_str("String.bytesFrom('QUJDRA#', 'base64lol')"),
+      njs_str("TypeError: Unknown encoding: \"base64lol\"") },
+
     { njs_str("encodeURI.name"),
       njs_str("encodeURI")},

From jonnybarnes at gmail.com Tue Dec 22 17:04:38 2020
From: jonnybarnes at gmail.com (Jonny Barnes)
Date: Tue, 22 Dec 2020 17:04:38 +0000
Subject: NGINX-QUIC: OPENSSL_internal:NO_SUPPORTED_VERSIONS_ENABLED
In-Reply-To: 
References: 
Message-ID: 
Do you have a firewall setup on the server to only allow traffic on 443 if
it's tcp traffic? Rule needs to be added for udp as well

On Tue, 22 Dec 2020 at 13:08, Surinder Sund wrote:

> Thank You Johny.
>
> I fixed that (In fact, I'd fixed it in the trial machine earlier, but when
> I restored a backup, it came back in).
>
> Unfortunately, the error still remains.
>
> Pls see the picture below. I can confirm that the traffic is hitting
> 443/UDP, but nothing is being returned.
>
> https://drive.google.com/file/d/1knHKb_jUcjdY71wCz-w1TG4QupxH9CN3/view?usp=sharing
>
> [image: image.png]
>
> Looks like no cigar for me yet.
>
> [...]
>
_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png Type: image/png Size: 34331 bytes Desc: not available URL: From xeioex at nginx.com Wed Dec 23 11:28:13 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 23 Dec 2020 11:28:13 +0000 Subject: [njs] Stream: simplified vm events handling. Message-ID: details: https://hg.nginx.org/njs/rev/1e5f3455d2d1 branches: changeset: 1585:1e5f3455d2d1 user: Dmitry Volyntsev date: Wed Dec 23 11:27:50 2020 +0000 description: Stream: simplified vm events handling. No functional changes. diffstat: nginx/ngx_stream_js_module.c | 133 +++++++++++++++--------------------------- 1 files changed, 47 insertions(+), 86 deletions(-) diffs (206 lines): diff -r 92c0493b2aff -r 1e5f3455d2d1 nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Tue Dec 22 13:27:01 2020 +0100 +++ b/nginx/ngx_stream_js_module.c Wed Dec 23 11:27:50 2020 +0000 @@ -84,10 +84,8 @@ static ngx_int_t ngx_stream_js_init_vm(n static void ngx_stream_js_drop_events(ngx_stream_js_ctx_t *ctx); static void ngx_stream_js_cleanup_ctx(void *data); static void ngx_stream_js_cleanup_vm(void *data); -static njs_int_t ngx_stream_js_buffer_arg(ngx_stream_session_t *s, - njs_value_t *buffer, ngx_uint_t data_type); -static njs_int_t ngx_stream_js_flags_arg(ngx_stream_session_t *s, - njs_value_t *flags); +static njs_int_t ngx_stream_js_run_event(ngx_stream_session_t *s, + ngx_stream_js_ctx_t *ctx, ngx_stream_js_ev_t *event); static njs_vm_event_t *ngx_stream_js_event(ngx_stream_session_t *s, njs_str_t *event); @@ -429,7 +427,6 @@ ngx_stream_js_phase_handler(ngx_stream_s njs_int_t ret; ngx_int_t rc; ngx_connection_t *c; - ngx_stream_js_ev_t *event; ngx_stream_js_ctx_t *ctx; if (name->len == 0) { @@ -462,26 +459,14 @@ ngx_stream_js_phase_handler(ngx_stream_s } } - event = &ctx->events[NGX_JS_EVENT_UPLOAD]; - - if (event->ev != NULL) { - ret = ngx_stream_js_buffer_arg(s, njs_value_arg(&ctx->args[1]), - event->data_type); - if (ret != NJS_OK) { - goto exception; - } + ret = ngx_stream_js_run_event(s, ctx, 
&ctx->events[NGX_JS_EVENT_UPLOAD]); + if (ret != NJS_OK) { + njs_vm_retval_string(ctx->vm, &exception); - ret = ngx_stream_js_flags_arg(s, njs_value_arg(&ctx->args[2])); - if (ret != NJS_OK) { - goto exception; - } + ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %*s", + exception.length, exception.start); - njs_vm_post_event(ctx->vm, event->ev, njs_value_arg(&ctx->args[1]), 2); - - rc = njs_vm_run(ctx->vm); - if (rc == NJS_ERROR) { - goto exception; - } + return NGX_ERROR; } if (njs_vm_pending(ctx->vm)) { @@ -497,15 +482,6 @@ ngx_stream_js_phase_handler(ngx_stream_s rc); return rc; - -exception: - - njs_vm_retval_string(ctx->vm, &exception); - - ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %*s", - exception.length, exception.start); - - return NGX_ERROR; } @@ -567,23 +543,14 @@ ngx_stream_js_body_filter(ngx_stream_ses event = ngx_stream_event(from_upstream); if (event->ev != NULL) { - ret = ngx_stream_js_buffer_arg(s, njs_value_arg(&ctx->args[1]), - event->data_type); - if (ret != NJS_OK) { - goto exception; - } - - ret = ngx_stream_js_flags_arg(s, njs_value_arg(&ctx->args[2])); + ret = ngx_stream_js_run_event(s, ctx, event); if (ret != NJS_OK) { - goto exception; - } + njs_vm_retval_string(ctx->vm, &exception); - njs_vm_post_event(ctx->vm, event->ev, - njs_value_arg(&ctx->args[1]), 2); + ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %*s", + exception.length, exception.start); - rc = njs_vm_run(ctx->vm); - if (rc == NJS_ERROR) { - goto exception; + return NGX_ERROR; } ctx->buf->pos = ctx->buf->last; @@ -616,15 +583,6 @@ ngx_stream_js_body_filter(ngx_stream_ses } return rc; - -exception: - - njs_vm_retval_string(ctx->vm, &exception); - - ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %*s", - exception.length, exception.start); - - return NGX_ERROR; } @@ -786,18 +744,23 @@ ngx_stream_js_cleanup_vm(void *data) static njs_int_t -ngx_stream_js_buffer_arg(ngx_stream_session_t *s, njs_value_t *buffer, - ngx_uint_t data_type) 
+ngx_stream_js_run_event(ngx_stream_session_t *s, ngx_stream_js_ctx_t *ctx, + ngx_stream_js_ev_t *event) { - size_t len; - u_char *p; - ngx_buf_t *b; - ngx_connection_t *c; - ngx_stream_js_ctx_t *ctx; + size_t len; + u_char *p; + njs_int_t ret; + ngx_buf_t *b; + ngx_connection_t *c; + njs_opaque_value_t last_key, last; + + static const njs_str_t last_str = njs_str("last"); + + if (event->ev == NULL) { + return NJS_OK; + } c = s->connection; - ctx = ngx_stream_get_module_ctx(s, ngx_stream_js_module); - b = ctx->filter ? ctx->buf : c->buffer; len = b ? b->last - b->pos : 0; @@ -812,34 +775,32 @@ ngx_stream_js_buffer_arg(ngx_stream_sess ngx_memcpy(p, b->pos, len); } - return ngx_js_prop(ctx->vm, data_type, buffer, p, len); -} - - -static njs_int_t -ngx_stream_js_flags_arg(ngx_stream_session_t *s, njs_value_t *flags) -{ - ngx_buf_t *b; - ngx_connection_t *c; - njs_opaque_value_t last_key; - njs_opaque_value_t values[1]; - ngx_stream_js_ctx_t *ctx; - - static const njs_str_t last_str = njs_str("last"); - - ctx = ngx_stream_get_module_ctx(s, ngx_stream_js_module); + ret = ngx_js_prop(ctx->vm, event->data_type, njs_value_arg(&ctx->args[1]), + p, len); + if (ret != NJS_OK) { + return ret; + } njs_vm_value_string_set(ctx->vm, njs_value_arg(&last_key), last_str.start, last_str.length); - c = s->connection; + njs_value_boolean_set(njs_value_arg(&last), b && b->last_buf); + + ret = njs_vm_object_alloc(ctx->vm, njs_value_arg(&ctx->args[2]), + njs_value_arg(&last_key), + njs_value_arg(&last), NULL); + if (ret != NJS_OK) { + return ret; + } - b = ctx->filter ? 
ctx->buf : c->buffer; - njs_value_boolean_set(njs_value_arg(&values[0]), b && b->last_buf); + njs_vm_post_event(ctx->vm, event->ev, njs_value_arg(&ctx->args[1]), 2); - return njs_vm_object_alloc(ctx->vm, flags, - njs_value_arg(&last_key), - njs_value_arg(&values[0]), NULL); + ret = njs_vm_run(ctx->vm); + if (ret == NJS_ERROR) { + return ret; + } + + return NJS_OK; } From xeioex at nginx.com Thu Dec 24 18:36:52 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 24 Dec 2020 18:36:52 +0000 Subject: [njs] Refactored working with external prototypes. Message-ID: details: https://hg.nginx.org/njs/rev/40dc1818a485 branches: changeset: 1586:40dc1818a485 user: Dmitry Volyntsev date: Thu Dec 24 18:35:18 2020 +0000 description: Refactored working with external prototypes. Previously, njs_vm_external_prototype() returned the pointer to a created prototype structure. Which were expected to be passed to njs_vm_external_create() as is. The returned pointer is needed to be stored somewhere by user code which complicates user code in cases when many prototypes are created. Instead, an index in the VM internal table is returned. njs_vm_external_create() is changed accordingly. This simplifies user code because the index is known at static time for most cases. 
diffstat: nginx/ngx_http_js_module.c | 26 ++++++------------ nginx/ngx_js.c | 15 +++++------ nginx/ngx_js.h | 2 + nginx/ngx_stream_js_module.c | 15 +++------- src/njs.h | 5 +-- src/njs_extern.c | 34 +++++++++++++++++++------ src/njs_shell.c | 11 +++---- src/njs_vm.h | 1 + src/test/njs_benchmark.c | 7 ++--- src/test/njs_externals_test.c | 57 +++++++++++++++++++++--------------------- src/test/njs_externals_test.h | 4 +- src/test/njs_unit_test.c | 27 +++++++++---------- 12 files changed, 104 insertions(+), 100 deletions(-) diffs (584 lines): diff -r 1e5f3455d2d1 -r 40dc1818a485 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Wed Dec 23 11:27:50 2020 +0000 +++ b/nginx/ngx_http_js_module.c Thu Dec 24 18:35:18 2020 +0000 @@ -19,7 +19,6 @@ typedef struct { ngx_uint_t line; ngx_array_t *imports; ngx_array_t *paths; - njs_external_proto_t req_proto; } ngx_http_js_main_conf_t; @@ -836,7 +835,7 @@ ngx_http_js_init_vm(ngx_http_request_t * } rc = njs_vm_external_create(ctx->vm, njs_value_arg(&ctx->request), - jmcf->req_proto, r, 0); + NGX_JS_PROTO_MAIN, r, 0); if (rc != NJS_OK) { return NGX_ERROR; } @@ -2708,10 +2707,9 @@ ngx_http_js_subrequest_done(ngx_http_req { njs_vm_event_t vm_event = data; - njs_int_t ret; - ngx_http_js_ctx_t *ctx; - njs_opaque_value_t reply; - ngx_http_js_main_conf_t *jmcf; + njs_int_t ret; + ngx_http_js_ctx_t *ctx; + njs_opaque_value_t reply; if (rc != NGX_OK || r->connection->error || r->buffered) { return rc; @@ -2734,8 +2732,6 @@ ngx_http_js_subrequest_done(ngx_http_req ctx->done = 1; - jmcf = ngx_http_get_module_main_conf(r, ngx_http_js_module); - ctx = ngx_http_get_module_ctx(r->parent, ngx_http_js_module); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, @@ -2750,7 +2746,7 @@ ngx_http_js_subrequest_done(ngx_http_req } ret = njs_vm_external_create(ctx->vm, njs_value_arg(&reply), - jmcf->req_proto, r, 0); + NGX_JS_PROTO_MAIN, r, 0); if (ret != NJS_OK) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "js subrequest reply 
creation failed"); @@ -2954,7 +2950,7 @@ ngx_http_js_init_main_conf(ngx_conf_t *c ssize_t n; ngx_fd_t fd; ngx_str_t *m, file; - njs_int_t rc; + njs_int_t rc, proto_id; njs_str_t text, path; ngx_uint_t i; njs_value_t *value; @@ -2962,7 +2958,6 @@ ngx_http_js_init_main_conf(ngx_conf_t *c ngx_file_info_t fi; ngx_pool_cleanup_t *cln; njs_opaque_value_t lvalue, exception; - njs_external_proto_t proto; ngx_http_js_import_t *import; static const njs_str_t line_number_key = njs_str("lineNumber"); @@ -3114,16 +3109,14 @@ ngx_http_js_init_main_conf(ngx_conf_t *c } } - proto = njs_vm_external_prototype(jmcf->vm, ngx_http_js_ext_request, - njs_nitems(ngx_http_js_ext_request)); - if (proto == NULL) { + proto_id = njs_vm_external_prototype(jmcf->vm, ngx_http_js_ext_request, + njs_nitems(ngx_http_js_ext_request)); + if (proto_id < 0) { ngx_log_error(NGX_LOG_EMERG, cf->log, 0, "failed to add js request proto"); return NGX_CONF_ERROR; } - jmcf->req_proto = proto; - rc = ngx_js_core_init(jmcf->vm, cf->log); if (njs_slow_path(rc != NJS_OK)) { return NGX_CONF_ERROR; @@ -3379,7 +3372,6 @@ ngx_http_js_create_main_conf(ngx_conf_t * conf->include = { 0, NULL }; * conf->file = NULL; * conf->line = 0; - * conf->req_proto = NULL; */ conf->paths = NGX_CONF_UNSET_PTR; diff -r 1e5f3455d2d1 -r 40dc1818a485 nginx/ngx_js.c --- a/nginx/ngx_js.c Wed Dec 23 11:27:50 2020 +0000 +++ b/nginx/ngx_js.c Thu Dec 24 18:35:18 2020 +0000 @@ -117,19 +117,18 @@ ngx_js_string(njs_vm_t *vm, njs_value_t ngx_int_t ngx_js_core_init(njs_vm_t *vm, ngx_log_t *log) { - njs_int_t ret; - njs_str_t name; - njs_opaque_value_t value; - njs_external_proto_t proto; + njs_int_t ret, proto_id; + njs_str_t name; + njs_opaque_value_t value; - proto = njs_vm_external_prototype(vm, ngx_js_ext_core, - njs_nitems(ngx_js_ext_core)); - if (proto == NULL) { + proto_id = njs_vm_external_prototype(vm, ngx_js_ext_core, + njs_nitems(ngx_js_ext_core)); + if (proto_id < 0) { ngx_log_error(NGX_LOG_EMERG, log, 0, "failed to add js core proto"); 
return NGX_ERROR; } - ret = njs_vm_external_create(vm, njs_value_arg(&value), proto, NULL, 1); + ret = njs_vm_external_create(vm, njs_value_arg(&value), proto_id, NULL, 1); if (njs_slow_path(ret != NJS_OK)) { ngx_log_error(NGX_LOG_EMERG, log, 0, "njs_vm_external_create() failed\n"); diff -r 1e5f3455d2d1 -r 40dc1818a485 nginx/ngx_js.h --- a/nginx/ngx_js.h Wed Dec 23 11:27:50 2020 +0000 +++ b/nginx/ngx_js.h Thu Dec 24 18:35:18 2020 +0000 @@ -19,6 +19,8 @@ #define NGX_JS_STRING 1 #define NGX_JS_BUFFER 2 +#define NGX_JS_PROTO_MAIN 0 + #define ngx_external_connection(vm, ext) \ (*((ngx_connection_t **) ((u_char *) ext + njs_vm_meta(vm, 0)))) diff -r 1e5f3455d2d1 -r 40dc1818a485 nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Wed Dec 23 11:27:50 2020 +0000 +++ b/nginx/ngx_stream_js_module.c Thu Dec 24 18:35:18 2020 +0000 @@ -19,7 +19,6 @@ typedef struct { ngx_uint_t line; ngx_array_t *imports; ngx_array_t *paths; - njs_external_proto_t proto; } ngx_stream_js_main_conf_t; @@ -696,7 +695,7 @@ ngx_stream_js_init_vm(ngx_stream_session } rc = njs_vm_external_create(ctx->vm, njs_value_arg(&ctx->args[0]), - jmcf->proto, s, 0); + NGX_JS_PROTO_MAIN, s, 0); if (rc != NJS_OK) { return NGX_ERROR; } @@ -1306,7 +1305,7 @@ ngx_stream_js_init_main_conf(ngx_conf_t ssize_t n; ngx_fd_t fd; ngx_str_t *m, file; - njs_int_t rc; + njs_int_t rc, proto_id; njs_str_t text, path; ngx_uint_t i; njs_value_t *value; @@ -1314,7 +1313,6 @@ ngx_stream_js_init_main_conf(ngx_conf_t ngx_file_info_t fi; ngx_pool_cleanup_t *cln; njs_opaque_value_t lvalue, exception; - njs_external_proto_t proto; ngx_stream_js_import_t *import; static const njs_str_t line_number_key = njs_str("lineNumber"); @@ -1466,16 +1464,14 @@ ngx_stream_js_init_main_conf(ngx_conf_t } } - proto = njs_vm_external_prototype(jmcf->vm, ngx_stream_js_ext_session, - njs_nitems(ngx_stream_js_ext_session)); - if (proto == NULL) { + proto_id = njs_vm_external_prototype(jmcf->vm, ngx_stream_js_ext_session, + 
njs_nitems(ngx_stream_js_ext_session)); + if (proto_id < 0) { ngx_log_error(NGX_LOG_EMERG, cf->log, 0, "failed to add js request proto"); return NGX_CONF_ERROR; } - jmcf->proto = proto; - rc = ngx_js_core_init(jmcf->vm, cf->log); if (njs_slow_path(rc != NJS_OK)) { return NGX_CONF_ERROR; @@ -1708,7 +1704,6 @@ ngx_stream_js_create_main_conf(ngx_conf_ * conf->include = { 0, NULL }; * conf->file = NULL; * conf->line = 0; - * conf->proto = NULL; */ conf->paths = NGX_CONF_UNSET_PTR; diff -r 1e5f3455d2d1 -r 40dc1818a485 src/njs.h --- a/src/njs.h Wed Dec 23 11:27:50 2020 +0000 +++ b/src/njs.h Thu Dec 24 18:35:18 2020 +0000 @@ -29,7 +29,6 @@ typedef struct njs_function_s njs_ typedef struct njs_vm_shared_s njs_vm_shared_t; typedef struct njs_object_prop_s njs_object_prop_t; typedef struct njs_external_s njs_external_t; -typedef void * njs_external_proto_t; /* * njs_opaque_value_t is the external storage type for native njs_value_t type. @@ -297,10 +296,10 @@ NJS_EXPORT njs_int_t njs_vm_start(njs_vm NJS_EXPORT njs_int_t njs_vm_add_path(njs_vm_t *vm, const njs_str_t *path); -NJS_EXPORT njs_external_proto_t njs_vm_external_prototype(njs_vm_t *vm, +NJS_EXPORT njs_int_t njs_vm_external_prototype(njs_vm_t *vm, const njs_external_t *definition, njs_uint_t n); NJS_EXPORT njs_int_t njs_vm_external_create(njs_vm_t *vm, njs_value_t *value, - njs_external_proto_t proto, njs_external_ptr_t external, njs_bool_t shared); + njs_int_t proto_id, njs_external_ptr_t external, njs_bool_t shared); NJS_EXPORT njs_external_ptr_t njs_vm_external(njs_vm_t *vm, const njs_value_t *value); NJS_EXPORT uintptr_t njs_vm_meta(njs_vm_t *vm, njs_uint_t index); diff -r 1e5f3455d2d1 -r 40dc1818a485 src/njs_extern.c --- a/src/njs_extern.c Wed Dec 23 11:27:50 2020 +0000 +++ b/src/njs_extern.c Thu Dec 24 18:35:18 2020 +0000 @@ -256,12 +256,13 @@ njs_external_protos(const njs_external_t } -njs_external_proto_t +njs_int_t njs_vm_external_prototype(njs_vm_t *vm, const njs_external_t *definition, njs_uint_t n) { 
njs_arr_t *protos; njs_int_t ret; + uintptr_t *pr; njs_uint_t size; size = njs_external_protos(definition, n) + 1; @@ -269,38 +270,55 @@ njs_vm_external_prototype(njs_vm_t *vm, protos = njs_arr_create(vm->mem_pool, size, sizeof(njs_exotic_slots_t)); if (njs_slow_path(protos == NULL)) { njs_memory_error(vm); - return NULL; + return -1; } ret = njs_external_add(vm, protos, definition, n); if (njs_slow_path(ret != NJS_OK)) { njs_internal_error(vm, "njs_vm_external_add() failed"); - return NULL; + return -1; } - return protos; + if (vm->protos == NULL) { + vm->protos = njs_arr_create(vm->mem_pool, 4, sizeof(uintptr_t)); + if (njs_slow_path(vm->protos == NULL)) { + return -1; + } + } + + pr = njs_arr_add(vm->protos); + if (njs_slow_path(pr == NULL)) { + return -1; + } + + *pr = (uintptr_t) protos; + + return vm->protos->items - 1; } njs_int_t -njs_vm_external_create(njs_vm_t *vm, njs_value_t *value, - njs_external_proto_t proto, njs_external_ptr_t external, njs_bool_t shared) +njs_vm_external_create(njs_vm_t *vm, njs_value_t *value, njs_int_t proto_id, + njs_external_ptr_t external, njs_bool_t shared) { njs_arr_t *protos; + uintptr_t proto; njs_object_value_t *ov; njs_exotic_slots_t *slots; - if (njs_slow_path(proto == NULL)) { + if (vm->protos == NULL || (njs_int_t) vm->protos->items <= proto_id) { return NJS_ERROR; } + proto = ((uintptr_t *) vm->protos->start)[proto_id]; + ov = njs_mp_alloc(vm->mem_pool, sizeof(njs_object_value_t)); if (njs_slow_path(ov == NULL)) { njs_memory_error(vm); return NJS_ERROR; } - protos = proto; + protos = (njs_arr_t *) proto; slots = protos->start; njs_lvlhsh_init(&ov->object.hash); diff -r 1e5f3455d2d1 -r 40dc1818a485 src/njs_shell.c --- a/src/njs_shell.c Wed Dec 23 11:27:50 2020 +0000 +++ b/src/njs_shell.c Thu Dec 24 18:35:18 2020 +0000 @@ -653,12 +653,11 @@ static njs_value_t * njs_external_add(njs_vm_t *vm, njs_external_t *definition, njs_uint_t n, const njs_str_t *name, njs_external_ptr_t external) { - njs_int_t ret; - njs_value_t 
*value; - njs_external_proto_t proto; + njs_int_t ret, proto_id; + njs_value_t *value; - proto = njs_vm_external_prototype(vm, definition, n); - if (njs_slow_path(proto == NULL)) { + proto_id = njs_vm_external_prototype(vm, definition, n); + if (njs_slow_path(proto_id < 0)) { njs_stderror("failed to add \"%V\" proto\n", name); return NULL; } @@ -668,7 +667,7 @@ njs_external_add(njs_vm_t *vm, njs_exter return NULL; } - ret = njs_vm_external_create(vm, value, proto, external, 0); + ret = njs_vm_external_create(vm, value, proto_id, external, 0); if (njs_slow_path(ret != NJS_OK)) { return NULL; } diff -r 1e5f3455d2d1 -r 40dc1818a485 src/njs_vm.h --- a/src/njs_vm.h Wed Dec 23 11:27:50 2020 +0000 +++ b/src/njs_vm.h Thu Dec 24 18:35:18 2020 +0000 @@ -180,6 +180,7 @@ struct njs_vm_s { njs_value_t retval; njs_arr_t *paths; + njs_arr_t *protos; njs_value_t *scopes[NJS_SCOPES]; diff -r 1e5f3455d2d1 -r 40dc1818a485 src/test/njs_benchmark.c --- a/src/test/njs_benchmark.c Wed Dec 23 11:27:50 2020 +0000 +++ b/src/test/njs_benchmark.c Thu Dec 24 18:35:18 2020 +0000 @@ -36,13 +36,12 @@ njs_benchmark_test(njs_vm_t *parent, njs u_char *start; njs_vm_t *vm, *nvm; uint64_t us; - njs_int_t ret; + njs_int_t ret, proto_id; njs_str_t s, *expected; njs_uint_t i, n; njs_bool_t success; njs_value_t *result, name, usec, times; njs_vm_opt_t options; - njs_external_proto_t proto; static const njs_value_t name_key = njs_string("name"); static const njs_value_t usec_key = njs_string("usec"); @@ -68,8 +67,8 @@ njs_benchmark_test(njs_vm_t *parent, njs goto done; } - proto = njs_externals_shared_init(vm); - if (proto == NULL) { + proto_id = njs_externals_shared_init(vm); + if (proto_id < 0) { goto done; } diff -r 1e5f3455d2d1 -r 40dc1818a485 src/test/njs_externals_test.c --- a/src/test/njs_externals_test.c Wed Dec 23 11:27:50 2020 +0000 +++ b/src/test/njs_externals_test.c Thu Dec 24 18:35:18 2020 +0000 @@ -11,7 +11,7 @@ typedef struct { njs_lvlhsh_t hash; - njs_external_proto_t proto; + njs_int_t 
proto_id; uint32_t a; uint32_t d; @@ -394,9 +394,9 @@ njs_unit_test_r_create(njs_vm_t *vm, njs return NJS_ERROR; } - sr->proto = r->proto; + sr->proto_id = r->proto_id; - ret = njs_vm_external_create(vm, &vm->retval, sr->proto, sr, 0); + ret = njs_vm_external_create(vm, &vm->retval, sr->proto_id, sr, 0); if (ret != NJS_OK) { return NJS_ERROR; } @@ -678,8 +678,8 @@ static njs_unit_test_req_init_t njs_test }; -static njs_external_proto_t -njs_externals_init_internal(njs_vm_t *vm, njs_external_proto_t proto, +static njs_int_t +njs_externals_init_internal(njs_vm_t *vm, njs_int_t proto_id, njs_unit_test_req_init_t *init, njs_uint_t n, njs_bool_t shared) { njs_int_t ret; @@ -687,37 +687,37 @@ njs_externals_init_internal(njs_vm_t *vm njs_unit_test_req_t *requests; njs_unit_test_prop_t *prop; - if (proto == NULL) { - proto = njs_vm_external_prototype(vm, njs_unit_test_r_external, - njs_nitems(njs_unit_test_r_external)); - if (njs_slow_path(proto == NULL)) { + if (proto_id == -1) { + proto_id = njs_vm_external_prototype(vm, njs_unit_test_r_external, + njs_nitems(njs_unit_test_r_external)); + if (njs_slow_path(proto_id < 0)) { njs_printf("njs_vm_external_prototype() failed\n"); - return NULL; + return -1; } } requests = njs_mp_zalloc(vm->mem_pool, n * sizeof(njs_unit_test_req_t)); if (njs_slow_path(requests == NULL)) { - return NULL; + return -1; } for (i = 0; i < n; i++) { requests[i] = init[i].request; - requests[i].proto = proto; + requests[i].proto_id = proto_id; ret = njs_vm_external_create(vm, njs_value_arg(&requests[i].value), - proto, &requests[i], shared); + proto_id, &requests[i], shared); if (njs_slow_path(ret != NJS_OK)) { njs_printf("njs_vm_external_create() failed\n"); - return NULL; + return -1; } ret = njs_vm_bind(vm, &init[i].name, njs_value_arg(&requests[i].value), shared); if (njs_slow_path(ret != NJS_OK)) { njs_printf("njs_vm_bind() failed\n"); - return NULL; + return -1; } for (j = 0; j < njs_nitems(init[i].props); j++) { @@ -726,33 +726,34 @@ 
njs_externals_init_internal(njs_vm_t *vm if (njs_slow_path(prop == NULL)) { njs_printf("lvlhsh_unit_test_alloc() failed\n"); - return NULL; + return -1; } ret = lvlhsh_unit_test_add(vm->mem_pool, &requests[i], prop); if (njs_slow_path(ret != NJS_OK)) { njs_printf("lvlhsh_unit_test_add() failed\n"); - return NULL; + return -1; } } } - return proto; -} - - -njs_external_proto_t -njs_externals_shared_init(njs_vm_t *vm) -{ - return njs_externals_init_internal(vm, NULL, njs_test_requests, 1, 1); + return proto_id; } njs_int_t -njs_externals_init(njs_vm_t *vm, njs_external_proto_t proto) +njs_externals_shared_init(njs_vm_t *vm) { - proto = njs_externals_init_internal(vm, proto, &njs_test_requests[1], 3, 0); - if (proto == NULL) { + return njs_externals_init_internal(vm, -1, njs_test_requests, 1, 1); +} + + +njs_int_t +njs_externals_init(njs_vm_t *vm, njs_int_t proto_id) +{ + proto_id = njs_externals_init_internal(vm, proto_id, &njs_test_requests[1], + 3, 0); + if (proto_id < 0) { return NJS_ERROR; } diff -r 1e5f3455d2d1 -r 40dc1818a485 src/test/njs_externals_test.h --- a/src/test/njs_externals_test.h Wed Dec 23 11:27:50 2020 +0000 +++ b/src/test/njs_externals_test.h Thu Dec 24 18:35:18 2020 +0000 @@ -8,8 +8,8 @@ #define _NJS_EXTERNALS_TEST_H_INCLUDED_ -njs_external_proto_t njs_externals_shared_init(njs_vm_t *vm); -njs_int_t njs_externals_init(njs_vm_t *vm, njs_external_proto_t proto); +njs_int_t njs_externals_shared_init(njs_vm_t *vm); +njs_int_t njs_externals_init(njs_vm_t *vm, njs_int_t proto_id); #endif /* _NJS_EXTERNALS_TEST_H_INCLUDED_ */ diff -r 1e5f3455d2d1 -r 40dc1818a485 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Dec 23 11:27:50 2020 +0000 +++ b/src/test/njs_unit_test.c Thu Dec 24 18:35:18 2020 +0000 @@ -20483,19 +20483,18 @@ static njs_int_t njs_unit_test(njs_unit_test_t tests[], size_t num, njs_str_t *name, njs_opts_t *opts, njs_stat_t *stat) { - u_char *start, *end; - njs_vm_t *vm, *nvm; - njs_int_t ret; - njs_str_t s; - njs_uint_t i, repeat; 
- njs_stat_t prev; - njs_bool_t success; - njs_vm_opt_t options; - njs_external_proto_t proto; + u_char *start, *end; + njs_vm_t *vm, *nvm; + njs_int_t ret, proto_id; + njs_str_t s; + njs_uint_t i, repeat; + njs_stat_t prev; + njs_bool_t success; + njs_vm_opt_t options; vm = NULL; nvm = NULL; - proto = NULL; + proto_id = -1; prev = *stat; @@ -20519,8 +20518,8 @@ njs_unit_test(njs_unit_test_t tests[], s } if (opts->externals) { - proto = njs_externals_shared_init(vm); - if (proto == NULL) { + proto_id = njs_externals_shared_init(vm); + if (proto_id < 0) { goto done; } } @@ -20549,7 +20548,7 @@ njs_unit_test(njs_unit_test_t tests[], s } if (opts->externals) { - ret = njs_externals_init(nvm, proto); + ret = njs_externals_init(nvm, proto_id); if (ret != NJS_OK) { goto done; } @@ -20653,7 +20652,7 @@ njs_interactive_test(njs_unit_test_t tes } if (opts->externals) { - ret = njs_externals_init(vm, NULL); + ret = njs_externals_init(vm, -1); if (ret != NJS_OK) { goto done; } From mdounin at mdounin.ru Fri Dec 25 14:47:16 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 25 Dec 2020 17:47:16 +0300 Subject: Request Counter Clarification In-Reply-To: References: Message-ID: <20201225144716.GM1147@mdounin.ru> Hello! On Mon, Dec 21, 2020 at 08:54:54PM +0600, M L wrote: > I am developing an NGINX module which would check the contents of the > request and if the key components found, would block it. Currently, it > seems to be working correctly, but I would like to clarify some parts and > make sure that I am not hard-coding anything. So, the question is mainly > about the request counter. > During the execution of the request handler (which is registered on the > HTTP_REWRITE_PHASE), the request counter is kept as it is. But once the > handler finishes the request processing, the counter is changed to 1. 
But
> changing the counter to 1 does not seem like the right decision, as many
> other modules more often decrease it in the post_handler or call the
> "finalize request" function. However, "finalize" cannot be used here, as
> neither the connection nor the request should be finalized after the
> handler execution. Instead, the request needs to be handed over to the
> other phase handlers (return NGX_DECLINED). As for decrementing the
> counter in the post_handler of the ngx_http_read_client_request_body
> function, under heavy load this results in segfaults. Finally, leaving
> the counter unchanged throughout the process leads to memory leaks.
> Therefore, the above-described value assignment was implemented, but
> perhaps there are better ways of handling the request counter issue?
> And why can a change in the request counter cause a segfault in the
> first place?

In general, you shouldn't touch the request counter yourself unless you really understand what you are doing. Instead, you should call ngx_http_finalize_request() to decrease it (or make sure to return the correct return code when the phase handler does this for you), and it will be decremented properly. Increasing the request counter is in most cases handled by the nginx core as well.

In no case are you expected to set the request counter to a specific value. That is something done only during forced request termination; any attempt to do it in your own module is certainly a bug. Incorrectly adjusting the request counter can lead to segfaults or to connection/memory leaks, depending on the exact code path.

In the particular module you've described, it looks like the problem is that you are trying to read the request body from an early request processing phase (that is, before the content phase), and are doing so incorrectly. For a correct example, see the mirror module (http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_mirror_module.c).
In particular, to start reading the request body, do something like this (note the ngx_http_finalize_request(NGX_DONE) call to decrement the reference counter, and the NGX_DONE return code to stop further processing of the request by the phase handlers):

    rc = ngx_http_read_client_request_body(r, ngx_http_mirror_body_handler);

    if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
        return rc;
    }

    ngx_http_finalize_request(r, NGX_DONE);

    return NGX_DONE;

And to continue processing with the other phase handlers, you should do something like this in the body handler:

    r->write_event_handler = ngx_http_core_run_phases;
    ngx_http_core_run_phases(r);

This ensures that the appropriate write event handler is set (as it is removed by the request body reading code) and resumes phase handling by calling ngx_http_core_run_phases().

-- 
Maxim Dounin
http://mdounin.ru/

From deso at posteo.net Sun Dec 27 02:56:15 2020
From: deso at posteo.net (deso)
Date: Sat, 26 Dec 2020 18:56:15 -0800
Subject: [PATCH] MIME: Add application/wasm type
Message-ID: <2a4b8277-225a-2034-a6fd-79c51d19b00f@posteo.net>

From 6a3bd185b20d60c6f145847b2dc162b09de6eab6 Mon Sep 17 00:00:00 2001
From: Daniel Mueller
Date: Sat, 26 Dec 2020 18:47:06 -0800
Subject: [PATCH] MIME: Add application/wasm type

.wasm files should be served with the application/wasm MIME type, or
some features such as streamed loading and compilation may not be used
by browsers. This change adjusts the MIME type configuration to include
such a setting.
---
 conf/mime.types | 1 +
 1 file changed, 1 insertion(+)

diff --git a/conf/mime.types b/conf/mime.types
index 296125..b53f7f 100644
--- a/conf/mime.types
+++ b/conf/mime.types
@@ -51,6 +51,7 @@ types {
     application/vnd.openxmlformats-officedocument.wordprocessingml.document    docx;
     application/vnd.wap.wmlc                                                   wmlc;
+    application/wasm                                                           wasm;
     application/x-7z-compressed                                                7z;
     application/x-cocoa                                                        cco;
     application/x-java-archive-diff                                            jardiff;
-- 
2.26.2

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-MIME-Add-application-wasm-type.patch
Type: text/x-patch
Size: 1080 bytes
Desc: not available
URL: 

From ouyangjun1999 at gmail.com Sun Dec 27 13:26:44 2020
From: ouyangjun1999 at gmail.com (Attenuation)
Date: Sun, 27 Dec 2020 21:26:44 +0800
Subject: [PATCH] Multiple call ngx_parse_url cause index out of bounds bug
Message-ID: 

Hello, I found an array index out-of-bounds bug in the ngx_inet_add_addr() function. In my case, I want to call ngx_parse_url(cf->pool, u) twice to update my address.

Consider this situation: in both calls, the argument u is set up so that u->url.data is a string containing an IP address, and the call trace is

    ngx_inet_add_addr   (src/core/ngx_inet.c#L1274)
    ngx_parse_inet_url  (src/core/ngx_inet.c#L968)
    ngx_parse_url       (src/core/ngx_inet.c#L700)

In the first ngx_parse_url() call, the u->url.data IP address is successfully added to the u->addrs array, and u->naddrs is increased to 1. In the second ngx_parse_url() call, the u->url.data IP address is added to u->addrs again; because u->naddrs was increased to 1 by the first call, this time the updated IP address is written to u->addrs[1] — but the u->addrs array was allocated with room for only 1 * sizeof(ngx_addr_t):

    src/core/ngx_inet.c#L1275: u->addrs = ngx_palloc(pool, total * nports * sizeof(ngx_addr_t));

So the second call to this function causes a memory error, and it may even crash the program. To avoid this bug, we need to check the index into u->addrs. Could you help me check where the problem is? Thanks!
# HG changeset patch
# User Jun Ouyang
# Date 1609070041 -28800
#      Sun Dec 27 19:54:01 2020 +0800
# Node ID 978ff553691d3fec538586cfa88e1e2b9858d4b5
# Parent  82228f955153527fba12211f52bf102c90f38dfb
Multiple calls to ngx_parse_url() adding addrs to the addrs array cause an index out-of-bounds bug.

diff -r 82228f955153 -r 978ff553691d src/core/ngx_inet.c
--- a/src/core/ngx_inet.c	Tue Dec 15 17:41:39 2020 +0300
+++ b/src/core/ngx_inet.c	Sun Dec 27 19:54:01 2020 +0800
@@ -1278,6 +1278,10 @@
         }
     }
 
+    if (u->naddrs == nports * total) {
+        u->naddrs = 0;
+    }
+
     for (i = 0; i < nports; i++) {
         sa = ngx_pcalloc(pool, socklen);
         if (sa == NULL) {

-- 
*GPG public key: 4A6D297E6F74638E4D5F8E99152AC7B5F7608B26*

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dongbeiouba at gmail.com Tue Dec 29 09:30:14 2020
From: dongbeiouba at gmail.com (Chenglong Zhang)
Date: Tue, 29 Dec 2020 17:30:14 +0800
Subject: Contributing Changes
Message-ID: 

# HG changeset patch
# User Chenglong Zhang
# Date 1609232548 -28800
#      Tue Dec 29 17:02:28 2020 +0800
# Node ID 65bb4c9c296d2c424286e2b36db96a4ba768369e
# Parent  82228f955153527fba12211f52bf102c90f38dfb
Clear the connection pointer right after closing the connection.

    ngx_close_connection(u->peer.connection);
    u->peer.connection = NULL;

diff -r 82228f955153 -r 65bb4c9c296d src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c	Tue Dec 15 17:41:39 2020 +0300
+++ b/src/http/ngx_http_upstream.c	Tue Dec 29 17:02:28 2020 +0800
@@ -4402,9 +4402,8 @@
         }
 
         ngx_close_connection(u->peer.connection);
-    }
-
-    u->peer.connection = NULL;
+        u->peer.connection = NULL;
+    }
 
     if (u->pipe && u->pipe->temp_file) {
         ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
Message-ID: # HG changeset patch # User Gena Makhomed # Date 1609240437 -7200 # Tue Dec 29 13:13:57 2020 +0200 # Node ID ed5770c4a49f969949a9b7480af6f75d3aa2eaa0 # Parent 82228f955153527fba12211f52bf102c90f38dfb Contrib: vim syntax, update core and 3rd party module directives. diff -r 82228f955153 -r ed5770c4a49f contrib/vim/syntax/nginx.vim --- a/contrib/vim/syntax/nginx.vim Tue Dec 15 17:41:39 2020 +0300 +++ b/contrib/vim/syntax/nginx.vim Tue Dec 29 13:13:57 2020 +0200 @@ -268,6 +268,7 @@ syn keyword ngxDirective contained grpc_ssl_certificate syn keyword ngxDirective contained grpc_ssl_certificate_key syn keyword ngxDirective contained grpc_ssl_ciphers +syn keyword ngxDirective contained grpc_ssl_conf_command syn keyword ngxDirective contained grpc_ssl_crl syn keyword ngxDirective contained grpc_ssl_name syn keyword ngxDirective contained grpc_ssl_password_file @@ -447,6 +448,7 @@ syn keyword ngxDirective contained proxy_cache_valid syn keyword ngxDirective contained proxy_connect_timeout syn keyword ngxDirective contained proxy_cookie_domain +syn keyword ngxDirective contained proxy_cookie_flags syn keyword ngxDirective contained proxy_cookie_path syn keyword ngxDirective contained proxy_download_rate syn keyword ngxDirective contained proxy_force_ranges @@ -480,11 +482,13 @@ syn keyword ngxDirective contained proxy_session_drop syn keyword ngxDirective contained proxy_set_body syn keyword ngxDirective contained proxy_set_header +syn keyword ngxDirective contained proxy_smtp_auth syn keyword ngxDirective contained proxy_socket_keepalive syn keyword ngxDirective contained proxy_ssl syn keyword ngxDirective contained proxy_ssl_certificate syn keyword ngxDirective contained proxy_ssl_certificate_key syn keyword ngxDirective contained proxy_ssl_ciphers +syn keyword ngxDirective contained proxy_ssl_conf_command syn keyword ngxDirective contained proxy_ssl_crl syn keyword ngxDirective contained proxy_ssl_name syn keyword ngxDirective contained proxy_ssl_password_file 
@@ -592,6 +596,7 @@ syn keyword ngxDirective contained ssl_certificate_key syn keyword ngxDirective contained ssl_ciphers syn keyword ngxDirective contained ssl_client_certificate +syn keyword ngxDirective contained ssl_conf_command syn keyword ngxDirective contained ssl_crl syn keyword ngxDirective contained ssl_dhparam syn keyword ngxDirective contained ssl_early_data @@ -605,6 +610,7 @@ syn keyword ngxDirective contained ssl_prefer_server_ciphers syn keyword ngxDirective contained ssl_preread syn keyword ngxDirective contained ssl_protocols +syn keyword ngxDirective contained ssl_reject_handshake syn keyword ngxDirective contained ssl_session_cache syn keyword ngxDirective contained ssl_session_ticket_key syn keyword ngxDirective contained ssl_session_tickets @@ -643,6 +649,7 @@ syn keyword ngxDirective contained userid syn keyword ngxDirective contained userid_domain syn keyword ngxDirective contained userid_expires +syn keyword ngxDirective contained userid_flags syn keyword ngxDirective contained userid_mark syn keyword ngxDirective contained userid_name syn keyword ngxDirective contained userid_p3p @@ -693,6 +700,7 @@ syn keyword ngxDirective contained uwsgi_ssl_certificate syn keyword ngxDirective contained uwsgi_ssl_certificate_key syn keyword ngxDirective contained uwsgi_ssl_ciphers +syn keyword ngxDirective contained uwsgi_ssl_conf_command syn keyword ngxDirective contained uwsgi_ssl_crl syn keyword ngxDirective contained uwsgi_ssl_name syn keyword ngxDirective contained uwsgi_ssl_password_file @@ -738,6 +746,7 @@ syn keyword ngxDirective contained zone_sync_ssl_certificate syn keyword ngxDirective contained zone_sync_ssl_certificate_key syn keyword ngxDirective contained zone_sync_ssl_ciphers +syn keyword ngxDirective contained zone_sync_ssl_conf_command syn keyword ngxDirective contained zone_sync_ssl_crl syn keyword ngxDirective contained zone_sync_ssl_name syn keyword ngxDirective contained zone_sync_ssl_password_file @@ -1329,6 +1338,8 @@ syn 
keyword ngxDirectiveThirdParty contained content_by_lua syn keyword ngxDirectiveThirdParty contained content_by_lua_block syn keyword ngxDirectiveThirdParty contained content_by_lua_file +syn keyword ngxDirectiveThirdParty contained exit_worker_by_lua_block +syn keyword ngxDirectiveThirdParty contained exit_worker_by_lua_file syn keyword ngxDirectiveThirdParty contained header_filter_by_lua syn keyword ngxDirectiveThirdParty contained header_filter_by_lua_block syn keyword ngxDirectiveThirdParty contained header_filter_by_lua_file @@ -1370,6 +1381,7 @@ syn keyword ngxDirectiveThirdParty contained lua_ssl_protocols syn keyword ngxDirectiveThirdParty contained lua_ssl_trusted_certificate syn keyword ngxDirectiveThirdParty contained lua_ssl_verify_depth +syn keyword ngxDirectiveThirdParty contained lua_thread_cache_max_entries syn keyword ngxDirectiveThirdParty contained lua_transform_underscores_in_response_headers syn keyword ngxDirectiveThirdParty contained lua_use_default_type syn keyword ngxDirectiveThirdParty contained rewrite_by_lua @@ -2285,6 +2297,7 @@ syn keyword ngxDirectiveThirdParty contained testcookie_refresh_encrypt_cookie_key syn keyword ngxDirectiveThirdParty contained testcookie_refresh_status syn keyword ngxDirectiveThirdParty contained testcookie_refresh_template +syn keyword ngxDirectiveThirdParty contained testcookie_samesite syn keyword ngxDirectiveThirdParty contained testcookie_secret syn keyword ngxDirectiveThirdParty contained testcookie_secure_flag syn keyword ngxDirectiveThirdParty contained testcookie_session @@ -2355,15 +2368,31 @@ " IP2Location Nginx " https://github.com/ip2location/ip2location-nginx -syn keyword ngxDirectiveThirdParty contained ip2location -syn keyword ngxDirectiveThirdParty contained ip2location_access_type syn keyword ngxDirectiveThirdParty contained ip2location_proxy syn keyword ngxDirectiveThirdParty contained ip2location_proxy_recursive +syn keyword ngxDirectiveThirdParty contained ip2location_areacode +syn 
keyword ngxDirectiveThirdParty contained ip2location_city +syn keyword ngxDirectiveThirdParty contained ip2location_country_long +syn keyword ngxDirectiveThirdParty contained ip2location_country_short +syn keyword ngxDirectiveThirdParty contained ip2location_domain +syn keyword ngxDirectiveThirdParty contained ip2location_elevation +syn keyword ngxDirectiveThirdParty contained ip2location_iddcode +syn keyword ngxDirectiveThirdParty contained ip2location_isp +syn keyword ngxDirectiveThirdParty contained ip2location_latitude +syn keyword ngxDirectiveThirdParty contained ip2location_longitude +syn keyword ngxDirectiveThirdParty contained ip2location_mcc +syn keyword ngxDirectiveThirdParty contained ip2location_mnc +syn keyword ngxDirectiveThirdParty contained ip2location_mobilebrand +syn keyword ngxDirectiveThirdParty contained ip2location_netspeed +syn keyword ngxDirectiveThirdParty contained ip2location_region +syn keyword ngxDirectiveThirdParty contained ip2location_timezone +syn keyword ngxDirectiveThirdParty contained ip2location_usagetype +syn keyword ngxDirectiveThirdParty contained ip2location_weatherstationcode +syn keyword ngxDirectiveThirdParty contained ip2location_weatherstationname +syn keyword ngxDirectiveThirdParty contained ip2location_zipcode " IP2Proxy module for Nginx " https://github.com/ip2location/ip2proxy-nginx -syn keyword ngxDirectiveThirdParty contained ip2proxy -syn keyword ngxDirectiveThirdParty contained ip2proxy_access_type syn keyword ngxDirectiveThirdParty contained ip2proxy_as syn keyword ngxDirectiveThirdParty contained ip2proxy_asn syn keyword ngxDirectiveThirdParty contained ip2proxy_city @@ -2371,12 +2400,14 @@ syn keyword ngxDirectiveThirdParty contained ip2proxy_country_short syn keyword ngxDirectiveThirdParty contained ip2proxy_database syn keyword ngxDirectiveThirdParty contained ip2proxy_domain +syn keyword ngxDirectiveThirdParty contained ip2proxy_isp syn keyword ngxDirectiveThirdParty contained ip2proxy_is_proxy -syn 
keyword ngxDirectiveThirdParty contained ip2proxy_isp syn keyword ngxDirectiveThirdParty contained ip2proxy_last_seen +syn keyword ngxDirectiveThirdParty contained ip2proxy_proxy +syn keyword ngxDirectiveThirdParty contained ip2proxy_proxy_recursive syn keyword ngxDirectiveThirdParty contained ip2proxy_proxy_type syn keyword ngxDirectiveThirdParty contained ip2proxy_region -syn keyword ngxDirectiveThirdParty contained ip2proxy_reverse_proxy +syn keyword ngxDirectiveThirdParty contained ip2proxy_threat syn keyword ngxDirectiveThirdParty contained ip2proxy_usage_type From weian.chen at fivestars.com Tue Dec 29 17:12:27 2020 From: weian.chen at fivestars.com (Nicole Chen) Date: Tue, 29 Dec 2020 17:12:27 +0000 Subject: Nginx Responds 400 After 59 Seconds Message-ID: Hi all, I have a nginx server running as a container in a pod behind an ELB and in front of a uwsgi container (so client request -> AWS ELB -> nginx -> uwsgi). My ELB times out at 60 seconds. Recently I noticed some requests log a 400 at the nginx layer after 59 seconds ... I'm not sure what could be the cause for the 400. The closeness of the timeout value seems interesting. Has anyone encountered something similar before? Or have pointers on how to investigate? It seems my error_logs log to stderr, so I'm not able to retrieve past error logs. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 29 17:26:42 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 29 Dec 2020 20:26:42 +0300 Subject: [PATCH] Multiple call ngx_parse_url cause index out of bounds bug In-Reply-To: References: Message-ID: <20201229172642.GN1147@mdounin.ru> Hello! On Sun, Dec 27, 2020 at 09:26:44PM +0800, Attenuation wrote: > Hello, I found an array index out of bounds bug in ngx_inet_add_addr() > function. > In my case, I want to use ngx_parse_url(cf->pool, u) twice to update my > address. 
> Consider this situation, my twice function call argument u: u->url.data is
> string of ip address, and then, call trace is
>
> ngx_inet_add_addr (src/core/ngx_inet.c#L1274)
> ngx_parse_inet_url (src/core/ngx_inet.c#L968)
> ngx_parse_url (src/core/ngx_inet.c#L700)
>
> In first ngx_parse_url() call, u->url.data ip address will successfully add
> to u->addrs array, and u->naddrs will be increased to 1. And then the second
> call ngx_parse_url(), u->url.data ip address add to u->addrs array, Because
> of in first call u->naddrs was increased to 1, so this time our update ip
> address will add to u->addrs[1], but u->addrs array were allocated
> 1 * sizeof(ngx_addr_t).
>
> src/core/ngx_inet.c#L1275 u->addrs = ngx_palloc(pool, total * nports *
> sizeof(ngx_addr_t));
>
> So the second time I call this function will cause memory error, and it may
> even make the program crashes.
>
> In order to avoid this bug, We need to check index of u->addrs.
> Could you help me check where there is a problem? Thanks!

The ngx_parse_url() function expects the ngx_url_t structure to be zeroed out, with only some input fields set, such as u.url and u.default_port. Calling ngx_parse_url() on an ngx_url_t structure that has not been reinitialized after previous parsing is a bug.

That is, you should reconsider your code: if you want to reuse the same ngx_url_t structure for multiple calls of ngx_parse_url(), you have to reinitialize it before each call.

-- 
Maxim Dounin
http://mdounin.ru/