From amira.solo at gmail.com Mon Jul 4 03:11:18 2022 From: amira.solo at gmail.com (Amira S) Date: Mon, 4 Jul 2022 06:11:18 +0300 Subject: Adding support for OpenSSL engine in Nginx Ingress controller Message-ID: Hello, I want to add support for an ssl_engine + ssl_certificate/key directives in the nignx.conf that configures an nginx server for ingress on kubernetes. This functionality is not provided by default, and I read that Snippets may be the recommended way to add such support. Could you please assist me in adding such support? The ssl_engine should be part of the main-snippets but the ssl_certificate/key are under http and then under server, so not sure if http-snippets or server-snippets should be used. Also, if anyone could point me to a similar configuration sample, that would be very helpful. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jul 4 18:20:23 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 Jul 2022 21:20:23 +0300 Subject: Adding support for OpenSSL engine in Nginx Ingress controller In-Reply-To: References: Message-ID: Hello! On Mon, Jul 04, 2022 at 06:11:18AM +0300, Amira S wrote: > Hello, > > I want to add support for an ssl_engine + ssl_certificate/key directives in > the nignx.conf that configures an nginx server for ingress on kubernetes. > > This functionality is not provided by default, and I read that Snippets may > be the recommended way to add such support. > > Could you please assist me in adding such support? > The ssl_engine should be part of the main-snippets but the > ssl_certificate/key are under http and then under server, so not sure if > http-snippets or server-snippets should be used. > > Also, if anyone could point me to a similar configuration sample, that > would be very helpful. The Nginx Ingress Controller is a completely separate project, you may want to use its own resources to request additional features. If you need help with configuring nginx, please use the nginx@ mailing list instead; the nginx-devel@ mailing list is dedicated to nginx development. Thank you. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Jul 6 01:23:25 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Jul 2022 04:23:25 +0300 Subject: [PATCH 2 of 2] The "sort=" parameter of the "resolver" directive In-Reply-To: References: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> Message-ID: Hello! On Tue, Jun 28, 2022 at 08:25:36PM +0400, Sergey Kandaurov wrote: > # HG changeset patch > # User Ruslan Ermilov > # Date 1645589387 -10800 > # Wed Feb 23 07:09:47 2022 +0300 > # Node ID e80adbf788f6796c6bdf415938abb19b7aa43e3e > # Parent 04e314eb6b4d20a48c5d7bab0609e1b03b51b406 > The "sort=" parameter of the "resolver" directive. As already noted, should be "prefer=". 
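(For reference, with the rename the directive line would look like this;
a sketch of the proposed syntax only, with an arbitrary name server
address, since the parameter is not part of any released nginx:

    resolver 127.0.0.1 prefer=ipv4;
)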
> > diff -r 04e314eb6b4d -r e80adbf788f6 src/core/ngx_resolver.c > --- a/src/core/ngx_resolver.c Wed Feb 23 07:08:37 2022 +0300 > +++ b/src/core/ngx_resolver.c Wed Feb 23 07:09:47 2022 +0300 > @@ -227,6 +227,7 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > } > > #if (NGX_HAVE_INET6) > + > if (ngx_strncmp(names[i].data, "ipv4=", 5) == 0) { > > if (ngx_strcmp(&names[i].data[5], "on") == 0) { > @@ -260,6 +261,24 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > > continue; > } > + > + if (ngx_strncmp(names[i].data, "prefer=", 7) == 0) { > + > + if (ngx_strcmp(&names[i].data[7], "ipv4") == 0) { > + r->prefer = NGX_RESOLVE_PREFER_A; > + > + } else if (ngx_strcmp(&names[i].data[7], "ipv6") == 0) { > + r->prefer = NGX_RESOLVE_PREFER_AAAA; > + > + } else { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "invalid parameter: %V", &names[i]); > + return NULL; > + } > + > + continue; > + } > + > #endif > > ngx_memzero(&u, sizeof(ngx_url_t)); > @@ -4250,7 +4269,27 @@ ngx_resolver_export(ngx_resolver_t *r, n > } > > i = 0; > - d = rotate ? ngx_random() % n : 0; > + > + switch (r->prefer) { > + > +#if (NGX_HAVE_INET6) > + case NGX_RESOLVE_PREFER_A: > + d = 0; > + break; > + > + case NGX_RESOLVE_PREFER_AAAA: > + d = rn->naddrs6; > + > + if (d == n) { > + d = 0; > + } > + > + break; > +#endif > + > + default: > + d = rotate ? ngx_random() % n : 0; > + } With this code, a configuration like this: resolver ... prefer=ipv4; set $foo ""; proxy_pass http://example.com$foo; will result in only IPv4 addresses being used assuming successful connections, and IPv6 addresses being used only as a backup. This looks quite different from the current behaviour, as well as from what we do with proxy_pass http://example.com; when using system resolver. Not sure we want to introduce such behaviour. While it might be closer to what RFC 6724 recommends for clients, it is clearly not in line with how we handle multiple upstream addresses in general, and certainly will confuse users. If we want to introduce this, it probably should be at least consistent within resolver vs. system resolver cases. > > if (rn->naddrs) { > j = rotate ? ngx_random() % rn->naddrs : 0; > diff -r 04e314eb6b4d -r e80adbf788f6 src/core/ngx_resolver.h > --- a/src/core/ngx_resolver.h Wed Feb 23 07:08:37 2022 +0300 > +++ b/src/core/ngx_resolver.h Wed Feb 23 07:09:47 2022 +0300 > @@ -36,6 +36,9 @@ > > #define NGX_RESOLVER_MAX_RECURSION 50 > > +#define NGX_RESOLVE_PREFER_A 1 > +#define NGX_RESOLVE_PREFER_AAAA 2 > + > > typedef struct ngx_resolver_s ngx_resolver_t; > > @@ -175,6 +178,8 @@ struct ngx_resolver_s { > ngx_queue_t srv_expire_queue; > ngx_queue_t addr_expire_queue; > > + unsigned prefer:2; > + > unsigned ipv4:1; > > #if (NGX_HAVE_INET6) -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Jul 6 01:23:36 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Jul 2022 04:23:36 +0300 Subject: [PATCH 1 of 2] The "ipv4=" parameter of the "resolver" directive In-Reply-To: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> References: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> Message-ID: Hello! On Tue, Jun 28, 2022 at 08:25:35PM +0400, Sergey Kandaurov wrote: > # HG changeset patch > # User Ruslan Ermilov > # Date 1645589317 -10800 > # Wed Feb 23 07:08:37 2022 +0300 > # Node ID 04e314eb6b4d20a48c5d7bab0609e1b03b51b406 > # Parent fecd73db563fb64108f7669eca419badb2aba633 > The "ipv4=" parameter of the "resolver" directive. > > When set to "off", only IPv6 addresses will be resolved, and no > A queries are ever sent (ticket #2196). 
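(A usage sketch, assuming the parameter as described above and an
arbitrary name server address:

    resolver 127.0.0.1 ipv4=off;

With such a configuration only AAAA queries are sent, and only IPv6
addresses are returned to the caller.)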
> > diff -r fecd73db563f -r 04e314eb6b4d src/core/ngx_resolver.c > --- a/src/core/ngx_resolver.c Tue Jun 21 17:25:37 2022 +0300 > +++ b/src/core/ngx_resolver.c Wed Feb 23 07:08:37 2022 +0300 > @@ -157,6 +157,8 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > cln->handler = ngx_resolver_cleanup; > cln->data = r; > > + r->ipv4 = 1; > + > ngx_rbtree_init(&r->name_rbtree, &r->name_sentinel, > ngx_resolver_rbtree_insert_value); > > @@ -225,6 +227,23 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > } > > #if (NGX_HAVE_INET6) > + if (ngx_strncmp(names[i].data, "ipv4=", 5) == 0) { > + > + if (ngx_strcmp(&names[i].data[5], "on") == 0) { > + r->ipv4 = 1; > + > + } else if (ngx_strcmp(&names[i].data[5], "off") == 0) { > + r->ipv4 = 0; > + > + } else { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "invalid parameter: %V", &names[i]); > + return NULL; > + } > + > + continue; > + } > + > if (ngx_strncmp(names[i].data, "ipv6=", 5) == 0) { > > if (ngx_strcmp(&names[i].data[5], "on") == 0) { > @@ -273,6 +292,14 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > } > } > > +#if (NGX_HAVE_INET6) > + if (r->ipv4 + r->ipv6 == 0) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "\"ipv4\" and \"ipv6\" cannot both be \"off\""); > + return NULL; > + } > +#endif > + > if (n && r->connections.nelts == 0) { > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "no name servers defined"); > return NULL; > @@ -836,7 +863,7 @@ ngx_resolve_name_locked(ngx_resolver_t * > r->last_connection = 0; > } > > - rn->naddrs = (u_short) -1; > + rn->naddrs = r->ipv4 ? (u_short) -1 : 0; > rn->tcp = 0; > #if (NGX_HAVE_INET6) > rn->naddrs6 = r->ipv6 ? (u_short) -1 : 0; > @@ -1263,7 +1290,7 @@ ngx_resolver_send_query(ngx_resolver_t * > rec->log.action = "resolving"; > } > > - if (rn->naddrs == (u_short) -1) { > + if (rn->query && rn->naddrs == (u_short) -1) { > rc = rn->tcp ? ngx_resolver_send_tcp_query(r, rec, rn->query, rn->qlen) > : ngx_resolver_send_udp_query(r, rec, rn->query, rn->qlen); > > @@ -1765,10 +1792,13 @@ ngx_resolver_process_response(ngx_resolv > q = ngx_queue_next(q)) > { > rn = ngx_queue_data(q, ngx_resolver_node_t, queue); > - qident = (rn->query[0] << 8) + rn->query[1]; > - > - if (qident == ident) { > - goto dns_error_name; > + > + if (rn->query) { > + qident = (rn->query[0] << 8) + rn->query[1]; > + > + if (qident == ident) { > + goto dns_error_name; > + } > } > > #if (NGX_HAVE_INET6) > @@ -3645,7 +3675,7 @@ ngx_resolver_create_name_query(ngx_resol > len = sizeof(ngx_resolver_hdr_t) + nlen + sizeof(ngx_resolver_qs_t); > > #if (NGX_HAVE_INET6) > - p = ngx_resolver_alloc(r, r->ipv6 ? len * 2 : len); > + p = ngx_resolver_alloc(r, len * (r->ipv4 + r->ipv6)); > #else > p = ngx_resolver_alloc(r, len); > #endif > @@ -3654,23 +3684,28 @@ ngx_resolver_create_name_query(ngx_resol > } > > rn->qlen = (u_short) len; > - rn->query = p; > + > + if (r->ipv4) { > + rn->query = p; > + } > > #if (NGX_HAVE_INET6) > if (r->ipv6) { > - rn->query6 = p + len; > + rn->query6 = r->ipv4 ? 
(p + len) : p; > } > #endif > > query = (ngx_resolver_hdr_t *) p; > > - ident = ngx_random(); > - > - ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, > - "resolve: \"%V\" A %i", name, ident & 0xffff); > - > - query->ident_hi = (u_char) ((ident >> 8) & 0xff); > - query->ident_lo = (u_char) (ident & 0xff); > + if (r->ipv4) { > + ident = ngx_random(); > + > + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, > + "resolve: \"%V\" A %i", name, ident & 0xffff); > + > + query->ident_hi = (u_char) ((ident >> 8) & 0xff); > + query->ident_lo = (u_char) (ident & 0xff); > + } > > /* recursion query */ > query->flags_hi = 1; query->flags_lo = 0; > @@ -3731,7 +3766,9 @@ ngx_resolver_create_name_query(ngx_resol > > p = rn->query6; > > - ngx_memcpy(p, rn->query, rn->qlen); > + if (r->ipv4) { > + ngx_memcpy(p, rn->query, rn->qlen); > + } > > query = (ngx_resolver_hdr_t *) p; > > diff -r fecd73db563f -r 04e314eb6b4d src/core/ngx_resolver.h > --- a/src/core/ngx_resolver.h Tue Jun 21 17:25:37 2022 +0300 > +++ b/src/core/ngx_resolver.h Wed Feb 23 07:08:37 2022 +0300 > @@ -175,8 +175,10 @@ struct ngx_resolver_s { > ngx_queue_t srv_expire_queue; > ngx_queue_t addr_expire_queue; > > + unsigned ipv4:1; > + > #if (NGX_HAVE_INET6) > - ngx_uint_t ipv6; /* unsigned ipv6:1; */ > + unsigned ipv6:1; > ngx_rbtree_t addr6_rbtree; > ngx_rbtree_node_t addr6_sentinel; > ngx_queue_t addr6_resend_queue; Looks good. -- Maxim Dounin http://mdounin.ru/ From amira.solo at gmail.com Wed Jul 6 03:02:31 2022 From: amira.solo at gmail.com (Amira S) Date: Wed, 6 Jul 2022 06:02:31 +0300 Subject: Adding support for OpenSSL engine in Nginx Ingress controller In-Reply-To: References: Message-ID: Thank you - I will post this question in the nginx mailing list. On Mon, Jul 4, 2022 at 9:20 PM Maxim Dounin wrote: > Hello! > > On Mon, Jul 04, 2022 at 06:11:18AM +0300, Amira S wrote: > > > Hello, > > > > I want to add support for an ssl_engine + ssl_certificate/key directives > in > > the nignx.conf that configures an nginx server for ingress on kubernetes. > > > > This functionality is not provided by default, and I read that Snippets > may > > be the recommended way to add such support. > > > > Could you please assist me in adding such support? > > The ssl_engine should be part of the main-snippets but the > > ssl_certificate/key are under http and then under server, so not sure if > > http-snippets or server-snippets should be used. > > > > Also, if anyone could point me to a similar configuration sample, that > > would be very helpful. > > The Nginx Ingress Controller is a completely separate project, you > may want to use its own resources to request additional features. > > If you need help with configuring nginx, please use the nginx@ > mailing list instead; the nginx-devel@ mailing list is dedicated > to nginx development. > > Thank you. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Wed Jul 6 06:02:36 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 06 Jul 2022 06:02:36 +0000 Subject: [njs] Ensuring that double type is always evaluated at standard precision. 
Message-ID: details: https://hg.nginx.org/njs/rev/545d2d21dda5 branches: changeset: 1903:545d2d21dda5 user: Dmitry Volyntsev date: Tue Jul 05 22:58:12 2022 -0700 description: Ensuring that double type is always evaluated at standard precision. Previously, GCC on x86 uses extended precision for intermediate calculations by default. This might conflict with njs_diyfp_t because GCC is not always rounds back the intermediate values to standard precision. The fix is to explicitly tell to a compiler to do so. This closes #507 issue on Github. diffstat: auto/types | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 54 insertions(+), 0 deletions(-) diffs (61 lines): diff -r b7c4e0f714a9 -r 545d2d21dda5 auto/types --- a/auto/types Tue Jun 28 23:04:00 2022 -0700 +++ b/auto/types Tue Jul 05 22:58:12 2022 -0700 @@ -118,3 +118,57 @@ njs_feature_test="#include return 0; }" . auto/feature + + +# Ensuring that double type is always evaluated at standard +# precision required by njs_diyfp_t + + +case $NJS_CC_NAME in + + gcc) + NJS_CFLAGS="$NJS_CFLAGS -fexcess-precision=standard" + ;; + + clang) + + njs_found=no + + njs_feature="flag -ffp-eval-method=double" + njs_feature_name=NJS_HAVE_FP_EVAL_METHOD + njs_feature_run=no + njs_feature_incs="-ffp-eval-method=double" + njs_feature_libs= + njs_feature_test="int main(void) { + return 0; + }" + + . auto/feature + + if [ $njs_found = yes ]; then + NJS_CFLAGS="$NJS_CFLAGS -ffp-eval-method=double" + fi + + ;; + + SunC) + + njs_found=no + + njs_feature="flag -xarch=sse2" + njs_feature_name=NJS_HAVE_XARCH_SSE2 + njs_feature_run=no + njs_feature_incs="-xarch=sse2" + njs_feature_libs= + njs_feature_test="int main(void) { + return 0; + }" + + . auto/feature + + if [ $njs_found = yes ]; then + NJS_CFLAGS="$NJS_CFLAGS -xarch=sse2" + fi + ;; + +esac From xeioex at nginx.com Wed Jul 6 23:56:44 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 06 Jul 2022 23:56:44 +0000 Subject: [njs] HTTP: fixed r.headersOut setter for special headers. Message-ID: details: https://hg.nginx.org/njs/rev/ec90809374c0 branches: changeset: 1904:ec90809374c0 user: Dmitry Volyntsev date: Wed Jul 06 16:52:50 2022 -0700 description: HTTP: fixed r.headersOut setter for special headers. The issue was introduced in 5b7676ec600d (0.7.5) when njs module was adapted to changes in nginx/1.23 related to header structures. When special headers (Content-Length, Content-Type, Content-Encoding) were set, the value of the last outgoing header might be overwritten with a new set value. This closes #555 issue on Github. 
diffstat: nginx/ngx_http_js_module.c | 7 +++++-- 1 files changed, 5 insertions(+), 2 deletions(-) diffs (27 lines): diff -r 545d2d21dda5 -r ec90809374c0 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Tue Jul 05 22:58:12 2022 -0700 +++ b/nginx/ngx_http_js_module.c Wed Jul 06 16:52:50 2022 -0700 @@ -3836,7 +3836,6 @@ ngx_http_js_header_out_special(njs_vm_t return NJS_ERROR; } - h = NULL; part = &headers->part; header = part->elts; @@ -3861,10 +3860,14 @@ ngx_http_js_header_out_special(njs_vm_t if (h->key.len == v->length && ngx_strncasecmp(h->key.data, v->start, v->length) == 0) { - break; + goto done; } } + h = NULL; + +done: + if (h != NULL && s.length == 0) { h->hash = 0; h = NULL; From pluknet at nginx.com Thu Jul 7 15:49:51 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 7 Jul 2022 19:49:51 +0400 Subject: [PATCH 2 of 2] The "sort=" parameter of the "resolver" directive In-Reply-To: References: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> Message-ID: <89FB05FF-EA2A-4AA5-8686-775CEF0A56F8@nginx.com> > On 6 Jul 2022, at 05:23, Maxim Dounin wrote: > > Hello! > > On Tue, Jun 28, 2022 at 08:25:36PM +0400, Sergey Kandaurov wrote: > >> # HG changeset patch >> # User Ruslan Ermilov >> # Date 1645589387 -10800 >> # Wed Feb 23 07:09:47 2022 +0300 >> # Node ID e80adbf788f6796c6bdf415938abb19b7aa43e3e >> # Parent 04e314eb6b4d20a48c5d7bab0609e1b03b51b406 >> The "sort=" parameter of the "resolver" directive. > > As already noted, should be "prefer=". > Fixed, thanks. >> >> diff -r 04e314eb6b4d -r e80adbf788f6 src/core/ngx_resolver.c >> --- a/src/core/ngx_resolver.c Wed Feb 23 07:08:37 2022 +0300 >> +++ b/src/core/ngx_resolver.c Wed Feb 23 07:09:47 2022 +0300 >> @@ -227,6 +227,7 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ >> } >> >> #if (NGX_HAVE_INET6) >> + >> if (ngx_strncmp(names[i].data, "ipv4=", 5) == 0) { >> >> if (ngx_strcmp(&names[i].data[5], "on") == 0) { >> @@ -260,6 +261,24 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ >> >> continue; >> } >> + >> + if (ngx_strncmp(names[i].data, "prefer=", 7) == 0) { >> + >> + if (ngx_strcmp(&names[i].data[7], "ipv4") == 0) { >> + r->prefer = NGX_RESOLVE_PREFER_A; >> + >> + } else if (ngx_strcmp(&names[i].data[7], "ipv6") == 0) { >> + r->prefer = NGX_RESOLVE_PREFER_AAAA; >> + >> + } else { >> + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, >> + "invalid parameter: %V", &names[i]); >> + return NULL; >> + } >> + >> + continue; >> + } >> + >> #endif >> >> ngx_memzero(&u, sizeof(ngx_url_t)); >> @@ -4250,7 +4269,27 @@ ngx_resolver_export(ngx_resolver_t *r, n >> } >> >> i = 0; >> - d = rotate ? ngx_random() % n : 0; >> + >> + switch (r->prefer) { >> + >> +#if (NGX_HAVE_INET6) >> + case NGX_RESOLVE_PREFER_A: >> + d = 0; >> + break; >> + >> + case NGX_RESOLVE_PREFER_AAAA: >> + d = rn->naddrs6; >> + >> + if (d == n) { >> + d = 0; >> + } >> + >> + break; >> +#endif >> + >> + default: >> + d = rotate ? ngx_random() % n : 0; >> + } > > With this code, a configuration like this: > > resolver ... prefer=ipv4; > set $foo ""; > proxy_pass http://example.com$foo; > > will result in only IPv4 addresses being used assuming successful > connections, and IPv6 addresses being used only as a backup. This > looks quite different from the current behaviour, as well as from > what we do with > > proxy_pass http://example.com; > > when using system resolver. Can you please elaborate, what specific concerns are you referring to? 
The prefer option implements exactly the expected behaviour: first, a flat array is populated with preferred addresses (IPv4 for "prefer=ipv4", if any), then - with the rest, such as IPv6. The API user iterates though them until she gets a "successful" address. If the name is in the resolver cache, then rotation is also applied. The default nginx resolver behaviour is to rotate resolved addresses regardless of address families. Unlike this, in case of "prefer=ipv4", addresses are rotated within address families, that is, AFs are sorted: ipv4_x, ipv4_y, ipv4_z; ipv6_x, ipv6_y, ipv6_z This is close to how system resolver is used with getaddrinfo(), which depends on a preference and, if applicable, AF/address reachability. In the latter, I refer to AI_ADDRCONFIG and to getaddrinfo() implementation that uses connect(2)/getsockname(2) to get address family for reordering. E.g., even if a network interface has IPv6 addresses, they might still not be respected and will be inserted to the tail, regardless of policy (FreeBSD's libc hacked a bit for demo purposes): getaddrinfo: aio[0] flags=400 family=28 socktype=1 proto=6 connect failed aio[1] flags=400 family=28 socktype=1 proto=6 connect failed aio[2] flags=400 family=2 socktype=1 proto=6 srcsa 2 aio[3] flags=400 family=2 socktype=1 proto=6 srcsa 2 qsort(comp_dst) aio[0] flags=400 family=2 socktype=1 proto=6 aio[1] flags=400 family=2 socktype=1 proto=6 aio[2] flags=400 family=28 socktype=1 proto=6 aio[3] flags=400 family=28 socktype=1 proto=6 ngx_inet_resolve_host: family 2 family 2 family 28 family 28 But in general (e.g., for "localhost"), this works well: prefer_ipv4: family 2 family 28 prefer_ipv6: family 28 family 2 > > Not sure we want to introduce such behaviour. While it might be > closer to what RFC 6724 recommends for clients, it is clearly not > in line with how we handle multiple upstream addresses in general, > and certainly will confuse users. If we want to introduce this, > it probably should be at least consistent within resolver vs. > system resolver cases. If you refer to how we balance though multiple addresses in upstream implicitly defined with proxy_pass vs. proxy_pass with variable, then I tend to agree with you. In implicitly defined upstream, addresses are selected with rr balancer, which eventually makes them tried all. OTOH, the prefer option used for proxy_pass with variable, factually moves the unprefer addresses to backup, and since the upstream group isn't preserved across requests, this makes them almost never tried. But this is how proxy_pass with variable is used to work. > >> >> if (rn->naddrs) { >> j = rotate ? 
ngx_random() % rn->naddrs : 0; >> diff -r 04e314eb6b4d -r e80adbf788f6 src/core/ngx_resolver.h >> --- a/src/core/ngx_resolver.h Wed Feb 23 07:08:37 2022 +0300 >> +++ b/src/core/ngx_resolver.h Wed Feb 23 07:09:47 2022 +0300 >> @@ -36,6 +36,9 @@ >> >> #define NGX_RESOLVER_MAX_RECURSION 50 >> >> +#define NGX_RESOLVE_PREFER_A 1 >> +#define NGX_RESOLVE_PREFER_AAAA 2 >> + >> >> typedef struct ngx_resolver_s ngx_resolver_t; >> >> @@ -175,6 +178,8 @@ struct ngx_resolver_s { >> ngx_queue_t srv_expire_queue; >> ngx_queue_t addr_expire_queue; >> >> + unsigned prefer:2; >> + >> unsigned ipv4:1; >> >> #if (NGX_HAVE_INET6) > -- Sergey Kandaurov From mdounin at mdounin.ru Fri Jul 8 00:35:28 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Jul 2022 03:35:28 +0300 Subject: [PATCH 2 of 2] The "sort=" parameter of the "resolver" directive In-Reply-To: <89FB05FF-EA2A-4AA5-8686-775CEF0A56F8@nginx.com> References: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> <89FB05FF-EA2A-4AA5-8686-775CEF0A56F8@nginx.com> Message-ID: Hello! On Thu, Jul 07, 2022 at 07:49:51PM +0400, Sergey Kandaurov wrote: > >> @@ -4250,7 +4269,27 @@ ngx_resolver_export(ngx_resolver_t *r, n > >> } > >> > >> i = 0; > >> - d = rotate ? ngx_random() % n : 0; > >> + > >> + switch (r->prefer) { > >> + > >> +#if (NGX_HAVE_INET6) > >> + case NGX_RESOLVE_PREFER_A: > >> + d = 0; > >> + break; > >> + > >> + case NGX_RESOLVE_PREFER_AAAA: > >> + d = rn->naddrs6; > >> + > >> + if (d == n) { > >> + d = 0; > >> + } > >> + > >> + break; > >> +#endif > >> + > >> + default: > >> + d = rotate ? ngx_random() % n : 0; > >> + } > > > > With this code, a configuration like this: > > > > resolver ... prefer=ipv4; > > set $foo ""; > > proxy_pass http://example.com$foo; > > > > will result in only IPv4 addresses being used assuming successful > > connections, and IPv6 addresses being used only as a backup. This > > looks quite different from the current behaviour, as well as from > > what we do with > > > > proxy_pass http://example.com; > > > > when using system resolver. > > Can you please elaborate, what specific concerns are you referring to? > The prefer option implements exactly the expected behaviour: > first, a flat array is populated with preferred addresses > (IPv4 for "prefer=ipv4", if any), then - with the rest, such as IPv6. > The API user iterates though them until she gets a "successful" address. > > If the name is in the resolver cache, then rotation is also applied. > The default nginx resolver behaviour is to rotate resolved addresses > regardless of address families. Unlike this, in case of "prefer=ipv4", > addresses are rotated within address families, that is, AFs are sorted: > ipv4_x, ipv4_y, ipv4_z; ipv6_x, ipv6_y, ipv6_z > > This is close to how system resolver is used with getaddrinfo(), which > depends on a preference and, if applicable, AF/address reachability. Try the two above configurations with a name which resolves to 127.0.0.1 and ::1, and with both addresses responding on port 80. Configuration without variables (using system resolver) will balance requests between both addresses, regardless of system resolver settings. Configuration with variables and resolver with "prefer=ipv4" will use only the IPv4 address. 
server { listen localhost:8080; location /dynamic/ { resolver 8.8.8.8 prefer=ipv4; set $foo ""; proxy_pass http://test.mdounin.ru:8081$foo; } location /static/ { proxy_pass http://test.mdounin.ru:8082; } } server { listen test.mdounin.ru:8081; listen test.mdounin.ru:8082; return 200 $server_addr\n; } Static configuration without variables uses both addresses: $ curl http://127.0.0.1:8080/static/ 127.0.0.1 $ curl http://127.0.0.1:8080/static/ ::1 $ curl http://127.0.0.1:8080/static/ 127.0.0.1 $ curl http://127.0.0.1:8080/static/ ::1 Dynamic configuration with "prefer=ipv4" will only use IPv4 (IPv6 addresses will be used only in case of errors): $ curl http://127.0.0.1:8080/dynamic/ 127.0.0.1 $ curl http://127.0.0.1:8080/dynamic/ 127.0.0.1 $ curl http://127.0.0.1:8080/dynamic/ 127.0.0.1 $ curl http://127.0.0.1:8080/dynamic/ 127.0.0.1 [...] > > Not sure we want to introduce such behaviour. While it might be > > closer to what RFC 6724 recommends for clients, it is clearly not > > in line with how we handle multiple upstream addresses in general, > > and certainly will confuse users. If we want to introduce this, > > it probably should be at least consistent within resolver vs. > > system resolver cases. > > If you refer to how we balance though multiple addresses in upstream > implicitly defined with proxy_pass vs. proxy_pass with variable, then > I tend to agree with you. In implicitly defined upstream, addresses > are selected with rr balancer, which eventually makes them tried all. > OTOH, the prefer option used for proxy_pass with variable, factually > moves the unprefer addresses to backup, and since the upstream group > isn't preserved across requests, this makes them almost never tried. > But this is how proxy_pass with variable is used to work. Yes, I refer to the difference in handling of multiple upstream addresses which is introduced with this change. Right now there are no difference in the behaviour of static proxy_pass (without variables) and dynamic one (with variables). With "prefer=ipv4" as implemented the difference appears, and this certainly breaks POLA. One possible option would be to change "prefer=" to rotate all addresses, so proxy_pass will try them all. With this approach, "prefer=ipv4" would be exactly equivalent to the default behaviour (on the first resolution, resolver returns list of all IPv4 addresses, followed by all IPv6 addresses, and then addresses are rotated) and "prefer=ipv6" would use the reverse order on the first resolution (IPv6 followed by IPv4). Not sure it is at all needed though (but might be still beneficial for tasks like OCSP server resolution). -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Tue Jul 12 14:59:39 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 12 Jul 2022 18:59:39 +0400 Subject: [PATCH 2 of 2] The "sort=" parameter of the "resolver" directive In-Reply-To: References: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> <89FB05FF-EA2A-4AA5-8686-775CEF0A56F8@nginx.com> Message-ID: > On 8 Jul 2022, at 04:35, Maxim Dounin wrote: > > Hello! > > On Thu, Jul 07, 2022 at 07:49:51PM +0400, Sergey Kandaurov wrote: > >>>> @@ -4250,7 +4269,27 @@ ngx_resolver_export(ngx_resolver_t *r, n >>>> } >>>> >>>> i = 0; >>>> - d = rotate ? 
ngx_random() % n : 0; >>>> + >>>> + switch (r->prefer) { >>>> + >>>> +#if (NGX_HAVE_INET6) >>>> + case NGX_RESOLVE_PREFER_A: >>>> + d = 0; >>>> + break; >>>> + >>>> + case NGX_RESOLVE_PREFER_AAAA: >>>> + d = rn->naddrs6; >>>> + >>>> + if (d == n) { >>>> + d = 0; >>>> + } >>>> + >>>> + break; >>>> +#endif >>>> + >>>> + default: >>>> + d = rotate ? ngx_random() % n : 0; >>>> + } >>> >>> With this code, a configuration like this: >>> >>> resolver ... prefer=ipv4; >>> set $foo ""; >>> proxy_pass http://example.com$foo; >>> >>> will result in only IPv4 addresses being used assuming successful >>> connections, and IPv6 addresses being used only as a backup. This >>> looks quite different from the current behaviour, as well as from >>> what we do with >>> >>> proxy_pass http://example.com; >>> >>> when using system resolver. >> >> Can you please elaborate, what specific concerns are you referring to? >> The prefer option implements exactly the expected behaviour: >> first, a flat array is populated with preferred addresses >> (IPv4 for "prefer=ipv4", if any), then - with the rest, such as IPv6. >> The API user iterates though them until she gets a "successful" address. >> >> If the name is in the resolver cache, then rotation is also applied. >> The default nginx resolver behaviour is to rotate resolved addresses >> regardless of address families. Unlike this, in case of "prefer=ipv4", >> addresses are rotated within address families, that is, AFs are sorted: >> ipv4_x, ipv4_y, ipv4_z; ipv6_x, ipv6_y, ipv6_z >> >> This is close to how system resolver is used with getaddrinfo(), which >> depends on a preference and, if applicable, AF/address reachability. > > Try the two above configurations with a name which resolves to > 127.0.0.1 and ::1, and with both addresses responding on port 80. > Configuration without variables (using system resolver) will > balance requests between both addresses, regardless of system > resolver settings. Configuration with variables and resolver with > "prefer=ipv4" will use only the IPv4 address. > > server { > listen localhost:8080; > > location /dynamic/ { > resolver 8.8.8.8 prefer=ipv4; > set $foo ""; > proxy_pass http://test.mdounin.ru:8081$foo; > } > > location /static/ { > proxy_pass http://test.mdounin.ru:8082; > } > } > > server { > listen test.mdounin.ru:8081; > listen test.mdounin.ru:8082; > return 200 $server_addr\n; > } > > Static configuration without variables uses both addresses: > > $ curl http://127.0.0.1:8080/static/ > 127.0.0.1 > $ curl http://127.0.0.1:8080/static/ > ::1 > $ curl http://127.0.0.1:8080/static/ > 127.0.0.1 > $ curl http://127.0.0.1:8080/static/ > ::1 > > Dynamic configuration with "prefer=ipv4" will only use IPv4 (IPv6 > addresses will be used only in case of errors): > > $ curl http://127.0.0.1:8080/dynamic/ > 127.0.0.1 > $ curl http://127.0.0.1:8080/dynamic/ > 127.0.0.1 > $ curl http://127.0.0.1:8080/dynamic/ > 127.0.0.1 > $ curl http://127.0.0.1:8080/dynamic/ > 127.0.0.1 > > [...] Thanks for clarification. > >>> Not sure we want to introduce such behaviour. While it might be >>> closer to what RFC 6724 recommends for clients, it is clearly not >>> in line with how we handle multiple upstream addresses in general, >>> and certainly will confuse users. If we want to introduce this, >>> it probably should be at least consistent within resolver vs. >>> system resolver cases. >> >> If you refer to how we balance though multiple addresses in upstream >> implicitly defined with proxy_pass vs. 
proxy_pass with variable, then >> I tend to agree with you. In implicitly defined upstream, addresses >> are selected with rr balancer, which eventually makes them tried all. >> OTOH, the prefer option used for proxy_pass with variable, factually >> moves the unprefer addresses to backup, and since the upstream group >> isn't preserved across requests, this makes them almost never tried. >> But this is how proxy_pass with variable is used to work. > > Yes, I refer to the difference in handling of multiple upstream > addresses which is introduced with this change. Right now there > are no difference in the behaviour of static proxy_pass (without > variables) and dynamic one (with variables). With "prefer=ipv4" > as implemented the difference appears, and this certainly breaks > POLA. > > One possible option would be to change "prefer=" to rotate all > addresses, so proxy_pass will try them all. With this approach, > "prefer=ipv4" would be exactly equivalent to the default behaviour > (on the first resolution, resolver returns list of all IPv4 > addresses, followed by all IPv6 addresses, and then addresses are > rotated) and "prefer=ipv6" would use the reverse order on the > first resolution (IPv6 followed by IPv4). Not sure it is at all > needed though (but might be still beneficial for tasks like OCSP > server resolution). Updated patch to rotate regardless of preference. Below is hg diff -w on top off of the previous one, for clarity: diff -r d9a8c2d87055 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Wed Jul 06 17:10:24 2022 +0400 +++ b/src/core/ngx_resolver.c Tue Jul 12 18:55:56 2022 +0400 @@ -4270,6 +4270,10 @@ ngx_resolver_export(ngx_resolver_t *r, n i = 0; + if (rotate) { + d = ngx_random() % n; + + } else { switch (r->prefer) { #if (NGX_HAVE_INET6) @@ -4288,7 +4292,8 @@ ngx_resolver_export(ngx_resolver_t *r, n #endif default: - d = rotate ? ngx_random() % n : 0; + d = 0; + } } if (rn->naddrs) { Personally, I think such patch has a little sense. Keeping preference is reasonable e.g. to avoid connections over certain address family, should they be slow/tunneled or otherwise uneven, which I believe is quite common in practice. Such way, they would be always tried as a last resort and indeed can be seen as a backup option. In that sense, I'm rather skeptical about rotation behaviour in static proxy_pass (by round-robin means) that doesn't distinguish address families and apparently is a legacy from times (4d68c486fcb0) before URLs obtained IPv6 support in eaf95350d75c. Changing it requires more investment though and certainly breaks POLA. OTOH, with "ipv4=" introduction, such unpreferred connections can be simply disabled. # HG changeset patch # User Ruslan Ermilov # Date 1657113024 -14400 # Wed Jul 06 17:10:24 2022 +0400 # Node ID 496a9b8d14048295723bd3e21ef4a9a1129cb97c # Parent d8c0fb82b7a8d3f958b13db06f2f911d95a41644 The "prefer=" parameter of the "resolver" directive. 
diff -r d8c0fb82b7a8 -r 496a9b8d1404 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Wed Jul 06 17:09:40 2022 +0400 +++ b/src/core/ngx_resolver.c Wed Jul 06 17:10:24 2022 +0400 @@ -227,6 +227,7 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ } #if (NGX_HAVE_INET6) + if (ngx_strncmp(names[i].data, "ipv4=", 5) == 0) { if (ngx_strcmp(&names[i].data[5], "on") == 0) { @@ -260,6 +261,24 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ continue; } + + if (ngx_strncmp(names[i].data, "prefer=", 7) == 0) { + + if (ngx_strcmp(&names[i].data[7], "ipv4") == 0) { + r->prefer = NGX_RESOLVE_PREFER_A; + + } else if (ngx_strcmp(&names[i].data[7], "ipv6") == 0) { + r->prefer = NGX_RESOLVE_PREFER_AAAA; + + } else { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid parameter: %V", &names[i]); + return NULL; + } + + continue; + } + #endif ngx_memzero(&u, sizeof(ngx_url_t)); @@ -4250,7 +4269,32 @@ ngx_resolver_export(ngx_resolver_t *r, n } i = 0; - d = rotate ? ngx_random() % n : 0; + + if (rotate) { + d = ngx_random() % n; + + } else { + switch (r->prefer) { + +#if (NGX_HAVE_INET6) + case NGX_RESOLVE_PREFER_A: + d = 0; + break; + + case NGX_RESOLVE_PREFER_AAAA: + d = rn->naddrs6; + + if (d == n) { + d = 0; + } + + break; +#endif + + default: + d = 0; + } + } if (rn->naddrs) { j = rotate ? ngx_random() % rn->naddrs : 0; diff -r d8c0fb82b7a8 -r 496a9b8d1404 src/core/ngx_resolver.h --- a/src/core/ngx_resolver.h Wed Jul 06 17:09:40 2022 +0400 +++ b/src/core/ngx_resolver.h Wed Jul 06 17:10:24 2022 +0400 @@ -36,6 +36,9 @@ #define NGX_RESOLVER_MAX_RECURSION 50 +#define NGX_RESOLVE_PREFER_A 1 +#define NGX_RESOLVE_PREFER_AAAA 2 + typedef struct ngx_resolver_s ngx_resolver_t; @@ -175,6 +178,8 @@ struct ngx_resolver_s { ngx_queue_t srv_expire_queue; ngx_queue_t addr_expire_queue; + unsigned prefer:2; + unsigned ipv4:1; #if (NGX_HAVE_INET6) -- Sergey Kandaurov From pluknet at nginx.com Tue Jul 12 15:15:58 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 12 Jul 2022 15:15:58 +0000 Subject: [nginx] SSL: logging levels of various errors added in OpenSSL 1.1.1. Message-ID: details: https://hg.nginx.org/nginx/rev/cac164d0807e branches: changeset: 8054:cac164d0807e user: Maxim Dounin date: Tue Jul 12 15:55:22 2022 +0300 description: SSL: logging levels of various errors added in OpenSSL 1.1.1. Starting with OpenSSL 1.1.1, various additional errors can be reported by OpenSSL in case of client-related issues, most notably during TLSv1.3 handshakes. In particular, SSL_R_BAD_KEY_SHARE ("bad key share"), SSL_R_BAD_EXTENSION ("bad extension"), SSL_R_BAD_CIPHER ("bad cipher"), SSL_R_BAD_ECPOINT ("bad ecpoint"). These are now logged at the "info" level. 
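(Usage note, not part of the commit message: these records are written
at the "info" level, so they only appear if the configured error log
level includes it, e.g.:

    error_log logs/error.log info;
)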
diffstat: src/event/ngx_event_openssl.c | 12 ++++++++++++ 1 files changed, 12 insertions(+), 0 deletions(-) diffs (36 lines): diff -r 9d98d524bd02 -r cac164d0807e src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Wed Jun 29 02:47:45 2022 +0300 +++ b/src/event/ngx_event_openssl.c Tue Jul 12 15:55:22 2022 +0300 @@ -3343,6 +3343,12 @@ ngx_ssl_connection_error(ngx_connection_ #ifdef SSL_R_NO_SUITABLE_KEY_SHARE || n == SSL_R_NO_SUITABLE_KEY_SHARE /* 101 */ #endif +#ifdef SSL_R_BAD_KEY_SHARE + || n == SSL_R_BAD_KEY_SHARE /* 108 */ +#endif +#ifdef SSL_R_BAD_EXTENSION + || n == SSL_R_BAD_EXTENSION /* 110 */ +#endif #ifdef SSL_R_NO_SUITABLE_SIGNATURE_ALGORITHM || n == SSL_R_NO_SUITABLE_SIGNATURE_ALGORITHM /* 118 */ #endif @@ -3357,6 +3363,9 @@ ngx_ssl_connection_error(ngx_connection_ || n == SSL_R_NO_CIPHERS_PASSED /* 182 */ #endif || n == SSL_R_NO_CIPHERS_SPECIFIED /* 183 */ +#ifdef SSL_R_BAD_CIPHER + || n == SSL_R_BAD_CIPHER /* 186 */ +#endif || n == SSL_R_NO_COMPRESSION_SPECIFIED /* 187 */ || n == SSL_R_NO_SHARED_CIPHER /* 193 */ || n == SSL_R_RECORD_LENGTH_MISMATCH /* 213 */ @@ -3391,6 +3400,9 @@ ngx_ssl_connection_error(ngx_connection_ #ifdef SSL_R_APPLICATION_DATA_ON_SHUTDOWN || n == SSL_R_APPLICATION_DATA_ON_SHUTDOWN /* 291 */ #endif +#ifdef SSL_R_BAD_ECPOINT + || n == SSL_R_BAD_ECPOINT /* 306 */ +#endif #ifdef SSL_R_RENEGOTIATE_EXT_TOO_LONG || n == SSL_R_RENEGOTIATE_EXT_TOO_LONG /* 335 */ || n == SSL_R_RENEGOTIATION_ENCODING_ERR /* 336 */ From xeioex at nginx.com Tue Jul 12 15:57:09 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 12 Jul 2022 15:57:09 +0000 Subject: [njs] Added btoa() and atob() from WHATWG spec. Message-ID: details: https://hg.nginx.org/njs/rev/3acc8a1d9088 branches: changeset: 1906:3acc8a1d9088 user: Dmitry Volyntsev date: Tue Jul 12 08:56:35 2022 -0700 description: Added btoa() and atob() from WHATWG spec. The functions encode and decode to Base64 and from Base64. 
diffstat: src/njs_builtin.c | 16 +++ src/njs_string.c | 239 +++++++++++++++++++++++++++++++++++++++++++++- src/njs_string.h | 4 + src/test/njs_unit_test.c | 64 ++++++++++++ 4 files changed, 315 insertions(+), 8 deletions(-) diffs (384 lines): diff -r efff48e84bdb -r 3acc8a1d9088 src/njs_builtin.c --- a/src/njs_builtin.c Mon Jul 11 07:25:03 2022 -0700 +++ b/src/njs_builtin.c Tue Jul 12 08:56:35 2022 -0700 @@ -1247,6 +1247,22 @@ static const njs_object_prop_t njs_glob { .type = NJS_PROPERTY, + .name = njs_long_string("atob"), + .value = njs_native_function(njs_string_atob, 1), + .writable = 1, + .configurable = 1, + }, + + { + .type = NJS_PROPERTY, + .name = njs_long_string("btoa"), + .value = njs_native_function(njs_string_btoa, 1), + .writable = 1, + .configurable = 1, + }, + + { + .type = NJS_PROPERTY, .name = njs_string("eval"), .value = njs_native_function(njs_eval_function, 1), .writable = 1, diff -r efff48e84bdb -r 3acc8a1d9088 src/njs_string.c --- a/src/njs_string.c Mon Jul 11 07:25:03 2022 -0700 +++ b/src/njs_string.c Tue Jul 12 08:56:35 2022 -0700 @@ -50,6 +50,12 @@ static u_char njs_basis64url[] = { }; +static u_char njs_basis64_enc[] = + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; +static u_char njs_basis64url_enc[] = + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"; + + static void njs_encode_base64_core(njs_str_t *dst, const njs_str_t *src, const u_char *basis, njs_uint_t padding); static njs_int_t njs_string_decode_base64_core(njs_vm_t *vm, @@ -310,10 +316,8 @@ njs_encode_hex_length(const njs_str_t *s void njs_encode_base64(njs_str_t *dst, const njs_str_t *src) { - static u_char basis64[] = - "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; - - njs_encode_base64_core(dst, src, basis64, 1); + + njs_encode_base64_core(dst, src, njs_basis64_enc, 1); } @@ -335,10 +339,7 @@ njs_encode_base64_length(const njs_str_t static void njs_encode_base64url(njs_str_t *dst, const njs_str_t *src) { - static u_char basis64[] = - "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"; - - njs_encode_base64_core(dst, src, basis64, 0); + njs_encode_base64_core(dst, src, njs_basis64url_enc, 0); } @@ -4728,6 +4729,228 @@ uri_error: } +njs_int_t +njs_string_btoa(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, + njs_index_t unused) +{ + u_char *dst; + size_t len, length; + uint32_t cp0, cp1, cp2; + njs_int_t ret; + njs_value_t *value, lvalue; + const u_char *p, *end; + njs_string_prop_t string; + njs_unicode_decode_t ctx; + + value = njs_lvalue_arg(&lvalue, args, nargs, 1); + + ret = njs_value_to_string(vm, value, value); + if (njs_slow_path(ret != NJS_OK)) { + return ret; + } + + len = njs_string_prop(&string, value); + + p = string.start; + end = string.start + string.size; + + njs_utf8_decode_init(&ctx); + + length = njs_base64_encoded_length(len); + + dst = njs_string_alloc(vm, &vm->retval, length, length); + if (njs_slow_path(dst == NULL)) { + return NJS_ERROR; + } + + while (len > 2 && p < end) { + cp0 = njs_utf8_decode(&ctx, &p, end); + cp1 = njs_utf8_decode(&ctx, &p, end); + cp2 = njs_utf8_decode(&ctx, &p, end); + + if (njs_slow_path(cp0 > 0xff || cp1 > 0xff || cp2 > 0xff)) { + goto error; + } + + *dst++ = njs_basis64_enc[cp0 >> 2]; + *dst++ = njs_basis64_enc[((cp0 & 0x03) << 4) | (cp1 >> 4)]; + *dst++ = njs_basis64_enc[((cp1 & 0x0f) << 2) | (cp2 >> 6)]; + *dst++ = njs_basis64_enc[cp2 & 0x3f]; + + len -= 3; + } + + if (len > 0) { + cp0 = njs_utf8_decode(&ctx, &p, end); + if (njs_slow_path(cp0 > 0xff)) { + goto error; 
+ } + + *dst++ = njs_basis64_enc[cp0 >> 2]; + + if (len == 1) { + *dst++ = njs_basis64_enc[(cp0 & 0x03) << 4]; + *dst++ = '='; + *dst++ = '='; + + } else { + cp1 = njs_utf8_decode(&ctx, &p, end); + if (njs_slow_path(cp1 > 0xff)) { + goto error; + } + + *dst++ = njs_basis64_enc[((cp0 & 0x03) << 4) | (cp1 >> 4)]; + *dst++ = njs_basis64_enc[(cp1 & 0x0f) << 2]; + *dst++ = '='; + } + + } + + return NJS_OK; + +error: + + njs_type_error(vm, "invalid character (>= U+00FF)"); + + return NJS_ERROR; +} + + +njs_inline void +njs_chb_write_byte_as_utf8(njs_chb_t *chain, u_char byte) +{ + njs_utf8_encode(njs_chb_current(chain), byte); + njs_chb_written(chain, njs_utf8_size(byte)); +} + + +njs_int_t +njs_string_atob(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, + njs_index_t unused) +{ + size_t i, n, len, pad; + u_char *dst, *tmp, *p; + ssize_t size; + njs_str_t str; + njs_int_t ret; + njs_chb_t chain; + njs_value_t *value, lvalue; + const u_char *b64, *s; + + value = njs_lvalue_arg(&lvalue, args, nargs, 1); + + ret = njs_value_to_string(vm, value, value); + if (njs_slow_path(ret != NJS_OK)) { + return ret; + } + + /* Forgiving-base64 decode. */ + + b64 = njs_basis64; + njs_string_get(value, &str); + + tmp = njs_mp_alloc(vm->mem_pool, str.length); + if (tmp == NULL) { + njs_memory_error(vm); + return NJS_ERROR; + } + + p = tmp; + + for (i = 0; i < str.length; i++) { + if (njs_slow_path(str.start[i] == ' ')) { + continue; + } + + *p++ = str.start[i]; + } + + pad = 0; + str.start = tmp; + str.length = p - tmp; + + if (str.length % 4 == 0) { + if (str.length > 0) { + if (str.start[str.length - 1] == '=') { + pad += 1; + } + + if (str.start[str.length - 2] == '=') { + pad += 1; + } + } + + } else if (str.length % 4 == 1) { + goto error; + } + + for (i = 0; i < str.length - pad; i++) { + if (njs_slow_path(b64[str.start[i]] == 77)) { + goto error; + } + } + + len = njs_base64_decoded_length(str.length, pad); + + njs_chb_init(&chain, vm->mem_pool); + + dst = njs_chb_reserve(&chain, len * 2); + if (njs_slow_path(dst == NULL)) { + njs_memory_error(vm); + return NJS_ERROR; + } + + n = len; + s = str.start; + + while (n >= 3) { + njs_chb_write_byte_as_utf8(&chain, b64[s[0]] << 2 | b64[s[1]] >> 4); + njs_chb_write_byte_as_utf8(&chain, b64[s[1]] << 4 | b64[s[2]] >> 2); + njs_chb_write_byte_as_utf8(&chain, b64[s[2]] << 6 | b64[s[3]]); + + s += 4; + n -= 3; + } + + if (n >= 1) { + njs_chb_write_byte_as_utf8(&chain, b64[s[0]] << 2 | b64[s[1]] >> 4); + } + + if (n >= 2) { + njs_chb_write_byte_as_utf8(&chain, b64[s[1]] << 4 | b64[s[2]] >> 2); + } + + size = njs_chb_size(&chain); + if (njs_slow_path(size < 0)) { + njs_memory_error(vm); + return NJS_ERROR; + } + + if (size == 0) { + njs_value_assign(&vm->retval, &njs_string_empty); + return NJS_OK; + } + + dst = njs_string_alloc(vm, &vm->retval, size, len); + if (njs_slow_path(dst == NULL)) { + return NJS_ERROR; + } + + njs_chb_join_to(&chain, dst); + njs_chb_destroy(&chain); + + njs_mp_free(vm->mem_pool, tmp); + + return NJS_OK; + +error: + + njs_type_error(vm, "the string to be decoded is not correctly encoded"); + + return NJS_ERROR; +} + + const njs_object_type_init_t njs_string_type_init = { .constructor = njs_native_ctor(njs_string_constructor, 1, 0), .constructor_props = &njs_string_constructor_init, diff -r efff48e84bdb -r 3acc8a1d9088 src/njs_string.h --- a/src/njs_string.h Mon Jul 11 07:25:03 2022 -0700 +++ b/src/njs_string.h Tue Jul 12 08:56:35 2022 -0700 @@ -251,6 +251,10 @@ njs_int_t njs_string_encode_uri(njs_vm_t njs_uint_t nargs, njs_index_t component); 
njs_int_t njs_string_decode_uri(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t component); +njs_int_t njs_string_btoa(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, + njs_index_t unused); +njs_int_t njs_string_atob(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, + njs_index_t unused); njs_int_t njs_string_prototype_concat(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused); diff -r efff48e84bdb -r 3acc8a1d9088 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Mon Jul 11 07:25:03 2022 -0700 +++ b/src/test/njs_unit_test.c Tue Jul 12 08:56:35 2022 -0700 @@ -9439,6 +9439,70 @@ static njs_unit_test_t njs_test[] = ".every(v=>{var r = v(); return (typeof r === 'string') && r === 'undefined';})"), njs_str("true")}, + /* btoa() */ + + { njs_str("[" + " undefined," + " ''," + " '\\x00'," + " '\\x00\\x01'," + " '\\x00\\x01\\x02'," + " '\\x00\\xfe\\xff'," + " String.fromCodePoint(0x100)," + " String.fromCodePoint(0x00, 0x100)," + " String.fromCodePoint(0x00, 0x01, 0x100)," + " String.bytesFrom([0x80])," + " String.bytesFrom([0x60, 0x80])," + " String.bytesFrom([0x60, 0x60, 0x80])," + "].map(v => { try { return btoa(v); } catch (e) { return '#'} })"), + njs_str("dW5kZWZpbmVk,,AA==,AAE=,AAEC,AP7/,#,#,#,#,#,#")}, + + /* atob() */ + + { njs_str("function c(s) {" + " let cp = [];" + " for (var i = 0; i < s.length; i++) {" + " cp.push(s.codePointAt(i));" + " }" + " return cp;" + "};" + "" + "[" + " undefined," + " ''," + " '='," + " '=='," + " '==='," + " '===='," + " 'AA@'," + " '@'," + " 'A==A'," + " btoa(String.fromCharCode.apply(null, [1]))," + " btoa(String.fromCharCode.apply(null, [1, 2]))," + " btoa(String.fromCharCode.apply(null, [1, 2, 255]))," + " btoa(String.fromCharCode.apply(null, [255, 1, 2, 3]))," + "].map(v => { try { return njs.dump(c(atob(v))); } catch (e) { return '#'} })"), + njs_str("#,[],#,#,#,#,#,#,#,[1],[1,2],[1,2,255],[255,1,2,3]")}, + + { njs_str("function c(s) {" + " let cp = [];" + " for (var i = 0; i < s.length; i++) {" + " cp.push(s.codePointAt(i));" + " }" + " return cp;" + "};" + "" + "[" + " 'CDRW'," + " ' CDRW'," + " 'C DRW'," + " 'CD RW'," + " 'CDR W'," + " 'CDRW '," + " ' C D R W '," + "].every(v => c(atob(v)).toString() == '8,52,86')"), + njs_str("true")}, + /* Functions. */ { njs_str("return"), From mdounin at mdounin.ru Tue Jul 12 16:19:43 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Jul 2022 19:19:43 +0300 Subject: [PATCH 2 of 2] The "sort=" parameter of the "resolver" directive In-Reply-To: References: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> <89FB05FF-EA2A-4AA5-8686-775CEF0A56F8@nginx.com> Message-ID: Hello! On Tue, Jul 12, 2022 at 06:59:39PM +0400, Sergey Kandaurov wrote: > > On 8 Jul 2022, at 04:35, Maxim Dounin wrote: > > > > On Thu, Jul 07, 2022 at 07:49:51PM +0400, Sergey Kandaurov wrote: > > > >>>> @@ -4250,7 +4269,27 @@ ngx_resolver_export(ngx_resolver_t *r, n > >>>> } > >>>> > >>>> i = 0; > >>>> - d = rotate ? ngx_random() % n : 0; > >>>> + > >>>> + switch (r->prefer) { > >>>> + > >>>> +#if (NGX_HAVE_INET6) > >>>> + case NGX_RESOLVE_PREFER_A: > >>>> + d = 0; > >>>> + break; > >>>> + > >>>> + case NGX_RESOLVE_PREFER_AAAA: > >>>> + d = rn->naddrs6; > >>>> + > >>>> + if (d == n) { > >>>> + d = 0; > >>>> + } > >>>> + > >>>> + break; > >>>> +#endif > >>>> + > >>>> + default: > >>>> + d = rotate ? ngx_random() % n : 0; > >>>> + } > >>> > >>> With this code, a configuration like this: > >>> > >>> resolver ... 
prefer=ipv4; > >>> set $foo ""; > >>> proxy_pass http://example.com$foo; > >>> > >>> will result in only IPv4 addresses being used assuming successful > >>> connections, and IPv6 addresses being used only as a backup. This > >>> looks quite different from the current behaviour, as well as from > >>> what we do with > >>> > >>> proxy_pass http://example.com; > >>> > >>> when using system resolver. > >> > >> Can you please elaborate, what specific concerns are you referring to? > >> The prefer option implements exactly the expected behaviour: > >> first, a flat array is populated with preferred addresses > >> (IPv4 for "prefer=ipv4", if any), then - with the rest, such as IPv6. > >> The API user iterates though them until she gets a "successful" address. > >> > >> If the name is in the resolver cache, then rotation is also applied. > >> The default nginx resolver behaviour is to rotate resolved addresses > >> regardless of address families. Unlike this, in case of "prefer=ipv4", > >> addresses are rotated within address families, that is, AFs are sorted: > >> ipv4_x, ipv4_y, ipv4_z; ipv6_x, ipv6_y, ipv6_z > >> > >> This is close to how system resolver is used with getaddrinfo(), which > >> depends on a preference and, if applicable, AF/address reachability. > > > > Try the two above configurations with a name which resolves to > > 127.0.0.1 and ::1, and with both addresses responding on port 80. > > Configuration without variables (using system resolver) will > > balance requests between both addresses, regardless of system > > resolver settings. Configuration with variables and resolver with > > "prefer=ipv4" will use only the IPv4 address. > > > > server { > > listen localhost:8080; > > > > location /dynamic/ { > > resolver 8.8.8.8 prefer=ipv4; > > set $foo ""; > > proxy_pass http://test.mdounin.ru:8081$foo; > > } > > > > location /static/ { > > proxy_pass http://test.mdounin.ru:8082; > > } > > } > > > > server { > > listen test.mdounin.ru:8081; > > listen test.mdounin.ru:8082; > > return 200 $server_addr\n; > > } > > > > Static configuration without variables uses both addresses: > > > > $ curl http://127.0.0.1:8080/static/ > > 127.0.0.1 > > $ curl http://127.0.0.1:8080/static/ > > ::1 > > $ curl http://127.0.0.1:8080/static/ > > 127.0.0.1 > > $ curl http://127.0.0.1:8080/static/ > > ::1 > > > > Dynamic configuration with "prefer=ipv4" will only use IPv4 (IPv6 > > addresses will be used only in case of errors): > > > > $ curl http://127.0.0.1:8080/dynamic/ > > 127.0.0.1 > > $ curl http://127.0.0.1:8080/dynamic/ > > 127.0.0.1 > > $ curl http://127.0.0.1:8080/dynamic/ > > 127.0.0.1 > > $ curl http://127.0.0.1:8080/dynamic/ > > 127.0.0.1 > > > > [...] > > Thanks for clarification. > > > > >>> Not sure we want to introduce such behaviour. While it might be > >>> closer to what RFC 6724 recommends for clients, it is clearly not > >>> in line with how we handle multiple upstream addresses in general, > >>> and certainly will confuse users. If we want to introduce this, > >>> it probably should be at least consistent within resolver vs. > >>> system resolver cases. > >> > >> If you refer to how we balance though multiple addresses in upstream > >> implicitly defined with proxy_pass vs. proxy_pass with variable, then > >> I tend to agree with you. In implicitly defined upstream, addresses > >> are selected with rr balancer, which eventually makes them tried all. 
> >> OTOH, the prefer option used for proxy_pass with variable, factually > >> moves the unprefer addresses to backup, and since the upstream group > >> isn't preserved across requests, this makes them almost never tried. > >> But this is how proxy_pass with variable is used to work. > > > > Yes, I refer to the difference in handling of multiple upstream > > addresses which is introduced with this change. Right now there > > are no difference in the behaviour of static proxy_pass (without > > variables) and dynamic one (with variables). With "prefer=ipv4" > > as implemented the difference appears, and this certainly breaks > > POLA. > > > > One possible option would be to change "prefer=" to rotate all > > addresses, so proxy_pass will try them all. With this approach, > > "prefer=ipv4" would be exactly equivalent to the default behaviour > > (on the first resolution, resolver returns list of all IPv4 > > addresses, followed by all IPv6 addresses, and then addresses are > > rotated) and "prefer=ipv6" would use the reverse order on the > > first resolution (IPv6 followed by IPv4). Not sure it is at all > > needed though (but might be still beneficial for tasks like OCSP > > server resolution). > > Updated patch to rotate regardless of preference. > Below is hg diff -w on top off of the previous one, for clarity: > > diff -r d9a8c2d87055 src/core/ngx_resolver.c > --- a/src/core/ngx_resolver.c Wed Jul 06 17:10:24 2022 +0400 > +++ b/src/core/ngx_resolver.c Tue Jul 12 18:55:56 2022 +0400 > @@ -4270,6 +4270,10 @@ ngx_resolver_export(ngx_resolver_t *r, n > > i = 0; > > + if (rotate) { > + d = ngx_random() % n; > + > + } else { > switch (r->prefer) { > > #if (NGX_HAVE_INET6) > @@ -4288,7 +4292,8 @@ ngx_resolver_export(ngx_resolver_t *r, n > #endif > > default: > - d = rotate ? ngx_random() % n : 0; > + d = 0; > + } > } > > if (rn->naddrs) { > > Personally, I think such patch has a little sense. Keeping preference > is reasonable e.g. to avoid connections over certain address family, > should they be slow/tunneled or otherwise uneven, which I believe is > quite common in practice. Such way, they would be always tried as a > last resort and indeed can be seen as a backup option. > In that sense, I'm rather skeptical about rotation behaviour in static > proxy_pass (by round-robin means) that doesn't distinguish address families > and apparently is a legacy from times (4d68c486fcb0) before URLs obtained > IPv6 support in eaf95350d75c. > Changing it requires more investment though and certainly breaks POLA. > OTOH, with "ipv4=" introduction, such unpreferred connections can be > simply disabled. I generally agree. For now, the best option seems to postpone the "prefer=" patch. [...] > i = 0; > - d = rotate ? ngx_random() % n : 0; > + > + if (rotate) { > + d = ngx_random() % n; > + > + } else { > + switch (r->prefer) { > + > +#if (NGX_HAVE_INET6) > + case NGX_RESOLVE_PREFER_A: > + d = 0; > + break; > + > + case NGX_RESOLVE_PREFER_AAAA: > + d = rn->naddrs6; > + > + if (d == n) { > + d = 0; > + } > + > + break; > +#endif > + > + default: > + d = 0; Just a side note: in this form, NGX_RESOLVE_PREFER_A is exactly equivalent to no preference, so r->prefer can be represented with just one bit. -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Tue Jul 12 19:47:22 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 12 Jul 2022 19:47:22 +0000 Subject: [nginx] The "ipv4=" parameter of the "resolver" directive. 
Message-ID: details: https://hg.nginx.org/nginx/rev/2a77754cd9fe branches: changeset: 8055:2a77754cd9fe user: Ruslan Ermilov date: Tue Jul 12 21:44:02 2022 +0400 description: The "ipv4=" parameter of the "resolver" directive. When set to "off", only IPv6 addresses will be resolved, and no A queries are ever sent (ticket #2196). diffstat: src/core/ngx_resolver.c | 71 +++++++++++++++++++++++++++++++++++++----------- src/core/ngx_resolver.h | 4 ++- 2 files changed, 57 insertions(+), 18 deletions(-) diffs (160 lines): diff -r cac164d0807e -r 2a77754cd9fe src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Tue Jul 12 15:55:22 2022 +0300 +++ b/src/core/ngx_resolver.c Tue Jul 12 21:44:02 2022 +0400 @@ -157,6 +157,8 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ cln->handler = ngx_resolver_cleanup; cln->data = r; + r->ipv4 = 1; + ngx_rbtree_init(&r->name_rbtree, &r->name_sentinel, ngx_resolver_rbtree_insert_value); @@ -225,6 +227,23 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ } #if (NGX_HAVE_INET6) + if (ngx_strncmp(names[i].data, "ipv4=", 5) == 0) { + + if (ngx_strcmp(&names[i].data[5], "on") == 0) { + r->ipv4 = 1; + + } else if (ngx_strcmp(&names[i].data[5], "off") == 0) { + r->ipv4 = 0; + + } else { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid parameter: %V", &names[i]); + return NULL; + } + + continue; + } + if (ngx_strncmp(names[i].data, "ipv6=", 5) == 0) { if (ngx_strcmp(&names[i].data[5], "on") == 0) { @@ -273,6 +292,14 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ } } +#if (NGX_HAVE_INET6) + if (r->ipv4 + r->ipv6 == 0) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"ipv4\" and \"ipv6\" cannot both be \"off\""); + return NULL; + } +#endif + if (n && r->connections.nelts == 0) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "no name servers defined"); return NULL; @@ -836,7 +863,7 @@ ngx_resolve_name_locked(ngx_resolver_t * r->last_connection = 0; } - rn->naddrs = (u_short) -1; + rn->naddrs = r->ipv4 ? (u_short) -1 : 0; rn->tcp = 0; #if (NGX_HAVE_INET6) rn->naddrs6 = r->ipv6 ? (u_short) -1 : 0; @@ -1263,7 +1290,7 @@ ngx_resolver_send_query(ngx_resolver_t * rec->log.action = "resolving"; } - if (rn->naddrs == (u_short) -1) { + if (rn->query && rn->naddrs == (u_short) -1) { rc = rn->tcp ? ngx_resolver_send_tcp_query(r, rec, rn->query, rn->qlen) : ngx_resolver_send_udp_query(r, rec, rn->query, rn->qlen); @@ -1765,10 +1792,13 @@ ngx_resolver_process_response(ngx_resolv q = ngx_queue_next(q)) { rn = ngx_queue_data(q, ngx_resolver_node_t, queue); - qident = (rn->query[0] << 8) + rn->query[1]; - - if (qident == ident) { - goto dns_error_name; + + if (rn->query) { + qident = (rn->query[0] << 8) + rn->query[1]; + + if (qident == ident) { + goto dns_error_name; + } } #if (NGX_HAVE_INET6) @@ -3645,7 +3675,7 @@ ngx_resolver_create_name_query(ngx_resol len = sizeof(ngx_resolver_hdr_t) + nlen + sizeof(ngx_resolver_qs_t); #if (NGX_HAVE_INET6) - p = ngx_resolver_alloc(r, r->ipv6 ? len * 2 : len); + p = ngx_resolver_alloc(r, len * (r->ipv4 + r->ipv6)); #else p = ngx_resolver_alloc(r, len); #endif @@ -3654,23 +3684,28 @@ ngx_resolver_create_name_query(ngx_resol } rn->qlen = (u_short) len; - rn->query = p; + + if (r->ipv4) { + rn->query = p; + } #if (NGX_HAVE_INET6) if (r->ipv6) { - rn->query6 = p + len; + rn->query6 = r->ipv4 ? 
(p + len) : p; } #endif query = (ngx_resolver_hdr_t *) p; - ident = ngx_random(); - - ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, - "resolve: \"%V\" A %i", name, ident & 0xffff); - - query->ident_hi = (u_char) ((ident >> 8) & 0xff); - query->ident_lo = (u_char) (ident & 0xff); + if (r->ipv4) { + ident = ngx_random(); + + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, + "resolve: \"%V\" A %i", name, ident & 0xffff); + + query->ident_hi = (u_char) ((ident >> 8) & 0xff); + query->ident_lo = (u_char) (ident & 0xff); + } /* recursion query */ query->flags_hi = 1; query->flags_lo = 0; @@ -3731,7 +3766,9 @@ ngx_resolver_create_name_query(ngx_resol p = rn->query6; - ngx_memcpy(p, rn->query, rn->qlen); + if (r->ipv4) { + ngx_memcpy(p, rn->query, rn->qlen); + } query = (ngx_resolver_hdr_t *) p; diff -r cac164d0807e -r 2a77754cd9fe src/core/ngx_resolver.h --- a/src/core/ngx_resolver.h Tue Jul 12 15:55:22 2022 +0300 +++ b/src/core/ngx_resolver.h Tue Jul 12 21:44:02 2022 +0400 @@ -175,8 +175,10 @@ struct ngx_resolver_s { ngx_queue_t srv_expire_queue; ngx_queue_t addr_expire_queue; + unsigned ipv4:1; + #if (NGX_HAVE_INET6) - ngx_uint_t ipv6; /* unsigned ipv6:1; */ + unsigned ipv6:1; ngx_rbtree_t addr6_rbtree; ngx_rbtree_node_t addr6_sentinel; ngx_queue_t addr6_resend_queue; From pluknet at nginx.com Wed Jul 13 15:03:31 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 13 Jul 2022 19:03:31 +0400 Subject: [PATCH 1 of 2] The "ipv4=" parameter of the "resolver" directive In-Reply-To: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> References: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> Message-ID: > On 28 Jun 2022, at 20:25, Sergey Kandaurov wrote: > > # HG changeset patch > # User Ruslan Ermilov > # Date 1645589317 -10800 > # Wed Feb 23 07:08:37 2022 +0300 > # Node ID 04e314eb6b4d20a48c5d7bab0609e1b03b51b406 > # Parent fecd73db563fb64108f7669eca419badb2aba633 > The "ipv4=" parameter of the "resolver" directive. > > When set to "off", only IPv6 addresses will be resolved, and no > A queries are ever sent (ticket #2196). 
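As a side note, a minimal configuration exercising the new parameter could look
like this (the nameserver address and the upstream name are placeholders, and a
build with IPv6 support is assumed):

    resolver 127.0.0.1 ipv4=off;

    location / {
        set $backend "backend.example.com";
        proxy_pass http://$backend;
    }

With such a configuration only AAAA queries are expected to be sent while
resolving "backend.example.com".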
> > diff -r fecd73db563f -r 04e314eb6b4d src/core/ngx_resolver.c > --- a/src/core/ngx_resolver.c Tue Jun 21 17:25:37 2022 +0300 > +++ b/src/core/ngx_resolver.c Wed Feb 23 07:08:37 2022 +0300 > @@ -157,6 +157,8 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > cln->handler = ngx_resolver_cleanup; > cln->data = r; > > + r->ipv4 = 1; > + > ngx_rbtree_init(&r->name_rbtree, &r->name_sentinel, > ngx_resolver_rbtree_insert_value); > > @@ -225,6 +227,23 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > } > > #if (NGX_HAVE_INET6) > + if (ngx_strncmp(names[i].data, "ipv4=", 5) == 0) { > + > + if (ngx_strcmp(&names[i].data[5], "on") == 0) { > + r->ipv4 = 1; > + > + } else if (ngx_strcmp(&names[i].data[5], "off") == 0) { > + r->ipv4 = 0; > + > + } else { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "invalid parameter: %V", &names[i]); > + return NULL; > + } > + > + continue; > + } > + > if (ngx_strncmp(names[i].data, "ipv6=", 5) == 0) { > > if (ngx_strcmp(&names[i].data[5], "on") == 0) { > @@ -273,6 +292,14 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > } > } > > +#if (NGX_HAVE_INET6) > + if (r->ipv4 + r->ipv6 == 0) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "\"ipv4\" and \"ipv6\" cannot both be \"off\""); > + return NULL; > + } > +#endif > + > if (n && r->connections.nelts == 0) { > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "no name servers defined"); > return NULL; > @@ -836,7 +863,7 @@ ngx_resolve_name_locked(ngx_resolver_t * > r->last_connection = 0; > } > > - rn->naddrs = (u_short) -1; > + rn->naddrs = r->ipv4 ? (u_short) -1 : 0; > rn->tcp = 0; > #if (NGX_HAVE_INET6) > rn->naddrs6 = r->ipv6 ? (u_short) -1 : 0; > @@ -1263,7 +1290,7 @@ ngx_resolver_send_query(ngx_resolver_t * > rec->log.action = "resolving"; > } > > - if (rn->naddrs == (u_short) -1) { > + if (rn->query && rn->naddrs == (u_short) -1) { It should be safe to revert this condition: for PTR and SRV queries, rn->query is always set; for A queries, it is additionally protected with rn->naddrs, which by itself is set to (u_short) -1 only for r->ipv4 == 1. See below for rationale. > rc = rn->tcp ? ngx_resolver_send_tcp_query(r, rec, rn->query, rn->qlen) > : ngx_resolver_send_udp_query(r, rec, rn->query, rn->qlen); > > @@ -1765,10 +1792,13 @@ ngx_resolver_process_response(ngx_resolv > q = ngx_queue_next(q)) > { > rn = ngx_queue_data(q, ngx_resolver_node_t, queue); > - qident = (rn->query[0] << 8) + rn->query[1]; > - > - if (qident == ident) { > - goto dns_error_name; > + > + if (rn->query) { > + qident = (rn->query[0] << 8) + rn->query[1]; > + > + if (qident == ident) { > + goto dns_error_name; > + } This one also looks save to revert. This code is used to check ident match for the FORMERR case. For ipv4=off case, with the below part reverted, both rn->query and rn->query6 will look at the same address, so on ident mismatch both checks for rn->query and rn->query6 just duplicate each other. > } > > #if (NGX_HAVE_INET6) > @@ -3645,7 +3675,7 @@ ngx_resolver_create_name_query(ngx_resol > len = sizeof(ngx_resolver_hdr_t) + nlen + sizeof(ngx_resolver_qs_t); > > #if (NGX_HAVE_INET6) > - p = ngx_resolver_alloc(r, r->ipv6 ? len * 2 : len); > + p = ngx_resolver_alloc(r, len * (r->ipv4 + r->ipv6)); > #else > p = ngx_resolver_alloc(r, len); > #endif > @@ -3654,23 +3684,28 @@ ngx_resolver_create_name_query(ngx_resol > } > > rn->qlen = (u_short) len; > - rn->query = p; > + > + if (r->ipv4) { > + rn->query = p; > + } It turns out that doing conditional allocation prevents from memory deallocation using "ngx_resolver_free(r, rn->query);" idiom. 
Reverting this part and accompanying changes for rn->query seems to be enough to fix this. See above for more details, and the patch below. > > #if (NGX_HAVE_INET6) > if (r->ipv6) { > - rn->query6 = p + len; > + rn->query6 = r->ipv4 ? (p + len) : p; > } > #endif > > query = (ngx_resolver_hdr_t *) p; > > - ident = ngx_random(); > - > - ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, > - "resolve: \"%V\" A %i", name, ident & 0xffff); > - > - query->ident_hi = (u_char) ((ident >> 8) & 0xff); > - query->ident_lo = (u_char) (ident & 0xff); > + if (r->ipv4) { > + ident = ngx_random(); > + > + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, > + "resolve: \"%V\" A %i", name, ident & 0xffff); > + > + query->ident_hi = (u_char) ((ident >> 8) & 0xff); > + query->ident_lo = (u_char) (ident & 0xff); > + } > > /* recursion query */ > query->flags_hi = 1; query->flags_lo = 0; > @@ -3731,7 +3766,9 @@ ngx_resolver_create_name_query(ngx_resol > > p = rn->query6; > > - ngx_memcpy(p, rn->query, rn->qlen); > + if (r->ipv4) { > + ngx_memcpy(p, rn->query, rn->qlen); > + } > > query = (ngx_resolver_hdr_t *) p; > > diff -r fecd73db563f -r 04e314eb6b4d src/core/ngx_resolver.h > --- a/src/core/ngx_resolver.h Tue Jun 21 17:25:37 2022 +0300 > +++ b/src/core/ngx_resolver.h Wed Feb 23 07:08:37 2022 +0300 > @@ -175,8 +175,10 @@ struct ngx_resolver_s { > ngx_queue_t srv_expire_queue; > ngx_queue_t addr_expire_queue; > > + unsigned ipv4:1; > + > #if (NGX_HAVE_INET6) > - ngx_uint_t ipv6; /* unsigned ipv6:1; */ > + unsigned ipv6:1; > ngx_rbtree_t addr6_rbtree; > ngx_rbtree_node_t addr6_sentinel; > ngx_queue_t addr6_resend_queue; # HG changeset patch # User Sergey Kandaurov # Date 1657724523 -14400 # Wed Jul 13 19:02:03 2022 +0400 # Node ID 61fa6c872c85b54ce41af8748ffde933dbaae47e # Parent 2a77754cd9feae752152e8eef7e5c506dd0186d6 Resolver: fixed memory leak for the "ipv4=off" case. This change partially reverts 2a77754cd9fe to properly free rn->query. diff -r 2a77754cd9fe -r 61fa6c872c85 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Tue Jul 12 21:44:02 2022 +0400 +++ b/src/core/ngx_resolver.c Wed Jul 13 19:02:03 2022 +0400 @@ -1290,7 +1290,7 @@ ngx_resolver_send_query(ngx_resolver_t * rec->log.action = "resolving"; } - if (rn->query && rn->naddrs == (u_short) -1) { + if (rn->naddrs == (u_short) -1) { rc = rn->tcp ? ngx_resolver_send_tcp_query(r, rec, rn->query, rn->qlen) : ngx_resolver_send_udp_query(r, rec, rn->query, rn->qlen); @@ -1792,13 +1792,10 @@ ngx_resolver_process_response(ngx_resolv q = ngx_queue_next(q)) { rn = ngx_queue_data(q, ngx_resolver_node_t, queue); - - if (rn->query) { - qident = (rn->query[0] << 8) + rn->query[1]; - - if (qident == ident) { - goto dns_error_name; - } + qident = (rn->query[0] << 8) + rn->query[1]; + + if (qident == ident) { + goto dns_error_name; } #if (NGX_HAVE_INET6) @@ -3684,10 +3681,7 @@ ngx_resolver_create_name_query(ngx_resol } rn->qlen = (u_short) len; - - if (r->ipv4) { - rn->query = p; - } + rn->query = p; #if (NGX_HAVE_INET6) if (r->ipv6) { -- Sergey Kandaurov From mdounin at mdounin.ru Thu Jul 14 13:12:28 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Jul 2022 16:12:28 +0300 Subject: [PATCH 1 of 2] The "ipv4=" parameter of the "resolver" directive In-Reply-To: References: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> Message-ID: Hello! 
On Wed, Jul 13, 2022 at 07:03:31PM +0400, Sergey Kandaurov wrote: > > > On 28 Jun 2022, at 20:25, Sergey Kandaurov wrote: > > > > # HG changeset patch > > # User Ruslan Ermilov > > # Date 1645589317 -10800 > > # Wed Feb 23 07:08:37 2022 +0300 > > # Node ID 04e314eb6b4d20a48c5d7bab0609e1b03b51b406 > > # Parent fecd73db563fb64108f7669eca419badb2aba633 > > The "ipv4=" parameter of the "resolver" directive. > > > > When set to "off", only IPv6 addresses will be resolved, and no > > A queries are ever sent (ticket #2196). > > > > diff -r fecd73db563f -r 04e314eb6b4d src/core/ngx_resolver.c > > --- a/src/core/ngx_resolver.c Tue Jun 21 17:25:37 2022 +0300 > > +++ b/src/core/ngx_resolver.c Wed Feb 23 07:08:37 2022 +0300 > > @@ -157,6 +157,8 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > > cln->handler = ngx_resolver_cleanup; > > cln->data = r; > > > > + r->ipv4 = 1; > > + > > ngx_rbtree_init(&r->name_rbtree, &r->name_sentinel, > > ngx_resolver_rbtree_insert_value); > > > > @@ -225,6 +227,23 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > > } > > > > #if (NGX_HAVE_INET6) > > + if (ngx_strncmp(names[i].data, "ipv4=", 5) == 0) { > > + > > + if (ngx_strcmp(&names[i].data[5], "on") == 0) { > > + r->ipv4 = 1; > > + > > + } else if (ngx_strcmp(&names[i].data[5], "off") == 0) { > > + r->ipv4 = 0; > > + > > + } else { > > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > + "invalid parameter: %V", &names[i]); > > + return NULL; > > + } > > + > > + continue; > > + } > > + > > if (ngx_strncmp(names[i].data, "ipv6=", 5) == 0) { > > > > if (ngx_strcmp(&names[i].data[5], "on") == 0) { > > @@ -273,6 +292,14 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ > > } > > } > > > > +#if (NGX_HAVE_INET6) > > + if (r->ipv4 + r->ipv6 == 0) { > > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > + "\"ipv4\" and \"ipv6\" cannot both be \"off\""); > > + return NULL; > > + } > > +#endif > > + > > if (n && r->connections.nelts == 0) { > > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "no name servers defined"); > > return NULL; > > @@ -836,7 +863,7 @@ ngx_resolve_name_locked(ngx_resolver_t * > > r->last_connection = 0; > > } > > > > - rn->naddrs = (u_short) -1; > > + rn->naddrs = r->ipv4 ? (u_short) -1 : 0; > > rn->tcp = 0; > > #if (NGX_HAVE_INET6) > > rn->naddrs6 = r->ipv6 ? (u_short) -1 : 0; > > @@ -1263,7 +1290,7 @@ ngx_resolver_send_query(ngx_resolver_t * > > rec->log.action = "resolving"; > > } > > > > - if (rn->naddrs == (u_short) -1) { > > + if (rn->query && rn->naddrs == (u_short) -1) { > > It should be safe to revert this condition: > for PTR and SRV queries, rn->query is always set; > for A queries, it is additionally protected with rn->naddrs, > which by itself is set to (u_short) -1 only for r->ipv4 == 1. > See below for rationale. Wouldn't it be better to keep the rn->query check, simply to keep the code in line with the r->query6 case below? Note that we anyway check rn->query at least in ngx_resolver_process_a(). > > rc = rn->tcp ? ngx_resolver_send_tcp_query(r, rec, rn->query, rn->qlen) > > : ngx_resolver_send_udp_query(r, rec, rn->query, rn->qlen); > > > > @@ -1765,10 +1792,13 @@ ngx_resolver_process_response(ngx_resolv > > q = ngx_queue_next(q)) > > { > > rn = ngx_queue_data(q, ngx_resolver_node_t, queue); > > - qident = (rn->query[0] << 8) + rn->query[1]; > > - > > - if (qident == ident) { > > - goto dns_error_name; > > + > > + if (rn->query) { > > + qident = (rn->query[0] << 8) + rn->query[1]; > > + > > + if (qident == ident) { > > + goto dns_error_name; > > + } > > This one also looks save to revert. 
> This code is used to check ident match for the FORMERR case. > For ipv4=off case, with the below part reverted, both rn->query > and rn->query6 will look at the same address, so on ident mismatch > both checks for rn->query and rn->query6 just duplicate each other. Same here. > > } > > > > #if (NGX_HAVE_INET6) > > @@ -3645,7 +3675,7 @@ ngx_resolver_create_name_query(ngx_resol > > len = sizeof(ngx_resolver_hdr_t) + nlen + sizeof(ngx_resolver_qs_t); > > > > #if (NGX_HAVE_INET6) > > - p = ngx_resolver_alloc(r, r->ipv6 ? len * 2 : len); > > + p = ngx_resolver_alloc(r, len * (r->ipv4 + r->ipv6)); > > #else > > p = ngx_resolver_alloc(r, len); > > #endif > > @@ -3654,23 +3684,28 @@ ngx_resolver_create_name_query(ngx_resol > > } > > > > rn->qlen = (u_short) len; > > - rn->query = p; > > + > > + if (r->ipv4) { > > + rn->query = p; > > + } > > It turns out that doing conditional allocation prevents from > memory deallocation using "ngx_resolver_free(r, rn->query);" idiom. > Reverting this part and accompanying changes for rn->query > seems to be enough to fix this. > See above for more details, and the patch below. > > > > > #if (NGX_HAVE_INET6) > > if (r->ipv6) { > > - rn->query6 = p + len; > > + rn->query6 = r->ipv4 ? (p + len) : p; > > } > > #endif > > > > query = (ngx_resolver_hdr_t *) p; > > > > - ident = ngx_random(); > > - > > - ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, > > - "resolve: \"%V\" A %i", name, ident & 0xffff); > > - > > - query->ident_hi = (u_char) ((ident >> 8) & 0xff); > > - query->ident_lo = (u_char) (ident & 0xff); > > + if (r->ipv4) { > > + ident = ngx_random(); > > + > > + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, > > + "resolve: \"%V\" A %i", name, ident & 0xffff); > > + > > + query->ident_hi = (u_char) ((ident >> 8) & 0xff); > > + query->ident_lo = (u_char) (ident & 0xff); > > + } > > > > /* recursion query */ > > query->flags_hi = 1; query->flags_lo = 0; > > @@ -3731,7 +3766,9 @@ ngx_resolver_create_name_query(ngx_resol > > > > p = rn->query6; > > > > - ngx_memcpy(p, rn->query, rn->qlen); > > + if (r->ipv4) { > > + ngx_memcpy(p, rn->query, rn->qlen); > > + } > > > > query = (ngx_resolver_hdr_t *) p; > > > > diff -r fecd73db563f -r 04e314eb6b4d src/core/ngx_resolver.h > > --- a/src/core/ngx_resolver.h Tue Jun 21 17:25:37 2022 +0300 > > +++ b/src/core/ngx_resolver.h Wed Feb 23 07:08:37 2022 +0300 > > @@ -175,8 +175,10 @@ struct ngx_resolver_s { > > ngx_queue_t srv_expire_queue; > > ngx_queue_t addr_expire_queue; > > > > + unsigned ipv4:1; > > + > > #if (NGX_HAVE_INET6) > > - ngx_uint_t ipv6; /* unsigned ipv6:1; */ > > + unsigned ipv6:1; > > ngx_rbtree_t addr6_rbtree; > > ngx_rbtree_node_t addr6_sentinel; > > ngx_queue_t addr6_resend_queue; > > # HG changeset patch > # User Sergey Kandaurov > # Date 1657724523 -14400 > # Wed Jul 13 19:02:03 2022 +0400 > # Node ID 61fa6c872c85b54ce41af8748ffde933dbaae47e > # Parent 2a77754cd9feae752152e8eef7e5c506dd0186d6 > Resolver: fixed memory leak for the "ipv4=off" case. > > This change partially reverts 2a77754cd9fe to properly free rn->query. > > diff -r 2a77754cd9fe -r 61fa6c872c85 src/core/ngx_resolver.c > --- a/src/core/ngx_resolver.c Tue Jul 12 21:44:02 2022 +0400 > +++ b/src/core/ngx_resolver.c Wed Jul 13 19:02:03 2022 +0400 > @@ -1290,7 +1290,7 @@ ngx_resolver_send_query(ngx_resolver_t * > rec->log.action = "resolving"; > } > > - if (rn->query && rn->naddrs == (u_short) -1) { > + if (rn->naddrs == (u_short) -1) { > rc = rn->tcp ? 
ngx_resolver_send_tcp_query(r, rec, rn->query, rn->qlen) > : ngx_resolver_send_udp_query(r, rec, rn->query, rn->qlen); > > @@ -1792,13 +1792,10 @@ ngx_resolver_process_response(ngx_resolv > q = ngx_queue_next(q)) > { > rn = ngx_queue_data(q, ngx_resolver_node_t, queue); > - > - if (rn->query) { > - qident = (rn->query[0] << 8) + rn->query[1]; > - > - if (qident == ident) { > - goto dns_error_name; > - } > + qident = (rn->query[0] << 8) + rn->query[1]; > + > + if (qident == ident) { > + goto dns_error_name; > } > > #if (NGX_HAVE_INET6) > @@ -3684,10 +3681,7 @@ ngx_resolver_create_name_query(ngx_resol > } > > rn->qlen = (u_short) len; > - > - if (r->ipv4) { > - rn->query = p; > - } > + rn->query = p; > > #if (NGX_HAVE_INET6) > if (r->ipv6) { See above for comments, otherwise looks good. -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Thu Jul 14 15:04:11 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 14 Jul 2022 19:04:11 +0400 Subject: [PATCH 1 of 2] The "ipv4=" parameter of the "resolver" directive In-Reply-To: References: <04e314eb6b4d20a48c5d.1656433535@enoparse.local> Message-ID: <5AB53B8F-2588-42D6-8B21-DCEE94338BEF@nginx.com> > On 14 Jul 2022, at 17:12, Maxim Dounin wrote: > > Hello! > > On Wed, Jul 13, 2022 at 07:03:31PM +0400, Sergey Kandaurov wrote: > >> >>> On 28 Jun 2022, at 20:25, Sergey Kandaurov wrote: >>> >>> # HG changeset patch >>> # User Ruslan Ermilov >>> # Date 1645589317 -10800 >>> # Wed Feb 23 07:08:37 2022 +0300 >>> # Node ID 04e314eb6b4d20a48c5d7bab0609e1b03b51b406 >>> # Parent fecd73db563fb64108f7669eca419badb2aba633 >>> The "ipv4=" parameter of the "resolver" directive. >>> >>> When set to "off", only IPv6 addresses will be resolved, and no >>> A queries are ever sent (ticket #2196). >>> >>> diff -r fecd73db563f -r 04e314eb6b4d src/core/ngx_resolver.c >>> --- a/src/core/ngx_resolver.c Tue Jun 21 17:25:37 2022 +0300 >>> +++ b/src/core/ngx_resolver.c Wed Feb 23 07:08:37 2022 +0300 >>> @@ -157,6 +157,8 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ >>> cln->handler = ngx_resolver_cleanup; >>> cln->data = r; >>> >>> + r->ipv4 = 1; >>> + >>> ngx_rbtree_init(&r->name_rbtree, &r->name_sentinel, >>> ngx_resolver_rbtree_insert_value); >>> >>> @@ -225,6 +227,23 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ >>> } >>> >>> #if (NGX_HAVE_INET6) >>> + if (ngx_strncmp(names[i].data, "ipv4=", 5) == 0) { >>> + >>> + if (ngx_strcmp(&names[i].data[5], "on") == 0) { >>> + r->ipv4 = 1; >>> + >>> + } else if (ngx_strcmp(&names[i].data[5], "off") == 0) { >>> + r->ipv4 = 0; >>> + >>> + } else { >>> + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, >>> + "invalid parameter: %V", &names[i]); >>> + return NULL; >>> + } >>> + >>> + continue; >>> + } >>> + >>> if (ngx_strncmp(names[i].data, "ipv6=", 5) == 0) { >>> >>> if (ngx_strcmp(&names[i].data[5], "on") == 0) { >>> @@ -273,6 +292,14 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ >>> } >>> } >>> >>> +#if (NGX_HAVE_INET6) >>> + if (r->ipv4 + r->ipv6 == 0) { >>> + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, >>> + "\"ipv4\" and \"ipv6\" cannot both be \"off\""); >>> + return NULL; >>> + } >>> +#endif >>> + >>> if (n && r->connections.nelts == 0) { >>> ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "no name servers defined"); >>> return NULL; >>> @@ -836,7 +863,7 @@ ngx_resolve_name_locked(ngx_resolver_t * >>> r->last_connection = 0; >>> } >>> >>> - rn->naddrs = (u_short) -1; >>> + rn->naddrs = r->ipv4 ? (u_short) -1 : 0; >>> rn->tcp = 0; >>> #if (NGX_HAVE_INET6) >>> rn->naddrs6 = r->ipv6 ? 
(u_short) -1 : 0; >>> @@ -1263,7 +1290,7 @@ ngx_resolver_send_query(ngx_resolver_t * >>> rec->log.action = "resolving"; >>> } >>> >>> - if (rn->naddrs == (u_short) -1) { >>> + if (rn->query && rn->naddrs == (u_short) -1) { >> >> It should be safe to revert this condition: >> for PTR and SRV queries, rn->query is always set; >> for A queries, it is additionally protected with rn->naddrs, >> which by itself is set to (u_short) -1 only for r->ipv4 == 1. >> See below for rationale. > > Wouldn't it be better to keep the rn->query check, simply to keep > the code in line with the r->query6 case below? I agree in general, keeping checks should not affect behaviour, while giving consistent codepath potentially useful in future. > > Note that we anyway check rn->query at least in > ngx_resolver_process_a(). I believe it was made in the same sense of keeping the code in line. > >>> rc = rn->tcp ? ngx_resolver_send_tcp_query(r, rec, rn->query, rn->qlen) >>> : ngx_resolver_send_udp_query(r, rec, rn->query, rn->qlen); >>> >>> @@ -1765,10 +1792,13 @@ ngx_resolver_process_response(ngx_resolv >>> q = ngx_queue_next(q)) >>> { >>> rn = ngx_queue_data(q, ngx_resolver_node_t, queue); >>> - qident = (rn->query[0] << 8) + rn->query[1]; >>> - >>> - if (qident == ident) { >>> - goto dns_error_name; >>> + >>> + if (rn->query) { >>> + qident = (rn->query[0] << 8) + rn->query[1]; >>> + >>> + if (qident == ident) { >>> + goto dns_error_name; >>> + } >> >> This one also looks save to revert. >> This code is used to check ident match for the FORMERR case. >> For ipv4=off case, with the below part reverted, both rn->query >> and rn->query6 will look at the same address, so on ident mismatch >> both checks for rn->query and rn->query6 just duplicate each other. > > Same here. Ok. > >>> } >>> >>> #if (NGX_HAVE_INET6) >>> @@ -3645,7 +3675,7 @@ ngx_resolver_create_name_query(ngx_resol >>> len = sizeof(ngx_resolver_hdr_t) + nlen + sizeof(ngx_resolver_qs_t); >>> >>> #if (NGX_HAVE_INET6) >>> - p = ngx_resolver_alloc(r, r->ipv6 ? len * 2 : len); >>> + p = ngx_resolver_alloc(r, len * (r->ipv4 + r->ipv6)); >>> #else >>> p = ngx_resolver_alloc(r, len); >>> #endif >>> @@ -3654,23 +3684,28 @@ ngx_resolver_create_name_query(ngx_resol >>> } >>> >>> rn->qlen = (u_short) len; >>> - rn->query = p; >>> + >>> + if (r->ipv4) { >>> + rn->query = p; >>> + } >> >> It turns out that doing conditional allocation prevents from >> memory deallocation using "ngx_resolver_free(r, rn->query);" idiom. >> Reverting this part and accompanying changes for rn->query >> seems to be enough to fix this. >> See above for more details, and the patch below. >> >>> >>> #if (NGX_HAVE_INET6) >>> if (r->ipv6) { >>> - rn->query6 = p + len; >>> + rn->query6 = r->ipv4 ? 
(p + len) : p; >>> } >>> #endif >>> >>> query = (ngx_resolver_hdr_t *) p; >>> >>> - ident = ngx_random(); >>> - >>> - ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, >>> - "resolve: \"%V\" A %i", name, ident & 0xffff); >>> - >>> - query->ident_hi = (u_char) ((ident >> 8) & 0xff); >>> - query->ident_lo = (u_char) (ident & 0xff); >>> + if (r->ipv4) { >>> + ident = ngx_random(); >>> + >>> + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, >>> + "resolve: \"%V\" A %i", name, ident & 0xffff); >>> + >>> + query->ident_hi = (u_char) ((ident >> 8) & 0xff); >>> + query->ident_lo = (u_char) (ident & 0xff); >>> + } >>> >>> /* recursion query */ >>> query->flags_hi = 1; query->flags_lo = 0; >>> @@ -3731,7 +3766,9 @@ ngx_resolver_create_name_query(ngx_resol >>> >>> p = rn->query6; >>> >>> - ngx_memcpy(p, rn->query, rn->qlen); >>> + if (r->ipv4) { >>> + ngx_memcpy(p, rn->query, rn->qlen); >>> + } >>> >>> query = (ngx_resolver_hdr_t *) p; >>> >>> diff -r fecd73db563f -r 04e314eb6b4d src/core/ngx_resolver.h >>> --- a/src/core/ngx_resolver.h Tue Jun 21 17:25:37 2022 +0300 >>> +++ b/src/core/ngx_resolver.h Wed Feb 23 07:08:37 2022 +0300 >>> @@ -175,8 +175,10 @@ struct ngx_resolver_s { >>> ngx_queue_t srv_expire_queue; >>> ngx_queue_t addr_expire_queue; >>> >>> + unsigned ipv4:1; >>> + >>> #if (NGX_HAVE_INET6) >>> - ngx_uint_t ipv6; /* unsigned ipv6:1; */ >>> + unsigned ipv6:1; >>> ngx_rbtree_t addr6_rbtree; >>> ngx_rbtree_node_t addr6_sentinel; >>> ngx_queue_t addr6_resend_queue; >> >> # HG changeset patch >> # User Sergey Kandaurov >> # Date 1657724523 -14400 >> # Wed Jul 13 19:02:03 2022 +0400 >> # Node ID 61fa6c872c85b54ce41af8748ffde933dbaae47e >> # Parent 2a77754cd9feae752152e8eef7e5c506dd0186d6 >> Resolver: fixed memory leak for the "ipv4=off" case. >> >> This change partially reverts 2a77754cd9fe to properly free rn->query. >> >> diff -r 2a77754cd9fe -r 61fa6c872c85 src/core/ngx_resolver.c >> --- a/src/core/ngx_resolver.c Tue Jul 12 21:44:02 2022 +0400 >> +++ b/src/core/ngx_resolver.c Wed Jul 13 19:02:03 2022 +0400 >> @@ -1290,7 +1290,7 @@ ngx_resolver_send_query(ngx_resolver_t * >> rec->log.action = "resolving"; >> } >> >> - if (rn->query && rn->naddrs == (u_short) -1) { >> + if (rn->naddrs == (u_short) -1) { >> rc = rn->tcp ? ngx_resolver_send_tcp_query(r, rec, rn->query, rn->qlen) >> : ngx_resolver_send_udp_query(r, rec, rn->query, rn->qlen); >> >> @@ -1792,13 +1792,10 @@ ngx_resolver_process_response(ngx_resolv >> q = ngx_queue_next(q)) >> { >> rn = ngx_queue_data(q, ngx_resolver_node_t, queue); >> - >> - if (rn->query) { >> - qident = (rn->query[0] << 8) + rn->query[1]; >> - >> - if (qident == ident) { >> - goto dns_error_name; >> - } >> + qident = (rn->query[0] << 8) + rn->query[1]; >> + >> + if (qident == ident) { >> + goto dns_error_name; >> } >> >> #if (NGX_HAVE_INET6) >> @@ -3684,10 +3681,7 @@ ngx_resolver_create_name_query(ngx_resol >> } >> >> rn->qlen = (u_short) len; >> - >> - if (r->ipv4) { >> - rn->query = p; >> - } >> + rn->query = p; >> >> #if (NGX_HAVE_INET6) >> if (r->ipv6) { > > See above for comments, otherwise looks good. I intend to commit this later today, keeping it here for the reference. # HG changeset patch # User Sergey Kandaurov # Date 1657808624 -14400 # Thu Jul 14 18:23:44 2022 +0400 # Node ID 06c74eef1cc3a249b73a4edb189e32d453b32fe8 # Parent 2a77754cd9feae752152e8eef7e5c506dd0186d6 Resolver: fixed memory leak for the "ipv4=off" case. This change partially reverts 2a77754cd9fe to properly free rn->query. 
diff -r 2a77754cd9fe -r 06c74eef1cc3 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Tue Jul 12 21:44:02 2022 +0400 +++ b/src/core/ngx_resolver.c Thu Jul 14 18:23:44 2022 +0400 @@ -3684,10 +3684,7 @@ ngx_resolver_create_name_query(ngx_resol } rn->qlen = (u_short) len; - - if (r->ipv4) { - rn->query = p; - } + rn->query = p; #if (NGX_HAVE_INET6) if (r->ipv6) { -- Sergey Kandaurov From pluknet at nginx.com Thu Jul 14 17:31:56 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 14 Jul 2022 17:31:56 +0000 Subject: [nginx] Resolver: fixed memory leak for the "ipv4=off" case. Message-ID: details: https://hg.nginx.org/nginx/rev/0422365794f7 branches: changeset: 8056:0422365794f7 user: Sergey Kandaurov date: Thu Jul 14 21:26:54 2022 +0400 description: Resolver: fixed memory leak for the "ipv4=off" case. This change partially reverts 2a77754cd9fe to properly free rn->query. Found by Coverity (CID 1507244). diffstat: src/core/ngx_resolver.c | 5 +---- 1 files changed, 1 insertions(+), 4 deletions(-) diffs (15 lines): diff -r 2a77754cd9fe -r 0422365794f7 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Tue Jul 12 21:44:02 2022 +0400 +++ b/src/core/ngx_resolver.c Thu Jul 14 21:26:54 2022 +0400 @@ -3684,10 +3684,7 @@ ngx_resolver_create_name_query(ngx_resol } rn->qlen = (u_short) len; - - if (r->ipv4) { - rn->query = p; - } + rn->query = p; #if (NGX_HAVE_INET6) if (r->ipv6) { From yar at nginx.com Thu Jul 14 17:51:28 2022 From: yar at nginx.com (=?utf-8?q?Yaroslav_Zhuravlev?=) Date: Thu, 14 Jul 2022 18:51:28 +0100 Subject: [PATCH] Documented the "ipv4=off" parameter of the "resolver" directive Message-ID: xml/en/docs/http/ngx_http_core_module.xml | 8 +++++++- xml/en/docs/mail/ngx_mail_core_module.xml | 8 +++++++- xml/en/docs/stream/ngx_stream_core_module.xml | 8 +++++++- xml/ru/docs/http/ngx_http_core_module.xml | 8 +++++++- xml/ru/docs/mail/ngx_mail_core_module.xml | 8 +++++++- xml/ru/docs/stream/ngx_stream_core_module.xml | 8 +++++++- 6 files changed, 42 insertions(+), 6 deletions(-) -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.org.patch Type: text/x-patch Size: 7683 bytes Desc: not available URL: From xeioex at nginx.com Fri Jul 15 03:56:47 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 15 Jul 2022 03:56:47 +0000 Subject: [njs] Style: removed excessive empty line intoduced in 3acc8a1d9088. Message-ID: details: https://hg.nginx.org/njs/rev/36a6dfe84da4 branches: changeset: 1907:36a6dfe84da4 user: Dmitry Volyntsev date: Thu Jul 14 20:16:34 2022 -0700 description: Style: removed excessive empty line intoduced in 3acc8a1d9088. diffstat: src/njs_string.c | 1 - 1 files changed, 0 insertions(+), 1 deletions(-) diffs (11 lines): diff -r 3acc8a1d9088 -r 36a6dfe84da4 src/njs_string.c --- a/src/njs_string.c Tue Jul 12 08:56:35 2022 -0700 +++ b/src/njs_string.c Thu Jul 14 20:16:34 2022 -0700 @@ -316,7 +316,6 @@ njs_encode_hex_length(const njs_str_t *s void njs_encode_base64(njs_str_t *dst, const njs_str_t *src) { - njs_encode_base64_core(dst, src, njs_basis64_enc, 1); } From xeioex at nginx.com Fri Jul 15 03:56:49 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 15 Jul 2022 03:56:49 +0000 Subject: [njs] Added querystring API for modules. Message-ID: details: https://hg.nginx.org/njs/rev/016946f45c70 branches: changeset: 1908:016946f45c70 user: Dmitry Volyntsev date: Thu Jul 14 20:16:36 2022 -0700 description: Added querystring API for modules. 
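A rough usage sketch from an embedding module's point of view (the wrapper
function below is illustrative and not part of the change):

    #include <njs.h>

    /* Parses a query string held in [start, start + len) into an object,
     * using the defaults wired into njs_vm_query_string_parse(): "&" and
     * "=" as separators, "unescape" decoding and at most 1000 keys, that
     * is, the same behaviour as querystring.parse() without options. */
    static njs_int_t
    example_parse_query(njs_vm_t *vm, u_char *start, size_t len,
        njs_value_t *retval)
    {
        return njs_vm_query_string_parse(vm, start, start + len, retval);
    }

On success the resulting object is left in *retval and NJS_OK is returned;
NJS_ERROR indicates a failure.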
diffstat: external/njs_query_string_module.c | 113 +++++++++++++++++++++++------------- src/njs.h | 3 + 2 files changed, 75 insertions(+), 41 deletions(-) diffs (201 lines): diff -r 36a6dfe84da4 -r 016946f45c70 external/njs_query_string_module.c --- a/external/njs_query_string_module.c Thu Jul 14 20:16:34 2022 -0700 +++ b/external/njs_query_string_module.c Thu Jul 14 20:16:36 2022 -0700 @@ -9,15 +9,9 @@ #include -static const njs_value_t njs_escape_str = njs_string("escape"); -static const njs_value_t njs_unescape_str = njs_string("unescape"); -static const njs_value_t njs_encode_uri_str = - njs_long_string("encodeURIComponent"); -static const njs_value_t njs_decode_uri_str = - njs_long_string("decodeURIComponent"); -static const njs_value_t njs_max_keys_str = njs_string("maxKeys"); - - +static njs_int_t njs_query_string_parser(njs_vm_t *vm, u_char *query, + u_char *end, const njs_str_t *sep, const njs_str_t *eq, + njs_function_t *decode, njs_uint_t max_keys, njs_value_t *retval); static njs_int_t njs_query_string_parse(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused); static njs_int_t njs_query_string_stringify(njs_vm_t *vm, njs_value_t *args, @@ -114,6 +108,21 @@ njs_module_t njs_query_string_module = }; +static const njs_value_t njs_escape_str = njs_string("escape"); +static const njs_value_t njs_unescape_str = njs_string("unescape"); +static const njs_value_t njs_encode_uri_str = + njs_long_string("encodeURIComponent"); +static const njs_value_t njs_decode_uri_str = + njs_long_string("decodeURIComponent"); +static const njs_value_t njs_max_keys_str = njs_string("maxKeys"); + +static const njs_str_t njs_sep_default = njs_str("&"); +static const njs_str_t njs_eq_default = njs_str("="); + +static const njs_value_t njs_unescape_default = + njs_native_function(njs_query_string_unescape, 1); + + static njs_object_t * njs_query_string_object_alloc(njs_vm_t *vm) { @@ -343,7 +352,7 @@ njs_query_string_append(njs_vm_t *vm, nj static u_char * -njs_query_string_match(u_char *p, u_char *end, njs_str_t *v) +njs_query_string_match(u_char *p, u_char *end, const njs_str_t *v) { size_t length; @@ -375,40 +384,28 @@ static njs_int_t njs_query_string_parse(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused) { - size_t size; - u_char *end, *part, *key, *val; - int64_t max_keys, count; + int64_t max_keys; njs_int_t ret; - njs_str_t str; - njs_value_t obj, value, *this, *string, *options, *arg; + njs_str_t str, sep, eq; + njs_value_t value, *this, *string, *options, *arg; njs_value_t val_sep, val_eq; - njs_object_t *object; njs_function_t *decode; - njs_str_t sep = njs_str("&"); - njs_str_t eq = njs_str("="); - - count = 0; decode = NULL; max_keys = 1000; - object = njs_query_string_object_alloc(vm); - if (njs_slow_path(object == NULL)) { - return NJS_ERROR; - } - - njs_set_object(&obj, object); - this = njs_argument(args, 0); string = njs_arg(args, nargs, 1); - if (njs_slow_path(!njs_is_string(string) - || njs_string_length(string) == 0)) - { - goto done; + if (njs_is_string(string)) { + njs_string_get(string, &str); + + } else { + str = njs_str_value(""); } - njs_string_get(string, &str); + sep = njs_sep_default; + eq = njs_eq_default; arg = njs_arg(args, nargs, 2); if (!njs_is_null_or_undefined(arg)) { @@ -478,26 +475,62 @@ njs_query_string_parse(njs_vm_t *vm, njs decode = njs_function(&value); } - key = str.start; - end = str.start + str.length; + return njs_query_string_parser(vm, str.start, str.start + str.length, + &sep, &eq, decode, max_keys, &vm->retval); +} 
+ + +njs_int_t +njs_vm_query_string_parse(njs_vm_t *vm, u_char *start, u_char *end, + njs_value_t *retval) +{ + return njs_query_string_parser(vm, start, end, &njs_sep_default, + &njs_eq_default, + njs_function(&njs_unescape_default), + 1000, retval); +} + + +static njs_int_t +njs_query_string_parser(njs_vm_t *vm, u_char *query, u_char *end, + const njs_str_t *sep, const njs_str_t *eq, njs_function_t *decode, + njs_uint_t max_keys, njs_value_t *retval) +{ + size_t size; + u_char *part, *key, *val; + njs_int_t ret; + njs_uint_t count; + njs_value_t obj; + njs_object_t *object; + + object = njs_query_string_object_alloc(vm); + if (njs_slow_path(object == NULL)) { + return NJS_ERROR; + } + + njs_set_object(&obj, object); + + count = 0; + + key = query; do { if (count++ == max_keys) { break; } - part = njs_query_string_match(key, end, &sep); + part = njs_query_string_match(key, end, sep); if (part == key) { goto next; } - val = njs_query_string_match(key, part, &eq); + val = njs_query_string_match(key, part, eq); size = val - key; if (val != end) { - val += eq.length; + val += eq->length; } ret = njs_query_string_append(vm, &obj, key, size, val, part - val, @@ -508,13 +541,11 @@ njs_query_string_parse(njs_vm_t *vm, njs next: - key = part + sep.length; + key = part + sep->length; } while (key < end); -done: - - njs_set_object(&vm->retval, object); + njs_set_object(retval, object); return NJS_OK; } diff -r 36a6dfe84da4 -r 016946f45c70 src/njs.h --- a/src/njs.h Thu Jul 14 20:16:34 2022 -0700 +++ b/src/njs.h Thu Jul 14 20:16:36 2022 -0700 @@ -451,6 +451,9 @@ NJS_EXPORT njs_int_t njs_vm_json_parse(n NJS_EXPORT njs_int_t njs_vm_json_stringify(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs); +NJS_EXPORT njs_int_t njs_vm_query_string_parse(njs_vm_t *vm, u_char *start, + u_char *end, njs_value_t *retval); + NJS_EXPORT njs_int_t njs_vm_promise_create(njs_vm_t *vm, njs_value_t *retval, njs_value_t *callbacks); From xeioex at nginx.com Fri Jul 15 03:56:51 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 15 Jul 2022 03:56:51 +0000 Subject: [njs] HTTP: refactored r.args object. Message-ID: details: https://hg.nginx.org/njs/rev/6a71c19cec11 branches: changeset: 1909:6a71c19cec11 user: Dmitry Volyntsev date: Thu Jul 14 20:16:37 2022 -0700 description: HTTP: refactored r.args object. 1) added support for multiple arguments with the same key. 2) added cases sensitivity for keys. 3) keys and values are percent-decoded. 
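For example (both the request and the handler below are made up for
illustration), with a request like "GET /test?a=1&a=2&B%61r=b%61z" a
js_content handler can observe the new behaviour:

    function args_demo(r) {
        // r.args.a   is ['1', '2']  -- repeated keys are collected into an array
        // r.args.Bar is 'baz'       -- the key and the value are percent-decoded
        // r.args.bar is undefined   -- keys are matched case-sensitively
        r.return(200, njs.dump(r.args));
    }

    export default {args_demo};

Previously the same lookups would have returned only the first "a", matched
the key case-insensitively, and left the percent-encoding in place.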
diffstat: nginx/ngx_http_js_module.c | 133 +++++++++++++------------------------------- 1 files changed, 41 insertions(+), 92 deletions(-) diffs (180 lines): diff -r 016946f45c70 -r 6a71c19cec11 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Thu Jul 14 20:16:36 2022 -0700 +++ b/nginx/ngx_http_js_module.c Thu Jul 14 20:16:37 2022 -0700 @@ -60,6 +60,7 @@ typedef struct { ngx_int_t status; njs_opaque_value_t retval; njs_opaque_value_t request; + njs_opaque_value_t args; njs_opaque_value_t request_body; njs_opaque_value_t response_body; ngx_str_t redirect_uri; @@ -179,6 +180,9 @@ static njs_int_t ngx_http_js_ext_get_htt static njs_int_t ngx_http_js_ext_get_remote_address(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); +static njs_int_t ngx_http_js_ext_get_args(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); static njs_int_t ngx_http_js_ext_get_request_body(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); @@ -199,11 +203,6 @@ static njs_int_t ngx_http_js_header_in_a #endif static njs_int_t ngx_http_js_ext_keys_header_in(njs_vm_t *vm, njs_value_t *value, njs_value_t *keys); -static njs_int_t ngx_http_js_ext_get_arg(njs_vm_t *vm, - njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, - njs_value_t *retval); -static njs_int_t ngx_http_js_ext_keys_arg(njs_vm_t *vm, njs_value_t *value, - njs_value_t *keys); static njs_int_t ngx_http_js_ext_variables(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); @@ -582,13 +581,11 @@ static njs_external_t ngx_http_js_ext_r }, { - .flags = NJS_EXTERN_OBJECT, + .flags = NJS_EXTERN_PROPERTY, .name.string = njs_str("args"), .enumerable = 1, - .u.object = { - .enumerable = 1, - .prop_handler = ngx_http_js_ext_get_arg, - .keys = ngx_http_js_ext_keys_arg, + .u.property = { + .handler = ngx_http_js_ext_get_args, } }, @@ -2528,6 +2525,40 @@ ngx_http_js_ext_get_remote_address(njs_v static njs_int_t +ngx_http_js_ext_get_args(njs_vm_t *vm, njs_object_prop_t *prop, + njs_value_t *value, njs_value_t *setval, njs_value_t *retval) +{ + njs_int_t ret; + njs_value_t *args; + ngx_http_js_ctx_t *ctx; + ngx_http_request_t *r; + + r = njs_vm_external(vm, ngx_http_js_request_proto_id, value); + if (r == NULL) { + njs_value_undefined_set(retval); + return NJS_DECLINED; + } + + ctx = ngx_http_get_module_ctx(r, ngx_http_js_module); + + args = njs_value_arg(&ctx->args); + + if (njs_value_is_null(args)) { + ret = njs_vm_query_string_parse(vm, r->args.data, + r->args.data + r->args.len, args); + + if (ret == NJS_ERROR) { + return NJS_ERROR; + } + } + + njs_value_assign(retval, args); + + return NJS_OK; +} + + +static njs_int_t ngx_http_js_ext_get_request_body(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval) { @@ -2816,88 +2847,6 @@ ngx_http_js_ext_keys_header_in(njs_vm_t return ngx_http_js_ext_keys_header(vm, value, keys, &r->headers_in.headers); } -static njs_int_t -ngx_http_js_ext_get_arg(njs_vm_t *vm, njs_object_prop_t *prop, - njs_value_t *value, njs_value_t *setval, njs_value_t *retval) -{ - njs_int_t rc; - njs_str_t *v, key; - ngx_str_t arg; - ngx_http_request_t *r; - - r = njs_vm_external(vm, ngx_http_js_request_proto_id, value); - if (r == NULL) { - njs_value_undefined_set(retval); - return NJS_DECLINED; - } - - rc = njs_vm_prop_name(vm, prop, &key); - if (rc != NJS_OK) { - 
njs_value_undefined_set(retval); - return NJS_DECLINED; - } - - v = &key; - - if (ngx_http_arg(r, v->start, v->length, &arg) == NGX_OK) { - return njs_vm_value_string_set(vm, retval, arg.data, arg.len); - } - - njs_value_undefined_set(retval); - - return NJS_DECLINED; -} - - -static njs_int_t -ngx_http_js_ext_keys_arg(njs_vm_t *vm, njs_value_t *value, njs_value_t *keys) -{ - u_char *v, *p, *start, *end; - njs_int_t rc; - ngx_http_request_t *r; - - rc = njs_vm_array_alloc(vm, keys, 8); - if (rc != NJS_OK) { - return NJS_ERROR; - } - - r = njs_vm_external(vm, ngx_http_js_request_proto_id, value); - if (r == NULL) { - return NJS_OK; - } - - start = r->args.data; - end = start + r->args.len; - - while (start < end) { - p = ngx_strlchr(start, end, '&'); - if (p == NULL) { - p = end; - } - - v = ngx_strlchr(start, p, '='); - if (v == NULL) { - v = p; - } - - if (v != start) { - value = njs_vm_array_push(vm, keys); - if (value == NULL) { - return NJS_ERROR; - } - - rc = njs_vm_value_string_set(vm, value, start, v - start); - if (rc != NJS_OK) { - return NJS_ERROR; - } - } - - start = p + 1; - } - - return NJS_OK; -} - static njs_int_t ngx_http_js_ext_variables(njs_vm_t *vm, njs_object_prop_t *prop, From mdounin at mdounin.ru Fri Jul 15 05:25:29 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Jul 2022 08:25:29 +0300 Subject: [PATCH] Documented the "ipv4=off" parameter of the "resolver" directive In-Reply-To: References: Message-ID: Hello! On Thu, Jul 14, 2022 at 06:51:28PM +0100, Yaroslav Zhuravlev wrote: > # User Yaroslav Zhuravlev > # Date 1657820606 -3600 > # Thu Jul 14 18:43:26 2022 +0100 > # Node ID ef51bb5722334aea46d2846d7a30d671b0f624db > # Parent 9172cf4d2713b685d6956810e5fcad4c29881637 > Documented the "ipv4=off" parameter of the "resolver" directive. > > diff --git a/xml/en/docs/http/ngx_http_core_module.xml b/xml/en/docs/http/ngx_http_core_module.xml > --- a/xml/en/docs/http/ngx_http_core_module.xml > +++ b/xml/en/docs/http/ngx_http_core_module.xml > @@ -10,7 +10,7 @@ > link="/en/docs/http/ngx_http_core_module.html" > lang="en" > - rev="99"> > + rev="100"> > >
> > @@ -2181,6 +2181,7 @@ > address ... > [valid=time] > [ipv6=on|off] > + [ipv4=on|off] > [status_zone=zone] > > http > @@ -2214,6 +2215,11 @@ > > > > + > +If looking up of IPv4 addresses is not desired, > +the ipv4=off parameter can be specified (1.23.1). > + Shouldn't it be in the same paragraph which describes the default behaviour ("both IPv4 and IPv6 addresses") and "ipv6=off"? It might be also a good idea to describe "ipv6=off" and "ipv4=off" in one sentence instead of repeating the same phrase twice. -- Maxim Dounin http://mdounin.ru/ From yar at nginx.com Fri Jul 15 11:49:42 2022 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Fri, 15 Jul 2022 12:49:42 +0100 Subject: [PATCH] Documented the "ipv4=off" parameter of the "resolver" directive In-Reply-To: References: Message-ID: [...] >> + >> +If looking up of IPv4 addresses is not desired, >> +the ipv4=off parameter can be specified (1.23.1). >> + > > Shouldn't it be in the same paragraph which describes the default > behaviour ("both IPv4 and IPv6 addresses") and "ipv6=off"? > > It might be also a good idea to describe "ipv6=off" and > "ipv4=off" in one sentence instead of repeating the same phrase > twice. > Hi Maxim, Thank you for your feedback, the patch reworked, in the new version the changes are minimal if compared to the current doc: # HG changeset patch # User Yaroslav Zhuravlev # Date 1657884682 -3600 # Fri Jul 15 12:31:22 2022 +0100 # Node ID eba78fc7fe66c1d1a51064c509a11945bb6ef8a4 # Parent 9172cf4d2713b685d6956810e5fcad4c29881637 Documented the "ipv4=off" parameter of the "resolver" directive. diff --git a/xml/en/docs/http/ngx_http_core_module.xml b/xml/en/docs/http/ngx_http_core_module.xml --- a/xml/en/docs/http/ngx_http_core_module.xml +++ b/xml/en/docs/http/ngx_http_core_module.xml @@ -10,7 +10,7 @@ + rev="100">
@@ -2181,6 +2181,7 @@ address ... [valid=time] [ipv6=on|off] + [ipv4=on|off] [status_zone=zone] http @@ -2206,8 +2207,9 @@ By default, nginx will look up both IPv4 and IPv6 addresses while resolving. -If looking up of IPv6 addresses is not desired, -the ipv6=off parameter can be specified. +If looking up of any part of these addresses is not desired, +the ipv6=off or +the ipv4=off (1.23.1) parameter can be specified. Resolving of names into IPv6 addresses is supported starting from version 1.5.8. diff --git a/xml/en/docs/mail/ngx_mail_core_module.xml b/xml/en/docs/mail/ngx_mail_core_module.xml --- a/xml/en/docs/mail/ngx_mail_core_module.xml +++ b/xml/en/docs/mail/ngx_mail_core_module.xml @@ -10,7 +10,7 @@ + rev="20">
@@ -314,6 +314,7 @@ address ... [valid=time] [ipv6=on|off] + [ipv4=on|off] [status_zone=zone] off off @@ -344,8 +345,9 @@ By default, nginx will look up both IPv4 and IPv6 addresses while resolving. -If looking up of IPv6 addresses is not desired, -the ipv6=off parameter can be specified. +If looking up of any part of these addresses is not desired, +the ipv6=off or +the ipv4=off (1.23.1) parameter can be specified. Resolving of names into IPv6 addresses is supported starting from version 1.5.8. diff --git a/xml/en/docs/stream/ngx_stream_core_module.xml b/xml/en/docs/stream/ngx_stream_core_module.xml --- a/xml/en/docs/stream/ngx_stream_core_module.xml +++ b/xml/en/docs/stream/ngx_stream_core_module.xml @@ -9,7 +9,7 @@ + rev="35">
@@ -342,6 +342,7 @@ address ... [valid=time] [ipv6=on|off] + [ipv4=on|off] [status_zone=zone] stream @@ -362,8 +363,9 @@ By default, nginx will look up both IPv4 and IPv6 addresses while resolving. -If looking up of IPv6 addresses is not desired, -the ipv6=off parameter can be specified. +If looking up of any part of these addresses is not desired, +the ipv6=off or +the ipv4=off (1.23.1) parameter can be specified. diff --git a/xml/ru/docs/http/ngx_http_core_module.xml b/xml/ru/docs/http/ngx_http_core_module.xml --- a/xml/ru/docs/http/ngx_http_core_module.xml +++ b/xml/ru/docs/http/ngx_http_core_module.xml @@ -10,7 +10,7 @@ + rev="100">
@@ -2178,6 +2178,7 @@ адрес ... [valid=время] [ipv6=on|off] + [ipv4=on|off] [status_zone=зона] http @@ -2204,8 +2205,9 @@ По умолчанию nginx будет искать как IPv4-, так и IPv6-адреса при преобразовании имён в адреса. -Если поиск IPv6-адресов нежелателен, -можно указать параметр ipv6=off. +Если поиск части адресов нежелателен, +можно указать параметр ipv6=off +или ipv4=off (1.23.1). Преобразование имён в IPv6-адреса поддерживается начиная с версии 1.5.8. diff --git a/xml/ru/docs/mail/ngx_mail_core_module.xml b/xml/ru/docs/mail/ngx_mail_core_module.xml --- a/xml/ru/docs/mail/ngx_mail_core_module.xml +++ b/xml/ru/docs/mail/ngx_mail_core_module.xml @@ -10,7 +10,7 @@ + rev="20">
@@ -317,6 +317,7 @@ адрес ... [valid=time] [ipv6=on|off] + [ipv4=on|off] [status_zone=зона] off off @@ -348,8 +349,9 @@ По умолчанию nginx будет искать как IPv4-, так и IPv6-адреса при преобразовании имён в адреса. -Если поиск IPv6-адресов нежелателен, -можно указать параметр ipv6=off. +Если поиск части адресов нежелателен, +можно указать параметр ipv6=off +или ipv4=off (1.23.1). Преобразование имён в IPv6-адреса поддерживается начиная с версии 1.5.8. diff --git a/xml/ru/docs/stream/ngx_stream_core_module.xml b/xml/ru/docs/stream/ngx_stream_core_module.xml --- a/xml/ru/docs/stream/ngx_stream_core_module.xml +++ b/xml/ru/docs/stream/ngx_stream_core_module.xml @@ -9,7 +9,7 @@ + rev="35">
@@ -347,6 +347,7 @@ адрес ... [valid=время] [ipv6=on|off] + [ipv4=on|off] [status_zone=зона] stream @@ -368,8 +369,9 @@ По умолчанию nginx будет искать как IPv4-, так и IPv6-адреса при преобразовании имён в адреса. -Если поиск IPv6-адресов нежелателен, -можно указать параметр ipv6=off. +Если поиск части адресов нежелателен, +можно указать параметр ipv6=off +или ipv4=off (1.23.1). [...] From pluknet at nginx.com Fri Jul 15 13:34:03 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 15 Jul 2022 13:34:03 +0000 Subject: [nginx] Range filter: clearing of pre-existing Content-Range headers. Message-ID: details: https://hg.nginx.org/nginx/rev/ae2d62bb12c0 branches: changeset: 8057:ae2d62bb12c0 user: Maxim Dounin date: Fri Jul 15 07:01:44 2022 +0300 description: Range filter: clearing of pre-existing Content-Range headers. Some servers might emit Content-Range header on 200 responses, and this does not seem to contradict RFC 9110: as per RFC 9110, the Content-Range header has no meaning for status codes other than 206 and 416. Previously this resulted in duplicate Content-Range headers in nginx responses handled by the range filter. Fix is to clear pre-existing headers. diffstat: src/http/modules/ngx_http_range_filter_module.c | 13 +++++++++++++ 1 files changed, 13 insertions(+), 0 deletions(-) diffs (37 lines): diff -r 0422365794f7 -r ae2d62bb12c0 src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Thu Jul 14 21:26:54 2022 +0400 +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Jul 15 07:01:44 2022 +0300 @@ -425,6 +425,10 @@ ngx_http_range_singlepart_header(ngx_htt return NGX_ERROR; } + if (r->headers_out.content_range) { + r->headers_out.content_range->hash = 0; + } + r->headers_out.content_range = content_range; content_range->hash = 1; @@ -582,6 +586,11 @@ ngx_http_range_multipart_header(ngx_http r->headers_out.content_length = NULL; } + if (r->headers_out.content_range) { + r->headers_out.content_range->hash = 0; + r->headers_out.content_range = NULL; + } + return ngx_http_next_header_filter(r); } @@ -598,6 +607,10 @@ ngx_http_range_not_satisfiable(ngx_http_ return NGX_ERROR; } + if (r->headers_out.content_range) { + r->headers_out.content_range->hash = 0; + } + r->headers_out.content_range = content_range; content_range->hash = 1; From v.zhestikov at f5.com Fri Jul 15 23:06:55 2022 From: v.zhestikov at f5.com (Vadim Zhestikov) Date: Fri, 15 Jul 2022 23:06:55 +0000 Subject: [njs] Fixed async function declaration in CLI. Message-ID: details: https://hg.nginx.org/njs/rev/56a890599de2 branches: changeset: 1910:56a890599de2 user: Vadim Zhestikov date: Fri Jul 15 15:44:16 2022 -0700 description: Fixed async function declaration in CLI. This closes #559 issue on Github. 
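For instance (illustrative snippet), an input whose last statement is an async
function declaration, such as:

    >> async function sum(a, b) { return a + b }

is now handled by the interactive shell the same way as a regular function
declaration.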
diffstat: src/njs_generator.c | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diffs (13 lines): diff -r 6a71c19cec11 -r 56a890599de2 src/njs_generator.c --- a/src/njs_generator.c Thu Jul 14 20:16:37 2022 -0700 +++ b/src/njs_generator.c Fri Jul 15 15:44:16 2022 -0700 @@ -2615,7 +2615,8 @@ njs_generate_stop_statement_end(njs_vm_t if (node != NULL) { if ((node->index != NJS_INDEX_NONE - && node->token_type != NJS_TOKEN_FUNCTION_DECLARATION) + && node->token_type != NJS_TOKEN_FUNCTION_DECLARATION + && node->token_type != NJS_TOKEN_ASYNC_FUNCTION_DECLARATION) || node->token_type == NJS_TOKEN_THIS) { index = node->index; From v.zhestikov at f5.com Fri Jul 15 23:06:57 2022 From: v.zhestikov at f5.com (Vadim Zhestikov) Date: Fri, 15 Jul 2022 23:06:57 +0000 Subject: [njs] Fixed AST debug with tokens added with async/await feature (0.7.0). Message-ID: details: https://hg.nginx.org/njs/rev/0a2bd6c71db4 branches: changeset: 1911:0a2bd6c71db4 user: Vadim Zhestikov date: Fri Jul 15 15:44:27 2022 -0700 description: Fixed AST debug with tokens added with async/await feature (0.7.0). diffstat: src/njs_parser.c | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diffs (15 lines): diff -r 56a890599de2 -r 0a2bd6c71db4 src/njs_parser.c --- a/src/njs_parser.c Fri Jul 15 15:44:16 2022 -0700 +++ b/src/njs_parser.c Fri Jul 15 15:44:27 2022 -0700 @@ -9145,8 +9145,11 @@ njs_parser_serialize_node(njs_chb_t *cha njs_token_serialize(NJS_TOKEN_PROTO_INIT); njs_token_serialize(NJS_TOKEN_FUNCTION); + njs_token_serialize(NJS_TOKEN_ASYNC_FUNCTION); njs_token_serialize(NJS_TOKEN_FUNCTION_DECLARATION); + njs_token_serialize(NJS_TOKEN_ASYNC_FUNCTION_DECLARATION); njs_token_serialize(NJS_TOKEN_FUNCTION_EXPRESSION); + njs_token_serialize(NJS_TOKEN_ASYNC_FUNCTION_EXPRESSION); njs_token_serialize(NJS_TOKEN_FUNCTION_CALL); njs_token_serialize(NJS_TOKEN_METHOD_CALL); From mdounin at mdounin.ru Fri Jul 15 12:23:57 2022 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Fri, 15 Jul 2022 15:23:57 +0300 Subject: [PATCH] Events: fixed EPOLLRDHUP with FIONREAD (ticket #2367) Message-ID: # HG changeset patch # User Maxim Dounin # Date 1657887572 -10800 # Fri Jul 15 15:19:32 2022 +0300 # Node ID f3510cb959d1ae168e3458036d1606dcedffd212 # Parent ae2d62bb12c00ebd014c147d7b37252ccfe72373 Events: fixed EPOLLRDHUP with FIONREAD (ticket #2367). When reading exactly rev->available bytes, rev->available might become 0 after FIONREAD usage introduction in efd71d49bde0. On the next call of ngx_readv_chain() on systems with EPOLLRDHUP this resulted in return without any actions, that is, with rev->ready set, and this in turn resulted in no timers set in event pipe, leading to socket leaks. Fix is to reset rev->ready in ngx_readv_chain() when returning due to rev->available being 0 with EPOLLRDHUP, much like it is already done in ngx_unix_recv(). This ensures that if rev->available will become 0, on systems with EPOLLRDHUP support appropriate EPOLLRDHUP-specific handling will happen on the next ngx_readv_chain() call. While here, also synced ngx_readv_chain() to match ngx_unix_recv() and reset rev->ready when returning due to rev->available being 0 with kqueue. This is mostly cosmetic change, as rev->ready is anyway reset when rev->available is set to 0. 
diff --git a/src/os/unix/ngx_readv_chain.c b/src/os/unix/ngx_readv_chain.c --- a/src/os/unix/ngx_readv_chain.c +++ b/src/os/unix/ngx_readv_chain.c @@ -46,6 +46,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx return 0; } else { + rev->ready = 0; return NGX_AGAIN; } } @@ -63,6 +64,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx rev->pending_eof, rev->available); if (rev->available == 0 && !rev->pending_eof) { + rev->ready = 0; return NGX_AGAIN; } } From mdounin at mdounin.ru Sat Jul 16 05:41:53 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 16 Jul 2022 08:41:53 +0300 Subject: [PATCH] Events: fixed EPOLLRDHUP with FIONREAD (ticket #2367) In-Reply-To: References: Message-ID: Hello! On Fri, Jul 15, 2022 at 03:23:57PM +0300, Maxim Dounin wrote: > # HG changeset patch > # User Maxim Dounin > # Date 1657887572 -10800 > # Fri Jul 15 15:19:32 2022 +0300 > # Node ID f3510cb959d1ae168e3458036d1606dcedffd212 > # Parent ae2d62bb12c00ebd014c147d7b37252ccfe72373 > Events: fixed EPOLLRDHUP with FIONREAD (ticket #2367). > > When reading exactly rev->available bytes, rev->available might become 0 > after FIONREAD usage introduction in efd71d49bde0. On the next call of > ngx_readv_chain() on systems with EPOLLRDHUP this resulted in return without > any actions, that is, with rev->ready set, and this in turn resulted in no > timers set in event pipe, leading to socket leaks. > > Fix is to reset rev->ready in ngx_readv_chain() when returning due to > rev->available being 0 with EPOLLRDHUP, much like it is already done in > ngx_unix_recv(). This ensures that if rev->available will become 0, on > systems with EPOLLRDHUP support appropriate EPOLLRDHUP-specific handling > will happen on the next ngx_readv_chain() call. > > While here, also synced ngx_readv_chain() to match ngx_unix_recv() and > reset rev->ready when returning due to rev->available being 0 with kqueue. > This is mostly cosmetic change, as rev->ready is anyway reset when > rev->available is set to 0. > > diff --git a/src/os/unix/ngx_readv_chain.c b/src/os/unix/ngx_readv_chain.c > --- a/src/os/unix/ngx_readv_chain.c > +++ b/src/os/unix/ngx_readv_chain.c > @@ -46,6 +46,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx > return 0; > > } else { > + rev->ready = 0; > return NGX_AGAIN; > } > } > @@ -63,6 +64,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx > rev->pending_eof, rev->available); > > if (rev->available == 0 && !rev->pending_eof) { > + rev->ready = 0; > return NGX_AGAIN; > } > } Already reviewed in #2367 comments, pushed to http://mdounin.ru/hg/nginx/. -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Sun Jul 17 08:05:56 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Sun, 17 Jul 2022 08:05:56 +0000 Subject: [nginx] Events: fixed EPOLLRDHUP with FIONREAD (ticket #2367). Message-ID: details: https://hg.nginx.org/nginx/rev/f3510cb959d1 branches: changeset: 8058:f3510cb959d1 user: Maxim Dounin date: Fri Jul 15 15:19:32 2022 +0300 description: Events: fixed EPOLLRDHUP with FIONREAD (ticket #2367). When reading exactly rev->available bytes, rev->available might become 0 after FIONREAD usage introduction in efd71d49bde0. On the next call of ngx_readv_chain() on systems with EPOLLRDHUP this resulted in return without any actions, that is, with rev->ready set, and this in turn resulted in no timers set in event pipe, leading to socket leaks. Fix is to reset rev->ready in ngx_readv_chain() when returning due to rev->available being 0 with EPOLLRDHUP, much like it is already done in ngx_unix_recv(). 
This ensures that if rev->available will become 0, on systems with EPOLLRDHUP support appropriate EPOLLRDHUP-specific handling will happen on the next ngx_readv_chain() call. While here, also synced ngx_readv_chain() to match ngx_unix_recv() and reset rev->ready when returning due to rev->available being 0 with kqueue. This is mostly cosmetic change, as rev->ready is anyway reset when rev->available is set to 0. diffstat: src/os/unix/ngx_readv_chain.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (19 lines): diff -r ae2d62bb12c0 -r f3510cb959d1 src/os/unix/ngx_readv_chain.c --- a/src/os/unix/ngx_readv_chain.c Fri Jul 15 07:01:44 2022 +0300 +++ b/src/os/unix/ngx_readv_chain.c Fri Jul 15 15:19:32 2022 +0300 @@ -46,6 +46,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx return 0; } else { + rev->ready = 0; return NGX_AGAIN; } } @@ -63,6 +64,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx rev->pending_eof, rev->available); if (rev->available == 0 && !rev->pending_eof) { + rev->ready = 0; return NGX_AGAIN; } } From cnewton at netflix.com Mon Jul 18 15:02:53 2022 From: cnewton at netflix.com (Chris Newton) Date: Mon, 18 Jul 2022 16:02:53 +0100 Subject: ngx_http_script_run() question Message-ID: I'm adding a variable that wants to provide an absolute URL to proxy_pass, whose value is based in part on a database lookup - but if the search key is not found in that database, my handler function is returning an NGX_ERROR which seemed reasonable (there is no good default I can return), so just causing a 500 error at that point is not unreasonable. However, that also causes an error to be logged by ngx_http_proxy_eval() for every request, which i'd like to avoid. I can make a change to ngx_http_proxy_eval() : *--- a/nginx/files/nginx/src/http/modules/ngx_http_proxy_module.c* *+++ b/nginx/files/nginx/src/http/modules/ngx_http_proxy_module.c* @@ -1098,9 +1098,10 @@ ngx_http_proxy_eval(ngx_http_request_t *r, ngx_http_proxy_ctx_t *ctx, ngx_url_t url; ngx_http_upstream_t *u; if (ngx_http_script_run(r, &proxy, plcf->proxy_lengths->elts, 0, plcf->proxy_values->elts) - == NULL) + == NULL || proxy.len == 0) { return NGX_ERROR; } although I think it might be good to make a more general fix - ie., have ngx_http_script_run() return NULL if there was no output from the script: *--- a/nginx/files/nginx/src/http/ngx_http_script.c* *+++ b/nginx/files/nginx/src/http/ngx_http_script.c* @@ -641,6 +641,9 @@ ngx_http_script_run(ngx_http_request_t *r, ngx_str_t *value, len += lcode(&e); } + if (len == 0) { + return NULL; + } Would that be a reasonable change to make ? TIA Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Tue Jul 19 01:43:01 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 19 Jul 2022 01:43:01 +0000 Subject: [njs] Version 0.7.6. Message-ID: details: https://hg.nginx.org/njs/rev/461dfb0bb60e branches: changeset: 1912:461dfb0bb60e user: Dmitry Volyntsev date: Mon Jul 18 17:58:41 2022 -0700 description: Version 0.7.6. diffstat: CHANGES | 24 ++++++++++++++++++++++++ 1 files changed, 24 insertions(+), 0 deletions(-) diffs (31 lines): diff -r 0a2bd6c71db4 -r 461dfb0bb60e CHANGES --- a/CHANGES Fri Jul 15 15:44:27 2022 -0700 +++ b/CHANGES Mon Jul 18 17:58:41 2022 -0700 @@ -1,3 +1,27 @@ +Changes with njs 0.7.6 19 Jul 2022 + + nginx modules: + + *) Feature: improved r.args object. Added support for multiple + arguments with the same key. Added case sensitivity for + keys. Keys and values are percent-decoded now. 
+ + *) Bugfix: fixed r.headersOut setter for special headers. + + Core: + + *) Feature: added Symbol.for() and Symbol.keyfor(). + + *) Feature: added btoa() and atob() from WHATWG spec. + + *) Bugfix: fixed large non-decimal literals. + + *) Bugfix: fixed unicode argument trimming in parseInt(). + + *) Bugfix: fixed break instruction in a try-catch block. + + *) Bugfix: fixed async function declaration in CLI. + Changes with njs 0.7.5 21 Jun 2022 nginx modules: From xeioex at nginx.com Tue Jul 19 01:43:03 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 19 Jul 2022 01:43:03 +0000 Subject: [njs] Added tag 0.7.6 for changeset 461dfb0bb60e Message-ID: details: https://hg.nginx.org/njs/rev/61d5b54b2026 branches: changeset: 1913:61d5b54b2026 user: Dmitry Volyntsev date: Mon Jul 18 18:42:28 2022 -0700 description: Added tag 0.7.6 for changeset 461dfb0bb60e diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r 461dfb0bb60e -r 61d5b54b2026 .hgtags --- a/.hgtags Mon Jul 18 17:58:41 2022 -0700 +++ b/.hgtags Mon Jul 18 18:42:28 2022 -0700 @@ -51,3 +51,4 @@ 3dd315b80bab10b6ac475ee25dd207d2eb759881 f15d039cf625fb92e061f21c9f28a788032a0faa 0.7.3 b5198f7f11a3b5e174f9e75a7bd50394fa354fb0 0.7.4 63c258c456ca018385b13f352faefdf25c7bd3bb 0.7.5 +461dfb0bb60e531d361319f30993f29860c19f55 0.7.6 From mdounin at mdounin.ru Tue Jul 19 02:31:02 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Jul 2022 05:31:02 +0300 Subject: ngx_http_script_run() question In-Reply-To: References: Message-ID: Hello! On Mon, Jul 18, 2022 at 04:02:53PM +0100, Chris Newton via nginx-devel wrote: > I'm adding a variable that wants to provide an absolute URL to proxy_pass, > whose value is based in part on a database lookup - but if the search key > is not found in that database, my handler function is returning an > NGX_ERROR which seemed reasonable (there is no good default I can return), > so just causing a 500 error at that point is not unreasonable. > > However, that also causes an error to be logged by ngx_http_proxy_eval() > for every request, which i'd like to avoid. NGX_ERROR means a fatal error and it is expected to be logged - by the code where the error appears. Right now scripting infrastructure silently converts errors from variable handlers to empty values - but this is something to change, since this isn't a safe approach and can cause various issues. Consider returning a value instead - something like an empty value or a value with the not_found flag. The logging you see is, however, unrelated. It's ngx_http_proxy_eval() which complains about an invalid URL after variable evaluation. That's because proxy_pass is expected to be reasonably configured to always result in a valid URL. If it doesn't, this is considered to be a configuration error and reported, both to the user with 500 (Internal Server Error) and to the error log. If in your case a valid URL for proxying is not always available, consider re-configuring nginx to avoid the error condition. In particular, providing a dummy URL which immediately returns an error or an explicit check before proxying might be considered. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Jul 19 13:23:14 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Jul 2022 16:23:14 +0300 Subject: nginx-1.23.1 changes draft Message-ID: Hello! Changes with nginx 1.23.1 19 Jul 2022 *) Feature: memory usage optimization in configurations with SSL proxying. 
*) Feature: looking up of IPv4 addresses while resolving now can be disabled with the "ipv4=off" parameter of the "resolver" directive. *) Change: the logging level of the "bad key share", "bad extension", "bad cipher", and "bad ecpoint" SSL errors has been lowered from "crit" to "info". *) Bugfix: while returning byte ranges nginx did not remove the "Content-Range" header line if it was present in the original backend response. *) Bugfix: a proxied response might be truncated during reconfiguration on Linux; the bug had appeared in 1.17.5. Изменения в nginx 1.23.1 19.07.2022 *) Добавление: оптимизация использования памяти в конфигурациях с SSL-проксированием. *) Добавление: теперь с помощью параметра "ipv4=off" директивы "resolver" можно запретить поиск IPv4-адресов при преобразовании имён в адреса. *) Изменение: уровень логгирования ошибок SSL "bad key share", "bad extension", "bad cipher" и "bad ecpoint" понижен с уровня crit до info. *) Исправление: при возврате диапазонов nginx не удалял строку заголовка "Content-Range", если она присутствовала в исходном ответе бэкенда. *) Исправление: проксированный ответ мог быть отправлен не полностью при переконфигурации на Linux; ошибка появилась в 1.17.5. -- Maxim Dounin http://mdounin.ru/ From yar at nginx.com Tue Jul 19 13:24:49 2022 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Tue, 19 Jul 2022 14:24:49 +0100 Subject: [PATCH] Documented the "ipv4=off" parameter of the "resolver" directive In-Reply-To: References: Message-ID: <75B9FB9A-8E1F-477F-A9B9-17E6E54565D8@nginx.com> [...] Hello, Here's a slightly reworked version: "By default, nginx will look up both IPv4 and IPv6 addresses while resolving. If looking up of IPv4 or IPv6 addresses is not desired, the ipv4=off (1.23.1) or the ipv6=off parameter can be specified." There is also an alternative variant: ""By default, nginx will look up both IPv4 and IPv6 addresses while resolving. If looking up of of either address family is not desired, the ipv4=off (1.23.1) or the ipv6=off parameter can be specified." # HG changeset patch # User Yaroslav Zhuravlev # Date 1658236202 -3600 # Tue Jul 19 14:10:02 2022 +0100 # Node ID e76c1b137b99a004b0344c825ada94657508bb0e # Parent 9172cf4d2713b685d6956810e5fcad4c29881637 Documented the "ipv4=off" parameter of the "resolver" directive. diff --git a/xml/en/docs/http/ngx_http_core_module.xml b/xml/en/docs/http/ngx_http_core_module.xml --- a/xml/en/docs/http/ngx_http_core_module.xml +++ b/xml/en/docs/http/ngx_http_core_module.xml @@ -10,7 +10,7 @@ + rev="100">
@@ -2180,6 +2180,7 @@ address ... [valid=time] + [ipv4=on|off] [ipv6=on|off] [status_zone=zone] @@ -2206,7 +2207,8 @@ By default, nginx will look up both IPv4 and IPv6 addresses while resolving. -If looking up of IPv6 addresses is not desired, +If looking up of IPv4 or IPv6 addresses is not desired, +the ipv4=off (1.23.1) or the ipv6=off parameter can be specified. Resolving of names into IPv6 addresses is supported diff --git a/xml/en/docs/mail/ngx_mail_core_module.xml b/xml/en/docs/mail/ngx_mail_core_module.xml --- a/xml/en/docs/mail/ngx_mail_core_module.xml +++ b/xml/en/docs/mail/ngx_mail_core_module.xml @@ -10,7 +10,7 @@ + rev="20">
@@ -313,6 +313,7 @@ address ... [valid=time] + [ipv4=on|off] [ipv6=on|off] [status_zone=zone] off @@ -344,7 +345,8 @@ By default, nginx will look up both IPv4 and IPv6 addresses while resolving. -If looking up of IPv6 addresses is not desired, +If looking up of IPv4 or IPv6 addresses is not desired, +the ipv4=off (1.23.1) or the ipv6=off parameter can be specified. Resolving of names into IPv6 addresses is supported diff --git a/xml/en/docs/stream/ngx_stream_core_module.xml b/xml/en/docs/stream/ngx_stream_core_module.xml --- a/xml/en/docs/stream/ngx_stream_core_module.xml +++ b/xml/en/docs/stream/ngx_stream_core_module.xml @@ -9,7 +9,7 @@ + rev="35">
@@ -341,6 +341,7 @@ address ... [valid=time] + [ipv4=on|off] [ipv6=on|off] [status_zone=zone] @@ -362,7 +363,8 @@ By default, nginx will look up both IPv4 and IPv6 addresses while resolving. -If looking up of IPv6 addresses is not desired, +If looking up of IPv4 or IPv6 addresses is not desired, +the ipv4=off (1.23.1) or the ipv6=off parameter can be specified. diff --git a/xml/ru/docs/http/ngx_http_core_module.xml b/xml/ru/docs/http/ngx_http_core_module.xml --- a/xml/ru/docs/http/ngx_http_core_module.xml +++ b/xml/ru/docs/http/ngx_http_core_module.xml @@ -10,7 +10,7 @@ + rev="100">
@@ -2177,6 +2177,7 @@ адрес ... [valid=время] + [ipv4=on|off] [ipv6=on|off] [status_zone=зона] @@ -2204,8 +2205,9 @@ По умолчанию nginx будет искать как IPv4-, так и IPv6-адреса при преобразовании имён в адреса. -Если поиск IPv6-адресов нежелателен, -можно указать параметр ipv6=off. +Если поиск IPv4- или IPv6-адресов нежелателен, +можно указать параметр ipv4=off (1.23.1) или +ipv6=off. Преобразование имён в IPv6-адреса поддерживается начиная с версии 1.5.8. diff --git a/xml/ru/docs/mail/ngx_mail_core_module.xml b/xml/ru/docs/mail/ngx_mail_core_module.xml --- a/xml/ru/docs/mail/ngx_mail_core_module.xml +++ b/xml/ru/docs/mail/ngx_mail_core_module.xml @@ -10,7 +10,7 @@ + rev="20">
@@ -316,6 +316,7 @@ адрес ... [valid=time] + [ipv4=on|off] [ipv6=on|off] [status_zone=зона] off @@ -348,8 +349,9 @@ По умолчанию nginx будет искать как IPv4-, так и IPv6-адреса при преобразовании имён в адреса. -Если поиск IPv6-адресов нежелателен, -можно указать параметр ipv6=off. +Если поиск IPv4- или IPv6-адресов нежелателен, +можно указать параметр ipv4=off (1.23.1) или +ipv6=off. Преобразование имён в IPv6-адреса поддерживается начиная с версии 1.5.8. diff --git a/xml/ru/docs/stream/ngx_stream_core_module.xml b/xml/ru/docs/stream/ngx_stream_core_module.xml --- a/xml/ru/docs/stream/ngx_stream_core_module.xml +++ b/xml/ru/docs/stream/ngx_stream_core_module.xml @@ -9,7 +9,7 @@ + rev="35">
@@ -346,6 +346,7 @@ адрес ... [valid=время] + [ipv4=on|off] [ipv6=on|off] [status_zone=зона] @@ -368,8 +369,9 @@ По умолчанию nginx будет искать как IPv4-, так и IPv6-адреса при преобразовании имён в адреса. -Если поиск IPv6-адресов нежелателен, -можно указать параметр ipv6=off. +Если поиск IPv4- или IPv6-адресов нежелателен, +можно указать параметр ipv4=off (1.23.1) или +ipv6=off. From maxim at nginx.com Tue Jul 19 13:44:17 2022 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 19 Jul 2022 17:44:17 +0400 Subject: nginx-1.23.1 changes draft In-Reply-To: References: Message-ID: <2a1245b6-edee-5455-d523-74e8f6000e19@nginx.com> On 19.07.2022 17:23, Maxim Dounin wrote: > Hello! > > > Changes with nginx 1.23.1 19 Jul 2022 > [...] > > Изменения в nginx 1.23.1 19.07.2022 > [...] Looks good. -- Maxim Konovalov From mdounin at mdounin.ru Tue Jul 19 13:48:31 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Jul 2022 16:48:31 +0300 Subject: [PATCH] Documented the "ipv4=off" parameter of the "resolver" directive In-Reply-To: <75B9FB9A-8E1F-477F-A9B9-17E6E54565D8@nginx.com> References: <75B9FB9A-8E1F-477F-A9B9-17E6E54565D8@nginx.com> Message-ID: Hello! On Tue, Jul 19, 2022 at 02:24:49PM +0100, Yaroslav Zhuravlev wrote: > Here's a slightly reworked version: > > "By default, nginx will look up both IPv4 and IPv6 addresses > while resolving. If looking up of IPv4 or IPv6 addresses is not > desired, the ipv4=off (1.23.1) or the ipv6=off parameter can be > specified." > > There is also an alternative variant: > > ""By default, nginx will look up both IPv4 and IPv6 addresses > while resolving. If looking up of of either address family is > not desired, the ipv4=off (1.23.1) or the ipv6=off parameter can > be specified." Both variants look good to me. > # HG changeset patch > # User Yaroslav Zhuravlev > # Date 1658236202 -3600 > # Tue Jul 19 14:10:02 2022 +0100 > # Node ID e76c1b137b99a004b0344c825ada94657508bb0e > # Parent 9172cf4d2713b685d6956810e5fcad4c29881637 > Documented the "ipv4=off" parameter of the "resolver" directive. > > diff --git a/xml/en/docs/http/ngx_http_core_module.xml b/xml/en/docs/http/ngx_http_core_module.xml > --- a/xml/en/docs/http/ngx_http_core_module.xml > +++ b/xml/en/docs/http/ngx_http_core_module.xml > @@ -10,7 +10,7 @@ > link="/en/docs/http/ngx_http_core_module.html" > lang="en" > - rev="99"> > + rev="100"> > >
> > @@ -2180,6 +2180,7 @@ > > address ... > [valid=time] > + [ipv4=on|off] > [ipv6=on|off] > [status_zone=zone] > > @@ -2206,7 +2207,8 @@ > > > By default, nginx will look up both IPv4 and IPv6 addresses while resolving. > -If looking up of IPv6 addresses is not desired, > +If looking up of IPv4 or IPv6 addresses is not desired, > +the ipv4=off (1.23.1) or > the ipv6=off parameter can be specified. > > Resolving of names into IPv6 addresses is supported > diff --git a/xml/en/docs/mail/ngx_mail_core_module.xml b/xml/en/docs/mail/ngx_mail_core_module.xml > --- a/xml/en/docs/mail/ngx_mail_core_module.xml > +++ b/xml/en/docs/mail/ngx_mail_core_module.xml > @@ -10,7 +10,7 @@ > link="/en/docs/mail/ngx_mail_core_module.html" > lang="en" > - rev="19"> > + rev="20"> > >
> > @@ -313,6 +313,7 @@ > > address ... > [valid=time] > + [ipv4=on|off] > [ipv6=on|off] > [status_zone=zone] > off > @@ -344,7 +345,8 @@ > > > By default, nginx will look up both IPv4 and IPv6 addresses while resolving. > -If looking up of IPv6 addresses is not desired, > +If looking up of IPv4 or IPv6 addresses is not desired, > +the ipv4=off (1.23.1) or > the ipv6=off parameter can be specified. > > Resolving of names into IPv6 addresses is supported > diff --git a/xml/en/docs/stream/ngx_stream_core_module.xml b/xml/en/docs/stream/ngx_stream_core_module.xml > --- a/xml/en/docs/stream/ngx_stream_core_module.xml > +++ b/xml/en/docs/stream/ngx_stream_core_module.xml > @@ -9,7 +9,7 @@ > link="/en/docs/stream/ngx_stream_core_module.html" > lang="en" > - rev="34"> > + rev="35"> > >
> > @@ -341,6 +341,7 @@ > > address ... > [valid=time] > + [ipv4=on|off] > [ipv6=on|off] > [status_zone=zone] > > @@ -362,7 +363,8 @@ > > > By default, nginx will look up both IPv4 and IPv6 addresses while resolving. > -If looking up of IPv6 addresses is not desired, > +If looking up of IPv4 or IPv6 addresses is not desired, > +the ipv4=off (1.23.1) or > the ipv6=off parameter can be specified. > > > diff --git a/xml/ru/docs/http/ngx_http_core_module.xml b/xml/ru/docs/http/ngx_http_core_module.xml > --- a/xml/ru/docs/http/ngx_http_core_module.xml > +++ b/xml/ru/docs/http/ngx_http_core_module.xml > @@ -10,7 +10,7 @@ > link="/ru/docs/http/ngx_http_core_module.html" > lang="ru" > - rev="99"> > + rev="100"> > >
> > @@ -2177,6 +2177,7 @@ > > адрес ... > [valid=время] > + [ipv4=on|off] > [ipv6=on|off] > [status_zone=зона] > > @@ -2204,8 +2205,9 @@ > > По умолчанию nginx будет искать как IPv4-, так и IPv6-адреса > при преобразовании имён в адреса. > -Если поиск IPv6-адресов нежелателен, > -можно указать параметр ipv6=off. > +Если поиск IPv4- или IPv6-адресов нежелателен, > +можно указать параметр ipv4=off (1.23.1) или > +ipv6=off. > > Преобразование имён в IPv6-адреса поддерживается > начиная с версии 1.5.8. > diff --git a/xml/ru/docs/mail/ngx_mail_core_module.xml b/xml/ru/docs/mail/ngx_mail_core_module.xml > --- a/xml/ru/docs/mail/ngx_mail_core_module.xml > +++ b/xml/ru/docs/mail/ngx_mail_core_module.xml > @@ -10,7 +10,7 @@ > link="/ru/docs/mail/ngx_mail_core_module.html" > lang="ru" > - rev="19"> > + rev="20"> > >
> > @@ -316,6 +316,7 @@ > > адрес ... > [valid=time] > + [ipv4=on|off] > [ipv6=on|off] > [status_zone=зона] > off > @@ -348,8 +349,9 @@ > > По умолчанию nginx будет искать как IPv4-, так и IPv6-адреса > при преобразовании имён в адреса. > -Если поиск IPv6-адресов нежелателен, > -можно указать параметр ipv6=off. > +Если поиск IPv4- или IPv6-адресов нежелателен, > +можно указать параметр ipv4=off (1.23.1) или > +ipv6=off. > > Преобразование имён в IPv6-адреса поддерживается > начиная с версии 1.5.8. > diff --git a/xml/ru/docs/stream/ngx_stream_core_module.xml b/xml/ru/docs/stream/ngx_stream_core_module.xml > --- a/xml/ru/docs/stream/ngx_stream_core_module.xml > +++ b/xml/ru/docs/stream/ngx_stream_core_module.xml > @@ -9,7 +9,7 @@ > link="/ru/docs/stream/ngx_stream_core_module.html" > lang="ru" > - rev="34"> > + rev="35"> > >
> > @@ -346,6 +346,7 @@ > > адрес ... > [valid=время] > + [ipv4=on|off] > [ipv6=on|off] > [status_zone=зона] > > @@ -368,8 +369,9 @@ > > По умолчанию nginx будет искать как IPv4-, так и IPv6-адреса > при преобразовании имён в адреса. > -Если поиск IPv6-адресов нежелателен, > -можно указать параметр ipv6=off. > +Если поиск IPv4- или IPv6-адресов нежелателен, > +можно указать параметр ipv4=off (1.23.1) или > +ipv6=off. > > > Looks good. -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Tue Jul 19 14:09:56 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 19 Jul 2022 18:09:56 +0400 Subject: nginx-1.23.1 changes draft In-Reply-To: References: Message-ID: > On 19 Jul 2022, at 17:23, Maxim Dounin wrote: > > Hello! > > > Changes with nginx 1.23.1 19 Jul 2022 > > *) Feature: memory usage optimization in configurations with SSL > proxying. > > *) Feature: looking up of IPv4 addresses while resolving now can be > disabled with the "ipv4=off" parameter of the "resolver" directive. > > *) Change: the logging level of the "bad key share", "bad extension", > "bad cipher", and "bad ecpoint" SSL errors has been lowered from > "crit" to "info". > > *) Bugfix: while returning byte ranges nginx did not remove the > "Content-Range" header line if it was present in the original backend > response. > > *) Bugfix: a proxied response might be truncated during reconfiguration > on Linux; the bug had appeared in 1.17.5. > > > Изменения в nginx 1.23.1 19.07.2022 > > *) Добавление: оптимизация использования памяти в конфигурациях с > SSL-проксированием. > > *) Добавление: теперь с помощью параметра "ipv4=off" директивы > "resolver" можно запретить поиск IPv4-адресов при преобразовании имён No quotes usually here, otherwise looks fine. > в адреса. > > *) Изменение: уровень логгирования ошибок SSL "bad key share", "bad > extension", "bad cipher" и "bad ecpoint" понижен с уровня crit до > info. > > *) Исправление: при возврате диапазонов nginx не удалял строку заголовка > "Content-Range", если она присутствовала в исходном ответе бэкенда. > > *) Исправление: проксированный ответ мог быть отправлен не полностью при > переконфигурации на Linux; ошибка появилась в 1.17.5. > -- Sergey Kandaurov From mdounin at mdounin.ru Tue Jul 19 14:14:58 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Jul 2022 17:14:58 +0300 Subject: nginx-1.23.1 changes draft In-Reply-To: References: Message-ID: Hello! On Tue, Jul 19, 2022 at 04:23:14PM +0300, Maxim Dounin wrote: > Changes with nginx 1.23.1 19 Jul 2022 > > *) Feature: memory usage optimization in configurations with SSL > proxying. > > *) Feature: looking up of IPv4 addresses while resolving now can be > disabled with the "ipv4=off" parameter of the "resolver" directive. > > *) Change: the logging level of the "bad key share", "bad extension", > "bad cipher", and "bad ecpoint" SSL errors has been lowered from > "crit" to "info". > > *) Bugfix: while returning byte ranges nginx did not remove the > "Content-Range" header line if it was present in the original backend > response. > > *) Bugfix: a proxied response might be truncated during reconfiguration > on Linux; the bug had appeared in 1.17.5. > > > Изменения в nginx 1.23.1 19.07.2022 > > *) Добавление: оптимизация использования памяти в конфигурациях с > SSL-проксированием. > > *) Добавление: теперь с помощью параметра "ipv4=off" директивы > "resolver" можно запретить поиск IPv4-адресов при преобразовании имён > в адреса. 
> > *) Изменение: уровень логгирования ошибок SSL "bad key share", "bad > extension", "bad cipher" и "bad ecpoint" понижен с уровня crit до > info. > > *) Исправление: при возврате диапазонов nginx не удалял строку заголовка > "Content-Range", если она присутствовала в исходном ответе бэкенда. > > *) Исправление: проксированный ответ мог быть отправлен не полностью при > переконфигурации на Linux; ошибка появилась в 1.17.5. Pushed to: http://mdounin.ru/hg/nginx/ http://mdounin.ru/hg/nginx.org/ Release builds: http://mdounin.ru/temp/nginx-1.23.1.tar.gz http://mdounin.ru/temp/nginx-1.23.1.tar.gz.asc http://mdounin.ru/temp/nginx-1.23.1.zip http://mdounin.ru/temp/nginx-1.23.1.zip.asc -- Maxim Dounin http://mdounin.ru/ From thresh at nginx.com Tue Jul 19 14:25:16 2022 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 19 Jul 2022 14:25:16 +0000 Subject: [nginx] Updated OpenSSL used for win32 builds. Message-ID: details: https://hg.nginx.org/nginx/rev/e8723b2cef75 branches: changeset: 8059:e8723b2cef75 user: Maxim Dounin date: Tue Jul 19 17:03:30 2022 +0300 description: Updated OpenSSL used for win32 builds. diffstat: misc/GNUmakefile | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r f3510cb959d1 -r e8723b2cef75 misc/GNUmakefile --- a/misc/GNUmakefile Fri Jul 15 15:19:32 2022 +0300 +++ b/misc/GNUmakefile Tue Jul 19 17:03:30 2022 +0300 @@ -6,7 +6,7 @@ TEMP = tmp CC = cl OBJS = objs.msvc8 -OPENSSL = openssl-1.1.1p +OPENSSL = openssl-1.1.1q ZLIB = zlib-1.2.12 PCRE = pcre2-10.39 From thresh at nginx.com Tue Jul 19 14:25:19 2022 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 19 Jul 2022 14:25:19 +0000 Subject: [nginx] nginx-1.23.1-RELEASE Message-ID: details: https://hg.nginx.org/nginx/rev/a63d0a70afea branches: changeset: 8060:a63d0a70afea user: Maxim Dounin date: Tue Jul 19 17:05:27 2022 +0300 description: nginx-1.23.1-RELEASE diffstat: docs/xml/nginx/changes.xml | 66 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 66 insertions(+), 0 deletions(-) diffs (76 lines): diff -r e8723b2cef75 -r a63d0a70afea docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml Tue Jul 19 17:03:30 2022 +0300 +++ b/docs/xml/nginx/changes.xml Tue Jul 19 17:05:27 2022 +0300 @@ -5,6 +5,72 @@ + + + + +оптимизация использования памяти +в конфигурациях с SSL-проксированием. + + +memory usage optimization +in configurations with SSL proxying. + + + + + +теперь с помощью параметра "ipv4=off" директивы "resolver" +можно запретить поиск IPv4-адресов при преобразовании имён в адреса. + + +looking up of IPv4 addresses while resolving now can be disabled +with the "ipv4=off" parameter of the "resolver" directive. + + + + + +уровень логгирования ошибок SSL "bad key share", "bad extension", +"bad cipher" и "bad ecpoint" +понижен с уровня crit до info. + + +the logging level of the "bad key share", "bad extension", +"bad cipher", and "bad ecpoint" SSL errors +has been lowered from "crit" to "info". + + + + + +при возврате диапазонов +nginx не удалял строку заголовка "Content-Range", +если она присутствовала в исходном ответе бэкенда. + + +while returning byte ranges +nginx did not remove the "Content-Range" header line +if it was present in the original backend response. + + + + + +проксированный ответ мог быть отправлен не полностью +при переконфигурации на Linux; +ошибка появилась в 1.17.5. + + +a proxied response might be truncated +during reconfiguration on Linux; +the bug had appeared in 1.17.5. 
+ + + + + + From thresh at nginx.com Tue Jul 19 14:25:22 2022 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 19 Jul 2022 14:25:22 +0000 Subject: [nginx] release-1.23.1 tag Message-ID: details: https://hg.nginx.org/nginx/rev/069a4813e8d6 branches: changeset: 8061:069a4813e8d6 user: Maxim Dounin date: Tue Jul 19 17:05:27 2022 +0300 description: release-1.23.1 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r a63d0a70afea -r 069a4813e8d6 .hgtags --- a/.hgtags Tue Jul 19 17:05:27 2022 +0300 +++ b/.hgtags Tue Jul 19 17:05:27 2022 +0300 @@ -468,3 +468,4 @@ 39be8a682c58308d9399cddd57e37f9fdb7bdf3e d986378168fd4d70e0121cabac274c560cca9bdf release-1.21.5 714eb4b2c09e712fb2572a2164ce2bf67638ccac release-1.21.6 5da2c0902e8e2aa4534008a582a60c61c135960e release-1.23.0 +a63d0a70afea96813ba6667997bc7d68b5863f0d release-1.23.1 From thresh at nginx.com Tue Jul 19 15:28:25 2022 From: thresh at nginx.com (=?iso-8859-1?q?Konstantin_Pavlov?=) Date: Tue, 19 Jul 2022 19:28:25 +0400 Subject: [PATCH] Linux packages: removed Ubuntu 21.10 'impish' due to EOL Message-ID: # HG changeset patch # User Konstantin Pavlov # Date 1658244488 -14400 # Tue Jul 19 19:28:08 2022 +0400 # Node ID ca4adc1068f0ba18c477f9816ce2b798f675fbe0 # Parent e06cf66a9f630d376699be0fd78b9fc64ef6256e Linux packages: removed Ubuntu 21.10 'impish' due to EOL. diff -r e06cf66a9f63 -r ca4adc1068f0 xml/en/linux_packages.xml --- a/xml/en/linux_packages.xml Tue Jul 19 14:10:02 2022 +0100 +++ b/xml/en/linux_packages.xml Tue Jul 19 19:28:08 2022 +0400 @@ -7,7 +7,7 @@
+ rev="77">
@@ -88,11 +88,6 @@ versions: -21.10 “impish” -x86_64, aarch64/arm64 - - - 22.04 “jammy” x86_64, aarch64/arm64, s390x diff -r e06cf66a9f63 -r ca4adc1068f0 xml/ru/linux_packages.xml --- a/xml/ru/linux_packages.xml Tue Jul 19 14:10:02 2022 +0100 +++ b/xml/ru/linux_packages.xml Tue Jul 19 19:28:08 2022 +0400 @@ -7,7 +7,7 @@
+ rev="77">
@@ -88,11 +88,6 @@ -21.10 “impish” -x86_64, aarch64/arm64 - - - 22.04 “jammy” x86_64, aarch64/arm64, s390x From pluknet at nginx.com Tue Jul 19 15:45:57 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 19 Jul 2022 19:45:57 +0400 Subject: [PATCH] Linux packages: removed Ubuntu 21.10 'impish' due to EOL In-Reply-To: References: Message-ID: <20220719154557.ymhissljbnaqlxs3@Y9MQ9X2QVV> On Tue, Jul 19, 2022 at 07:28:25PM +0400, Konstantin Pavlov wrote: > # HG changeset patch > # User Konstantin Pavlov > # Date 1658244488 -14400 > # Tue Jul 19 19:28:08 2022 +0400 > # Node ID ca4adc1068f0ba18c477f9816ce2b798f675fbe0 > # Parent e06cf66a9f630d376699be0fd78b9fc64ef6256e > Linux packages: removed Ubuntu 21.10 'impish' due to EOL. Looks good. From pluknet at nginx.com Wed Jul 20 13:58:26 2022 From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=) Date: Wed, 20 Jul 2022 17:58:26 +0400 Subject: [PATCH] SSL: consistent certificate verification depth Message-ID: <5a9cc2a846c9ea4c0af0.1658325506@enoparse.local> # HG changeset patch # User Sergey Kandaurov # Date 1658325446 -14400 # Wed Jul 20 17:57:26 2022 +0400 # Node ID 5a9cc2a846c9ea4c0af03109ab186af1ac28e222 # Parent 069a4813e8d6d7ec662d282a10f5f7062ebd817f SSL: consistent certificate verification depth. Originally, certificate verification depth was used to limit the number of signatures to validate, that is, to limit chains with intermediate certificates one less. The semantics was changed in OpenSSL 1.1.0, and instead it limits now the number of intermediate certificates allowed. This makes it not possible to limit certificate checking to self-signed certificates with verify depth 0 in OpenSSL 1.1.0+, and is inconsistent compared with former behaviour in BoringSSL and older OpenSSL versions. This change restores former verification logic when using OpenSSL 1.1.0+. The verify callback is adjusted to emit the "certificate chain too long" error when the certificate chain exceeds the verification depth. It has no effect to other SSL libraries, where this is limited by other means. Also, this fixes verification checks when using LibreSSL 3.4.0+, where a chain depth is not limited except by X509_VERIFY_MAX_CHAIN_CERTS (32). 
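To illustrate the difference described above (a sketch, not part of the
patch): assume a client presents the chain

    leaf -> intermediate -> trusted root

and nginx is configured with

    ssl_verify_client on;
    ssl_verify_depth  1;

Under the original semantics (OpenSSL before 1.1.0, BoringSSL) depth 1
permits a single signature check, which is not enough for the chain above
(two signatures), so it is rejected; under OpenSSL 1.1.0+ semantics depth 1
permits one intermediate certificate, so the same chain is accepted.
Likewise, "ssl_verify_depth 0" no longer limits verification to self-signed
certificates with OpenSSL 1.1.0+.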
diff -r 069a4813e8d6 -r 5a9cc2a846c9 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Tue Jul 19 17:05:27 2022 +0300 +++ b/src/event/ngx_event_openssl.c Wed Jul 20 17:57:26 2022 +0400 @@ -997,16 +997,26 @@ ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *s static int ngx_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store) { + int depth, verify_depth; + ngx_ssl_conn_t *ssl_conn; + + ssl_conn = X509_STORE_CTX_get_ex_data(x509_store, + SSL_get_ex_data_X509_STORE_CTX_idx()); + + depth = X509_STORE_CTX_get_error_depth(x509_store); + verify_depth = SSL_CTX_get_verify_depth(SSL_get_SSL_CTX(ssl_conn)); + + if (depth > verify_depth) { + X509_STORE_CTX_set_error(x509_store, X509_V_ERR_CERT_CHAIN_TOO_LONG); + } + #if (NGX_DEBUG) + + int err; char *subject, *issuer; - int err, depth; X509 *cert; X509_NAME *sname, *iname; ngx_connection_t *c; - ngx_ssl_conn_t *ssl_conn; - - ssl_conn = X509_STORE_CTX_get_ex_data(x509_store, - SSL_get_ex_data_X509_STORE_CTX_idx()); c = ngx_ssl_get_connection(ssl_conn); @@ -1016,7 +1026,6 @@ ngx_ssl_verify_callback(int ok, X509_STO cert = X509_STORE_CTX_get_current_cert(x509_store); err = X509_STORE_CTX_get_error(x509_store); - depth = X509_STORE_CTX_get_error_depth(x509_store); sname = X509_get_subject_name(cert); @@ -1058,6 +1067,7 @@ ngx_ssl_verify_callback(int ok, X509_STO if (issuer) { OPENSSL_free(issuer); } + #endif return 1; From mdounin at mdounin.ru Wed Jul 20 18:04:52 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Jul 2022 21:04:52 +0300 Subject: [PATCH] SSL: consistent certificate verification depth In-Reply-To: <5a9cc2a846c9ea4c0af0.1658325506@enoparse.local> References: <5a9cc2a846c9ea4c0af0.1658325506@enoparse.local> Message-ID: Hello! On Wed, Jul 20, 2022 at 05:58:26PM +0400, Sergey Kandaurov wrote: > # HG changeset patch > # User Sergey Kandaurov > # Date 1658325446 -14400 > # Wed Jul 20 17:57:26 2022 +0400 > # Node ID 5a9cc2a846c9ea4c0af03109ab186af1ac28e222 > # Parent 069a4813e8d6d7ec662d282a10f5f7062ebd817f > SSL: consistent certificate verification depth. > > Originally, certificate verification depth was used to limit the number > of signatures to validate, that is, to limit chains with intermediate > certificates one less. The semantics was changed in OpenSSL 1.1.0, and > instead it limits now the number of intermediate certificates allowed. > This makes it not possible to limit certificate checking to self-signed > certificates with verify depth 0 in OpenSSL 1.1.0+, and is inconsistent > compared with former behaviour in BoringSSL and older OpenSSL versions. > > This change restores former verification logic when using OpenSSL 1.1.0+. > The verify callback is adjusted to emit the "certificate chain too long" > error when the certificate chain exceeds the verification depth. It has > no effect to other SSL libraries, where this is limited by other means. > Also, this fixes verification checks when using LibreSSL 3.4.0+, where > a chain depth is not limited except by X509_VERIFY_MAX_CHAIN_CERTS (32). This (highly questionable) OpenSSL behaviour seems to be status quo for a while, also recorded in tests (see ssl_verify_depth.t). Any specific reasons for the nginx-side workaround? 
> > diff -r 069a4813e8d6 -r 5a9cc2a846c9 src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c Tue Jul 19 17:05:27 2022 +0300 > +++ b/src/event/ngx_event_openssl.c Wed Jul 20 17:57:26 2022 +0400 > @@ -997,16 +997,26 @@ ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *s > static int > ngx_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store) > { > + int depth, verify_depth; > + ngx_ssl_conn_t *ssl_conn; > + > + ssl_conn = X509_STORE_CTX_get_ex_data(x509_store, > + SSL_get_ex_data_X509_STORE_CTX_idx()); > + > + depth = X509_STORE_CTX_get_error_depth(x509_store); > + verify_depth = SSL_CTX_get_verify_depth(SSL_get_SSL_CTX(ssl_conn)); s/verify_depth/limit/? Also, using SSL_get_verify_depth() instead of going through SSL context might be easier. > + > + if (depth > verify_depth) { > + X509_STORE_CTX_set_error(x509_store, X509_V_ERR_CERT_CHAIN_TOO_LONG); This is going to overwrite earlier errors, if any. Does not look like a good idea. > + } > + > #if (NGX_DEBUG) > + > + int err; This is not going to work, variables have to be defined at the start of a block unless you assume C99. At least MSVC 2010 won't be able to compile this. > char *subject, *issuer; > - int err, depth; > X509 *cert; > X509_NAME *sname, *iname; > ngx_connection_t *c; > - ngx_ssl_conn_t *ssl_conn; > - > - ssl_conn = X509_STORE_CTX_get_ex_data(x509_store, > - SSL_get_ex_data_X509_STORE_CTX_idx()); > > c = ngx_ssl_get_connection(ssl_conn); > > @@ -1016,7 +1026,6 @@ ngx_ssl_verify_callback(int ok, X509_STO > > cert = X509_STORE_CTX_get_current_cert(x509_store); > err = X509_STORE_CTX_get_error(x509_store); > - depth = X509_STORE_CTX_get_error_depth(x509_store); > > sname = X509_get_subject_name(cert); > > @@ -1058,6 +1067,7 @@ ngx_ssl_verify_callback(int ok, X509_STO > if (issuer) { > OPENSSL_free(issuer); > } > + > #endif > > return 1; > -- Maxim Dounin http://mdounin.ru/ From V.Kokshenev at F5.com Wed Jul 20 19:08:40 2022 From: V.Kokshenev at F5.com (Vladimir Kokshenev) Date: Wed, 20 Jul 2022 19:08:40 +0000 Subject: Open-sourcing periodic upstream server resolution and implementing a dedicated service worker. Message-ID: <9E5B3F41-2CAD-47ED-A2EC-EE807DC54A18@f5.com> Hello! This is the two-part proposal to open-source periodic upstream server resolution and implement a dedicated service worker for nginx. The purpose of this e-mail is to describe the WHY and solicit feedback. Nginx supports domain names in the upstream server configuration. Currently, domain names are resolved at configuration time only, and there are no subsequent name resolutions. There are plans to open-source re-resolvable upstream servers. This will allow applying DNS updates to upstream configurations in runtime. So, there is a need to support periodic asynchronous operations. And a dedicated service worker is a possible architectural way to address this. The master process reads and parses configuration and creates the service worker when needed (in a similar way to cache-related processes). The service worker manages periodic name resolutions and updates corresponding upstream configurations. The name resolution relies on the existing nginx resolver and upstream zone functionality. The service worker will be responsible solely for periodic background tasks and wouldn't accept client connections. The service worker should be the last worker process to shut down to maintain the actual state of upstreams when there are active workers. 
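For reference, a minimal sketch of the upstream configuration such periodic
resolution is intended to serve (names and addresses are placeholders; the
"resolve" server parameter follows the existing commercial implementation,
and the final open-source syntax is not defined by this proposal):

    resolver 127.0.0.1 valid=30s;

    upstream backend {
        zone backend 1m;                      # shared memory zone for run-time updates
        server app.example.com:8080 resolve;  # re-resolve the name periodically
    }

Since the group state lives in the shared zone, addresses updated by
whichever process performs the re-resolution become visible to all worker
processes.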
Alternative architecture considered was about choosing one of the regular workers (e.g., worker zero) to take care of periodic upstream server resolution, but it creates asymmetry in responsibilities and load for this dedicated worker. -- Vladimir Kokshenev From mdounin at mdounin.ru Wed Jul 20 20:36:03 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Jul 2022 23:36:03 +0300 Subject: Open-sourcing periodic upstream server resolution and implementing a dedicated service worker. In-Reply-To: <9E5B3F41-2CAD-47ED-A2EC-EE807DC54A18@f5.com> References: <9E5B3F41-2CAD-47ED-A2EC-EE807DC54A18@f5.com> Message-ID: Hello! On Wed, Jul 20, 2022 at 07:08:40PM +0000, Vladimir Kokshenev via nginx-devel wrote: > Hello! > > This is the two-part proposal to open-source periodic upstream server resolution > and implement a dedicated service worker for nginx. The purpose of this e-mail > is to describe the WHY and solicit feedback. > > Nginx supports domain names in the upstream server configuration. > Currently, domain names are resolved at configuration time only, > and there are no subsequent name resolutions. > > There are plans to open-source re-resolvable upstream servers. > This will allow applying DNS updates to upstream configurations in runtime. > So, there is a need to support periodic asynchronous operations. > And a dedicated service worker is a possible architectural way to address this. > > The master process reads and parses configuration and creates the service worker > when needed (in a similar way to cache-related processes). > > The service worker manages periodic name resolutions and updates corresponding > upstream configurations. The name resolution relies on the existing nginx > resolver and upstream zone functionality. > > The service worker will be responsible solely for periodic background tasks > and wouldn't accept client connections. > > The service worker should be the last worker process to shut down > to maintain the actual state of upstreams when there are active workers. > > Alternative architecture considered was about choosing one of the regular > workers (e.g., worker zero) to take care of periodic upstream server resolution, > but it creates asymmetry in responsibilities and load for this dedicated worker. Both alternatives look bad to me. We already have dedicated processes to load and manage caches, and I tend to think that the only thing which somehow justifies these being dedicated is that cache management implies disk-intensive blocking operations. Mixing these with normal request processing will cause latency issues, not to mention will be non-trivial to implement. On the other hand, we are already seeing issues with dedicated process being used: in some configurations just one cache manager process simply isn't enough to remove all the files being add by many worker processes. As such, I generally tend to think that dedicated processes is a wrong way to go. With any special requirements like "last to shut down" the whole idea becomes even worse. And in this particular case all operations are perfectly asynchronous, so there is no justification like in the cache manager case. Similarly, "choosing one of the regular workers (e.g., worker zero)" looks wrong (and I've already provided this feedback previously). All workers are expected to be equal, and doing something only in a particular worker is expected to cause issues. 
E.g., consider a worker is stopped (due to a bug or intentionally to debug an issue) - this shouldn't disrupt operations of other workers. Rather, I would recommend focusing on doing all periodic tasks in a way which doesn't depend on being run in the particular worker. The simplest approach would be to run tasks in all worker processes, with some minimal checks to avoid duplicate work. -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Thu Jul 21 02:58:35 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 21 Jul 2022 02:58:35 +0000 Subject: [njs] Version bump. Message-ID: details: https://hg.nginx.org/njs/rev/d7a0a16d22e4 branches: changeset: 1914:d7a0a16d22e4 user: Dmitry Volyntsev date: Wed Jul 20 19:56:42 2022 -0700 description: Version bump. diffstat: src/njs.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 61d5b54b2026 -r d7a0a16d22e4 src/njs.h --- a/src/njs.h Mon Jul 18 18:42:28 2022 -0700 +++ b/src/njs.h Wed Jul 20 19:56:42 2022 -0700 @@ -11,8 +11,8 @@ #include -#define NJS_VERSION "0.7.6" -#define NJS_VERSION_NUMBER 0x000706 +#define NJS_VERSION "0.7.7" +#define NJS_VERSION_NUMBER 0x000707 #include /* STDOUT_FILENO, STDERR_FILENO */ From xeioex at nginx.com Thu Jul 21 02:58:37 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 21 Jul 2022 02:58:37 +0000 Subject: [njs] Modules: fixed reading response body in fetch API. Message-ID: details: https://hg.nginx.org/njs/rev/c597ee3450ab branches: changeset: 1915:c597ee3450ab user: Dmitry Volyntsev date: Wed Jul 20 17:49:00 2022 -0700 description: Modules: fixed reading response body in fetch API. Previously, the response body was ignored if the Content-Length was missing. This closes #557 issue on Github. diffstat: nginx/ngx_js_fetch.c | 49 ++++++++++++++++++++++++++++++++----------------- 1 files changed, 32 insertions(+), 17 deletions(-) diffs (90 lines): diff -r d7a0a16d22e4 -r c597ee3450ab nginx/ngx_js_fetch.c --- a/nginx/ngx_js_fetch.c Wed Jul 20 19:56:42 2022 -0700 +++ b/nginx/ngx_js_fetch.c Wed Jul 20 17:49:00 2022 -0700 @@ -623,6 +623,8 @@ ngx_js_http_alloc(njs_vm_t *vm, ngx_pool http->timeout = 10000; + http->http_parse.content_length_n = -1; + ret = njs_vm_promise_create(vm, njs_value_arg(&http->promise), njs_value_arg(&http->promise_callbacks)); if (ret != NJS_OK) { @@ -1388,7 +1390,7 @@ ngx_js_http_process_headers(ngx_js_http_ static ngx_int_t ngx_js_http_process_body(ngx_js_http_t *http) { - ssize_t size, need; + ssize_t size, chsize, need; ngx_int_t rc; njs_int_t ret; ngx_buf_t *b; @@ -1403,7 +1405,16 @@ ngx_js_http_process_body(ngx_js_http_t * return NGX_ERROR; } - if (size == http->http_parse.content_length_n) { + if (http->http_parse.chunked + && http->http_parse.content_length_n == -1) + { + ngx_js_http_error(http, 0, "invalid fetch chunked response"); + return NGX_ERROR; + } + + if (http->http_parse.content_length_n == -1 + || size == http->http_parse.content_length_n) + { ret = njs_vm_external_create(http->vm, njs_value_arg(&http->reply), ngx_http_js_fetch_proto_id, http, 0); if (ret != NJS_OK) { @@ -1415,13 +1426,6 @@ ngx_js_http_process_body(ngx_js_http_t * return NGX_DONE; } - if (http->http_parse.chunked - && http->http_parse.content_length_n == 0) - { - ngx_js_http_error(http, 0, "invalid fetch chunked response"); - return NGX_ERROR; - } - if (size < http->http_parse.content_length_n) { return NGX_AGAIN; } @@ -1454,17 +1458,28 @@ ngx_js_http_process_body(ngx_js_http_t * b->pos = http->http_chunk_parse.pos; } else { - need = 
http->http_parse.content_length_n - njs_chb_size(&http->chain); - size = ngx_min(need, b->last - b->pos); - - if (size > 0) { - njs_chb_append(&http->chain, b->pos, size); - b->pos += size; - rc = NGX_AGAIN; + size = njs_chb_size(&http->chain); + + if (http->http_parse.content_length_n == -1) { + need = http->max_response_body_size - size; } else { - rc = NGX_DONE; + need = http->http_parse.content_length_n - size; } + + chsize = ngx_min(need, b->last - b->pos); + + if (size + chsize > http->max_response_body_size) { + ngx_js_http_error(http, 0, "fetch response body is too large"); + return NGX_ERROR; + } + + if (chsize > 0) { + njs_chb_append(&http->chain, b->pos, chsize); + b->pos += chsize; + } + + rc = (need > chsize) ? NGX_AGAIN : NGX_DONE; } if (b->pos == b->end) { From Neil.Craig at bbc.co.uk Thu Jul 21 13:01:24 2022 From: Neil.Craig at bbc.co.uk (Neil Craig) Date: Thu, 21 Jul 2022 13:01:24 +0000 Subject: Open-sourcing periodic upstream server resolution and implementing a dedicated service worker. In-Reply-To: <9E5B3F41-2CAD-47ED-A2EC-EE807DC54A18@f5.com> References: <9E5B3F41-2CAD-47ED-A2EC-EE807DC54A18@f5.com> Message-ID: <0D24ED91-EED7-471C-941B-4F29B37D351D@bbc.co.uk> Hi Vladimir I hope you don't mind me chipping in here. We currently have 2 services which use Nginx with upstreams whose DNS can change frequently (e.g. AWS S3, ELB/ALB origins) so we're familiar with this problem. Initially we did a config reload every minute to solve the issue but since then, we've created custom Lua (using the OpenResty Lua module) to achieve the same thing in a more efficient way. What you're proposing sounds great to me. I wanted to suggest a few feature ideas for your consideration: * DNS TTL-based refresh (rather than refreshing every N seconds) * DNS prefresh (update DNS when the elapsed time since the last refresh is perhaps 90% of the TTL) * IPv4/6 ignore (useful for example when there's a v6 IP available but the system/network doesn't support v6) Cheers Neil Craig (He/Him) Lead Architect, BBC Digital Distribution https://www.bbc.co.uk/blogs/internet/authors/1633673f-c77c-4bc6-b100-0664db0db613 https://twitter.com/tdp_org On 20/07/2022, 20:09, "Vladimir Kokshenev via nginx-devel" wrote: Hello! This is the two-part proposal to open-source periodic upstream server resolution and implement a dedicated service worker for nginx. The purpose of this e-mail is to describe the WHY and solicit feedback. Nginx supports domain names in the upstream server configuration. Currently, domain names are resolved at configuration time only, and there are no subsequent name resolutions. There are plans to open-source re-resolvable upstream servers. This will allow applying DNS updates to upstream configurations in runtime. So, there is a need to support periodic asynchronous operations. And a dedicated service worker is a possible architectural way to address this. The master process reads and parses configuration and creates the service worker when needed (in a similar way to cache-related processes). The service worker manages periodic name resolutions and updates corresponding upstream configurations. The name resolution relies on the existing nginx resolver and upstream zone functionality. The service worker will be responsible solely for periodic background tasks and wouldn't accept client connections. The service worker should be the last worker process to shut down to maintain the actual state of upstreams when there are active workers. 
Alternative architecture considered was about choosing one of the regular workers (e.g., worker zero) to take care of periodic upstream server resolution, but it creates asymmetry in responsibilities and load for this dedicated worker. -- Vladimir Kokshenev _______________________________________________ nginx-devel mailing list -- nginx-devel at nginx.org To unsubscribe send an email to nginx-devel-leave at nginx.org From pluknet at nginx.com Thu Jul 21 14:18:24 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 21 Jul 2022 18:18:24 +0400 Subject: [PATCH] SSL: consistent certificate verification depth In-Reply-To: References: <5a9cc2a846c9ea4c0af0.1658325506@enoparse.local> Message-ID: > On 20 Jul 2022, at 22:04, Maxim Dounin wrote: > > Hello! > > On Wed, Jul 20, 2022 at 05:58:26PM +0400, Sergey Kandaurov wrote: > >> # HG changeset patch >> # User Sergey Kandaurov >> # Date 1658325446 -14400 >> # Wed Jul 20 17:57:26 2022 +0400 >> # Node ID 5a9cc2a846c9ea4c0af03109ab186af1ac28e222 >> # Parent 069a4813e8d6d7ec662d282a10f5f7062ebd817f >> SSL: consistent certificate verification depth. >> >> Originally, certificate verification depth was used to limit the number >> of signatures to validate, that is, to limit chains with intermediate >> certificates one less. The semantics was changed in OpenSSL 1.1.0, and >> instead it limits now the number of intermediate certificates allowed. >> This makes it not possible to limit certificate checking to self-signed >> certificates with verify depth 0 in OpenSSL 1.1.0+, and is inconsistent >> compared with former behaviour in BoringSSL and older OpenSSL versions. >> >> This change restores former verification logic when using OpenSSL 1.1.0+. >> The verify callback is adjusted to emit the "certificate chain too long" >> error when the certificate chain exceeds the verification depth. It has >> no effect to other SSL libraries, where this is limited by other means. >> Also, this fixes verification checks when using LibreSSL 3.4.0+, where >> a chain depth is not limited except by X509_VERIFY_MAX_CHAIN_CERTS (32). > > This (highly questionable) OpenSSL behaviour seems to be status > quo for a while, also recorded in tests (see ssl_verify_depth.t). > Any specific reasons for the nginx-side workaround? As explained in the commit message, main motivation is to eliminate annoying difference in behaviour among various SSL libraries (aside from working around the arguably broken LibreSSL verifier). Nothing specific behind that. Disambiguating ssl_verify_depth.t is a good demo of net result. The downside is that this can potentially break previously working configurations when using with modern OpenSSL versions. So the patch is to seek feedback on whether this makes sense. > >> >> diff -r 069a4813e8d6 -r 5a9cc2a846c9 src/event/ngx_event_openssl.c >> --- a/src/event/ngx_event_openssl.c Tue Jul 19 17:05:27 2022 +0300 >> +++ b/src/event/ngx_event_openssl.c Wed Jul 20 17:57:26 2022 +0400 >> @@ -997,16 +997,26 @@ ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *s >> static int >> ngx_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store) >> { >> + int depth, verify_depth; >> + ngx_ssl_conn_t *ssl_conn; >> + >> + ssl_conn = X509_STORE_CTX_get_ex_data(x509_store, >> + SSL_get_ex_data_X509_STORE_CTX_idx()); >> + >> + depth = X509_STORE_CTX_get_error_depth(x509_store); >> + verify_depth = SSL_CTX_get_verify_depth(SSL_get_SSL_CTX(ssl_conn)); > > s/verify_depth/limit/? > > Also, using SSL_get_verify_depth() instead of going through SSL > context might be easier. 
Fixed, thanks. Some remnants left after r&d. > >> + >> + if (depth > verify_depth) { >> + X509_STORE_CTX_set_error(x509_store, X509_V_ERR_CERT_CHAIN_TOO_LONG); > > This is going to overwrite earlier errors, if any. Does not look like a good > idea. Agree, it can actually - not what I want to do within this change. Also, refined commit message. > >> + } >> + >> #if (NGX_DEBUG) >> + >> + int err; > > This is not going to work, variables have to be defined at the > start of a block unless you assume C99. At least MSVC 2010 won't > be able to compile this. Fixed, thanks. # HG changeset patch # User Sergey Kandaurov # Date 1658412531 -14400 # Thu Jul 21 18:08:51 2022 +0400 # Node ID d064df4a9eb7f74f328d01337e7743adb6468dd5 # Parent 069a4813e8d6d7ec662d282a10f5f7062ebd817f SSL: consistent certificate verification depth. Originally, certificate verification depth was used to limit the number of signatures to validate, that is, to limit chains with intermediate certificates one less. The semantics was changed in OpenSSL 1.1.0, and instead it limits now the number of intermediate certificates allowed. This makes it not possible to limit certificate checking to self-signed certificates with verify depth 0 in OpenSSL 1.1.0+, and is inconsistent compared with BoringSSL and former behaviour in older OpenSSL versions. This change restores former verification logic when using OpenSSL 1.1.0+. The verify callback is adjusted to set the "certificate chain too long" error if the certificate chain exceeds the verification depth and the error wasn't previously set by other means. Also, this fixes verification checks when using LibreSSL 3.4.0+, where a chain depth is not limited except by X509_VERIFY_MAX_CHAIN_CERTS (32). diff -r 069a4813e8d6 -r d064df4a9eb7 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Tue Jul 19 17:05:27 2022 +0300 +++ b/src/event/ngx_event_openssl.c Thu Jul 21 18:08:51 2022 +0400 @@ -997,16 +997,28 @@ ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *s static int ngx_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store) { + int err, depth, limit; + ngx_ssl_conn_t *ssl_conn; + + ssl_conn = X509_STORE_CTX_get_ex_data(x509_store, + SSL_get_ex_data_X509_STORE_CTX_idx()); + + err = X509_STORE_CTX_get_error(x509_store); + depth = X509_STORE_CTX_get_error_depth(x509_store); + + limit = SSL_get_verify_depth(ssl_conn); + + if (depth > limit && err == X509_V_OK) { + err = X509_V_ERR_CERT_CHAIN_TOO_LONG; + X509_STORE_CTX_set_error(x509_store, err); + } + #if (NGX_DEBUG) + { char *subject, *issuer; - int err, depth; X509 *cert; X509_NAME *sname, *iname; ngx_connection_t *c; - ngx_ssl_conn_t *ssl_conn; - - ssl_conn = X509_STORE_CTX_get_ex_data(x509_store, - SSL_get_ex_data_X509_STORE_CTX_idx()); c = ngx_ssl_get_connection(ssl_conn); @@ -1015,8 +1027,6 @@ ngx_ssl_verify_callback(int ok, X509_STO } cert = X509_STORE_CTX_get_current_cert(x509_store); - err = X509_STORE_CTX_get_error(x509_store); - depth = X509_STORE_CTX_get_error_depth(x509_store); sname = X509_get_subject_name(cert); @@ -1058,6 +1068,7 @@ ngx_ssl_verify_callback(int ok, X509_STO if (issuer) { OPENSSL_free(issuer); } + } #endif return 1; -- Sergey Kandaurov From V.Kokshenev at F5.com Fri Jul 22 00:42:15 2022 From: V.Kokshenev at F5.com (Vladimir Kokshenev) Date: Fri, 22 Jul 2022 00:42:15 +0000 Subject: Open-sourcing periodic upstream server resolution and implementing a dedicated service worker. 
In-Reply-To: <0D24ED91-EED7-471C-941B-4F29B37D351D@bbc.co.uk> References: <9E5B3F41-2CAD-47ED-A2EC-EE807DC54A18@f5.com> <0D24ED91-EED7-471C-941B-4F29B37D351D@bbc.co.uk> Message-ID: Hi Neil Thank you for sharing your perspectives. TTL-based DNS re-resolving is planned for this feature. Also, nginx resolver (which we plan to use) supports "ipv6=off" and (since 1.23.1) "ipv4=off" parameters for cases when IPv6 or IPv4 addresses are not desired. We know about solutions based on Lua. However, the motivation is to provide a solution that works out of the box. -- Vladimir Kokshenev On 7/21/22, 6:02 AM, "Neil Craig" wrote: EXTERNAL MAIL: Neil.Craig at bbc.co.uk Hi Vladimir I hope you don't mind me chipping in here. We currently have 2 services which use Nginx with upstreams whose DNS can change frequently (e.g. AWS S3, ELB/ALB origins) so we're familiar with this problem. Initially we did a config reload every minute to solve the issue but since then, we've created custom Lua (using the OpenResty Lua module) to achieve the same thing in a more efficient way. What you're proposing sounds great to me. I wanted to suggest a few feature ideas for your consideration: * DNS TTL-based refresh (rather than refreshing every N seconds) * DNS prefresh (update DNS when the elapsed time since the last refresh is perhaps 90% of the TTL) * IPv4/6 ignore (useful for example when there's a v6 IP available but the system/network doesn't support v6) Cheers Neil Craig (He/Him) Lead Architect, BBC Digital Distribution On 20/07/2022, 20:09, "Vladimir Kokshenev via nginx-devel" wrote: Hello! This is the two-part proposal to open-source periodic upstream server resolution and implement a dedicated service worker for nginx. The purpose of this e-mail is to describe the WHY and solicit feedback. Nginx supports domain names in the upstream server configuration. Currently, domain names are resolved at configuration time only, and there are no subsequent name resolutions. There are plans to open-source re-resolvable upstream servers. This will allow applying DNS updates to upstream configurations in runtime. So, there is a need to support periodic asynchronous operations. And a dedicated service worker is a possible architectural way to address this. The master process reads and parses configuration and creates the service worker when needed (in a similar way to cache-related processes). The service worker manages periodic name resolutions and updates corresponding upstream configurations. The name resolution relies on the existing nginx resolver and upstream zone functionality. The service worker will be responsible solely for periodic background tasks and wouldn't accept client connections. The service worker should be the last worker process to shut down to maintain the actual state of upstreams when there are active workers. Alternative architecture considered was about choosing one of the regular workers (e.g., worker zero) to take care of periodic upstream server resolution, but it creates asymmetry in responsibilities and load for this dedicated worker. -- Vladimir Kokshenev _______________________________________________ nginx-devel mailing list -- nginx-devel at nginx.org To unsubscribe send an email to nginx-devel-leave at nginx.org From v.zhestikov at f5.com Fri Jul 22 01:40:12 2022 From: v.zhestikov at f5.com (Vadim Zhestikov) Date: Fri, 22 Jul 2022 01:40:12 +0000 Subject: [njs] Fixed assignment to global property by name only. 
Message-ID: details: https://hg.nginx.org/njs/rev/4b8d8237f598 branches: changeset: 1916:4b8d8237f598 user: Vadim Zhestikov date: Thu Jul 21 18:33:20 2022 -0700 description: Fixed assignment to global property by name only. This closes #145 issue on Github. diffstat: src/njs_generator.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++-- src/test/njs_unit_test.c | 22 ++++++++++++++ 2 files changed, 91 insertions(+), 3 deletions(-) diffs (170 lines): diff -r c597ee3450ab -r 4b8d8237f598 src/njs_generator.c --- a/src/njs_generator.c Wed Jul 20 17:49:00 2022 -0700 +++ b/src/njs_generator.c Thu Jul 21 18:33:20 2022 -0700 @@ -222,6 +222,9 @@ static njs_int_t njs_generate_comma_expr njs_generator_t *generator, njs_parser_node_t *node); static njs_int_t njs_generate_comma_expression_end(njs_vm_t *vm, njs_generator_t *generator, njs_parser_node_t *node); +static njs_int_t njs_generate_global_property_set(njs_vm_t *vm, + njs_generator_t *generator, njs_parser_node_t *node_dst, + njs_parser_node_t *node_src); static njs_int_t njs_generate_assignment(njs_vm_t *vm, njs_generator_t *generator, njs_parser_node_t *node); static njs_int_t njs_generate_assignment_name(njs_vm_t *vm, @@ -2659,6 +2662,53 @@ njs_generate_comma_expression_end(njs_vm static njs_int_t +njs_generate_global_property_set(njs_vm_t *vm, njs_generator_t *generator, + njs_parser_node_t *node_dst, njs_parser_node_t *node_src) +{ + ssize_t length; + njs_int_t ret; + njs_value_t property; + njs_variable_t *var; + njs_vmcode_prop_set_t *prop_set; + const njs_lexer_entry_t *lex_entry; + + var = njs_variable_reference(vm, node_dst); + if (var == NULL) { + njs_generate_code(generator, njs_vmcode_prop_set_t, prop_set, + NJS_VMCODE_PROPERTY_SET, 3, node_src); + + prop_set->value = node_dst->index; + prop_set->object = njs_scope_global_this_index(); + + lex_entry = njs_lexer_entry(node_dst->u.reference.unique_id); + if (njs_slow_path(lex_entry == NULL)) { + return NJS_ERROR; + } + + length = njs_utf8_length(lex_entry->name.start, lex_entry->name.length); + if (njs_slow_path(length < 0)) { + return NJS_ERROR; + } + + ret = njs_string_new(vm, &property, lex_entry->name.start, + lex_entry->name.length, length); + if (njs_slow_path(ret != NJS_OK)) { + return NJS_ERROR; + } + + prop_set->property = njs_scope_global_index(vm, &property, + generator->runtime); + if (njs_slow_path(prop_set->property == NJS_INDEX_ERROR)) { + return NJS_ERROR; + } + + } + + return NJS_OK; +} + + +static njs_int_t njs_generate_assignment(njs_vm_t *vm, njs_generator_t *generator, njs_parser_node_t *node) { @@ -2673,7 +2723,7 @@ njs_generate_assignment(njs_vm_t *vm, nj if (lvalue->token_type == NJS_TOKEN_NAME) { - ret = njs_generate_variable(vm, generator, lvalue, NJS_DECLARATION, + ret = njs_generate_variable(vm, generator, lvalue, NJS_REFERENCE, &var); if (njs_slow_path(ret != NJS_OK)) { return ret; @@ -2721,6 +2771,7 @@ static njs_int_t njs_generate_assignment_name(njs_vm_t *vm, njs_generator_t *generator, njs_parser_node_t *node) { + njs_int_t ret; njs_parser_node_t *lvalue, *expr; njs_vmcode_move_t *move; @@ -2739,6 +2790,11 @@ njs_generate_assignment_name(njs_vm_t *v node->index = expr->index; node->temporary = expr->temporary; + ret = njs_generate_global_property_set(vm, generator, node->left, expr); + if (njs_slow_path(ret != NJS_OK)) { + return ret; + } + return njs_generator_stack_pop(vm, generator, NULL); } @@ -2855,7 +2911,7 @@ njs_generate_operation_assignment(njs_vm if (lvalue->token_type == NJS_TOKEN_NAME) { - ret = njs_generate_variable(vm, generator, 
lvalue, NJS_DECLARATION, + ret = njs_generate_variable(vm, generator, lvalue, NJS_REFERENCE, &var); if (njs_slow_path(ret != NJS_OK)) { return ret; @@ -2938,6 +2994,11 @@ njs_generate_operation_assignment_name(n node->index = lvalue->index; + ret = njs_generate_global_property_set(vm, generator, node->left, expr); + if (njs_slow_path(ret != NJS_OK)) { + return ret; + } + if (lvalue->index != index) { ret = njs_generate_index_release(vm, generator, index); if (njs_slow_path(ret != NJS_OK)) { @@ -3553,7 +3614,7 @@ njs_generate_inc_dec_operation(njs_vm_t if (lvalue->token_type == NJS_TOKEN_NAME) { - ret = njs_generate_variable(vm, generator, lvalue, NJS_DECLARATION, + ret = njs_generate_variable(vm, generator, lvalue, NJS_REFERENCE, &var); if (njs_slow_path(ret != NJS_OK)) { return ret; @@ -3580,6 +3641,11 @@ njs_generate_inc_dec_operation(njs_vm_t code->src1 = lvalue->index; code->src2 = lvalue->index; + ret = njs_generate_global_property_set(vm, generator, lvalue, lvalue); + if (njs_slow_path(ret) != NJS_OK) { + return ret; + } + return njs_generator_stack_pop(vm, generator, NULL); } diff -r c597ee3450ab -r 4b8d8237f598 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Jul 20 17:49:00 2022 -0700 +++ b/src/test/njs_unit_test.c Thu Jul 21 18:33:20 2022 -0700 @@ -12077,6 +12077,28 @@ static njs_unit_test_t njs_test[] = { njs_str("this.a = 1; a"), njs_str("1") }, + { njs_str("this.a = 1; a = 3; this.a"), + njs_str("3") }, + + { njs_str("this.a = 1; ++a; this.a"), + njs_str("2") }, + + { njs_str("this.a = 1; a += 3; this.a"), + njs_str("4") }, + + { njs_str("var b=11;" + "var t = function () {b += 5; return b};" + "t() === 16 && b === 16 && this.b === 16" ), + njs_str("true") }, + + { njs_str("this.c=15;" + "var t = function () {c += 5; return c};" + "t() === 20 && c === 20 && this.c === 20" ), + njs_str("true") }, + + { njs_str("--undefined"), + njs_str("TypeError: Cannot assign to read-only property \"undefined\" of object") }, + { njs_str("this.a = 2; this.b = 3; a * b - a"), njs_str("4") }, From mdounin at mdounin.ru Sat Jul 23 21:53:26 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 24 Jul 2022 00:53:26 +0300 Subject: [PATCH] SSL: consistent certificate verification depth In-Reply-To: References: <5a9cc2a846c9ea4c0af0.1658325506@enoparse.local> Message-ID: Hello! On Thu, Jul 21, 2022 at 06:18:24PM +0400, Sergey Kandaurov wrote: > > On 20 Jul 2022, at 22:04, Maxim Dounin wrote: > > > > On Wed, Jul 20, 2022 at 05:58:26PM +0400, Sergey Kandaurov wrote: > > > >> # HG changeset patch > >> # User Sergey Kandaurov > >> # Date 1658325446 -14400 > >> # Wed Jul 20 17:57:26 2022 +0400 > >> # Node ID 5a9cc2a846c9ea4c0af03109ab186af1ac28e222 > >> # Parent 069a4813e8d6d7ec662d282a10f5f7062ebd817f > >> SSL: consistent certificate verification depth. > >> > >> Originally, certificate verification depth was used to limit the number > >> of signatures to validate, that is, to limit chains with intermediate > >> certificates one less. The semantics was changed in OpenSSL 1.1.0, and > >> instead it limits now the number of intermediate certificates allowed. > >> This makes it not possible to limit certificate checking to self-signed > >> certificates with verify depth 0 in OpenSSL 1.1.0+, and is inconsistent > >> compared with former behaviour in BoringSSL and older OpenSSL versions. > >> > >> This change restores former verification logic when using OpenSSL 1.1.0+. 
> >> The verify callback is adjusted to emit the "certificate chain too long" > >> error when the certificate chain exceeds the verification depth. It has > >> no effect to other SSL libraries, where this is limited by other means. > >> Also, this fixes verification checks when using LibreSSL 3.4.0+, where > >> a chain depth is not limited except by X509_VERIFY_MAX_CHAIN_CERTS (32). > > > > This (highly questionable) OpenSSL behaviour seems to be status > > quo for a while, also recorded in tests (see ssl_verify_depth.t). > > Any specific reasons for the nginx-side workaround? > > As explained in the commit message, main motivation is to eliminate > annoying difference in behaviour among various SSL libraries > (aside from working around the arguably broken LibreSSL verifier). > Nothing specific behind that. > Disambiguating ssl_verify_depth.t is a good demo of net result. > The downside is that this can potentially break previously working > configurations when using with modern OpenSSL versions. > So the patch is to seek feedback on whether this makes sense. Apart from potentially breaking some existing configurations (especially with LibreSSL), I have several basic concerns here: 1. As of now, it's OpenSSL responsibility to apply verify depth limit, and "openssl verify -verify_depth " can be used to test the particular verification. Switching to our implementation of the depth limit means that it will be our responsibility to apply and document the limit. 2. The verify callback behaviour is complex enough, and I expect portability issues. 3. Further, the main reason why the depth limit exists is to limit maximum resource consumption of the whole certificate verification process. Raising an error within the verify callback without returning failure won't stop OpenSSL from checking additional certificates, and hence is not going to limit the resource usage. That is, such a limit would be misleading: it will prevent certificates from being accepted, but won't stop DoS attacks. I don't think that there is a way to implement the limit via verify callback in a way which will prevent excessive signature checks yet preserve nginx existing behaviour with not closing the connection in case of verification failures. Summing the above, I tend to think that it's not worth the effort. [...] -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Mon Jul 25 22:36:37 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 26 Jul 2022 02:36:37 +0400 Subject: [PATCH 1 of 4] QUIC: fixed-length buffers for secrets In-Reply-To: References: Message-ID: <762C0281-9179-4352-9BD0-ACB70F98D2BF@nginx.com> > On 31 May 2022, at 11:06, Roman Arutyunyan wrote: > > # HG changeset patch > # User Vladimir Homutov > # Date 1645524401 -10800 > # Tue Feb 22 13:06:41 2022 +0300 > # Branch quic > # Node ID a881ff28070262f3810517d5d3cb4ff67a4b7121 > # Parent 5b1011b5702b5c5db2ba3d392a4da25596183cc2 > QUIC: fixed-length buffers for secrets. 
> > diff --git a/src/event/quic/ngx_event_quic_protection.c b/src/event/quic/ngx_event_quic_protection.c > --- a/src/event/quic/ngx_event_quic_protection.c > +++ b/src/event/quic/ngx_event_quic_protection.c > @@ -17,6 +17,9 @@ > > #define NGX_QUIC_AES_128_KEY_LEN 16 > > +/* largest hash used in TLS is SHA-384 */ > +#define NGX_QUIC_MAX_MD_SIZE 48 > + > #define NGX_AES_128_GCM_SHA256 0x1301 > #define NGX_AES_256_GCM_SHA384 0x1302 > #define NGX_CHACHA20_POLY1305_SHA256 0x1303 > @@ -30,6 +33,18 @@ > > > typedef struct { > + size_t len; > + u_char data[NGX_QUIC_MAX_MD_SIZE]; > +} ngx_quic_md_t; > + > + > +typedef struct { > + size_t len; > + u_char data[NGX_QUIC_IV_LEN]; > +} ngx_quic_iv_t; > + > + > +typedef struct { > const ngx_quic_cipher_t *c; > const EVP_CIPHER *hp; > const EVP_MD *d; > @@ -37,10 +52,10 @@ typedef struct { > > > typedef struct ngx_quic_secret_s { > - ngx_str_t secret; > - ngx_str_t key; > - ngx_str_t iv; > - ngx_str_t hp; > + ngx_quic_md_t secret; > + ngx_quic_md_t key; > + ngx_quic_iv_t iv; > + ngx_quic_md_t hp; > } ngx_quic_secret_t; > > > @@ -57,6 +72,25 @@ struct ngx_quic_keys_s { > }; > > > +typedef struct { > + size_t out_len; > + u_char *out; > + > + size_t prk_len; > + const uint8_t *prk; > + > + size_t label_len; > + const u_char *label; > +} ngx_quic_hkdf_t; > + > +#define ngx_quic_hkdf_set(label, out, prk) \ > + { \ > + (out)->len, (out)->data, \ > + (prk)->len, (prk)->data, \ > + (sizeof(label) - 1), (u_char *)(label), \ > + } > + > + > static ngx_int_t ngx_hkdf_expand(u_char *out_key, size_t out_len, > const EVP_MD *digest, const u_char *prk, size_t prk_len, > const u_char *info, size_t info_len); > @@ -78,8 +112,8 @@ static ngx_int_t ngx_quic_tls_seal(const > ngx_str_t *ad, ngx_log_t *log); > static ngx_int_t ngx_quic_tls_hp(ngx_log_t *log, const EVP_CIPHER *cipher, > ngx_quic_secret_t *s, u_char *out, u_char *in); > -static ngx_int_t ngx_quic_hkdf_expand(ngx_pool_t *pool, const EVP_MD *digest, > - ngx_str_t *out, ngx_str_t *label, const uint8_t *prk, size_t prk_len); > +static ngx_int_t ngx_quic_hkdf_expand(ngx_quic_hkdf_t *hkdf, > + const EVP_MD *digest, ngx_pool_t *pool); > > static ngx_int_t ngx_quic_create_packet(ngx_quic_header_t *pkt, > ngx_str_t *res); > @@ -204,28 +238,20 @@ ngx_quic_keys_set_initial_secret(ngx_poo > client->iv.len = NGX_QUIC_IV_LEN; > server->iv.len = NGX_QUIC_IV_LEN; > > - struct { > - ngx_str_t label; > - ngx_str_t *key; > - ngx_str_t *prk; > - } seq[] = { > + ngx_quic_hkdf_t seq[] = { > /* labels per RFC 9001, 5.1. 
Packet Protection Keys */ > - { ngx_string("tls13 client in"), &client->secret, &iss }, > - { ngx_string("tls13 quic key"), &client->key, &client->secret }, > - { ngx_string("tls13 quic iv"), &client->iv, &client->secret }, > - { ngx_string("tls13 quic hp"), &client->hp, &client->secret }, > - { ngx_string("tls13 server in"), &server->secret, &iss }, > - { ngx_string("tls13 quic key"), &server->key, &server->secret }, > - { ngx_string("tls13 quic iv"), &server->iv, &server->secret }, > - { ngx_string("tls13 quic hp"), &server->hp, &server->secret }, > + ngx_quic_hkdf_set("tls13 client in", &client->secret, &iss), > + ngx_quic_hkdf_set("tls13 quic key", &client->key, &client->secret), > + ngx_quic_hkdf_set("tls13 quic iv", &client->iv, &client->secret), > + ngx_quic_hkdf_set("tls13 quic hp", &client->hp, &client->secret), > + ngx_quic_hkdf_set("tls13 server in", &server->secret, &iss), > + ngx_quic_hkdf_set("tls13 quic key", &server->key, &server->secret), > + ngx_quic_hkdf_set("tls13 quic iv", &server->iv, &server->secret), > + ngx_quic_hkdf_set("tls13 quic hp", &server->hp, &server->secret), > }; > > for (i = 0; i < (sizeof(seq) / sizeof(seq[0])); i++) { > - > - if (ngx_quic_hkdf_expand(pool, digest, seq[i].key, &seq[i].label, > - seq[i].prk->data, seq[i].prk->len) > - != NGX_OK) > - { > + if (ngx_quic_hkdf_expand(&seq[i], digest, pool) != NGX_OK) { > return NGX_ERROR; > } > } > @@ -235,40 +261,41 @@ ngx_quic_keys_set_initial_secret(ngx_poo > > > static ngx_int_t > -ngx_quic_hkdf_expand(ngx_pool_t *pool, const EVP_MD *digest, ngx_str_t *out, > - ngx_str_t *label, const uint8_t *prk, size_t prk_len) > +ngx_quic_hkdf_expand(ngx_quic_hkdf_t *h, const EVP_MD *digest, ngx_pool_t *pool) > { > size_t info_len; > uint8_t *p; > uint8_t info[20]; > > - if (out->data == NULL) { > - out->data = ngx_pnalloc(pool, out->len); > - if (out->data == NULL) { > + if (h->out == NULL) { > + h->out = ngx_pnalloc(pool, h->out_len); > + if (h->out == NULL) { > return NGX_ERROR; > } > } The condition can be removed now. I see no reason to postpone this to the next change. > > - info_len = 2 + 1 + label->len + 1; > + info_len = 2 + 1 + h->label_len + 1; > > info[0] = 0; > - info[1] = out->len; > - info[2] = label->len; > - p = ngx_cpymem(&info[3], label->data, label->len); > + info[1] = h->out_len; > + info[2] = h->label_len; > + > + p = ngx_cpymem(&info[3], h->label, h->label_len); > *p = '\0'; > > - if (ngx_hkdf_expand(out->data, out->len, digest, > - prk, prk_len, info, info_len) > + if (ngx_hkdf_expand(h->out, h->out_len, digest, > + h->prk, h->prk_len, info, info_len) > != NGX_OK) > { > ngx_ssl_error(NGX_LOG_INFO, pool->log, 0, > - "ngx_hkdf_expand(%V) failed", label); > + "ngx_hkdf_expand(%*s) failed", h->label_len, h->label); > return NGX_ERROR; > } > > #ifdef NGX_QUIC_DEBUG_CRYPTO > - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, pool->log, 0, > - "quic expand %V key len:%uz %xV", label, out->len, out); > + ngx_log_debug5(NGX_LOG_DEBUG_EVENT, pool->log, 0, > + "quic expand \"%*s\" key len:%uz %*xs", > + h->label_len, h->label, h->out_len, h->out_len, h->out); > #endif While here, I'd also drop "key" from the message (an 64a484fd40a9 leftover). [..] 
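As a side note for readers following the hkdf hunk above: the info[] buffer hand-encodes the HkdfLabel structure from RFC 8446, 7.1 (the QUIC labels already carry the "tls13 " prefix, and the context is empty). A standalone sketch of the same encoding, for illustration only and not part of the patch:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Sketch of the layout ngx_quic_hkdf_expand() builds into info[]:
 * a 2-byte output length, a length-prefixed label and a zero-length
 * context, i.e. the HkdfLabel encoding from RFC 8446, 7.1.
 */
static size_t
hkdf_label_encode(uint8_t *info, size_t out_len, const uint8_t *label,
    size_t label_len)
{
    info[0] = 0;                   /* output length, high byte */
    info[1] = (uint8_t) out_len;   /* e.g. 12 for "tls13 quic iv" */
    info[2] = (uint8_t) label_len;

    memcpy(&info[3], label, label_len);
    info[3 + label_len] = 0;       /* empty context */

    return 2 + 1 + label_len + 1;  /* matches info_len above */
}

This is also why the fixed info[20] is sufficient: the longest labels used here, "tls13 client in" and "tls13 server in", give 2 + 1 + 15 + 1 = 19 bytes.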
-- Sergey Kandaurov From pluknet at nginx.com Mon Jul 25 23:00:37 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 26 Jul 2022 03:00:37 +0400 Subject: [PATCH 3 of 4] QUIC: removed ngx_quic_keys_new() In-Reply-To: <7929cae8d65fd1f41d07.1653980793@arut-laptop> References: <7929cae8d65fd1f41d07.1653980793@arut-laptop> Message-ID: <7363BF1F-F37B-4A6A-B527-E56D53629BAC@nginx.com> > On 31 May 2022, at 11:06, Roman Arutyunyan wrote: > > # HG changeset patch > # User Vladimir Homutov > # Date 1653652352 -14400 > # Fri May 27 15:52:32 2022 +0400 > # Branch quic > # Node ID 7929cae8d65fd1f41d07365cae93970b29f2d03d > # Parent 41f47332273e0350157258cc40dd0ede4ee86c69 > QUIC: removed ngx_quic_keys_new(). > > The ngx_quic_keys_t structure is now exposed. IMHO, this line suites as the log summary. > This allows to use it in contexts where no pool/connection is available, > i.e. early packet processing. > > diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c > --- a/src/event/quic/ngx_event_quic.c > +++ b/src/event/quic/ngx_event_quic.c > @@ -238,7 +238,7 @@ ngx_quic_new_connection(ngx_connection_t > return NULL; > } > > - qc->keys = ngx_quic_keys_new(c->pool); > + qc->keys = ngx_pcalloc(c->pool, sizeof(ngx_quic_keys_t)); > if (qc->keys == NULL) { > return NULL; > } > diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c > --- a/src/event/quic/ngx_event_quic_output.c > +++ b/src/event/quic/ngx_event_quic_output.c > @@ -928,6 +928,7 @@ ngx_quic_send_early_cc(ngx_connection_t > { > ssize_t len; > ngx_str_t res; > + ngx_quic_keys_t keys; > ngx_quic_frame_t frame; > ngx_quic_header_t pkt; > > @@ -956,10 +957,9 @@ ngx_quic_send_early_cc(ngx_connection_t > return NGX_ERROR; > } > > - pkt.keys = ngx_quic_keys_new(c->pool); > - if (pkt.keys == NULL) { > - return NGX_ERROR; > - } > + ngx_memzero(&keys, sizeof(ngx_quic_keys_t)); > + > + pkt.keys = &keys; > > if (ngx_quic_keys_set_initial_secret(pkt.keys, &inpkt->dcid, c->log) > != NGX_OK) > diff --git a/src/event/quic/ngx_event_quic_protection.c b/src/event/quic/ngx_event_quic_protection.c > --- a/src/event/quic/ngx_event_quic_protection.c > +++ b/src/event/quic/ngx_event_quic_protection.c > @@ -10,16 +10,11 @@ > #include > > > -/* RFC 5116, 5.1 and RFC 8439, 2.3 for all supported ciphers */ > -#define NGX_QUIC_IV_LEN 12 > /* RFC 9001, 5.4.1. 
Header Protection Application: 5-byte mask */ > #define NGX_QUIC_HP_LEN 5 > > #define NGX_QUIC_AES_128_KEY_LEN 16 > > -/* largest hash used in TLS is SHA-384 */ > -#define NGX_QUIC_MAX_MD_SIZE 48 > - > #define NGX_AES_128_GCM_SHA256 0x1301 > #define NGX_AES_256_GCM_SHA384 0x1302 > #define NGX_CHACHA20_POLY1305_SHA256 0x1303 > @@ -33,45 +28,12 @@ > > > typedef struct { > - size_t len; > - u_char data[NGX_QUIC_MAX_MD_SIZE]; > -} ngx_quic_md_t; > - > - > -typedef struct { > - size_t len; > - u_char data[NGX_QUIC_IV_LEN]; > -} ngx_quic_iv_t; > - > - > -typedef struct { > const ngx_quic_cipher_t *c; > const EVP_CIPHER *hp; > const EVP_MD *d; > } ngx_quic_ciphers_t; > > > -typedef struct ngx_quic_secret_s { > - ngx_quic_md_t secret; > - ngx_quic_md_t key; > - ngx_quic_iv_t iv; > - ngx_quic_md_t hp; > -} ngx_quic_secret_t; > - > - > -typedef struct { > - ngx_quic_secret_t client; > - ngx_quic_secret_t server; > -} ngx_quic_secrets_t; > - > - > -struct ngx_quic_keys_s { > - ngx_quic_secrets_t secrets[NGX_QUIC_ENCRYPTION_LAST]; > - ngx_quic_secrets_t next_key; > - ngx_uint_t cipher; > -}; > - > - > typedef struct { > size_t out_len; > u_char *out; > @@ -721,13 +683,6 @@ ngx_quic_keys_set_encryption_secret(ngx_ > } > > > -ngx_quic_keys_t * > -ngx_quic_keys_new(ngx_pool_t *pool) > -{ > - return ngx_pcalloc(pool, sizeof(ngx_quic_keys_t)); > -} > - > - > ngx_uint_t > ngx_quic_keys_available(ngx_quic_keys_t *keys, > enum ssl_encryption_level_t level) > diff --git a/src/event/quic/ngx_event_quic_protection.h b/src/event/quic/ngx_event_quic_protection.h > --- a/src/event/quic/ngx_event_quic_protection.h > +++ b/src/event/quic/ngx_event_quic_protection.h > @@ -16,8 +16,46 @@ > > #define NGX_QUIC_ENCRYPTION_LAST ((ssl_encryption_application) + 1) > > +/* RFC 5116, 5.1 and RFC 8439, 2.3 for all supported ciphers */ > +#define NGX_QUIC_IV_LEN 12 > > -ngx_quic_keys_t *ngx_quic_keys_new(ngx_pool_t *pool); > +/* largest hash used in TLS is SHA-384 */ > +#define NGX_QUIC_MAX_MD_SIZE 48 > + > + > +typedef struct { > + size_t len; > + u_char data[NGX_QUIC_MAX_MD_SIZE]; > +} ngx_quic_md_t; > + > + > +typedef struct { > + size_t len; > + u_char data[NGX_QUIC_IV_LEN]; > +} ngx_quic_iv_t; > + > + > +typedef struct ngx_quic_secret_s { The "ngx_quic_secret_s" part can be dropped, unused since 9c3be23ddbe7. > + ngx_quic_md_t secret; > + ngx_quic_md_t key; > + ngx_quic_iv_t iv; > + ngx_quic_md_t hp; > +} ngx_quic_secret_t; > + > + > +typedef struct { > + ngx_quic_secret_t client; > + ngx_quic_secret_t server; > +} ngx_quic_secrets_t; > + > + > +struct ngx_quic_keys_s { > + ngx_quic_secrets_t secrets[NGX_QUIC_ENCRYPTION_LAST]; > + ngx_quic_secrets_t next_key; > + ngx_uint_t cipher; > +}; > + > + > ngx_int_t ngx_quic_keys_set_initial_secret(ngx_quic_keys_t *keys, > ngx_str_t *secret, ngx_log_t *log); > ngx_int_t ngx_quic_keys_set_encryption_secret(ngx_log_t *log, > [ The remaining patches look good. ] -- Sergey Kandaurov From xeioex at nginx.com Tue Jul 26 01:41:26 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 26 Jul 2022 01:41:26 +0000 Subject: [njs] Added generic logger callback. Message-ID: details: https://hg.nginx.org/njs/rev/14426cb84197 branches: changeset: 1917:14426cb84197 user: Dmitry Volyntsev date: Mon Jul 25 18:40:24 2022 -0700 description: Added generic logger callback. This allows for a host environment to control when and how internal NJS messages are logged. 
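For embedders, the hook is the new logger member of njs_vm_ops_t together with the log_level option. A minimal host-side sketch, assuming the njs_vm_opt_t/njs_vm_ops_t wiring shown in the diff below; the names host_logger, host_ops and host_create_vm are made up, and the other callbacks and all error handling are left out:

#include <njs.h>
#include <stdio.h>

/* Host callback matching the njs_logger_t typedef added below. */
static void
host_logger(njs_vm_t *vm, njs_external_ptr_t external, njs_log_level_t level,
    const u_char *start, size_t length)
{
    FILE  *out = (level == NJS_LOG_LEVEL_ERROR) ? stderr : stdout;

    (void) fprintf(out, "njs: %.*s", (int) length, (const char *) start);
}

static njs_vm_ops_t  host_ops = {
    NULL,           /* set_timer */
    NULL,           /* clear_timer */
    NULL,           /* module_loader */
    host_logger,    /* logger */
};

static njs_vm_t *
host_create_vm(void)
{
    njs_vm_opt_t  opts;

    njs_vm_opt_init(&opts);

    opts.ops = &host_ops;                  /* route njs_vm_log()/warn()/err() here */
    opts.log_level = NJS_LOG_LEVEL_WARN;   /* drops NJS_LOG_LEVEL_INFO messages */

    return njs_vm_create(&opts);
}

With log_level left at the default set by njs_vm_opt_init() (NJS_LOG_LEVEL_INFO), all three levels reach the callback; setting it to NJS_LOG_LEVEL_WARN filters out the info-level messages, which is what the "log_level >= level" check in njs_vm_logger() below implements.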
diffstat: nginx/ngx_http_js_module.c | 1 + nginx/ngx_js.c | 15 +++++++++++++++ nginx/ngx_js.h | 2 ++ nginx/ngx_stream_js_module.c | 1 + src/njs.h | 19 +++++++++++++++++++ src/njs_shell.c | 38 +++++++++++++++++++++++++++----------- src/njs_vm.c | 26 ++++++++++++++++++++++++++ 7 files changed, 91 insertions(+), 11 deletions(-) diffs (274 lines): diff -r 4b8d8237f598 -r 14426cb84197 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Thu Jul 21 18:33:20 2022 -0700 +++ b/nginx/ngx_http_js_module.c Mon Jul 25 18:40:24 2022 -0700 @@ -783,6 +783,7 @@ static njs_vm_ops_t ngx_http_js_ops = { ngx_http_js_set_timer, ngx_http_js_clear_timer, NULL, + ngx_js_logger, }; diff -r 4b8d8237f598 -r 14426cb84197 nginx/ngx_js.c --- a/nginx/ngx_js.c Thu Jul 21 18:33:20 2022 -0700 +++ b/nginx/ngx_js.c Mon Jul 25 18:40:24 2022 -0700 @@ -318,3 +318,18 @@ ngx_js_ext_log(njs_vm_t *vm, njs_value_t } +void +ngx_js_logger(njs_vm_t *vm, njs_external_ptr_t external, njs_log_level_t level, + const u_char *start, size_t length) +{ + ngx_connection_t *c; + ngx_log_handler_pt handler; + + c = ngx_external_connection(vm, external); + handler = c->log->handler; + c->log->handler = NULL; + + ngx_log_error((ngx_uint_t) level, c->log, 0, "js: %*s", length, start); + + c->log->handler = handler; +} diff -r 4b8d8237f598 -r 14426cb84197 nginx/ngx_js.h --- a/nginx/ngx_js.h Thu Jul 21 18:33:20 2022 -0700 +++ b/nginx/ngx_js.h Mon Jul 25 18:40:24 2022 -0700 @@ -68,6 +68,8 @@ ngx_int_t ngx_js_retval(njs_vm_t *vm, nj njs_int_t ngx_js_ext_log(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t level); +void ngx_js_logger(njs_vm_t *vm, njs_external_ptr_t external, + njs_log_level_t level, const u_char *start, size_t length); njs_int_t ngx_js_ext_string(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); diff -r 4b8d8237f598 -r 14426cb84197 nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Thu Jul 21 18:33:20 2022 -0700 +++ b/nginx/ngx_stream_js_module.c Mon Jul 25 18:40:24 2022 -0700 @@ -499,6 +499,7 @@ static njs_vm_ops_t ngx_stream_js_ops = ngx_stream_js_set_timer, ngx_stream_js_clear_timer, NULL, + ngx_js_logger, }; diff -r 4b8d8237f598 -r 14426cb84197 src/njs.h --- a/src/njs.h Thu Jul 21 18:33:20 2022 -0700 +++ b/src/njs.h Mon Jul 25 18:40:24 2022 -0700 @@ -45,6 +45,11 @@ typedef struct { uint64_t filler[2]; } njs_opaque_value_t; +typedef enum { + NJS_LOG_LEVEL_ERROR = 4, + NJS_LOG_LEVEL_WARN = 5, + NJS_LOG_LEVEL_INFO = 7, +} njs_log_level_t; /* sizeof(njs_value_t) is 16 bytes. */ #define njs_argument(args, n) \ @@ -69,6 +74,12 @@ extern const njs_value_t njs_ #define njs_vm_error(vm, fmt, ...) \ njs_vm_value_error_set(vm, njs_vm_retval(vm), fmt, ##__VA_ARGS__) +#define njs_vm_log(vm, fmt, ...) njs_vm_logger(vm, NJS_LOG_LEVEL_INFO, fmt, \ + ##__VA_ARGS__) +#define njs_vm_warn(vm, fmt, ...) njs_vm_logger(vm, NJS_LOG_LEVEL_WARN, fmt, \ + ##__VA_ARGS__) +#define njs_vm_err(vm, fmt, ...) njs_vm_logger(vm, NJS_LOG_LEVEL_ERROR, fmt, \ + ##__VA_ARGS__) /* * njs_prop_handler_t operates as a property getter/setter or delete handler. 
@@ -187,12 +198,15 @@ typedef void (*njs_event_destructor_t)(n njs_host_event_t event); typedef njs_mod_t *(*njs_module_loader_t)(njs_vm_t *vm, njs_external_ptr_t external, njs_str_t *name); +typedef void (*njs_logger_t)(njs_vm_t *vm, njs_external_ptr_t external, + njs_log_level_t level, const u_char *start, size_t length); typedef struct { njs_set_timer_t set_timer; njs_event_destructor_t clear_timer; njs_module_loader_t module_loader; + njs_logger_t logger; } njs_vm_ops_t; @@ -221,6 +235,8 @@ typedef struct { char **argv; njs_uint_t argc; + njs_log_level_t log_level; + #define NJS_VM_OPT_UNHANDLED_REJECTION_IGNORE 0 #define NJS_VM_OPT_UNHANDLED_REJECTION_THROW 1 @@ -401,6 +417,9 @@ NJS_EXPORT void njs_vm_value_error_set(n const char *fmt, ...); NJS_EXPORT void njs_vm_memory_error(njs_vm_t *vm); +NJS_EXPORT void njs_vm_logger(njs_vm_t *vm, njs_log_level_t level, + const char *fmt, ...); + NJS_EXPORT void njs_value_undefined_set(njs_value_t *value); NJS_EXPORT void njs_value_null_set(njs_value_t *value); NJS_EXPORT void njs_value_invalid_set(njs_value_t *value); diff -r 4b8d8237f598 -r 14426cb84197 src/njs_shell.c --- a/src/njs_shell.c Thu Jul 21 18:33:20 2022 -0700 +++ b/src/njs_shell.c Mon Jul 25 18:40:24 2022 -0700 @@ -121,6 +121,8 @@ static njs_host_event_t njs_console_set_ static void njs_console_clear_timer(njs_external_ptr_t external, njs_host_event_t event); +static void njs_console_log(njs_vm_t *vm, njs_external_ptr_t external, + njs_log_level_t level, const u_char *start, size_t length); static njs_int_t njs_timelabel_hash_test(njs_lvlhsh_query_t *lhq, void *data); @@ -207,6 +209,7 @@ static njs_vm_ops_t njs_console_ops = { njs_console_set_timer, njs_console_clear_timer, NULL, + njs_console_log, }; @@ -1204,14 +1207,14 @@ njs_ext_console_log(njs_vm_t *vm, njs_va return NJS_ERROR; } - njs_printf("%s", (n != 1) ? " " : ""); - njs_print(msg.start, msg.length); + njs_vm_log(vm, "%s", (n != 1) ? " " : ""); + njs_vm_log(vm, "%*s", msg.length, msg.start); n++; } if (nargs > 1) { - njs_printf("\n"); + njs_vm_log(vm, "\n"); } njs_set_undefined(&vm->retval); @@ -1284,7 +1287,7 @@ njs_ext_console_time(njs_vm_t *vm, njs_v return NJS_ERROR; } - njs_printf("Timer \"%V\" already exists.\n", &name); + njs_vm_log(vm, "Timer \"%V\" already exists.\n", &name); label = lhq.value; } @@ -1349,7 +1352,7 @@ njs_ext_console_time_end(njs_vm_t *vm, n ms = ns / 1000000; ns = ns % 1000000; - njs_printf("%V: %uL.%06uLms\n", &name, ms, ns); + njs_vm_log(vm, "%V: %uL.%06uLms\n", &name, ms, ns); /* GC: release. 
*/ njs_mp_free(vm->mem_pool, label); @@ -1361,7 +1364,7 @@ njs_ext_console_time_end(njs_vm_t *vm, n return NJS_ERROR; } - njs_printf("Timer \"%V\" doesn’t exist.\n", &name); + njs_vm_log(vm, "Timer \"%V\" doesn’t exist.\n", &name); } njs_set_undefined(&vm->retval); @@ -1380,14 +1383,14 @@ njs_console_set_timer(njs_external_ptr_t njs_console_t *console; njs_lvlhsh_query_t lhq; + console = external; + vm = console->vm; + if (delay != 0) { - njs_stderror("njs_console_set_timer(): async timers unsupported\n"); + njs_vm_err(vm, "njs_console_set_timer(): async timers unsupported\n"); return NULL; } - console = external; - vm = console->vm; - ev = njs_mp_alloc(vm->mem_pool, sizeof(njs_ev_t)); if (njs_slow_path(ev == NULL)) { return NULL; @@ -1441,13 +1444,26 @@ njs_console_clear_timer(njs_external_ptr ret = njs_lvlhsh_delete(&console->events, &lhq); if (ret != NJS_OK) { - njs_stderror("njs_lvlhsh_delete() failed\n"); + njs_vm_err(vm, "njs_lvlhsh_delete() failed\n"); } njs_mp_free(vm->mem_pool, ev); } +static void +njs_console_log(njs_vm_t *vm, njs_external_ptr_t external, + njs_log_level_t level, const u_char *start, size_t length) +{ + if (level == NJS_LOG_LEVEL_ERROR) { + njs_stderror("%*s", length, start); + + } else { + njs_printf("%*s", length, start); + } +} + + static njs_int_t njs_timelabel_hash_test(njs_lvlhsh_query_t *lhq, void *data) { diff -r 4b8d8237f598 -r 14426cb84197 src/njs_vm.c --- a/src/njs_vm.c Thu Jul 21 18:33:20 2022 -0700 +++ b/src/njs_vm.c Mon Jul 25 18:40:24 2022 -0700 @@ -23,6 +23,8 @@ void njs_vm_opt_init(njs_vm_opt_t *options) { njs_memzero(options, sizeof(njs_vm_opt_t)); + + options->log_level = NJS_LOG_LEVEL_INFO; } @@ -881,6 +883,30 @@ njs_vm_memory_error(njs_vm_t *vm) } +njs_noinline void +njs_vm_logger(njs_vm_t *vm, njs_log_level_t level, const char *fmt, ...) +{ + u_char *p; + va_list args; + njs_logger_t logger; + u_char buf[NJS_MAX_ERROR_STR]; + + if (vm->options.ops == NULL) { + return; + } + + logger = vm->options.ops->logger; + + if (logger != NULL && vm->options.log_level >= level) { + va_start(args, fmt); + p = njs_vsprintf(buf, buf + sizeof(buf), fmt, args); + va_end(args); + + logger(vm, vm->external, level, buf, p - buf); + } +} + + njs_int_t njs_vm_value_string(njs_vm_t *vm, njs_str_t *dst, njs_value_t *src) { From xeioex at nginx.com Tue Jul 26 01:41:28 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 26 Jul 2022 01:41:28 +0000 Subject: [njs] Added soft deprecation warning for deprecated methods and properties. Message-ID: details: https://hg.nginx.org/njs/rev/beaff2c39864 branches: changeset: 1918:beaff2c39864 user: Dmitry Volyntsev date: Mon Jul 25 18:40:24 2022 -0700 description: Added soft deprecation warning for deprecated methods and properties. 
diffstat: nginx/ngx_http_js_module.c | 4 ++++ src/njs.h | 11 +++++++++++ src/njs_string.c | 10 ++++++++++ 3 files changed, 25 insertions(+), 0 deletions(-) diffs (90 lines): diff -r 14426cb84197 -r beaff2c39864 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Mon Jul 25 18:40:24 2022 -0700 +++ b/nginx/ngx_http_js_module.c Mon Jul 25 18:40:24 2022 -0700 @@ -2572,6 +2572,8 @@ ngx_http_js_ext_get_request_body(njs_vm_ ngx_http_js_ctx_t *ctx; ngx_http_request_t *r; + njs_deprecated(vm, "r.requestBody"); + r = njs_vm_external(vm, ngx_http_js_request_proto_id, value); if (r == NULL) { njs_value_undefined_set(retval); @@ -3416,6 +3418,8 @@ ngx_http_js_ext_get_response_body(njs_vm ngx_http_js_ctx_t *ctx; ngx_http_request_t *r; + njs_deprecated(vm, "r.responseBody"); + r = njs_vm_external(vm, ngx_http_js_request_proto_id, value); if (r == NULL) { njs_value_undefined_set(retval); diff -r 14426cb84197 -r beaff2c39864 src/njs.h --- a/src/njs.h Mon Jul 25 18:40:24 2022 -0700 +++ b/src/njs.h Mon Jul 25 18:40:24 2022 -0700 @@ -81,6 +81,17 @@ extern const njs_value_t njs_ #define njs_vm_err(vm, fmt, ...) njs_vm_logger(vm, NJS_LOG_LEVEL_ERROR, fmt, \ ##__VA_ARGS__) +#define njs_deprecated(vm, text) \ + do { \ + static njs_bool_t reported; \ + \ + if (!reported) { \ + njs_vm_warn(vm, text " is deprecated " \ + "and will be removed in the future"); \ + reported = 1; \ + } \ + } while(0) + /* * njs_prop_handler_t operates as a property getter/setter or delete handler. * - retval != NULL && setval == NULL - GET context. diff -r 14426cb84197 -r beaff2c39864 src/njs_string.c --- a/src/njs_string.c Mon Jul 25 18:40:24 2022 -0700 +++ b/src/njs_string.c Mon Jul 25 18:40:24 2022 -0700 @@ -982,6 +982,8 @@ njs_string_prototype_from_utf8(njs_vm_t njs_slice_prop_t slice; njs_string_prop_t string; + njs_deprecated(vm, "String.prototype.fromUTF8()"); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; @@ -1025,6 +1027,8 @@ njs_string_prototype_to_utf8(njs_vm_t *v njs_slice_prop_t slice; njs_string_prop_t string; + njs_deprecated(vm, "String.prototype.toUTF8()"); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; @@ -1059,6 +1063,8 @@ njs_string_prototype_from_bytes(njs_vm_t njs_slice_prop_t slice; njs_string_prop_t string; + njs_deprecated(vm, "String.prototype.fromBytes()"); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; @@ -1125,6 +1131,8 @@ njs_string_prototype_to_bytes(njs_vm_t * njs_string_prop_t string; njs_unicode_decode_t ctx; + njs_deprecated(vm, "String.prototype.toBytes()"); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; @@ -1617,6 +1625,8 @@ njs_string_bytes_from(njs_vm_t *vm, njs_ { njs_value_t *value; + njs_deprecated(vm, "String.bytesFrom()"); + value = njs_arg(args, nargs, 1); if (njs_is_string(value)) { From Eckart.Haufler at rohde-schwarz.com Tue Jul 26 07:39:29 2022 From: Eckart.Haufler at rohde-schwarz.com (Eckart Haufler) Date: Tue, 26 Jul 2022 07:39:29 +0000 Subject: ngx_http_dav_module disable_symlink question Message-ID: Hi, We want to use the ngx_http_dav_module with the nginx server (1.21) on a linux machine. For security reasons, we would like to forbid to follow symbol links (e.g. for the case of accidental symbol links to directories like root / ). The nginx directive “disable_symlinks“ looked promising. 
It suppresses the download of files, but “MOVE” or “DELETE” seems not to be blocked. Also the documentation says “ngx_http_autoindex_module, ngx_http_random_index_module, and ngx_http_dav_module modules currently ignore this directive.” Is this planned or resolved on some newer release branches – or are there other settings to achieve better protection? Thanks for any hints! Eckart -------------- next part -------------- An HTML attachment was scrubbed... URL: From dnj0496 at gmail.com Wed Jul 27 02:21:29 2022 From: dnj0496 at gmail.com (Dk Jack) Date: Tue, 26 Jul 2022 19:21:29 -0700 Subject: nginx crash Message-ID: Hi, I am noticing a crash in my nginx module. The crash is happening after an internal redirect. It's not always happening but for certain requests. Besides the trace log I do not have much info about the request. In my module, I am restoring the my module context in a similar fashion as the ` ngx_http_realip_get_module_ctx`. What I noticed is `r->pool == 0`. Why would the r->pool be ever zero'ed? The crash is happening in my get_module_ctx function which was called immediately after returning from the ngx_http_internal_redirect call. Any ideas on how to go about resolving this is greatly appreciated. Thanks. Regards, Dk. static ngx_http_realip_ctx_t * ngx_http_realip_get_module_ctx(ngx_http_request_t *r) { ngx_pool_cleanup_t *cln; ngx_http_realip_ctx_t *ctx; ctx = ngx_http_get_module_ctx(r, ngx_http_realip_module); if (ctx == NULL && (r->internal || r->filter_finalize)) { /* * if module context was reset, the original address * can still be found in the cleanup handler */ for (cln = r->pool->cleanup; cln; cln = cln->next) { if (cln->handler == ngx_http_realip_cleanup) { ctx = cln->data; break; } } } return ctx; } -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Wed Jul 27 13:34:39 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Jul 2022 16:34:39 +0300 Subject: nginx crash In-Reply-To: References: Message-ID: Hello! On Tue, Jul 26, 2022 at 07:21:29PM -0700, Dk Jack wrote: > Hi, > I am noticing a crash in my nginx module. The crash is happening after an > internal redirect. It's not always happening but for certain requests. > Besides the trace log I do not have much info about the request. In my > module, I am restoring the my module context in a similar fashion as the ` > ngx_http_realip_get_module_ctx`. What I noticed is `r->pool == 0`. Why > would the r->pool be ever zero'ed? > > The crash is happening in my get_module_ctx function which was called > immediately after returning from the ngx_http_internal_redirect call. Any > ideas on how to go about resolving this is greatly appreciated. Thanks. It looks like there are issues with request reference counting somewhere in your code, and the request is freed in the ngx_http_internal_redirect() call. Review your code logic and corresponding request reference counting, it should help. -- Maxim Dounin http://mdounin.ru/ From dnj0496 at gmail.com Wed Jul 27 15:40:41 2022 From: dnj0496 at gmail.com (Dk Jack) Date: Wed, 27 Jul 2022 08:40:41 -0700 Subject: nginx crash In-Reply-To: References: Message-ID: <568013AB-8211-4189-B916-2FF84412219B@gmail.com> In my code I am not modifying the reference count. Could you let me if there any function calls if invoked would indirectly update the reference count? Dk. > On Jul 27, 2022, at 6:35 AM, Maxim Dounin wrote: > > Hello! > >> On Tue, Jul 26, 2022 at 07:21:29PM -0700, Dk Jack wrote: >> >> Hi, >> I am noticing a crash in my nginx module. The crash is happening after an >> internal redirect. It's not always happening but for certain requests. >> Besides the trace log I do not have much info about the request. In my >> module, I am restoring the my module context in a similar fashion as the ` >> ngx_http_realip_get_module_ctx`. What I noticed is `r->pool == 0`. Why >> would the r->pool be ever zero'ed? >> >> The crash is happening in my get_module_ctx function which was called >> immediately after returning from the ngx_http_internal_redirect call. Any >> ideas on how to go about resolving this is greatly appreciated. Thanks. > > It looks like there are issues with request reference counting > somewhere in your code, and the request is freed in the > ngx_http_internal_redirect() call. Review your code logic and > corresponding request reference counting, it should help. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org From mdounin at mdounin.ru Thu Jul 28 03:05:33 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Jul 2022 06:05:33 +0300 Subject: nginx crash In-Reply-To: <568013AB-8211-4189-B916-2FF84412219B@gmail.com> References: <568013AB-8211-4189-B916-2FF84412219B@gmail.com> Message-ID: Hello! On Wed, Jul 27, 2022 at 08:40:41AM -0700, Dk Jack wrote: > In my code I am not modifying the reference count. Could you let > me if there any function calls if invoked would indirectly > update the reference count? 
There are quite a few functions and/or return codes which might indirectly modify request reference counter, including the ngx_http_internal_redirect() function (which increments reference count and assumes it is decremented by further processing), request body reading code (which increments reference counter and assumes it is decremented by the body handler) and ngx_http_finalize_request() (which is normally used to decrement reference counter). Further, lack of explicit reference counting might be the issue if you are trying to do something non-trivial, and, for example, calling functions which might indirectly finalize the request. That's why I've suggested to review the code logic in the first place. -- Maxim Dounin http://mdounin.ru/ From dnj0496 at gmail.com Thu Jul 28 03:16:55 2022 From: dnj0496 at gmail.com (Dk Jack) Date: Wed, 27 Jul 2022 20:16:55 -0700 Subject: nginx crash In-Reply-To: References: Message-ID: Thanks Dounin, I will look through my code. As I said, I try not to touch nginx data structures or variables in my module. Is there a guideline document that details when a module should increment or decrement the reference count value when invoking some of these functions? Thanks, Dk. > On Jul 27, 2022, at 8:05 PM, Maxim Dounin wrote: > > Hello! > >> On Wed, Jul 27, 2022 at 08:40:41AM -0700, Dk Jack wrote: >> >> In my code I am not modifying the reference count. Could you let >> me if there any function calls if invoked would indirectly >> update the reference count? > > There are quite a few functions and/or return codes which might > indirectly modify request reference counter, including the > ngx_http_internal_redirect() function (which increments reference > count and assumes it is decremented by further processing), > request body reading code (which increments reference counter and > assumes it is decremented by the body handler) and > ngx_http_finalize_request() (which is normally used to decrement > reference counter). Further, lack of explicit reference counting > might be the issue if you are trying to do something non-trivial, > and, for example, calling functions which might indirectly > finalize the request. That's why I've suggested to review the > code logic in the first place. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org
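To make the reference-counting point above concrete, here is a minimal sketch of the usual pattern for a content handler that performs an internal redirect; it is an illustration only (the handler name and target URI are made up), using the stock calls discussed in this thread:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

static ngx_int_t
ngx_my_handler(ngx_http_request_t *r)
{
    ngx_str_t  uri = ngx_string("/internal");    /* made-up redirect target */

    /*
     * ngx_http_internal_redirect() increments r->main->count and re-runs
     * the request from the server rewrite phase; that processing may
     * finalize the request before the call returns.  Treat it as the last
     * use of "r": return its result (NGX_DONE) up to the phase engine so
     * the remaining reference is released there, and do not touch r,
     * r->pool or the module ctx afterwards.
     */
    return ngx_http_internal_redirect(r, &uri, NULL);
}

If the redirect is instead triggered from a context that took its own reference, such as a request body handler, that reference has to be released explicitly with ngx_http_finalize_request(r, NGX_DONE), which is the decrement side mentioned above.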