From nginx at rastos.org Sun Apr 4 07:52:03 2021 From: nginx at rastos.org (Rastislav Stanik) Date: Sun, 4 Apr 2021 09:52:03 +0200 Subject: fsize and flastmod SSI commands Message-ID: <6682f478-e82a-0436-4d7b-02b3f8407f30@rastos.org> Hi A year ago there was a patch posted that adds support for fsize and flastmod SSI commands: http://mailman.nginx.org/pipermail/nginx-devel/2020-February/013003.html What needs to happen to get that into official sources? -- Sincerely Rastislav Stanik From mdounin at mdounin.ru Sun Apr 4 15:56:25 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 4 Apr 2021 18:56:25 +0300 Subject: fsize and flastmod SSI commands In-Reply-To: <6682f478-e82a-0436-4d7b-02b3f8407f30@rastos.org> References: <6682f478-e82a-0436-4d7b-02b3f8407f30@rastos.org> Message-ID: Hello! On Sun, Apr 04, 2021 at 09:52:03AM +0200, Rastislav Stanik wrote: > A year ago there was a patch posted that adds support for fsize and flastmod SSI > commands: > http://mailman.nginx.org/pipermail/nginx-devel/2020-February/013003.html > What needs to happen to get that into official sources? First of all, questions and concerns from the previous review need answers. -- Maxim Dounin http://mdounin.ru/ From nginx at rastos.org Sun Apr 4 19:19:30 2021 From: nginx at rastos.org (Rastislav Stanik) Date: Sun, 4 Apr 2021 21:19:30 +0200 Subject: fsize and flastmod SSI commands In-Reply-To: References: <6682f478-e82a-0436-4d7b-02b3f8407f30@rastos.org> Message-ID: On 04/04/2021 17.56, Maxim Dounin wrote: > Hello! > > On Sun, Apr 04, 2021 at 09:52:03AM +0200, Rastislav Stanik wrote: > >> A year ago there was a patch posted that adds support for fsize and flastmod SSI >> commands: >> http://mailman.nginx.org/pipermail/nginx-devel/2020-February/013003.html >> What needs to happen to get that into official sources? > > First of all, questions and concerns from the previous review need 
> answers. Hi As far as I can see, the patches posted in February 2020 (linked above) are addressing your concerns expressed in May 2017 - perhaps with the exception of your concern about handling multiple config commands? I'm not sure about that one. I did not find any further questions or concerns about the patch proposed in February 2020. -- Sincerely Rastislav Stanik From mdounin at mdounin.ru Sun Apr 4 23:39:09 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Apr 2021 02:39:09 +0300 Subject: fsize and flastmod SSI commands In-Reply-To: References: <6682f478-e82a-0436-4d7b-02b3f8407f30@rastos.org> Message-ID: Hello! On Sun, Apr 04, 2021 at 09:19:30PM +0200, Rastislav Stanik wrote: > On 04/04/2021 17.56, Maxim Dounin wrote: > > Hello! > > > > On Sun, Apr 04, 2021 at 09:52:03AM +0200, Rastislav Stanik wrote: > > > >> A year ago there was a patch posted that adds support for fsize and flastmod SSI >> commands: > >> http://mailman.nginx.org/pipermail/nginx-devel/2020-February/013003.html > >> What needs to happen to get that into official sources? > > > > First of all, questions and concerns from the previous review need > > answers. > > As far as I can see, the patches posted in February 2020 (linked above) are > addressing your concerns expressed in May 2017 - perhaps with the exception of your > concern about handling multiple config commands? I'm not sure about that one. > I did not find any further questions or concerns about the patch proposed in > February 2020. I don't see answers to the questions asked in the review made in 2017 (actually, two reviews, both without any answers), and am obviously not motivated to do additional reviews, even if all concerns are addressed in the new patch. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Apr 5 13:00:00 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 05 Apr 2021 13:00:00 +0000 Subject: [nginx] Version bump. 
Message-ID: details: https://hg.nginx.org/nginx/rev/19799b290812 branches: changeset: 7815:19799b290812 user: Maxim Dounin date: Mon Apr 05 04:03:10 2021 +0300 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r eb23d58bfd6b -r 19799b290812 src/core/nginx.h --- a/src/core/nginx.h Tue Mar 30 17:47:11 2021 +0300 +++ b/src/core/nginx.h Mon Apr 05 04:03:10 2021 +0300 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1019009 -#define NGINX_VERSION "1.19.9" +#define nginx_version 1019010 +#define NGINX_VERSION "1.19.10" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From mdounin at mdounin.ru Mon Apr 5 13:00:04 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 05 Apr 2021 13:00:04 +0000 Subject: [nginx] Gzip: support for zlib-ng. Message-ID: details: https://hg.nginx.org/nginx/rev/1f3d0d9f893f branches: changeset: 7816:1f3d0d9f893f user: Maxim Dounin date: Mon Apr 05 04:06:58 2021 +0300 description: Gzip: support for zlib-ng. 
diffstat: src/http/modules/ngx_http_gzip_filter_module.c | 23 +++++++++++++++++++++-- 1 files changed, 21 insertions(+), 2 deletions(-) diffs (65 lines): diff -r 19799b290812 -r 1f3d0d9f893f src/http/modules/ngx_http_gzip_filter_module.c --- a/src/http/modules/ngx_http_gzip_filter_module.c Mon Apr 05 04:03:10 2021 +0300 +++ b/src/http/modules/ngx_http_gzip_filter_module.c Mon Apr 05 04:06:58 2021 +0300 @@ -57,6 +57,7 @@ typedef struct { unsigned nomem:1; unsigned buffering:1; unsigned intel:1; + unsigned zlib_ng:1; size_t zin; size_t zout; @@ -214,6 +215,7 @@ static ngx_http_output_header_filter_pt static ngx_http_output_body_filter_pt ngx_http_next_body_filter; static ngx_uint_t ngx_http_gzip_assume_intel; +static ngx_uint_t ngx_http_gzip_assume_zlib_ng; static ngx_int_t @@ -506,7 +508,7 @@ ngx_http_gzip_filter_memory(ngx_http_req if (!ngx_http_gzip_assume_intel) { ctx->allocated = 8192 + (1 << (wbits + 2)) + (1 << (memlevel + 9)); - } else { + } else if (!ngx_http_gzip_assume_zlib_ng) { /* * A zlib variant from Intel, https://github.com/jtkukunas/zlib. * It can force window bits to 13 for fast compression level, @@ -523,6 +525,20 @@ ngx_http_gzip_filter_memory(ngx_http_req + (1 << (ngx_max(memlevel, 8) + 8)) + (1 << (memlevel + 8)); ctx->intel = 1; + + } else { + /* + * Another zlib variant, https://github.com/zlib-ng/zlib-ng. + * Similar to Intel's variant, though uses 128K hash. 
+ */ + + if (conf->level == 1) { + wbits = ngx_max(wbits, 13); + } + + ctx->allocated = 8192 + 16 + (1 << (wbits + 2)) + + 131072 + (1 << (memlevel + 8)); + ctx->zlib_ng = 1; } } @@ -945,11 +961,14 @@ ngx_http_gzip_filter_alloc(void *opaque, return p; } - if (ctx->intel) { + if (ctx->zlib_ng) { ngx_log_error(NGX_LOG_ALERT, ctx->request->connection->log, 0, "gzip filter failed to use preallocated memory: " "%ud of %ui", items * size, ctx->allocated); + } else if (ctx->intel) { + ngx_http_gzip_assume_zlib_ng = 1; + } else { ngx_http_gzip_assume_intel = 1; } From mdounin at mdounin.ru Mon Apr 5 13:00:06 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 05 Apr 2021 13:00:06 +0000 Subject: [nginx] Gzip: updated handling of zlib variant from Intel. Message-ID: details: https://hg.nginx.org/nginx/rev/c297c2c252d8 branches: changeset: 7817:c297c2c252d8 user: Maxim Dounin date: Mon Apr 05 04:07:17 2021 +0300 description: Gzip: updated handling of zlib variant from Intel. In current versions (all versions based on zlib 1.2.11, at least since 2018) it no longer uses 64K hash and does not force window bits to 13 if it is less than 13. That is, it needs just 16 bytes more memory than normal zlib, so these bytes are simply added to the normal size calculation. 
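For readers following the memory math in these two gzip patches, the preallocation formulas can be restated as a standalone helper. This is an illustrative sketch, not nginx API: the function name and simplified form are inventions for this example, and the real logic lives in ngx_http_gzip_filter_memory().

```c
#include <assert.h>
#include <stddef.h>

/* Restatement of the preallocation sizes after this patch series.
 * zlib_ng selects the zlib-ng sizing: a fixed 128K hash table, and
 * window bits forced up to 13 at fast compression level 1.  The
 * non-zlib-ng branch is stock zlib plus the 16 bytes of padding
 * used by Intel's variant. */
size_t
gzip_prealloc(int wbits, int memlevel, int level, int zlib_ng)
{
    if (!zlib_ng) {
        return 8192 + 16 + ((size_t) 1 << (wbits + 2))
               + ((size_t) 1 << (memlevel + 9));
    }

    if (level == 1 && wbits < 13) {
        wbits = 13;     /* zlib-ng forces fast-level window bits */
    }

    return 8192 + 16 + ((size_t) 1 << (wbits + 2))
           + 131072 + ((size_t) 1 << (memlevel + 8));
}
```

With the usual defaults (wbits 15, memlevel 8) this works out to roughly 264K preallocated for stock zlib and roughly 328K for zlib-ng.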
diffstat: src/http/modules/ngx_http_gzip_filter_module.c | 35 ++++++------------------- 1 files changed, 9 insertions(+), 26 deletions(-) diffs (74 lines): diff -r 1f3d0d9f893f -r c297c2c252d8 src/http/modules/ngx_http_gzip_filter_module.c --- a/src/http/modules/ngx_http_gzip_filter_module.c Mon Apr 05 04:06:58 2021 +0300 +++ b/src/http/modules/ngx_http_gzip_filter_module.c Mon Apr 05 04:07:17 2021 +0300 @@ -56,7 +56,6 @@ typedef struct { unsigned done:1; unsigned nomem:1; unsigned buffering:1; - unsigned intel:1; unsigned zlib_ng:1; size_t zin; @@ -214,7 +213,6 @@ static ngx_str_t ngx_http_gzip_ratio = static ngx_http_output_header_filter_pt ngx_http_next_header_filter; static ngx_http_output_body_filter_pt ngx_http_next_body_filter; -static ngx_uint_t ngx_http_gzip_assume_intel; static ngx_uint_t ngx_http_gzip_assume_zlib_ng; @@ -503,33 +501,21 @@ ngx_http_gzip_filter_memory(ngx_http_req * 8K is for zlib deflate_state, it takes * *) 5816 bytes on i386 and sparc64 (32-bit mode) * *) 5920 bytes on amd64 and sparc64 + * + * A zlib variant from Intel (https://github.com/jtkukunas/zlib) + * uses additional 16-byte padding in one of window-sized buffers. */ - if (!ngx_http_gzip_assume_intel) { - ctx->allocated = 8192 + (1 << (wbits + 2)) + (1 << (memlevel + 9)); - - } else if (!ngx_http_gzip_assume_zlib_ng) { - /* - * A zlib variant from Intel, https://github.com/jtkukunas/zlib. - * It can force window bits to 13 for fast compression level, - * on processors with SSE 4.2 it uses 64K hash instead of scaling - * it from the specified memory level, and also introduces - * 16-byte padding in one out of the two window-sized buffers. - */ - - if (conf->level == 1) { - wbits = ngx_max(wbits, 13); - } - + if (!ngx_http_gzip_assume_zlib_ng) { ctx->allocated = 8192 + 16 + (1 << (wbits + 2)) - + (1 << (ngx_max(memlevel, 8) + 8)) - + (1 << (memlevel + 8)); - ctx->intel = 1; + + (1 << (memlevel + 9)); } else { /* * Another zlib variant, https://github.com/zlib-ng/zlib-ng. 
- * Similar to Intel's variant, though uses 128K hash. + * It forces window bits to 13 for fast compression level, + * uses 16-byte padding in one of window-sized buffers, and + * uses 128K hash. */ if (conf->level == 1) { @@ -966,11 +952,8 @@ ngx_http_gzip_filter_alloc(void *opaque, "gzip filter failed to use preallocated memory: " "%ud of %ui", items * size, ctx->allocated); - } else if (ctx->intel) { + } else { ngx_http_gzip_assume_zlib_ng = 1; - - } else { - ngx_http_gzip_assume_intel = 1; } p = ngx_palloc(ctx->request->pool, items * size); From mdounin at mdounin.ru Mon Apr 5 18:20:16 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 05 Apr 2021 18:20:16 +0000 Subject: [nginx] Configure: fixed --test-build-epoll on FreeBSD 13. Message-ID: details: https://hg.nginx.org/nginx/rev/e2e9e0fae747 branches: changeset: 7818:e2e9e0fae747 user: Maxim Dounin date: Mon Apr 05 20:14:16 2021 +0300 description: Configure: fixed --test-build-epoll on FreeBSD 13. In FreeBSD 13, eventfd(2) was added, and this breaks build with --test-build-epoll and without --with-file-aio. Fix is to move eventfd(2) detection to auto/os/linux, as it is used only on Linux as a notification mechanism for epoll(). diffstat: auto/os/linux | 25 +++++++++++++++++++++++++ auto/unix | 23 ----------------------- 2 files changed, 25 insertions(+), 23 deletions(-) diffs (68 lines): diff -r c297c2c252d8 -r e2e9e0fae747 auto/os/linux --- a/auto/os/linux Mon Apr 05 04:07:17 2021 +0300 +++ b/auto/os/linux Mon Apr 05 20:14:16 2021 +0300 @@ -86,6 +86,31 @@ if [ $ngx_found = yes ]; then ee.data.ptr = NULL; epoll_ctl(efd, EPOLL_CTL_ADD, fd, &ee)" . auto/feature + + + # eventfd() + + ngx_feature="eventfd()" + ngx_feature_name="NGX_HAVE_EVENTFD" + ngx_feature_run=no + ngx_feature_incs="#include <sys/eventfd.h>" + ngx_feature_path= + ngx_feature_libs= + ngx_feature_test="(void) eventfd(0, 0)" + . auto/feature + + if [ $ngx_found = yes ]; then + have=NGX_HAVE_SYS_EVENTFD_H . 
auto/have + fi + + + if [ $ngx_found = no ]; then + + ngx_feature="eventfd() (SYS_eventfd)" + ngx_feature_incs="#include <sys/syscall.h>" + ngx_feature_test="(void) SYS_eventfd" + . auto/feature + fi fi diff -r c297c2c252d8 -r e2e9e0fae747 auto/unix --- a/auto/unix Mon Apr 05 04:07:17 2021 +0300 +++ b/auto/unix Mon Apr 05 20:14:16 2021 +0300 @@ -582,29 +582,6 @@ Currently file AIO is supported on FreeB END exit 1 fi - -else - - ngx_feature="eventfd()" - ngx_feature_name="NGX_HAVE_EVENTFD" - ngx_feature_run=no - ngx_feature_incs="#include <sys/eventfd.h>" - ngx_feature_path= - ngx_feature_libs= - ngx_feature_test="(void) eventfd(0, 0)" - . auto/feature - - if [ $ngx_found = yes ]; then - have=NGX_HAVE_SYS_EVENTFD_H . auto/have - fi - - if [ $ngx_found = no ]; then - - ngx_feature="eventfd() (SYS_eventfd)" - ngx_feature_incs="#include <sys/syscall.h>" - ngx_feature_test="(void) SYS_eventfd" - . auto/feature - fi fi From jusmaki at gmail.com Wed Apr 7 17:51:38 2021 From: jusmaki at gmail.com (Jussi Maki) Date: Wed, 7 Apr 2021 20:51:38 +0300 Subject: PATCH Upstream: new "keepalive_max_connection_duration" directive Message-ID: # HG changeset patch # User Jussi Maki # Date 1617816597 -10800 # Wed Apr 07 20:29:57 2021 +0300 # Node ID 3699288ff20a3e51ee4b7689898ce0241f64f0f5 # Parent e2e9e0fae74734b28974c64daacc492d751b4781 Upstream: new "keepalive_max_connection_duration" directive Added a new keepalive_max_connection_duration directive which specifies, in milliseconds, how long a connection in an upstream block should be kept connected. The current keepalive directives either define the idle time or the number of requests, but there is no elapsed time-based parameter. An elapsed time-based connection parameter is useful when there are multiple backends, the connections should be evenly load balanced across them, and the response times for upstream requests vary. 
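The expiry test this patch adds compares ngx_current_msec against a stored start time using unsigned arithmetic. A minimal sketch of that comparison, with an assumed 64-bit stand-in for ngx_msec_t (which is uintptr_t in nginx) and a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t msec_t;   /* stand-in for ngx_msec_t */

/* Mirrors the patch's test "now - started > max_duration": because
 * the subtraction happens in an unsigned type, the result is the
 * true elapsed interval even when the millisecond counter wraps
 * between the connect and the check. */
int
keepalive_expired(msec_t now, msec_t started, msec_t max_duration)
{
    return now - started > max_duration;
}
```

This is why the patch can store ngx_current_msec at connect time and later subtract it directly, without worrying about counter overflow.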
diff -r e2e9e0fae747 -r 3699288ff20a src/core/ngx_connection.h --- a/src/core/ngx_connection.h Mon Apr 05 20:14:16 2021 +0300 +++ b/src/core/ngx_connection.h Wed Apr 07 20:29:57 2021 +0300 @@ -191,6 +191,8 @@ #if (NGX_THREADS || NGX_COMPAT) ngx_thread_task_t *sendfile_task; #endif + + ngx_msec_t connection_started_time; }; diff -r e2e9e0fae747 -r 3699288ff20a src/http/modules/ngx_http_upstream_keepalive_module.c --- a/src/http/modules/ngx_http_upstream_keepalive_module.c Mon Apr 05 20:14:16 2021 +0300 +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c Wed Apr 07 20:29:57 2021 +0300 @@ -14,6 +14,7 @@ ngx_uint_t max_cached; ngx_uint_t requests; ngx_msec_t timeout; + ngx_msec_t duration; ngx_queue_t cache; ngx_queue_t free; @@ -100,6 +101,13 @@ offsetof(ngx_http_upstream_keepalive_srv_conf_t, requests), NULL }, + { ngx_string("keepalive_max_connection_duration"), + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_upstream_keepalive_srv_conf_t, duration), + NULL }, + ngx_null_command }; @@ -387,7 +395,13 @@ item->socklen = pc->socklen; ngx_memcpy(&item->sockaddr, pc->sockaddr, pc->socklen); - if (c->read->ready) { + if (kp->conf->duration != NGX_CONF_UNSET_MSEC && + ngx_current_msec - c->connection_started_time > kp->conf->duration) { + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0, + "free keepalive peer: expired connection %p", c); + c->close = 1; + } + if (c->close || c->read->ready) { ngx_http_upstream_keepalive_close_handler(c->read); } @@ -515,6 +529,7 @@ conf->timeout = NGX_CONF_UNSET_MSEC; conf->requests = NGX_CONF_UNSET_UINT; + conf->duration = NGX_CONF_UNSET_MSEC; return conf; } diff -r e2e9e0fae747 -r 3699288ff20a src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Mon Apr 05 20:14:16 2021 +0300 +++ b/src/http/ngx_http_upstream.c Wed Apr 07 20:29:57 2021 +0300 @@ -2017,6 +2017,10 @@ u->state->connect_time = ngx_current_msec - u->start_time; } + if (!c->connection_started_time) { + 
c->connection_started_time = ngx_current_msec; + } + if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); return; From mdounin at mdounin.ru Wed Apr 7 22:35:20 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 07 Apr 2021 22:35:20 +0000 Subject: [nginx] HTTP/2: relaxed PRIORITY frames limit. Message-ID: details: https://hg.nginx.org/nginx/rev/3674d5b7174e branches: changeset: 7819:3674d5b7174e user: Maxim Dounin date: Wed Apr 07 02:03:29 2021 +0300 description: HTTP/2: relaxed PRIORITY frames limit. Firefox uses several idle streams for PRIORITY frames[1], and "http2_max_concurrent_streams 1;" results in "client sent too many PRIORITY frames" errors when a connection is established by Firefox. Fix is to relax the PRIORITY frames limit to use at least 100 as the initial value (which is the minimum limit on the number of concurrent streams recommended by the HTTP/2 protocol, so it is not unreasonable for clients to assume that a similar number of idle streams can be used for prioritization). 
[1] https://hg.mozilla.org/mozilla-central/file/32a9e6e145d6e3071c3993a20bb603a2f388722b/netwerk/protocol/http/Http2Stream.cpp#l1270 diffstat: src/http/v2/ngx_http_v2.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r e2e9e0fae747 -r 3674d5b7174e src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Mon Apr 05 20:14:16 2021 +0300 +++ b/src/http/v2/ngx_http_v2.c Wed Apr 07 02:03:29 2021 +0300 @@ -277,7 +277,7 @@ ngx_http_v2_init(ngx_event_t *rev) h2scf = ngx_http_get_module_srv_conf(hc->conf_ctx, ngx_http_v2_module); h2c->concurrent_pushes = h2scf->concurrent_pushes; - h2c->priority_limit = h2scf->concurrent_streams; + h2c->priority_limit = ngx_max(h2scf->concurrent_streams, 100); h2c->pool = ngx_create_pool(h2scf->pool_size, h2c->connection->log); if (h2c->pool == NULL) { From mdounin at mdounin.ru Wed Apr 7 22:35:23 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 07 Apr 2021 22:35:23 +0000 Subject: [nginx] Introduced the "keepalive_time" directive. Message-ID: details: https://hg.nginx.org/nginx/rev/fdc3d40979b0 branches: changeset: 7820:fdc3d40979b0 user: Maxim Dounin date: Thu Apr 08 00:15:48 2021 +0300 description: Introduced the "keepalive_time" directive. Similar to lingering_time, it limits total connection lifetime before keepalive is switched off. The default is 1 hour, which is close to the total maximum connection lifetime possible with default keepalive_requests and keepalive_timeout. 
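As a usage sketch, the new directive might be set like this. The values and the upstream name are illustrative only; per the patch, both the http and upstream contexts default to 1h.

```nginx
http {
    keepalive_time 30m;           # limit client keepalive connection lifetime

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 16;
        keepalive_time 10m;       # limit cached upstream connections too
    }
}
```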
diffstat: src/core/ngx_connection.h | 1 + src/core/ngx_resolver.c | 4 ++++ src/event/ngx_event_accept.c | 2 ++ src/event/ngx_event_acceptex.c | 2 ++ src/event/ngx_event_connect.c | 2 ++ src/event/ngx_event_udp.c | 2 ++ src/http/modules/ngx_http_upstream_keepalive_module.c | 14 ++++++++++++++ src/http/ngx_http_core_module.c | 15 +++++++++++++++ src/http/ngx_http_core_module.h | 1 + src/http/v2/ngx_http_v2.c | 4 +++- 10 files changed, 46 insertions(+), 1 deletions(-) diffs (203 lines): diff -r 3674d5b7174e -r fdc3d40979b0 src/core/ngx_connection.h --- a/src/core/ngx_connection.h Wed Apr 07 02:03:29 2021 +0300 +++ b/src/core/ngx_connection.h Thu Apr 08 00:15:48 2021 +0300 @@ -162,6 +162,7 @@ struct ngx_connection_s { ngx_atomic_uint_t number; + ngx_msec_t start_time; ngx_uint_t requests; unsigned buffered:8; diff -r 3674d5b7174e -r fdc3d40979b0 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Wed Apr 07 02:03:29 2021 +0300 +++ b/src/core/ngx_resolver.c Thu Apr 08 00:15:48 2021 +0300 @@ -4459,6 +4459,8 @@ ngx_udp_connect(ngx_resolver_connection_ c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1); + c->start_time = ngx_current_msec; + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, &rec->log, 0, "connect to %V, fd:%d #%uA", &rec->server, s, c->number); @@ -4545,6 +4547,8 @@ ngx_tcp_connect(ngx_resolver_connection_ c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1); + c->start_time = ngx_current_msec; + if (ngx_add_conn) { if (ngx_add_conn(c) == NGX_ERROR) { goto failed; diff -r 3674d5b7174e -r fdc3d40979b0 src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c Wed Apr 07 02:03:29 2021 +0300 +++ b/src/event/ngx_event_accept.c Thu Apr 08 00:15:48 2021 +0300 @@ -256,6 +256,8 @@ ngx_event_accept(ngx_event_t *ev) c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1); + c->start_time = ngx_current_msec; + #if (NGX_STAT_STUB) (void) ngx_atomic_fetch_add(ngx_stat_handled, 1); #endif diff -r 3674d5b7174e -r fdc3d40979b0 src/event/ngx_event_acceptex.c 
--- a/src/event/ngx_event_acceptex.c Wed Apr 07 02:03:29 2021 +0300 +++ b/src/event/ngx_event_acceptex.c Thu Apr 08 00:15:48 2021 +0300 @@ -80,6 +80,8 @@ ngx_event_acceptex(ngx_event_t *rev) c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1); + c->start_time = ngx_current_msec; + ls->handler(c); return; diff -r 3674d5b7174e -r fdc3d40979b0 src/event/ngx_event_connect.c --- a/src/event/ngx_event_connect.c Wed Apr 07 02:03:29 2021 +0300 +++ b/src/event/ngx_event_connect.c Thu Apr 08 00:15:48 2021 +0300 @@ -193,6 +193,8 @@ ngx_event_connect_peer(ngx_peer_connecti c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1); + c->start_time = ngx_current_msec; + if (ngx_add_conn) { if (ngx_add_conn(c) == NGX_ERROR) { goto failed; diff -r 3674d5b7174e -r fdc3d40979b0 src/event/ngx_event_udp.c --- a/src/event/ngx_event_udp.c Wed Apr 07 02:03:29 2021 +0300 +++ b/src/event/ngx_event_udp.c Thu Apr 08 00:15:48 2021 +0300 @@ -363,6 +363,8 @@ ngx_event_recvmsg(ngx_event_t *ev) c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1); + c->start_time = ngx_current_msec; + #if (NGX_STAT_STUB) (void) ngx_atomic_fetch_add(ngx_stat_handled, 1); #endif diff -r 3674d5b7174e -r fdc3d40979b0 src/http/modules/ngx_http_upstream_keepalive_module.c --- a/src/http/modules/ngx_http_upstream_keepalive_module.c Wed Apr 07 02:03:29 2021 +0300 +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c Thu Apr 08 00:15:48 2021 +0300 @@ -13,6 +13,7 @@ typedef struct { ngx_uint_t max_cached; ngx_uint_t requests; + ngx_msec_t time; ngx_msec_t timeout; ngx_queue_t cache; @@ -86,6 +87,13 @@ static ngx_command_t ngx_http_upstream_ 0, NULL }, + { ngx_string("keepalive_time"), + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_upstream_keepalive_srv_conf_t, time), + NULL }, + { ngx_string("keepalive_timeout"), NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, ngx_conf_set_msec_slot, @@ -149,6 +157,7 @@ ngx_http_upstream_init_keepalive(ngx_con kcf = 
ngx_http_conf_upstream_srv_conf(us, ngx_http_upstream_keepalive_module); + ngx_conf_init_msec_value(kcf->time, 3600000); ngx_conf_init_msec_value(kcf->timeout, 60000); ngx_conf_init_uint_value(kcf->requests, 100); @@ -326,6 +335,10 @@ ngx_http_upstream_free_keepalive_peer(ng goto invalid; } + if (ngx_current_msec - c->start_time > kp->conf->time) { + goto invalid; + } + if (!u->keepalive) { goto invalid; } @@ -513,6 +526,7 @@ ngx_http_upstream_keepalive_create_conf( * conf->max_cached = 0; */ + conf->time = NGX_CONF_UNSET_MSEC; conf->timeout = NGX_CONF_UNSET_MSEC; conf->requests = NGX_CONF_UNSET_UINT; diff -r 3674d5b7174e -r fdc3d40979b0 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Wed Apr 07 02:03:29 2021 +0300 +++ b/src/http/ngx_http_core_module.c Thu Apr 08 00:15:48 2021 +0300 @@ -495,6 +495,13 @@ static ngx_command_t ngx_http_core_comm offsetof(ngx_http_core_loc_conf_t, limit_rate_after), NULL }, + { ngx_string("keepalive_time"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_core_loc_conf_t, keepalive_time), + NULL }, + { ngx_string("keepalive_timeout"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE12, ngx_http_core_keepalive, @@ -1335,6 +1342,11 @@ ngx_http_update_location_config(ngx_http } else if (r->connection->requests >= clcf->keepalive_requests) { r->keepalive = 0; + } else if (ngx_current_msec - r->connection->start_time + > clcf->keepalive_time) + { + r->keepalive = 0; + } else if (r->headers_in.msie6 && r->method == NGX_HTTP_POST && (clcf->keepalive_disable @@ -3500,6 +3512,7 @@ ngx_http_core_create_loc_conf(ngx_conf_t clcf->send_timeout = NGX_CONF_UNSET_MSEC; clcf->send_lowat = NGX_CONF_UNSET_SIZE; clcf->postpone_output = NGX_CONF_UNSET_SIZE; + clcf->keepalive_time = NGX_CONF_UNSET_MSEC; clcf->keepalive_timeout = NGX_CONF_UNSET_MSEC; clcf->keepalive_header = NGX_CONF_UNSET; clcf->keepalive_requests = 
NGX_CONF_UNSET_UINT; @@ -3738,6 +3751,8 @@ ngx_http_core_merge_loc_conf(ngx_conf_t conf->limit_rate_after = prev->limit_rate_after; } + ngx_conf_merge_msec_value(conf->keepalive_time, + prev->keepalive_time, 3600000); ngx_conf_merge_msec_value(conf->keepalive_timeout, prev->keepalive_timeout, 75000); ngx_conf_merge_sec_value(conf->keepalive_header, diff -r 3674d5b7174e -r fdc3d40979b0 src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h Wed Apr 07 02:03:29 2021 +0300 +++ b/src/http/ngx_http_core_module.h Thu Apr 08 00:15:48 2021 +0300 @@ -359,6 +359,7 @@ struct ngx_http_core_loc_conf_s { ngx_msec_t client_body_timeout; /* client_body_timeout */ ngx_msec_t send_timeout; /* send_timeout */ + ngx_msec_t keepalive_time; /* keepalive_time */ ngx_msec_t keepalive_timeout; /* keepalive_timeout */ ngx_msec_t lingering_time; /* lingering_time */ ngx_msec_t lingering_timeout; /* lingering_timeout */ diff -r 3674d5b7174e -r fdc3d40979b0 src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Wed Apr 07 02:03:29 2021 +0300 +++ b/src/http/v2/ngx_http_v2.c Thu Apr 08 00:15:48 2021 +0300 @@ -1369,7 +1369,9 @@ ngx_http_v2_state_headers(ngx_http_v2_co ngx_http_core_module); if (clcf->keepalive_timeout == 0 - || h2c->connection->requests >= clcf->keepalive_requests) + || h2c->connection->requests >= clcf->keepalive_requests + || ngx_current_msec - h2c->connection->start_time + > clcf->keepalive_time) { h2c->goaway = 1; From mdounin at mdounin.ru Wed Apr 7 22:35:26 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 07 Apr 2021 22:35:26 +0000 Subject: [nginx] Added $connection_time variable. Message-ID: details: https://hg.nginx.org/nginx/rev/6d4f7d5e279f branches: changeset: 7821:6d4f7d5e279f user: Maxim Dounin date: Thu Apr 08 00:16:17 2021 +0300 description: Added $connection_time variable. 
diffstat: src/http/ngx_http_variables.c | 30 ++++++++++++++++++++++++++++++ 1 files changed, 30 insertions(+), 0 deletions(-) diffs (54 lines): diff -r fdc3d40979b0 -r 6d4f7d5e279f src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Thu Apr 08 00:15:48 2021 +0300 +++ b/src/http/ngx_http_variables.c Thu Apr 08 00:16:17 2021 +0300 @@ -129,6 +129,8 @@ static ngx_int_t ngx_http_variable_conne ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_connection_requests(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_connection_time(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_nginx_version(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); @@ -342,6 +344,9 @@ static ngx_http_variable_t ngx_http_cor { ngx_string("connection_requests"), NULL, ngx_http_variable_connection_requests, 0, 0, 0 }, + { ngx_string("connection_time"), NULL, ngx_http_variable_connection_time, + 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("nginx_version"), NULL, ngx_http_variable_nginx_version, 0, 0, 0 }, @@ -2253,6 +2258,31 @@ ngx_http_variable_connection_requests(ng static ngx_int_t +ngx_http_variable_connection_time(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + u_char *p; + ngx_msec_int_t ms; + + p = ngx_pnalloc(r->pool, NGX_TIME_T_LEN + 4); + if (p == NULL) { + return NGX_ERROR; + } + + ms = ngx_current_msec - r->connection->start_time; + ms = ngx_max(ms, 0); + + v->len = ngx_sprintf(p, "%T.%03M", (time_t) ms / 1000, ms % 1000) - p; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = p; + + return NGX_OK; +} + + +static ngx_int_t ngx_http_variable_nginx_version(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { From mdounin at mdounin.ru Wed Apr 7 22:35:29 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 07 Apr 2021 22:35:29 +0000 
Subject: [nginx] Changed keepalive_requests default to 1000 (ticket #2155). Message-ID: details: https://hg.nginx.org/nginx/rev/82e174e47663 branches: changeset: 7822:82e174e47663 user: Maxim Dounin date: Thu Apr 08 00:16:30 2021 +0300 description: Changed keepalive_requests default to 1000 (ticket #2155). It turns out no browsers implement HTTP/2 GOAWAY handling properly, and a large enough number of resources on a page results in failures to load some resources. In particular, Chrome seems to experience errors if loading of all resources requires more than 1 connection (while it is usually able to retry requests at least once, even with 2 connections there are occasional failures for some reason), Safari if loading requires more than 3 connections, and Firefox if loading requires more than 10 connections (can be configured with network.http.request.max-attempts, defaults to 10). It does not seem to be possible to resolve this on the nginx side; even strict limiting of maximum concurrency does not help, and loading issues seem to be triggered by merely queueing a request for a particular connection. The only available mitigation seems to be using a higher keepalive_requests value. The new default is 1000 and matches the previously used default for http2_max_requests. It is expected to be enough for 99.98% of the pages (https://httparchive.org/reports/state-of-the-web?start=latest#reqTotal) even in Chrome. 
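For deployments that prefer to pin the previous behavior rather than take the new default, an illustrative override (the upstream name is a placeholder):

```nginx
http {
    keepalive_requests 100;       # restore the pre-change default

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 16;
        keepalive_requests 100;   # the upstream-side default changed as well
    }
}
```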
diffstat: src/http/modules/ngx_http_upstream_keepalive_module.c | 2 +- src/http/ngx_http_core_module.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diffs (24 lines): diff -r 6d4f7d5e279f -r 82e174e47663 src/http/modules/ngx_http_upstream_keepalive_module.c --- a/src/http/modules/ngx_http_upstream_keepalive_module.c Thu Apr 08 00:16:17 2021 +0300 +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c Thu Apr 08 00:16:30 2021 +0300 @@ -159,7 +159,7 @@ ngx_http_upstream_init_keepalive(ngx_con ngx_conf_init_msec_value(kcf->time, 3600000); ngx_conf_init_msec_value(kcf->timeout, 60000); - ngx_conf_init_uint_value(kcf->requests, 100); + ngx_conf_init_uint_value(kcf->requests, 1000); if (kcf->original_init_upstream(cf, us) != NGX_OK) { return NGX_ERROR; diff -r 6d4f7d5e279f -r 82e174e47663 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Thu Apr 08 00:16:17 2021 +0300 +++ b/src/http/ngx_http_core_module.c Thu Apr 08 00:16:30 2021 +0300 @@ -3758,7 +3758,7 @@ ngx_http_core_merge_loc_conf(ngx_conf_t ngx_conf_merge_sec_value(conf->keepalive_header, prev->keepalive_header, 0); ngx_conf_merge_uint_value(conf->keepalive_requests, - prev->keepalive_requests, 100); + prev->keepalive_requests, 1000); ngx_conf_merge_uint_value(conf->lingering_close, prev->lingering_close, NGX_HTTP_LINGERING_ON); ngx_conf_merge_msec_value(conf->lingering_time, From mdounin at mdounin.ru Wed Apr 7 23:07:31 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 8 Apr 2021 02:07:31 +0300 Subject: PATCH Upstream: new "keepalive_max_connection_duration" directive In-Reply-To: References: Message-ID: Hello! 
On Wed, Apr 07, 2021 at 08:51:38PM +0300, Jussi Maki wrote: > # HG changeset patch > # User Jussi Maki > # Date 1617816597 -10800 > # Wed Apr 07 20:29:57 2021 +0300 > # Node ID 3699288ff20a3e51ee4b7689898ce0241f64f0f5 > # Parent e2e9e0fae74734b28974c64daacc492d751b4781 > Upstream: new "keepalive_max_connection_duration" directive > > Added a new keepalive_max_connection duration which provides > the time in milliseconds for the upstream block on how long > the connection should be kept connected. The current keepalive > directives either define the idle time or the number of requests > but there is no elapsed time-based parameter. > > The elapsed time-based connection parameter is useful in a case > when there are multiple backends and the connection should be > evenly load balanced to them and the response times for upstream > requests vary. Thanks for the patch. I've just committed a patch series which adds the "keepalive_time" directive[1] both for keepalive connections with clients and with upstream servers, as a part of a mitigation for browser issues identified in ticket #2155[2]. As far as I can see, it is basically identical to what you are trying to introduce for uptream servers, and should work for your use case as well. [1] https://hg.nginx.org/nginx/rev/fdc3d40979b0 [2] https://trac.nginx.org/nginx/ticket/2155 -- Maxim Dounin http://mdounin.ru/ From piotrsikora at google.com Fri Apr 9 02:56:07 2021 From: piotrsikora at google.com (Piotr Sikora) Date: Thu, 8 Apr 2021 19:56:07 -0700 Subject: [nginx] Changed keepalive_requests default to 1000 (ticket #2155). In-Reply-To: References: Message-ID: Hi Maxim, > It turns out no browsers implement HTTP/2 GOAWAY handling properly, and > large enough number of resources on a page results in failures to load > some resources. 
In particular, Chrome seems to experience errors if > loading of all resources requires more than 1 connection (while it > is usually able to retry requests at least once, even with 2 connections > there are occasional failures for some reason), Safari if loading requires > more than 3 connections, and Firefox if loading requires more than 10 > connections (can be configured with network.http.request.max-attempts, > defaults to 10). > > It does not seem to be possible to resolve this on nginx side, even strict > limiting of maximum concurrency does not help, and loading issues seems to > be triggered by merely queueing of a request for a particular connection. > The only available mitigation seems to use higher keepalive_requests value. Instead of blaming browsers, did you consider implementing graceful shutdown using 2-stage GOAWAY? The process is clearly described in RFC7540, sec. 6.8: [...] A server that is attempting to gracefully shut down a connection SHOULD send an initial GOAWAY frame with the last stream identifier set to 2^31-1 and a NO_ERROR code. This signals to the client that a shutdown is imminent and that initiating further requests is prohibited. After allowing time for any in-flight stream creation (at least one round-trip time), the server can send another GOAWAY frame with an updated last stream identifier. This ensures that a connection can be cleanly shut down without losing requests. 
This is a solved problem, and the solution was pointed out years ago: http://mailman.nginx.org/pipermail/nginx-devel/2017-August/010439.html http://mailman.nginx.org/pipermail/nginx-devel/2018-March/010930.html Best regards, Piotr Sikora From vasiliy.soshnikov at gmail.com Fri Apr 9 13:26:52 2021 From: vasiliy.soshnikov at gmail.com (Vasiliy Soshnikov) Date: Fri, 9 Apr 2021 16:26:52 +0300 Subject: [PATCH] Support of proxy v2 protocol for NGINX stream module Message-ID: diff -r 82e174e47663 src/core/ngx_proxy_protocol.c --- a/src/core/ngx_proxy_protocol.c Thu Apr 08 00:16:30 2021 +0300 +++ b/src/core/ngx_proxy_protocol.c Fri Apr 09 16:10:29 2021 +0300 @@ -13,6 +13,34 @@ #define NGX_PROXY_PROTOCOL_AF_INET6 2 +#define NGX_PROXY_PROTOCOL_V2_SIG "\x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A" +#define NGX_PROXY_PROTOCOL_V2_SIG_LEN 12 +#define NGX_PROXY_PROTOCOL_V2_HDR_LEN 16 +#define NGX_PROXY_PROTOCOL_V2_HDR_LEN_INET \ + (NGX_PROXY_PROTOCOL_V2_HDR_LEN + (4 + 4 + 2 + 2)) +#define NGX_PROXY_PROTOCOL_V2_HDR_LEN_INET6 \ + (NGX_PROXY_PROTOCOL_V2_HDR_LEN + (16 + 16 + 2 + 2)) + +#define NGX_PROXY_PROTOCOL_V2_CMD_PROXY (0x20 | 0x01) + +#define NGX_PROXY_PROTOCOL_V2_TRANS_STREAM 0x01 + +#define NGX_PROXY_PROTOCOL_V2_FAM_UNSPEC 0x00 +#define NGX_PROXY_PROTOCOL_V2_FAM_INET 0x10 +#define NGX_PROXY_PROTOCOL_V2_FAM_INET6 0x20 + +#define NGX_PROXY_PROTOCOL_V2_TYPE_ALPN 0x01 +#define NGX_PROXY_PROTOCOL_V2_TYPE_SSL 0x20 +#define NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_VERSION 0x21 +#define NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_CIPHER 0x23 +#define NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_SIG_ALG 0x24 +#define NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_KEY_ALG 0x25 + +#define NGX_PROXY_PROTOCOL_V2_CLIENT_SSL 0x01 +#define NGX_PROXY_PROTOCOL_V2_CLIENT_CERT_CONN 0x02 +#define NGX_PROXY_PROTOCOL_V2_CLIENT_CERT_SESS 0x04 + + #define ngx_proxy_protocol_parse_uint16(p) ((p)[0] << 8 | (p)[1]) @@ -40,12 +68,68 @@ } ngx_proxy_protocol_inet6_addrs_t; +typedef union { + struct { + uint32_t src_addr; + uint32_t dst_addr; 
+ uint16_t src_port; + uint16_t dst_port; + } ip4; + struct { + uint8_t src_addr[16]; + uint8_t dst_addr[16]; + uint16_t src_port; + uint16_t dst_port; + } ip6; +} ngx_proxy_protocol_addrs_t; + + +typedef struct { + u_char signature[12]; + uint8_t version_command; + uint8_t family_transport; + uint16_t len; + ngx_proxy_protocol_addrs_t addr; +} ngx_proxy_protocol_v2_header_t; + + +struct ngx_tlv_s { + uint8_t type; + uint8_t length_hi; + uint8_t length_lo; + uint8_t value[0]; +} __attribute__((packed)); + +typedef struct ngx_tlv_s ngx_tlv_t; + + +#if (NGX_STREAM_SSL) +struct ngx_tlv_ssl_s { + ngx_tlv_t tlv; + uint8_t client; + uint32_t verify; + uint8_t sub_tlv[]; +} __attribute__((packed)); + +typedef struct ngx_tlv_ssl_s ngx_tlv_ssl_t; +#endif + + static u_char *ngx_proxy_protocol_read_addr(ngx_connection_t *c, u_char *p, u_char *last, ngx_str_t *addr); static u_char *ngx_proxy_protocol_read_port(u_char *p, u_char *last, in_port_t *port, u_char sep); static u_char *ngx_proxy_protocol_v2_read(ngx_connection_t *c, u_char *buf, u_char *last); +static u_char *ngx_proxy_protocol_v2_write(ngx_connection_t *c, u_char *buf, + u_char *last); +#if (NGX_HAVE_INET6) +static void ngx_v4tov6(struct in6_addr *sin6_addr, struct sockaddr *addr); +#endif +#if (NGX_STREAM_SSL) +static u_char *ngx_copy_tlv(u_char *pos, u_char *last, u_char type, + u_char *value, uint16_t value_len); +#endif u_char * @@ -223,7 +307,8 @@ u_char * -ngx_proxy_protocol_write(ngx_connection_t *c, u_char *buf, u_char *last) +ngx_proxy_protocol_write(ngx_connection_t *c, u_char *buf, u_char *last, + ngx_uint_t pp_version) { ngx_uint_t port, lport; @@ -235,6 +320,10 @@ return NULL; } + if (pp_version == 2) { + return ngx_proxy_protocol_v2_write(c, buf, last); + } + switch (c->sockaddr->sa_family) { case AF_INET: @@ -420,3 +509,344 @@ return end; } + + +static u_char * +ngx_proxy_protocol_v2_write(ngx_connection_t *c, u_char *buf, u_char *last) +{ + struct sockaddr *src, *dst; + ngx_proxy_protocol_v2_header_t 
*header; +#if (NGX_HAVE_INET6) + struct in6_addr v6_tmp; + ngx_int_t v6_used; +#endif +#if (NGX_STREAM_SSL) + ngx_tlv_ssl_t *tlv; + u_char *value, *pos; + u_char kbuf[100]; + const unsigned char *data; + unsigned int data_len; + + X509 *crt; + EVP_PKEY *key; + const ASN1_OBJECT *algorithm; + const char *s; + + long rc; + size_t tlv_len; +#endif + size_t len; + + header = (ngx_proxy_protocol_v2_header_t *) buf; + + header->len = 0; + + src = c->sockaddr; + dst = c->local_sockaddr; + + len = 0; + +#if (NGX_HAVE_INET6) + v6_used = 0; +#endif + + ngx_memcpy(header->signature, NGX_PROXY_PROTOCOL_V2_SIG, + NGX_PROXY_PROTOCOL_V2_SIG_LEN); + + header->version_command = NGX_PROXY_PROTOCOL_V2_CMD_PROXY; + header->family_transport = NGX_PROXY_PROTOCOL_V2_TRANS_STREAM; + + /** Addrs */ + + switch (src->sa_family) { + + case AF_INET: + + if (dst->sa_family == AF_INET) { + + header->addr.ip4.src_addr = + ((struct sockaddr_in *) src)->sin_addr.s_addr; + header->addr.ip4.src_port = ((struct sockaddr_in *) src)->sin_port; + } +#if (NGX_HAVE_INET6) + else /** dst == AF_INET6 */{ + + ngx_v4tov6(&v6_tmp, src); + ngx_memcpy(header->addr.ip6.src_addr, &v6_tmp, 16); + header->addr.ip6.src_port = ((struct sockaddr_in *) src)->sin_port; + } +#endif + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + v6_used = 1; + + ngx_memcpy(header->addr.ip6.src_addr, + &((struct sockaddr_in6 *) src)->sin6_addr, 16); + header->addr.ip6.src_port = ((struct sockaddr_in6 *) src)->sin6_port; + + break; +#endif + + default: + ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, + "PROXY protocol v2 unsupported src address family %ui", + src->sa_family); + goto unspec; + }; + + switch (dst->sa_family) { + case AF_INET: + + if (src->sa_family == AF_INET) { + + header->addr.ip4.dst_addr = + ((struct sockaddr_in *) dst)->sin_addr.s_addr; + header->addr.ip4.dst_port = ((struct sockaddr_in *) dst)->sin_port; + } +#if (NGX_HAVE_INET6) + else /** src == AF_INET6 */{ + + ngx_v4tov6(&v6_tmp, dst); + 
ngx_memcpy(header->addr.ip6.dst_addr, &v6_tmp, 16); + header->addr.ip6.dst_port = ((struct sockaddr_in *) dst)->sin_port; + + } +#endif + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + v6_used = 1; + + ngx_memcpy(header->addr.ip6.dst_addr, + &((struct sockaddr_in6 *) dst)->sin6_addr, 16); + header->addr.ip6.dst_port = ((struct sockaddr_in6 *) dst)->sin6_port; + + break; +#endif + + default: + ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, + "PROXY protocol v2 unsupported dest address family %ui", + dst->sa_family); + goto unspec; + } + +#if (NGX_HAVE_INET6) + if (!v6_used) { + header->family_transport |= NGX_PROXY_PROTOCOL_V2_FAM_INET; + len = NGX_PROXY_PROTOCOL_V2_HDR_LEN_INET; + + } else { + header->family_transport |= NGX_PROXY_PROTOCOL_V2_FAM_INET6; + len = NGX_PROXY_PROTOCOL_V2_HDR_LEN_INET6; + + } +#else + header->family_transport |= NGX_PROXY_PROTOCOL_V2_FAM_INET; + len = NGX_PROXY_PROTOCOL_V2_HDR_LEN_INET; +#endif + + /** SSL TLVs */ + +#if (NGX_STREAM_SSL) + + data = NULL; + data_len = 0; + + tlv = (ngx_tlv_ssl_t *) (buf + len); + ngx_memzero(tlv, sizeof(ngx_tlv_ssl_t)); + + tlv->tlv.type = NGX_PROXY_PROTOCOL_V2_TYPE_SSL; + pos = buf + len + sizeof(ngx_tlv_ssl_t); + + tlv->client |= NGX_PROXY_PROTOCOL_V2_CLIENT_SSL; + + if (c->ssl != NULL) { + +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + SSL_get0_alpn_selected(c->ssl->connection, &data, &data_len); + +#ifdef TLSEXT_TYPE_next_proto_neg + if (data_len == 0) { + SSL_get0_next_proto_negotiated(c->ssl->connection, + &data, &data_len); + } +#endif + +#else /* TLSEXT_TYPE_next_proto_neg */ + SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &data_len); +#endif + + if (data_len) { + + pos = ngx_copy_tlv(pos, last, + NGX_PROXY_PROTOCOL_V2_TYPE_ALPN, + (u_char *) data, (uint16_t) data_len); + if (pos == NULL) { + return NULL; + } + } + + value = (u_char *) SSL_get_version(c->ssl->connection); + if (value != NULL) { + + pos = ngx_copy_tlv(pos, last, + 
NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_VERSION, + value, ngx_strlen(value)); + if (pos == NULL) { + return NULL; + } + } + + crt = SSL_get_peer_certificate(c->ssl->connection); + if (crt != NULL) { + + tlv->client |= NGX_PROXY_PROTOCOL_V2_CLIENT_CERT_SESS; + + rc = SSL_get_verify_result(c->ssl->connection); + tlv->verify = htonl(rc); + + if (rc == X509_V_OK) { + + if (ngx_ssl_ocsp_get_status(c, &s) == NGX_OK) { + tlv->client |= NGX_PROXY_PROTOCOL_V2_CLIENT_CERT_CONN; + } + } + + X509_free(crt); + } + + crt = SSL_get_certificate(c->ssl->connection); + if (crt != NULL) { + + key = X509_get_pubkey(crt); + + /** Key */ + if (key != NULL) { + + switch (EVP_PKEY_base_id(key)) { + case EVP_PKEY_RSA: + value = (u_char *) "RSA"; + break; + case EVP_PKEY_EC: + value = (u_char *) "EC"; + break; + case EVP_PKEY_DSA: + value = (u_char *) "DSA"; + break; + default: + value = NULL; + break; + } + + if (value != NULL) { + + value = ngx_snprintf(kbuf, sizeof(kbuf) - 1, "%s%d%Z", + value, EVP_PKEY_bits(key)); + + pos = ngx_copy_tlv(pos, last, + NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_KEY_ALG, + kbuf, ngx_strlen(kbuf)); + } + + EVP_PKEY_free(key); + + if (pos == NULL) { + return NULL; + } + } + + /* ALG */ + X509_ALGOR_get0(&algorithm, NULL, NULL, X509_get0_tbs_sigalg(crt)); + value = (u_char *) OBJ_nid2sn(OBJ_obj2nid(algorithm)); + + if (value != NULL) { + + pos = ngx_copy_tlv(pos, last, + NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_SIG_ALG, + value, ngx_strlen(value)); + if (pos == NULL) { + return NULL; + } + } + } + + value = (u_char *) SSL_get_cipher_name(c->ssl->connection); + if (value != NULL) { + + pos = ngx_copy_tlv(pos, last, + NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_CIPHER, + value, ngx_strlen(value)); + if (pos == NULL) { + return NULL; + } + } + } + + tlv_len = pos - (buf + len); + + tlv->tlv.length_hi = (uint16_t) (tlv_len - sizeof(ngx_tlv_t)) >> 8; + tlv->tlv.length_lo = (uint16_t) (tlv_len - sizeof(ngx_tlv_t)) & 0x00ff; + + len = len + tlv_len; + +#endif + + header->len = htons(len - 
NGX_PROXY_PROTOCOL_V2_HDR_LEN); + return buf + len; + +unspec: + header->family_transport |= NGX_PROXY_PROTOCOL_V2_FAM_UNSPEC; + header->len = 0; + + return buf + NGX_PROXY_PROTOCOL_V2_HDR_LEN; +} + + +#if (NGX_HAVE_INET6) +static void +ngx_v4tov6(struct in6_addr *sin6_addr, struct sockaddr *addr) +{ + static const char rfc4291[] = { 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0xFF, 0xFF }; + + struct in_addr tmp_addr, *sin_addr; + + sin_addr = &((struct sockaddr_in *) addr)->sin_addr; + + tmp_addr.s_addr = sin_addr->s_addr; + ngx_memcpy(sin6_addr->s6_addr, rfc4291, sizeof(rfc4291)); + ngx_memcpy(sin6_addr->s6_addr + 12, &tmp_addr.s_addr, 4); +} +#endif + + +#if (NGX_STREAM_SSL) + +static u_char * +ngx_copy_tlv(u_char *pos, u_char *last, u_char type, + u_char *value, uint16_t value_len) +{ + ngx_tlv_t *tlv; + + if (last - pos < (long) sizeof(*tlv)) { + return NULL; + } + + tlv = (ngx_tlv_t *) pos; + + tlv->type = type; + tlv->length_hi = (uint16_t) value_len >> 8; + tlv->length_lo = (uint16_t) value_len & 0x00ff; + ngx_memcpy(tlv->value, value, value_len); + + return pos + (value_len + sizeof(*tlv)); +} + +#endif + + diff -r 82e174e47663 src/core/ngx_proxy_protocol.h --- a/src/core/ngx_proxy_protocol.h Thu Apr 08 00:16:30 2021 +0300 +++ b/src/core/ngx_proxy_protocol.h Fri Apr 09 16:10:29 2021 +0300 @@ -13,7 +13,7 @@ #include -#define NGX_PROXY_PROTOCOL_MAX_HEADER 107 +#define NGX_PROXY_PROTOCOL_MAX_HEADER 214 struct ngx_proxy_protocol_s { @@ -27,7 +27,7 @@ u_char *ngx_proxy_protocol_read(ngx_connection_t *c, u_char *buf, u_char *last); u_char *ngx_proxy_protocol_write(ngx_connection_t *c, u_char *buf, - u_char *last); + u_char *last, ngx_uint_t pp_version); #endif /* _NGX_PROXY_PROTOCOL_H_INCLUDED_ */ diff -r 82e174e47663 src/stream/ngx_stream_proxy_module.c --- a/src/stream/ngx_stream_proxy_module.c Thu Apr 08 00:16:30 2021 +0300 +++ b/src/stream/ngx_stream_proxy_module.c Fri Apr 09 16:10:29 2021 +0300 @@ -30,7 +30,7 @@ ngx_uint_t responses; 
ngx_uint_t next_upstream_tries; ngx_flag_t next_upstream; - ngx_flag_t proxy_protocol; + ngx_uint_t proxy_protocol; ngx_stream_upstream_local_t *local; ngx_flag_t socket_keepalive; @@ -121,6 +121,14 @@ #endif +static ngx_conf_enum_t ngx_stream_proxy_protocol[] = { + { ngx_string("off"), 0 }, + { ngx_string("on"), 1 }, + { ngx_string("v2"), 2 }, + { ngx_null_string, 0 } +}; + + static ngx_conf_deprecated_t ngx_conf_deprecated_proxy_downstream_buffer = { ngx_conf_deprecated, "proxy_downstream_buffer", "proxy_buffer_size" }; @@ -239,10 +247,10 @@ { ngx_string("proxy_protocol"), NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_FLAG, - ngx_conf_set_flag_slot, + ngx_conf_set_enum_slot, NGX_STREAM_SRV_CONF_OFFSET, offsetof(ngx_stream_proxy_srv_conf_t, proxy_protocol), - NULL }, + &ngx_stream_proxy_protocol }, #if (NGX_STREAM_SSL) @@ -891,7 +899,8 @@ cl->buf->pos = p; - p = ngx_proxy_protocol_write(c, p, p + NGX_PROXY_PROTOCOL_MAX_HEADER); + p = ngx_proxy_protocol_write(c, p, p + NGX_PROXY_PROTOCOL_MAX_HEADER, + u->proxy_protocol); if (p == NULL) { ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR); return; @@ -942,14 +951,15 @@ ngx_log_debug0(NGX_LOG_DEBUG_STREAM, c->log, 0, "stream proxy send PROXY protocol header"); - p = ngx_proxy_protocol_write(c, buf, buf + NGX_PROXY_PROTOCOL_MAX_HEADER); + u = s->upstream; + + p = ngx_proxy_protocol_write(c, buf, buf + NGX_PROXY_PROTOCOL_MAX_HEADER, + u->proxy_protocol); if (p == NULL) { ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR); return NGX_ERROR; } - u = s->upstream; - pc = u->peer.connection; size = p - buf; @@ -1998,7 +2008,7 @@ conf->responses = NGX_CONF_UNSET_UINT; conf->next_upstream_tries = NGX_CONF_UNSET_UINT; conf->next_upstream = NGX_CONF_UNSET; - conf->proxy_protocol = NGX_CONF_UNSET; + conf->proxy_protocol = NGX_CONF_UNSET_UINT; conf->local = NGX_CONF_UNSET_PTR; conf->socket_keepalive = NGX_CONF_UNSET; @@ -2053,7 +2063,7 @@ ngx_conf_merge_value(conf->next_upstream, prev->next_upstream, 
1); - ngx_conf_merge_value(conf->proxy_protocol, prev->proxy_protocol, 0); + ngx_conf_merge_uint_value(conf->proxy_protocol, prev->proxy_protocol, 0); ngx_conf_merge_ptr_value(conf->local, prev->local, NULL); diff -r 82e174e47663 src/stream/ngx_stream_upstream.h --- a/src/stream/ngx_stream_upstream.h Thu Apr 08 00:16:30 2021 +0300 +++ b/src/stream/ngx_stream_upstream.h Fri Apr 09 16:10:29 2021 +0300 @@ -141,7 +141,7 @@ ngx_stream_upstream_resolved_t *resolved; ngx_stream_upstream_state_t *state; unsigned connected:1; - unsigned proxy_protocol:1; + unsigned proxy_protocol:2; } ngx_stream_upstream_t; -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Apr 9 13:49:57 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 9 Apr 2021 16:49:57 +0300 Subject: [nginx] Changed keepalive_requests default to 1000 (ticket #2155). In-Reply-To: References: Message-ID: Hello! On Thu, Apr 08, 2021 at 07:56:07PM -0700, Piotr Sikora wrote: > Hi Maxim, > > > It turns out no browsers implement HTTP/2 GOAWAY handling properly, and > > large enough number of resources on a page results in failures to load > > some resources. In particular, Chrome seems to experience errors if > > loading of all resources requires more than 1 connection (while it > > is usually able to retry requests at least once, even with 2 connections > > there are occasional failures for some reason), Safari if loading requires > > more than 3 connections, and Firefox if loading requires more than 10 > > connections (can be configured with network.http.request.max-attempts, > > defaults to 10). > > > > It does not seem to be possible to resolve this on nginx side, even strict > > limiting of maximum concurrency does not help, and loading issues seems to > > be triggered by merely queueing of a request for a particular connection. > > The only available mitigation seems to use higher keepalive_requests value. 
> > Instead of blaming browsers, did you consider implementing graceful shutdown > using 2-stage GOAWAY? The process is clearly described in RFC7540, sec. 6.8: > > [...] A server that is attempting to gracefully shut down a > connection SHOULD send an initial GOAWAY frame with the last stream > identifier set to 2^31-1 and a NO_ERROR code. This signals to the > client that a shutdown is imminent and that initiating further > requests is prohibited. After allowing time for any in-flight stream > creation (at least one round-trip time), the server can send another > GOAWAY frame with an updated last stream identifier. This ensures > that a connection can be cleanly shut down without losing requests. > > This is a solved problem, and the solution was pointed out years ago: > http://mailman.nginx.org/pipermail/nginx-devel/2017-August/010439.html > http://mailman.nginx.org/pipermail/nginx-devel/2018-March/010930.html As you can see from the commit log, as well as the details in the ticket, even limiting concurrency does not help, and this means that two-stage GOAWAY would be useless: its only benefit is to make it possible to process in-flight requests without rejecting them. Not to mention that all requests in tests can be easily retried by browsers (and many are actually retried, but not all). Nevertheless, I've tried implementing two-stage GOAWAY as well while working on this, just a quick hack to see if it helps. As expected from the above, it doesn't help. Further, it triggers the bug in Chrome (https://crbug.com/1030255), which basically stops any communication with the server after the first GOAWAY, and does nothing till the connection is closed by the server. That is, a simple approach of sending GOAWAY with 2^31-1 and waiting for keepalive_timeout to expire and then sending the real GOAWAY (or waiting for the client to close the connection) clearly does more harm than good. 
Probably it can be implemented in a way which doesn't hurt Chrome that much, but, given it doesn't help anyway, this wasn't considered. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Fri Apr 9 14:17:49 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 9 Apr 2021 17:17:49 +0300 Subject: [PATCH] Support of proxy v2 protocol for NGINX stream module In-Reply-To: References: Message-ID: Hello! On Fri, Apr 09, 2021 at 04:26:52PM +0300, Vasiliy Soshnikov wrote: [...] > + /** SSL TLVs */ > + > +#if (NGX_STREAM_SSL) > + > + data = NULL; > + data_len = 0; > + > + tlv = (ngx_tlv_ssl_t *) (buf + len); > + ngx_memzero(tlv, sizeof(ngx_tlv_ssl_t)); > + > + tlv->tlv.type = NGX_PROXY_PROTOCOL_V2_TYPE_SSL; > + pos = buf + len + sizeof(ngx_tlv_ssl_t); > + > + tlv->client |= NGX_PROXY_PROTOCOL_V2_CLIENT_SSL; > + > + if (c->ssl != NULL) { > + > +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation > + SSL_get0_alpn_selected(c->ssl->connection, &data, &data_len); > + > +#ifdef TLSEXT_TYPE_next_proto_neg > + if (data_len == 0) { > + SSL_get0_next_proto_negotiated(c->ssl->connection, > + &data, &data_len); > + } > +#endif > + > +#else /* TLSEXT_TYPE_next_proto_neg */ > + SSL_get0_next_proto_negotiated(c->ssl->connection, &data, > &data_len); > +#endif > + > + if (data_len) { > + > + pos = ngx_copy_tlv(pos, last, > + NGX_PROXY_PROTOCOL_V2_TYPE_ALPN, > + (u_char *) data, (uint16_t) data_len); > + if (pos == NULL) { > + return NULL; > + } > + } > + > + value = (u_char *) SSL_get_version(c->ssl->connection); > + if (value != NULL) { > + > + pos = ngx_copy_tlv(pos, last, > + NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_VERSION, > + value, ngx_strlen(value)); > + if (pos == NULL) { > + return NULL; > + } > + } [...] Thanks for the patch. For the record, as discussed privately: this is more or less proof-of-concept for the ticket #1639[1], used for tests with RabbitMQ[2]. 
A committable solution probably needs something similar to proxy_set_header / fastcgi_param to control TLVs sent to the upstream server instead of hardcoding them. [1] https://trac.nginx.org/nginx/ticket/1639 [2] https://www.rabbitmq.com/networking.html#proxy-protocol -- Maxim Dounin http://mdounin.ru/ From vasiliy.soshnikov at gmail.com Fri Apr 9 15:08:37 2021 From: vasiliy.soshnikov at gmail.com (Vasiliy Soshnikov) Date: Fri, 9 Apr 2021 18:08:37 +0300 Subject: [PATCH] Support of proxy v2 protocol for NGINX stream module In-Reply-To: References: Message-ID: Hello, Yeah. The proposed design would work well for me. On Fri, Apr 9, 2021 at 5:17 PM Maxim Dounin wrote: > Hello! > > On Fri, Apr 09, 2021 at 04:26:52PM +0300, Vasiliy Soshnikov wrote: > > [...] > > > + /** SSL TLVs */ > > + > > +#if (NGX_STREAM_SSL) > > + > > + data = NULL; > > + data_len = 0; > > + > > + tlv = (ngx_tlv_ssl_t *) (buf + len); > > + ngx_memzero(tlv, sizeof(ngx_tlv_ssl_t)); > > + > > + tlv->tlv.type = NGX_PROXY_PROTOCOL_V2_TYPE_SSL; > > + pos = buf + len + sizeof(ngx_tlv_ssl_t); > > + > > + tlv->client |= NGX_PROXY_PROTOCOL_V2_CLIENT_SSL; > > + > > + if (c->ssl != NULL) { > > + > > +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation > > + SSL_get0_alpn_selected(c->ssl->connection, &data, &data_len); > > + > > +#ifdef TLSEXT_TYPE_next_proto_neg > > + if (data_len == 0) { > > + SSL_get0_next_proto_negotiated(c->ssl->connection, > > + &data, &data_len); > > + } > > +#endif > > + > > +#else /* TLSEXT_TYPE_next_proto_neg */ > > + SSL_get0_next_proto_negotiated(c->ssl->connection, &data, > > &data_len); > > +#endif > > + > > + if (data_len) { > > + > > + pos = ngx_copy_tlv(pos, last, > > + NGX_PROXY_PROTOCOL_V2_TYPE_ALPN, > > + (u_char *) data, (uint16_t) data_len); > > + if (pos == NULL) { > > + return NULL; > > + } > > + } > > + > > + value = (u_char *) SSL_get_version(c->ssl->connection); > > + if (value != NULL) { > > + > > + pos = ngx_copy_tlv(pos, last, > > + 
NGX_PROXY_PROTOCOL_V2_SUBTYPE_SSL_VERSION, > > + value, ngx_strlen(value)); > > + if (pos == NULL) { > > + return NULL; > > + } > > + } > > [...] > > Thanks for the patch. > > For the record, as discussed privately: this is more or less > proof-of-concept for the ticket #1639[1], used for tests with > RabbitMQ[2]. A committable solution probably needs something similar > to proxy_set_header / fastcgi_param to control TLVs sent to the > upstream server instead of hardcoding them. > > [1] https://trac.nginx.org/nginx/ticket/1639 > [2] https://www.rabbitmq.com/networking.html#proxy-protocol > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Tue Apr 13 15:34:25 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Apr 2021 15:34:25 +0000 Subject: [nginx] nginx-1.19.10-RELEASE Message-ID: details: https://hg.nginx.org/nginx/rev/ffcbb9980ee2 branches: changeset: 7823:ffcbb9980ee2 user: Maxim Dounin date: Tue Apr 13 18:13:58 2021 +0300 description: nginx-1.19.10-RELEASE diffstat: docs/xml/nginx/changes.xml | 43 +++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 43 insertions(+), 0 deletions(-) diffs (53 lines): diff -r 82e174e47663 -r ffcbb9980ee2 docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml Thu Apr 08 00:16:30 2021 +0300 +++ b/docs/xml/nginx/changes.xml Tue Apr 13 18:13:58 2021 +0300 @@ -5,6 +5,49 @@ + + + + +у директивы keepalive_requests значение по умолчанию изменено на 1000. + + +the default value of the "keepalive_requests" directive was changed to 1000. + + + + + +директива keepalive_time. + + +the "keepalive_time" directive. + + + + + +переменная $connection_time. + + +the $connection_time variable. + + + + + +при использовании zlib-ng +в логах появлялись сообщения
"gzip filter failed to use preallocated memory". + + +"gzip filter failed to use preallocated memory" alerts appeared in logs +when using zlib-ng. + + + + + + From mdounin at mdounin.ru Tue Apr 13 15:34:28 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Apr 2021 15:34:28 +0000 Subject: [nginx] release-1.19.10 tag Message-ID: details: https://hg.nginx.org/nginx/rev/b56c45e3bd50 branches: changeset: 7824:b56c45e3bd50 user: Maxim Dounin date: Tue Apr 13 18:13:59 2021 +0300 description: release-1.19.10 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r ffcbb9980ee2 -r b56c45e3bd50 .hgtags --- a/.hgtags Tue Apr 13 18:13:58 2021 +0300 +++ b/.hgtags Tue Apr 13 18:13:59 2021 +0300 @@ -459,3 +459,4 @@ f618488eb769e0ed74ef0d93cd118d2ad79ef94d 3fa6e2095a7a51acc630517e1c27a7b7ac41f7b3 release-1.19.7 8c65d21464aaa5923775f80c32474adc7a320068 release-1.19.8 da571b8eaf8f30f36c43b3c9b25e01e31f47149c release-1.19.9 +ffcbb9980ee2bad27b4d7b1cd680b14ff47b29aa release-1.19.10
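Taken together, the changes listed in this release — the raised keepalive_requests default, the new keepalive_time directive on both the client and upstream sides, and the $connection_time variable — can be exercised with a configuration fragment along these lines (a sketch only; the timeout values and log format are illustrative assumptions, not recommendations):

```nginx
http {
    # Default raised from 100 to 1000 in this release; raise further if a
    # single page pulls in thousands of resources over one HTTP/2 connection.
    keepalive_requests 1000;

    # New in 1.19.10: closes a keepalive connection after this much total
    # elapsed time, regardless of how many requests it has served.
    keepalive_time 1h;

    # New in 1.19.10: $connection_time, seconds since the connection was
    # established, with millisecond resolution.
    log_format timing '$remote_addr $connection $connection_time "$request"';

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 16;

        # Upstream-side counterparts, bounding upstream keepalive
        # connections by request count and by total lifetime.
        keepalive_requests 1000;
        keepalive_time 1h;
    }

    server {
        listen 80;
        access_log /var/log/nginx/timing.log timing;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://backend;
        }
    }
}
```

The upstream-side keepalive_time here covers the use case of the "keepalive_max_connection_duration" patch discussed earlier in this digest.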