From jan.prachar at gmail.com Wed Jan 3 18:53:00 2018
From: jan.prachar at gmail.com (Jan Prachař)
Date: Wed, 03 Jan 2018 19:53:00 +0100
Subject: [PATCH] Chunked filter: check if ctx is null
Message-ID: <1515005580.31375.45.camel@gmail.com>

There exists a path which brings you to the body filter in the chunked
filter module while the module ctx is NULL, which results in a segfault.

If, while piping a chunked response from upstream to downstream, both an
upstream and a downstream error happen, an internal redirect to a named
location is performed (according to the error_page directive) and the
modules' contexts are cleared. If you have a lua handler in that
location, it starts sending a body, because the header was already sent.
A crash in the chunked filter module follows, because ctx is NULL.

Maybe there is also a problem in the lua module and it should call
header filters first. Also maybe nginx should not perform an internal
redirect if part of the body was already sent.

But better safe than sorry :) I found that the same check is present in
the body filters of other core modules too.

---
 nginx/src/http/modules/ngx_http_chunked_filter_module.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/nginx/src/http/modules/ngx_http_chunked_filter_module.c b/nginx/src/http/modules/ngx_http_chunked_filter_module.c
index 4d6fd3eed..c3d173b20 100644
--- a/nginx/src/http/modules/ngx_http_chunked_filter_module.c
+++ b/nginx/src/http/modules/ngx_http_chunked_filter_module.c
@@ -116,6 +116,9 @@ ngx_http_chunked_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
     }

     ctx = ngx_http_get_module_ctx(r, ngx_http_chunked_filter_module);
+    if (ctx == NULL) {
+        return ngx_http_next_body_filter(r, in);
+    }

     out = NULL;
     ll = &out;

From mdounin at mdounin.ru Thu Jan 4 00:42:27 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 4 Jan 2018 03:42:27 +0300
Subject: [PATCH] Chunked filter: check if ctx is null
In-Reply-To: <1515005580.31375.45.camel@gmail.com>
References: <1515005580.31375.45.camel@gmail.com>
Message-ID: <20180104004227.GR34136@mdounin.ru>

Hello!

On Wed, Jan 03, 2018 at 07:53:00PM +0100, Jan Prachař wrote:

> There exists a path which brings you to the body filter in the chunked
> filter module while the module ctx is NULL, which results in a
> segfault.
>
> If, while piping a chunked response from upstream to downstream, both
> an upstream and a downstream error happen, an internal redirect to a
> named location is performed (according to the error_page directive)
> and the modules' contexts are cleared. If you have a lua handler in
> that location, it starts sending a body, because the header was
> already sent. A crash in the chunked filter module follows, because
> ctx is NULL.
>
> Maybe there is also a problem in the lua module and it should call
> header filters first. Also maybe nginx should not perform an internal
> redirect if part of the body was already sent.
>
> But better safe than sorry :) I found that the same check is present
> in the body filters of other core modules too.

Trying to fix the chunked filter to tolerate such incorrect behaviour
looks like a bad idea. We can't reasonably assume all filters are
prepared for this. And even if we were able to modify them all - if the
connection remains open in such a situation, the resulting mess at the
protocol level will likely cause other problems, including security
ones.

As such, the root cause should be fixed instead.
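For context on why ctx can be NULL at this point: when error_page
triggers an internal redirect, nginx wipes all module contexts. The
relevant step in ngx_http_internal_redirect() (and its named-location
counterpart) looks roughly like this in the nginx source:

    /* clear the modules contexts */
    ngx_memzero(r->ctx, sizeof(void *) * ngx_http_max_module);

Any body filter that runs after such a redirect, without its header
filter re-creating the context, will therefore see ctx == NULL.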
To catch cases when a duplicate response is returned after the header
was already sent, we have a dedicated check in the
ngx_http_send_header() function, see this commit for details:

http://hg.nginx.org/nginx/rev/03ff14058272

Trying to bypass this check is a bad idea. The same applies to
conditionally sending headers based on the r->header_sent flag, as it
will mean that the check will be bypassed. This is what the lua module
seems to be doing, and it should be fixed to avoid doing this.

The other part of the equation is how and why error_page is called
after the header was already sent. If you know a scenario where
error_page can be called with the header already sent, you may want to
focus on reproducing and fixing this. Normally this is expected to
result in the "header already sent" alerts produced by the check
discussed above.

-- 
Maxim Dounin
http://mdounin.ru/

From jan at prachar.eu Thu Jan 4 22:08:11 2018
From: jan at prachar.eu (Jan Prachar)
Date: Thu, 4 Jan 2018 23:08:11 +0100
Subject: [PATCH] Chunked filter: check if ctx is null
In-Reply-To: <20180104004227.GR34136@mdounin.ru>
References: <1515005580.31375.45.camel@gmail.com> <20180104004227.GR34136@mdounin.ru>
Message-ID: 

Hello, thank you for the response!

On Thu, 2018-01-04 at 03:42 +0300, Maxim Dounin wrote:
> Hello!
>
> On Wed, Jan 03, 2018 at 07:53:00PM +0100, Jan Prachař wrote:
>
> To catch cases when a duplicate response is returned after the
> header was already sent, we have a dedicated check in the
> ngx_http_send_header() function, see this commit for details:
>
> http://hg.nginx.org/nginx/rev/03ff14058272
>
> Trying to bypass this check is a bad idea. The same applies to
> conditionally sending headers based on the r->header_sent flag,
> as it will mean that the check will be bypassed. This is what the
> lua module seems to be doing, and it should be fixed to avoid
> doing this.

The lua module checks r->header_sent in the function
ngx_http_lua_send_header_if_needed(), which is called with every
output. See
https://github.com/openresty/lua-nginx-module/commit/235875b5c6afd4961181fa9ead9c167dc865e737

So you suggest that they should have their own flag (like they already
had - ctx->headers_sent) and always call the ngx_http_send_header()
function if this flag is not set?

> The other part of the equation is how and why error_page is called
> after the header was already sent. If you know a scenario where
> error_page can be called with the header already sent, you may
> want to focus on reproducing and fixing this. Normally this is
> expected to result in the "header already sent" alerts produced by
> the check discussed above.

On the nginx side it is caused by this:

http://hg.nginx.org/nginx/rev/ad3f342f14ba046c

If writing to the client returns an error and thus
u->pipe->downstream_error is 1, and then reading from upstream fails
and thus u->pipe->upstream_error is 1,
ngx_http_upstream_finalize_request() is then called with
rc=NGX_HTTP_BAD_GATEWAY, where thanks to the above commit the
ngx_http_finalize_request() function is called also with
rc=NGX_HTTP_BAD_GATEWAY and thus error_page is called (if it is
configured for the 502 status).

I think that the ngx_http_finalize_request() function should be called
with rc=NGX_ERROR in this case.

-- 
Jan Prachar
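For reference, the branch under discussion inside
ngx_http_upstream_finalize_request() - reconstructed here from the "-"
lines of the patch Maxim posts later in the thread - currently reads:

    if (!u->header_sent
        || rc == NGX_HTTP_REQUEST_TIME_OUT
        || rc == NGX_HTTP_CLIENT_CLOSED_REQUEST
        || (u->pipe && u->pipe->downstream_error))
    {
        ngx_http_finalize_request(r, rc);
        return;
    }

With u->pipe->downstream_error set, a special rc such as
NGX_HTTP_BAD_GATEWAY is passed straight to ngx_http_finalize_request()
even though the header was already sent, which is what allows the
error_page redirect to run.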
From weixu365 at gmail.com Fri Jan 5 04:53:46 2018
From: weixu365 at gmail.com (Wei Xu)
Date: Fri, 5 Jan 2018 15:53:46 +1100
Subject: Fwd: [ module ] Add http upstream keep alive timeout parameter
In-Reply-To: <20171122160016.GB78325@mdounin.ru>
References: <20171109170702.GH26836@mdounin.ru> <20171113194946.GR26836@mdounin.ru> <20171122160016.GB78325@mdounin.ru>
Message-ID: 

Hi,

Is it possible to merge the upstream keep alive feature first? Because
it's a valuable and simple patch.

We're using React server rendering, and by adding Nginx as the reverse
proxy on each server, our AWS EC2 instance count was reduced by 25%,
from 43 to 27-37 C4.Large instances.

I wrote a detailed article to explain what happened and why it works at:
https://theantway.com/2017/12/metrics-driven-development-how-did-i-reduced-aws-ec2-costs-to-27-and-improved-performance/

The only problem now is we are still using the custom patched version,
which makes it *difficult to share the solution with other teams*. So
back to the initial question, is it possible to merge this feature
first? You can create separate patches if you need to add more features
later.

Regards
Wei

On Thu, Nov 23, 2017 at 3:00 AM, Maxim Dounin wrote:

> Hello!
>
> On Wed, Nov 22, 2017 at 05:31:25PM +1100, Wei Xu wrote:
>
> > Hi,
> >
> > Is there any place to view the status of currently proposed patches?
> > I'm not sure if this patch has been accepted, is still waiting, or
> > was rejected?
> >
> > In order to avoid errors in production, I'm running the patched
> > version now. But I think it would be better to run the official one,
> > and also I can introduce this solution for 'Connection reset by
> > peer' errors to other teams.
>
> The patch in question is sitting in my patch queue waiting for
> further work - I am considering introducing keepalive_requests at the
> same time, and probably $upstream_connection and
> $upstream_connection_requests variables.
>
> --
> Maxim Dounin
> http://mdounin.ru/

From mdounin at mdounin.ru Fri Jan 5 05:41:52 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 5 Jan 2018 08:41:52 +0300
Subject: [PATCH] Chunked filter: check if ctx is null
In-Reply-To: 
References: <1515005580.31375.45.camel@gmail.com> <20180104004227.GR34136@mdounin.ru>
Message-ID: <20180105054152.GV34136@mdounin.ru>

Hello!

On Thu, Jan 04, 2018 at 11:08:11PM +0100, Jan Prachar wrote:

> Hello, thank you for the response!
>
> On Thu, 2018-01-04 at 03:42 +0300, Maxim Dounin wrote:
> > Hello!
> >
> > On Wed, Jan 03, 2018 at 07:53:00PM +0100, Jan Prachař wrote:
> >
> > To catch cases when a duplicate response is returned after the
> > header was already sent, we have a dedicated check in the
> > ngx_http_send_header() function, see this commit for details:
> >
> > http://hg.nginx.org/nginx/rev/03ff14058272
> >
> > Trying to bypass this check is a bad idea. The same applies to
> > conditionally sending headers based on the r->header_sent flag,
> > as it will mean that the check will be bypassed. This is what the
> > lua module seems to be doing, and it should be fixed to avoid
> > doing this.
>
> The lua module checks r->header_sent in the function
> ngx_http_lua_send_header_if_needed(), which is called with every
> output. See
> https://github.com/openresty/lua-nginx-module/commit/235875b5c6afd4961181fa9ead9c167dc865e737
>
> So you suggest that they should have their own flag (like they already
> had - ctx->headers_sent) and always call the ngx_http_send_header()
> function if this flag is not set?

Yes.

> > The other part of the equation is how and why error_page is called
> > after the header was already sent. If you know a scenario where
> > error_page can be called with the header already sent, you may
> > want to focus on reproducing and fixing this. Normally this is
> > expected to result in the "header already sent" alerts produced by
> > the check discussed above.
>
> On the nginx side it is caused by this:
>
> http://hg.nginx.org/nginx/rev/ad3f342f14ba046c
>
> If writing to the client returns an error and thus
> u->pipe->downstream_error is 1, and then reading from upstream fails
> and thus u->pipe->upstream_error is 1,
> ngx_http_upstream_finalize_request() is then called with
> rc=NGX_HTTP_BAD_GATEWAY, where thanks to the above commit the
> ngx_http_finalize_request() function is called also with
> rc=NGX_HTTP_BAD_GATEWAY and thus error_page is called (if it is
> configured for the 502 status).
>
> I think that the ngx_http_finalize_request() function should be called
> with rc=NGX_ERROR in this case.

I agree, the code in question looks incorrect. As long as the header
is already sent, it shouldn't call ngx_http_finalize_request() with
NGX_HTTP_BAD_GATEWAY or any other special response code (except maybe
NGX_HTTP_REQUEST_TIME_OUT and NGX_HTTP_CLIENT_CLOSED_REQUEST, which are
explicitly handled in ngx_http_finalize_request()).

The exact scenario described won't work though. If writing to the
client returns an error, c->error will be set, and
ngx_http_finalize_request() won't call error_page processing.

I was able to trigger the alert using a HEAD request, which results in
u->pipe->downstream_error being set in
ngx_http_upstream_send_response() without an actual connection error
(http://hg.nginx.org/nginx-tests/rev/b17f27fa9081). The same situation
might also happen due to various other errors - for example, if writing
to the client times out.

Please try the following patch:

# HG changeset patch
# User Maxim Dounin
# Date 1515130723 -10800
#      Fri Jan 05 08:38:43 2018 +0300
# Node ID 8f0bf141818d82ba9754559c4cb2472554e64e09
# Parent 6d2e92acb013224e6ef2c71c9e61ab07f0b03271
Upstream: fixed "header already sent" alerts on backend errors.

Following ad3f342f14ba046c (1.9.13), it is possible that a request
where the header was already sent will be finalized with
NGX_HTTP_BAD_GATEWAY or NGX_HTTP_GATEWAY_TIMEOUT, triggering an attempt
to return an additional error response and the "header already sent"
alert as a result.

In particular, it is trivial to reproduce the problem with a HEAD
request and caching enabled. With caching enabled nginx will change
HEAD to GET and will set u->pipe->downstream_error to suppress sending
the response body to the client. When a backend-related error occurs
(for example, proxy_read_timeout expires),
ngx_http_upstream_finalize_request() will be called with
NGX_HTTP_GATEWAY_TIMEOUT. After ad3f342f14ba046c this will result in
ngx_http_finalize_request(NGX_HTTP_GATEWAY_TIMEOUT).

Fix is to move u->pipe->downstream_error handling to a later point,
where all special response codes are changed to NGX_ERROR.

Reported by Jan Prachar,
http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010737.html.
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -4374,8 +4374,7 @@ ngx_http_upstream_finalize_request(ngx_h

     if (!u->header_sent
         || rc == NGX_HTTP_REQUEST_TIME_OUT
-        || rc == NGX_HTTP_CLIENT_CLOSED_REQUEST
-        || (u->pipe && u->pipe->downstream_error))
+        || rc == NGX_HTTP_CLIENT_CLOSED_REQUEST)
     {
         ngx_http_finalize_request(r, rc);
         return;
@@ -4388,7 +4387,9 @@ ngx_http_upstream_finalize_request(ngx_h
         flush = 1;
     }

-    if (r->header_only) {
+    if (r->header_only
+        || (u->pipe && u->pipe->downstream_error))
+    {
         ngx_http_finalize_request(r, rc);
         return;
     }

-- 
Maxim Dounin
http://mdounin.ru/

From spacewanderlzx at gmail.com Fri Jan 5 09:56:40 2018
From: spacewanderlzx at gmail.com (Zexuan Luo)
Date: Fri, 5 Jan 2018 17:56:40 +0800
Subject: [PATCH] Core: added const qualifier to ngx_parse_http_time argument
Message-ID: 

# HG changeset patch
# User spacewander
# Date 1515142886 -28800
#      Fri Jan 05 17:01:26 2018 +0800
# Node ID 17d6674fe60421961903d913831d7d19b351bd11
# Parent 6d2e92acb013224e6ef2c71c9e61ab07f0b03271
Core: added const qualifier to ngx_parse_http_time argument

'ngx_parse_http_time(u_char *value, size_t len)' doesn't actually
change the 'value', so it is safe to add a const qualifier to it. With
this change we could pass a const string without hacky '(u_char *)'
casting.

diff -r 6d2e92acb013 -r 17d6674fe604 src/core/ngx_parse_time.c
--- a/src/core/ngx_parse_time.c Thu Dec 28 12:01:05 2017 +0200
+++ b/src/core/ngx_parse_time.c Fri Jan 05 17:01:26 2018 +0800
@@ -12,7 +12,7 @@
 static ngx_uint_t  mday[] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };

 time_t
-ngx_parse_http_time(u_char *value, size_t len)
+ngx_parse_http_time(const u_char *value, size_t len)
 {
     u_char      *p, *end;
     ngx_int_t    month;

diff -r 6d2e92acb013 -r 17d6674fe604 src/core/ngx_parse_time.h
--- a/src/core/ngx_parse_time.h Thu Dec 28 12:01:05 2017 +0200
+++ b/src/core/ngx_parse_time.h Fri Jan 05 17:01:26 2018 +0800
@@ -13,7 +13,7 @@
 #include

-time_t ngx_parse_http_time(u_char *value, size_t len);
+time_t ngx_parse_http_time(const u_char *value, size_t len);

 /* compatibility */
 #define ngx_http_parse_time(value, len)  ngx_parse_http_time(value, len)
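To illustrate the casting the commit message mentions, here is a
minimal sketch (the surrounding function and variable names are made up
for the example, not taken from the patch):

    static time_t
    parse_header_time(const u_char *value, size_t len)
    {
        /* with the old prototype this call needs the "hacky" cast the
         * commit message refers to; with the const-qualified prototype
         * the cast could simply be dropped: */
        return ngx_parse_http_time((u_char *) value, len);
    }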
From jan.prachar at gmail.com Fri Jan 5 12:20:03 2018
From: jan.prachar at gmail.com (Jan Prachař)
Date: Fri, 05 Jan 2018 13:20:03 +0100
Subject: [PATCH] Chunked filter: check if ctx is null
In-Reply-To: <20180105054152.GV34136@mdounin.ru>
References: <1515005580.31375.45.camel@gmail.com> <20180104004227.GR34136@mdounin.ru> <20180105054152.GV34136@mdounin.ru>
Message-ID: <1515154803.31375.68.camel@gmail.com>

On Fri, 2018-01-05 at 08:41 +0300, Maxim Dounin wrote:
> Hello!
>
> On Thu, Jan 04, 2018 at 11:08:11PM +0100, Jan Prachar wrote:
>
> > Hello, thank you for the response!
> >
> > On Thu, 2018-01-04 at 03:42 +0300, Maxim Dounin wrote:
> > > Hello!
> > >
> > > On Wed, Jan 03, 2018 at 07:53:00PM +0100, Jan Prachař wrote:
> > >
> > > To catch cases when a duplicate response is returned after the
> > > header was already sent, we have a dedicated check in the
> > > ngx_http_send_header() function, see this commit for details:
> > >
> > > http://hg.nginx.org/nginx/rev/03ff14058272
> > >
> > > Trying to bypass this check is a bad idea. The same applies to
> > > conditionally sending headers based on the r->header_sent flag,
> > > as it will mean that the check will be bypassed. This is what the
> > > lua module seems to be doing, and it should be fixed to avoid
> > > doing this.
> >
> > The lua module checks r->header_sent in the function
> > ngx_http_lua_send_header_if_needed(), which is called with every
> > output. See
> > https://github.com/openresty/lua-nginx-module/commit/235875b5c6afd4961181fa9ead9c167dc865e737
> >
> > So you suggest that they should have their own flag (like they
> > already had - ctx->headers_sent) and always call the
> > ngx_http_send_header() function if this flag is not set?
>
> Yes.

Thanks. I will report it to the lua module developers.

> > > The other part of the equation is how and why error_page is
> > > called after the header was already sent. If you know a scenario
> > > where error_page can be called with the header already sent, you
> > > may want to focus on reproducing and fixing this. Normally this
> > > is expected to result in the "header already sent" alerts
> > > produced by the check discussed above.
> >
> > On the nginx side it is caused by this:
> >
> > http://hg.nginx.org/nginx/rev/ad3f342f14ba046c
> >
> > If writing to the client returns an error and thus
> > u->pipe->downstream_error is 1, and then reading from upstream
> > fails and thus u->pipe->upstream_error is 1,
> > ngx_http_upstream_finalize_request() is then called with
> > rc=NGX_HTTP_BAD_GATEWAY, where thanks to the above commit the
> > ngx_http_finalize_request() function is called also with
> > rc=NGX_HTTP_BAD_GATEWAY and thus error_page is called (if it is
> > configured for the 502 status).
> >
> > I think that the ngx_http_finalize_request() function should be
> > called with rc=NGX_ERROR in this case.
>
> I agree, the code in question looks incorrect. As long as the header
> is already sent, it shouldn't call ngx_http_finalize_request()
> with NGX_HTTP_BAD_GATEWAY or any other special response code
> (except maybe NGX_HTTP_REQUEST_TIME_OUT and
> NGX_HTTP_CLIENT_CLOSED_REQUEST, which are explicitly handled in
> ngx_http_finalize_request()).
>
> The exact scenario described won't work though. If writing to the
> client returns an error, c->error will be set, and
> ngx_http_finalize_request() won't call error_page processing.
>
> I was able to trigger the alert using a HEAD request, which
> results in u->pipe->downstream_error being set in
> ngx_http_upstream_send_response() without an actual connection
> error (http://hg.nginx.org/nginx-tests/rev/b17f27fa9081). The
> same situation might also happen due to various other errors - for
> example, if writing to the client times out.

You are right - I checked it, and in my scenario it was actually the
client timeout that happened.

> Please try the following patch:

The patch works for me (the request is terminated without an internal
redirect).

-- 
Jan Prachař

From debayang.qdt at qualcommdatacenter.com Fri Jan 5 13:34:56 2018
From: debayang.qdt at qualcommdatacenter.com (debayang.qdt)
Date: Fri, 5 Jan 2018 13:34:56 +0000
Subject: [PATCH] Using worker specific counters for http_stub_status module
In-Reply-To: <5ab36b546338d5cc3476.1515158406@null-8cfdf008883d.ap.qualcomm.com>
References: <5ab36b546338d5cc3476.1515158406@null-8cfdf008883d.ap.qualcomm.com>
Message-ID: 

When the http_stub_status_module is enabled, a performance impact is
seen on some platforms with several worker processes running and an
increased workload.
There is a contention with the atomic updates of the several shared memory counters maintained by this module - which could be eliminated if we maintain worker process specific counters and only sum them up when requested by client. Below patch is an attempt to do so - which bypasses the contention and improves performance on such platforms. # HG changeset patch # User Debayan Ghosh # Date 1515158399 0 # Fri Jan 05 13:19:59 2018 +0000 # Node ID 5ab36b546338d5cc34769e1a70ddc754bfc5e9ea # Parent 6d2e92acb013224e6ef2c71c9e61ab07f0b03271 Using worker specific counters for http_stub_status module. Eliminate the shared memory contention using worker specific counters for http_stub_status module and aggregate them only on lookup. diff -r 6d2e92acb013 -r 5ab36b546338 src/core/ngx_connection.c --- a/src/core/ngx_connection.c Thu Dec 28 12:01:05 2017 +0200 +++ b/src/core/ngx_connection.c Fri Jan 05 13:19:59 2018 +0000 @@ -1211,7 +1211,8 @@ ngx_cycle->reusable_connections_n--; #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_waiting, -1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_waiting, + ngx_process_slot, -1); #endif } @@ -1225,7 +1226,8 @@ ngx_cycle->reusable_connections_n++; #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_waiting, 1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_waiting, + ngx_process_slot, 1); #endif } } diff -r 6d2e92acb013 -r 5ab36b546338 src/event/ngx_event.c --- a/src/event/ngx_event.c Thu Dec 28 12:01:05 2017 +0200 +++ b/src/event/ngx_event.c Fri Jan 05 13:19:59 2018 +0000 @@ -481,7 +481,7 @@ /* cl should be equal to or greater than cache line size */ - cl = 128; + cl = NGX_COUNTER_SLOT_SIZE; size = cl /* ngx_accept_mutex */ + cl /* ngx_connection_counter */ @@ -489,13 +489,13 @@ #if (NGX_STAT_STUB) - size += cl /* ngx_stat_accepted */ - + cl /* ngx_stat_handled */ - + cl /* ngx_stat_requests */ - + cl /* ngx_stat_active */ - + cl /* ngx_stat_reading */ - + cl /* ngx_stat_writing */ - + cl; /* ngx_stat_waiting */ + size += cl * NGX_MAX_PROCESSES /* ngx_stat_accepted */ + + cl * NGX_MAX_PROCESSES /* ngx_stat_handled */ + + cl * NGX_MAX_PROCESSES /* ngx_stat_requests */ + + cl * NGX_MAX_PROCESSES /* ngx_stat_active */ + + cl * NGX_MAX_PROCESSES /* ngx_stat_reading */ + + cl * NGX_MAX_PROCESSES /* ngx_stat_writing */ + + cl * NGX_MAX_PROCESSES; /* ngx_stat_waiting */ #endif @@ -535,13 +535,13 @@ #if (NGX_STAT_STUB) - ngx_stat_accepted = (ngx_atomic_t *) (shared + 3 * cl); - ngx_stat_handled = (ngx_atomic_t *) (shared + 4 * cl); - ngx_stat_requests = (ngx_atomic_t *) (shared + 5 * cl); - ngx_stat_active = (ngx_atomic_t *) (shared + 6 * cl); - ngx_stat_reading = (ngx_atomic_t *) (shared + 7 * cl); - ngx_stat_writing = (ngx_atomic_t *) (shared + 8 * cl); - ngx_stat_waiting = (ngx_atomic_t *) (shared + 9 * cl); + ngx_stat_accepted = (ngx_atomic_t *) (shared + 3 * cl ); + ngx_stat_handled = (ngx_atomic_t *) ((u_char*) ngx_stat_accepted + (cl * NGX_MAX_PROCESSES)); + ngx_stat_requests = (ngx_atomic_t *) ((u_char*) ngx_stat_handled + (cl * NGX_MAX_PROCESSES)); + ngx_stat_active = (ngx_atomic_t *) ((u_char*) ngx_stat_requests + (cl * NGX_MAX_PROCESSES)); + ngx_stat_reading = (ngx_atomic_t *) ((u_char*) ngx_stat_active + (cl * NGX_MAX_PROCESSES)); + ngx_stat_writing = (ngx_atomic_t *) ((u_char*) ngx_stat_reading + (cl * NGX_MAX_PROCESSES)); + ngx_stat_waiting = (ngx_atomic_t *) ((u_char*) ngx_stat_writing + (cl * NGX_MAX_PROCESSES)); #endif diff -r 6d2e92acb013 -r 5ab36b546338 src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c Thu Dec 28 12:01:05 
2017 +0200 +++ b/src/event/ngx_event_accept.c Fri Jan 05 13:19:59 2018 +0000 @@ -135,7 +135,8 @@ } #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_accepted, 1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_accepted, + ngx_process_slot, 1); #endif ngx_accept_disabled = ngx_cycle->connection_n / 8 @@ -155,7 +156,8 @@ c->type = SOCK_STREAM; #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_active, 1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_active, + ngx_process_slot, 1); #endif c->pool = ngx_create_pool(ls->pool_size, ev->log); @@ -262,7 +264,8 @@ c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1); #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_handled, 1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_handled, + ngx_process_slot, 1); #endif if (ls->addr_ntop) { @@ -421,7 +424,8 @@ } #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_accepted, 1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_accepted, + ngx_process_slot, 1); #endif #if (NGX_HAVE_MSGHDR_MSG_CONTROL) @@ -449,7 +453,8 @@ } #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_active, 1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_active, + ngx_process_slot, 1); #endif c->pool = ngx_create_pool(ls->pool_size, ev->log); @@ -589,7 +594,8 @@ c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1); #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_handled, 1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_handled, + ngx_process_slot, 1); #endif if (ls->addr_ntop) { @@ -766,7 +772,8 @@ } #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_active, -1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_active, + ngx_process_slot , -1); #endif } diff -r 6d2e92acb013 -r 5ab36b546338 src/http/modules/ngx_http_stub_status_module.c --- a/src/http/modules/ngx_http_stub_status_module.c Thu Dec 28 12:01:05 2017 +0200 +++ b/src/http/modules/ngx_http_stub_status_module.c Fri Jan 05 13:19:59 2018 +0000 @@ -79,6 +79,17 @@ ngx_http_null_variable }; +static ngx_atomic_int_t +ngx_http_get_aggregated_status(ngx_atomic_t* ctr) { + ngx_atomic_int_t sum = 0; + int i; + for (i = 0; i < NGX_MAX_PROCESSES; i++) { + sum += *(ngx_atomic_t*) ((u_char*) ctr + (i * NGX_COUNTER_SLOT_SIZE)); + } + + return sum; +} static ngx_int_t ngx_http_stub_status_handler(ngx_http_request_t *r) @@ -126,13 +137,14 @@ out.buf = b; out.next = NULL; - ap = *ngx_stat_accepted; - hn = *ngx_stat_handled; - ac = *ngx_stat_active; - rq = *ngx_stat_requests; - rd = *ngx_stat_reading; - wr = *ngx_stat_writing; - wa = *ngx_stat_waiting; + + ap = ngx_http_get_aggregated_status(ngx_stat_accepted); + hn = ngx_http_get_aggregated_status(ngx_stat_handled); + ac = ngx_http_get_aggregated_status(ngx_stat_active); + rq = ngx_http_get_aggregated_status(ngx_stat_requests); + rd = ngx_http_get_aggregated_status(ngx_stat_reading); + wr = ngx_http_get_aggregated_status(ngx_stat_writing); + wa = ngx_http_get_aggregated_status(ngx_stat_waiting); b->last = ngx_sprintf(b->last, "Active connections: %uA \n", ac); diff -r 6d2e92acb013 -r 5ab36b546338 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Thu Dec 28 12:01:05 2017 +0200 +++ b/src/http/ngx_http_request.c Fri Jan 05 13:19:59 2018 +0000 @@ -617,9 +617,11 @@ r->log_handler = ngx_http_log_error_handler; #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_reading, 1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_reading, + ngx_process_slot, 1); r->stat_reading = 1; - (void) ngx_atomic_fetch_add(ngx_stat_requests, 1); + (void) 
ngx_worker_atomic_fetch_add(ngx_stat_requests, + ngx_process_slot, 1); #endif return r; @@ -1935,9 +1937,11 @@ } #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_reading, -1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_reading, + ngx_process_slot, -1); r->stat_reading = 0; - (void) ngx_atomic_fetch_add(ngx_stat_writing, 1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_writing, + ngx_process_slot, 1); r->stat_writing = 1; #endif @@ -3491,11 +3495,13 @@ #if (NGX_STAT_STUB) if (r->stat_reading) { - (void) ngx_atomic_fetch_add(ngx_stat_reading, -1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_reading, + ngx_process_slot, -1); } if (r->stat_writing) { - (void) ngx_atomic_fetch_add(ngx_stat_writing, -1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_writing, + ngx_process_slot, -1); } #endif @@ -3584,7 +3590,8 @@ #endif #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_active, -1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_active, + ngx_process_slot, -1); #endif c->destroyed = 1; diff -r 6d2e92acb013 -r 5ab36b546338 src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Thu Dec 28 12:01:05 2017 +0200 +++ b/src/mail/ngx_mail_handler.c Fri Jan 05 13:19:59 2018 +0000 @@ -838,7 +838,8 @@ #endif #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_active, -1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_active, + ngx_process_slot, -1); #endif c->destroyed = 1; diff -r 6d2e92acb013 -r 5ab36b546338 src/os/unix/ngx_atomic.h --- a/src/os/unix/ngx_atomic.h Thu Dec 28 12:01:05 2017 +0200 +++ b/src/os/unix/ngx_atomic.h Fri Jan 05 13:19:59 2018 +0000 @@ -309,5 +309,9 @@ #define ngx_trylock(lock) (*(lock) == 0 && ngx_atomic_cmp_set(lock, 0, 1)) #define ngx_unlock(lock) *(lock) = 0 +#define NGX_COUNTER_SLOT_SIZE 128 +#define ngx_worker_atomic_fetch_add(value, worker, add) \ + ngx_atomic_fetch_add((ngx_atomic_t*)((u_char*) value + \ + (worker * NGX_COUNTER_SLOT_SIZE)), +add) #endif /* _NGX_ATOMIC_H_INCLUDED_ */ diff -r 6d2e92acb013 -r 5ab36b546338 src/stream/ngx_stream_handler.c --- a/src/stream/ngx_stream_handler.c Thu Dec 28 12:01:05 2017 +0200 +++ b/src/stream/ngx_stream_handler.c Fri Jan 05 13:19:59 2018 +0000 @@ -345,7 +345,8 @@ #endif #if (NGX_STAT_STUB) - (void) ngx_atomic_fetch_add(ngx_stat_active, -1); + (void) ngx_worker_atomic_fetch_add(ngx_stat_active, + ngx_process_slot, -1); #endif pool = c->pool; From mdounin at mdounin.ru Fri Jan 5 21:56:13 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 6 Jan 2018 00:56:13 +0300 Subject: [PATCH] Core: added const qualifier to ngx_parse_http_time argument In-Reply-To: References: Message-ID: <20180105215613.GX34136@mdounin.ru> Hello! On Fri, Jan 05, 2018 at 05:56:40PM +0800, Zexuan Luo wrote: > # HG changeset patch > # User spacewander > # Date 1515142886 -28800 > # Fri Jan 05 17:01:26 2018 +0800 > # Node ID 17d6674fe60421961903d913831d7d19b351bd11 > # Parent 6d2e92acb013224e6ef2c71c9e61ab07f0b03271 > Core: added const qualifier to ngx_parse_http_time argument > > 'ngx_parse_http_time(u_char *value, size_t len)' doesn't change the 'value' > actually, so it is safe to add const qualifier to it. With this change we could > pass const string without hacky '(u_char *)' casting. No, thanks. We generally don't use const qualifiers in the code since it adds little to no value, but is viral and forces ugly casts and/or const qualifiers in other parts of the code. [...] 
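To make the "viral" point concrete, here is a standalone toy example
(not nginx code): const-qualifying a single parameter forces every
non-const helper it flows into to be const-ified in turn - or the cast
simply moves somewhere else:

    typedef unsigned char u_char;

    static u_char *
    skip_spaces(u_char *p)
    {
        while (*p == ' ') { p++; }
        return p;
    }

    static long
    parse(const u_char *value)
    {
        /* skip_spaces() takes a plain u_char *, so either it (and
         * everything it calls) gains a const qualifier too, or the
         * "hacky" cast reappears here: */
        u_char  *p = (u_char *) value;

        return (long) *skip_spaces(p);
    }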
-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Sun Jan 7 15:14:12 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 7 Jan 2018 18:14:12 +0300
Subject: [PATCH] Using worker specific counters for http_stub_status module
In-Reply-To: 
References: <5ab36b546338d5cc3476.1515158406@null-8cfdf008883d.ap.qualcomm.com>
Message-ID: <20180107151412.GZ34136@mdounin.ru>

Hello!

On Fri, Jan 05, 2018 at 01:34:56PM +0000, debayang.qdt wrote:

> When the http_stub_status_module is enabled, a performance impact is
> seen on some platforms with several worker processes running and an
> increased workload.
> There is a contention with the atomic updates of the several shared
> memory counters maintained by this module - which could be eliminated
> if we maintain worker process specific counters and only sum them up
> when requested by client.
>
> Below patch is an attempt to do so - which bypasses the contention
> and improves performance on such platforms.

So far we haven't seen any noticeable performance degradation on real
workloads due to stub status being enabled. Several atomic increments
aren't visible compared to generic request processing costs. If you've
seen any noticeable performance degradation on real workloads - you may
want to share your observations first.

Also, it might not be a good idea to spend 128k per variable, as this
might be noticeable on platforms with small amounts of memory.

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Sun Jan 7 16:33:56 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 7 Jan 2018 19:33:56 +0300
Subject: Fwd: [ module ] Add http upstream keep alive timeout parameter
In-Reply-To: 
References: <20171109170702.GH26836@mdounin.ru> <20171113194946.GR26836@mdounin.ru> <20171122160016.GB78325@mdounin.ru>
Message-ID: <20180107163355.GA34136@mdounin.ru>

Hello!

On Fri, Jan 05, 2018 at 03:53:46PM +1100, Wei Xu wrote:

> Is it possible to merge the upstream keep alive feature first? Because
> it's a valuable and simple patch.
>
> We're using React server rendering, and by adding Nginx as the reverse
> proxy on each server, our AWS EC2 instance count was reduced by 25%,
> from 43 to 27-37 C4.Large instances.
>
> I wrote a detailed article to explain what happened and why it works at:
> https://theantway.com/2017/12/metrics-driven-development-how-did-i-reduced-aws-ec2-costs-to-27-and-improved-performance/
>
> The only problem now is we are still using the custom patched version,
> which makes it *difficult to share the solution with other teams*. So
> back to the initial question, is it possible to merge this feature
> first? You can create separate patches if you need to add more
> features later.

Sorry, but it is unlikely that I'll be able to spend more time on this
in the upcoming couple of weeks at least. And I certainly don't want to
commit an incomplete solution, as keepalive_requests might be equally
important for some workloads.

Meanwhile, you may want to consider solutions which do not require any
patching, in particular (the first option is sketched below):

- configuring an upstream group and proxy_next_upstream appropriately,
  so nginx will retry failed requests (this is the default as long as
  you have more than one upstream server configured and requests are
  idempotent);

- tuning your backend to use higher keepalive timeouts, which will make
  the race unlikely.
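For illustration, a minimal configuration along the lines of the first
suggestion (the upstream name and addresses here are placeholders):

    upstream backend {
        server 192.0.2.1:8080;
        server 192.0.2.2:8080;

        keepalive 16;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # retry against the next server on connection errors and
            # timeouts (this is also the default behaviour)
            proxy_next_upstream error timeout;
        }
    }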
-- 
Maxim Dounin
http://mdounin.ru/

From vadimjunk at gmail.com Tue Jan 9 15:06:00 2018
From: vadimjunk at gmail.com (Vadim Fedorenko)
Date: Tue, 9 Jan 2018 18:06:00 +0300
Subject: [ PATCH ] Add preadv2 support with RWF_NOWAIT flag
Message-ID: 

The introduction of thread pools is a really good thing, but it adds
overhead to reading files which are already in the page cache on Linux.
With preadv2 (introduced in Linux 4.6) and the RWF_NOWAIT flag
(introduced in Linux 4.14) we can eliminate this overhead. Needs
glibc >= 2.26.

# HG changeset patch
# User Vadim Fedorenko
# Date 1515498853 -10800
#      Tue Jan 09 14:54:13 2018 +0300
# Node ID f955f9cddd38ce35e19c50b871558ca8739a1d4b
# Parent 6d2e92acb013224e6ef2c71c9e61ab07f0b03271
Add preadv2() with RWF_NOWAIT flag

Eliminate overhead with threads synchronization when cache file or
chain is in page cache already

diff -r 6d2e92acb013 -r f955f9cddd38 auto/unix
--- a/auto/unix Thu Dec 28 12:01:05 2017 +0200
+++ b/auto/unix Tue Jan 09 14:54:13 2018 +0300
@@ -726,6 +726,21 @@
                   if (n == -1) return 1"
 . auto/feature

+# preadv2() was introduced in Linux 4.6, glibc 2.26
+# RWF_NOWAIT flag was introduced in Linux 4.14
+
+ngx_feature="preadv2()"
+ngx_feature_name="NGX_HAVE_PREADV2_NONBLOCK"
+ngx_feature_run=no
+ngx_feature_incs='#include <sys/uio.h>'
+ngx_feature_path=
+ngx_feature_libs=
+ngx_feature_test="char buf[1]; struct iovec vec[1]; ssize_t n;
+                  vec[0].iov_base = buf;
+                  vec[0].iov_len = 1;
+                  n = preadv2(0, vec, 1, 0, RWF_NOWAIT);
+                  if (n == -1) return 1"
+. auto/feature

 ngx_feature="sys_nerr"
 ngx_feature_name="NGX_SYS_NERR"

diff -r 6d2e92acb013 -r f955f9cddd38 src/core/ngx_output_chain.c
--- a/src/core/ngx_output_chain.c Thu Dec 28 12:01:05 2017 +0200
+++ b/src/core/ngx_output_chain.c Tue Jan 09 14:54:13 2018 +0300
@@ -577,7 +577,15 @@
     } else
 #endif
 #if (NGX_THREADS)
-    if (ctx->thread_handler) {
+#if (NGX_HAVE_PREADV2_NONBLOCK)
+
+        n = ngx_preadv2_file(src->file, dst->pos, (size_t) size,
+                             src->file_pos);
+#else
+        n = NGX_AGAIN;
+#endif
+    if (n == NGX_AGAIN && ctx->thread_handler) {
+
         src->file->thread_task = ctx->thread_task;
         src->file->thread_handler = ctx->thread_handler;
         src->file->thread_ctx = ctx->filter_ctx;
@@ -589,7 +597,7 @@
             return NGX_AGAIN;
         }

-    } else
+    } else if (!ctx->thread_handler && n == NGX_AGAIN)
 #endif
     {
         n = ngx_read_file(src->file, dst->pos, (size_t) size,

diff -r 6d2e92acb013 -r f955f9cddd38 src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c Thu Dec 28 12:01:05 2017 +0200
+++ b/src/http/ngx_http_file_cache.c Tue Jan 09 14:54:13 2018 +0300
@@ -699,6 +699,19 @@

 #if (NGX_THREADS)
     if (clcf->aio == NGX_HTTP_AIO_THREADS) {
+
+#if (NGX_HAVE_PREADV2_NONBLOCK)
+
+        n = ngx_preadv2_file(&c->file, c->buf->pos, c->body_start, 0);
+
+        if (n != NGX_AGAIN) {
+            ngx_log_debug2(NGX_LOG_DEBUG_CORE, c->file.log, 0,
+                "preadv2 non blocking: \"%s\" - %uz", c->file.name.data, c->body_start);
+            return n;
+        }
+
+#endif
+
         c->file.thread_task = c->thread_task;
         c->file.thread_handler = ngx_http_cache_thread_handler;
         c->file.thread_ctx = r;

diff -r 6d2e92acb013 -r f955f9cddd38 src/os/unix/ngx_files.c
--- a/src/os/unix/ngx_files.c Thu Dec 28 12:01:05 2017 +0200
+++ b/src/os/unix/ngx_files.c Tue Jan 09 14:54:13 2018 +0300
@@ -26,6 +26,68 @@
 #endif

+#if (NGX_THREADS)
+#if (NGX_HAVE_PREADV2_NONBLOCK)
+
+ngx_uint_t ngx_preadv2_nonblock = 1;
+
+#endif
+
+ssize_t
+ngx_preadv2_file(ngx_file_t *file, u_char *buf, size_t size, off_t offset)
+{
+#if (NGX_HAVE_PREADV2_NONBLOCK)
+    ssize_t n;
+    struct iovec iovs[1];
+
+    if (!ngx_preadv2_nonblock) {
+        return NGX_AGAIN;
+    }
+
+    iovs[0].iov_base = buf;
+    iovs[0].iov_len = size;
+
+    n = preadv2(file->fd, iovs, 1, offset, RWF_NOWAIT);
+
+    if (n == -1) { // let's analyze the return code
+        switch (ngx_errno) {
+        case EAGAIN:
+            ngx_log_debug(NGX_LOG_DEBUG_CORE, file->log, 0,
+                          "preadv2() will block on \"%s\"", file->name.data);
+            return NGX_AGAIN;
+        case EINVAL:
+            // Most possible case - not supported RWF_NOWAIT
+            ngx_log_error(NGX_LOG_ERR, file->log, ngx_errno,
+                          "preadv2() \"%s\" failed RWF_NOWAIT", file->name.data);
+            ngx_preadv2_nonblock = 0;
+            return NGX_AGAIN;
+        default:
+            return NGX_AGAIN;
+
+        }
+    }
+
+    // Check if we read partial file
+    if (((size_t)n < size) && (n < file->info.st_size)) {
+        // blocked on partial read
+        ngx_log_debug2(NGX_LOG_DEBUG_CORE, file->log, 0,
+                       "preadv2() blocked partial on \"%s\" "
+                       "with read size %uz", file->name.data, n);
+        return NGX_AGAIN;
+    }
+
+    file->offset += n;
+
+    return n;
+
+#else
+
+    return NGX_AGAIN;
+
+#endif
+}
+
+#endif

 ssize_t
 ngx_read_file(ngx_file_t *file, u_char *buf, size_t size, off_t offset)

diff -r 6d2e92acb013 -r f955f9cddd38 src/os/unix/ngx_files.h
--- a/src/os/unix/ngx_files.h Thu Dec 28 12:01:05 2017 +0200
+++ b/src/os/unix/ngx_files.h Tue Jan 09 14:54:13 2018 +0300
@@ -389,7 +389,12 @@
     off_t offset, ngx_pool_t *pool);
 ssize_t ngx_thread_write_chain_to_file(ngx_file_t *file, ngx_chain_t *cl,
     off_t offset, ngx_pool_t *pool);
+
+#if (NGX_HAVE_PREADV2_NONBLOCK)
+ssize_t ngx_preadv2_file(ngx_file_t *file, u_char *buf, size_t size,
+    off_t offset);
 #endif
+#endif

 #endif /* _NGX_FILES_H_INCLUDED_ */

From debayang.qdt at qualcommdatacenter.com Tue Jan 9 19:08:41 2018
From: debayang.qdt at qualcommdatacenter.com (debayang.qdt)
Date: Tue, 9 Jan 2018 19:08:41 +0000
Subject: FW: [PATCH] Using worker specific counters for http_stub_status module
References: <5ab36b546338d5cc3476.1515158406@null-8cfdf008883d.ap.qualcomm.com> <20180107151412.GZ34136@mdounin.ru>
Message-ID: <9342728e63b14197b9c1bb5cbfedef0a@aptaiexm02a.ap.qualcomm.com>

Hello,

I had this observation while benchmarking nginx with 48 workers using
wrk on two back-to-back high-speed-connected systems (arm), with several
random files being accessed by the client.

As you rightly mentioned, this may not impact the performance of any
real-world workload in any significant way - as has been observed during
benchmarking.

However, if it's easy to avoid shared memory contention, it may make
sense to avoid it - as it might have a negative impact on some platforms
under peak loads.

Also, in the code the counter slot size was kept at 128 with a comment
like "keep equal to or more than CL size". Does it make sense to tie it
to ngx_cacheline_size rather than hardcoding it to the largest CL size?

Thanks
Debayan

-----Original Message-----
From: Maxim Dounin [mailto:mdounin at mdounin.ru]
Sent: Sunday, January 7, 2018 8:44 PM
To: nginx-devel at nginx.org
Cc: debayang.qdt
Subject: Re: [PATCH] Using worker specific counters for http_stub_status module

Hello!

On Fri, Jan 05, 2018 at 01:34:56PM +0000, debayang.qdt wrote:

> When the http_stub_status_module is enabled, a performance impact is
> seen on some platforms with several worker processes running and an
> increased workload.
> There is a contention with the atomic updates of the several shared
> memory counters maintained by this module - which could be eliminated
> if we maintain worker process specific counters and only sum them up
> when requested by client.
>
> Below patch is an attempt to do so - which bypasses the contention
> and improves performance on such platforms.

So far we haven't seen any noticeable performance degradation on real
workloads due to stub status being enabled. Several atomic increments
aren't visible compared to generic request processing costs. If you've
seen any noticeable performance degradation on real workloads - you may
want to share your observations first.

Also, it might not be a good idea to spend 128k per variable, as this
might be noticeable on platforms with small amounts of memory.

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Tue Jan 9 19:50:16 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 9 Jan 2018 22:50:16 +0300
Subject: FW: [PATCH] Using worker specific counters for http_stub_status module
In-Reply-To: <9342728e63b14197b9c1bb5cbfedef0a@aptaiexm02a.ap.qualcomm.com>
References: <5ab36b546338d5cc3476.1515158406@null-8cfdf008883d.ap.qualcomm.com> <20180107151412.GZ34136@mdounin.ru> <9342728e63b14197b9c1bb5cbfedef0a@aptaiexm02a.ap.qualcomm.com>
Message-ID: <20180109195015.GE34136@mdounin.ru>

Hello!

On Tue, Jan 09, 2018 at 07:08:41PM +0000, debayang.qdt wrote:

> I had this observation while benchmarking nginx with 48 workers using
> wrk on two back-to-back high-speed-connected systems (arm), with
> several random files being accessed by the client.
> As you rightly mentioned, this may not impact the performance of any
> real-world workload in any significant way - as has been observed
> during benchmarking.
> However, if it's easy to avoid shared memory contention, it may make
> sense to avoid it - as it might have a negative impact on some
> platforms under peak loads.

The other part of the problem is that if you can easily avoid some
code, it makes sense to avoid it, as any code has maintenance costs.
And the same applies to memory usage - if you can avoid using more
memory, you should, as there are various embedded devices where memory
is quite limited.

The current nginx approach is to use 128 bytes for each variable to
avoid cache invalidation on modifications of unrelated variables. Yet
we haven't seen valid reasons to extend this to something more complex.

> Also, in the code the counter slot size was kept at 128 with a comment
> like "keep equal to or more than CL size".
> Does it make sense to tie it to ngx_cacheline_size rather than
> hardcoding it to the largest CL size?

I don't think there is a big difference in terms of memory usage - in
both cases it's huge whether you allocate 128 bytes or
ngx_cacheline_size for each ngx_processes slot. On the other hand,
using ngx_cacheline_size might result in problems if ngx_cacheline_size
is somehow different in different processes using the same shared
memory segment.

-- 
Maxim Dounin
http://mdounin.ru/

From jk at ip-clear.de Wed Jan 10 15:09:11 2018
From: jk at ip-clear.de (Jörg Kost)
Date: Wed, 10 Jan 2018 16:09:11 +0100
Subject: [PATCH] -h output note addition: Testing of configuration can lead to a correction of path/file permissions
Message-ID: <55B0A36B-745D-400D-AB81-AEF35494D06E@ip-clear.de>

# HG changeset patch
# User Joerg Kost
# Date 1515599827 -3600
#      Wed Jan 10 16:57:07 2018 +0100
# Node ID 9132d6facd3ddbc9c50543a96686374b6e058f10
# Parent 6d2e92acb013224e6ef2c71c9e61ab07f0b03271
Modified cli help output to make it clear that nginx
may modify permissions and inode owners on test runs.
diff -r 6d2e92acb013 -r 9132d6facd3d src/core/nginx.c
--- a/src/core/nginx.c Thu Dec 28 12:01:05 2017 +0200
+++ b/src/core/nginx.c Wed Jan 10 16:57:07 2018 +0100
@@ -401,8 +401,8 @@
         " -v : show version and exit" NGX_LINEFEED
         " -V : show version and configure options then exit"
             NGX_LINEFEED
-        " -t : test configuration and exit" NGX_LINEFEED
-        " -T : test configuration, dump it and exit"
+        " -t : test & open configuration, correct path permissions & exit" NGX_LINEFEED
+        " -T : same as -t, but additionally dump to stdout and exit"
             NGX_LINEFEED
         " -q : suppress non-error messages "
             "during configuration testing" NGX_LINEFEED

From mdounin at mdounin.ru Wed Jan 10 16:39:19 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 10 Jan 2018 19:39:19 +0300
Subject: Re: [PATCH] -h output note addition: Testing of configuration can lead to a correction of path/file permissions
In-Reply-To: <55B0A36B-745D-400D-AB81-AEF35494D06E@ip-clear.de>
References: <55B0A36B-745D-400D-AB81-AEF35494D06E@ip-clear.de>
Message-ID: <20180110163919.GF34136@mdounin.ru>

Hello!

On Wed, Jan 10, 2018 at 04:09:11PM +0100, Jörg Kost wrote:

> # HG changeset patch
> # User Joerg Kost
> # Date 1515599827 -3600
> #      Wed Jan 10 16:57:07 2018 +0100
> # Node ID 9132d6facd3ddbc9c50543a96686374b6e058f10
> # Parent 6d2e92acb013224e6ef2c71c9e61ab07f0b03271
> Modified cli help output to make it clear that nginx
> may modify permissions and inode owners on test runs.
>
> diff -r 6d2e92acb013 -r 9132d6facd3d src/core/nginx.c
> --- a/src/core/nginx.c Thu Dec 28 12:01:05 2017 +0200
> +++ b/src/core/nginx.c Wed Jan 10 16:57:07 2018 +0100
> @@ -401,8 +401,8 @@
>          " -v : show version and exit" NGX_LINEFEED
>          " -V : show version and configure options then exit"
>              NGX_LINEFEED
> -        " -t : test configuration and exit" NGX_LINEFEED
> -        " -T : test configuration, dump it and exit"
> +        " -t : test & open configuration, correct path permissions & exit" NGX_LINEFEED
> +        " -T : same as -t, but additionally dump to stdout and exit"
>              NGX_LINEFEED
>          " -q : suppress non-error messages "
>              "during configuration testing" NGX_LINEFEED
Needs glibc >= 2.26 > > # HG changeset patch > # User Vadim Fedorenko > # Date 1515498853 -10800 > # Tue Jan 09 14:54:13 2018 +0300 > # Node ID f955f9cddd38ce35e19c50b871558ca8739a1d4b > # Parent 6d2e92acb013224e6ef2c71c9e61ab07f0b03271 > Add preadv2() with RWF_NOWAIT flag > > Eliminate overhead with threads synchronization when cache file or > chain is in page cache already There should be more dots here. > diff -r 6d2e92acb013 -r f955f9cddd38 auto/unix > --- a/auto/unix Thu Dec 28 12:01:05 2017 +0200 > +++ b/auto/unix Tue Jan 09 14:54:13 2018 +0300 > @@ -726,6 +726,21 @@ > if (n == -1) return 1" > . auto/feature > > +# preadv2() was introduced in Linux 4.6, glibc 2.26 > +# RWF_NOWAIT flag was introduced in Linux 4.14 > + > +ngx_feature="preadv2()" > +ngx_feature_name="NGX_HAVE_PREADV2_NONBLOCK" > +ngx_feature_run=no > +ngx_feature_incs='#include ' > +ngx_feature_path= > +ngx_feature_libs= > +ngx_feature_test="char buf[1]; struct iovec vec[1]; ssize_t n; > + vec[0].iov_base = buf; > + vec[0].iov_len = 1; > + n = preadv2(0, vec, 1, 0, RWF_NOWAIT); > + if (n == -1) return 1" > +. auto/feature > > ngx_feature="sys_nerr" > ngx_feature_name="NGX_SYS_NERR" It might be a good idea to keep the feature name closer to the code. That is, it might be a good idea to use NOWAIT instead of NONBLOCK. Style: as you can see, this file uses two empty lines between tests. Please do so. Also, there are various other style issues in the code, including use of C99 single-line comments, missing spaces and empty lines, error and debug messages which are not in line with other used in the code. It is a good idea to cleanup the code. > diff -r 6d2e92acb013 -r f955f9cddd38 src/core/ngx_output_chain.c > --- a/src/core/ngx_output_chain.c Thu Dec 28 12:01:05 2017 +0200 > +++ b/src/core/ngx_output_chain.c Tue Jan 09 14:54:13 2018 +0300 > @@ -577,7 +577,15 @@ > } else > #endif > #if (NGX_THREADS) > - if (ctx->thread_handler) { > +#if (NGX_HAVE_PREADV2_NONBLOCK) > + > + n = ngx_preadv2_file(src->file, dst->pos, (size_t) size, > + src->file_pos); > +#else > + n = NGX_AGAIN; > +#endif > + if (n == NGX_AGAIN && ctx->thread_handler) { > + > src->file->thread_task = ctx->thread_task; > src->file->thread_handler = ctx->thread_handler; > src->file->thread_ctx = ctx->filter_ctx; Certainly we don't want platform-specific additions before each ngx_thread_read() call. The preadv2() should be transparently called from ngx_thread_read() instead. [...] -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Jan 11 19:39:02 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Jan 2018 19:39:02 +0000 Subject: [nginx] Year 2018. Message-ID: details: http://hg.nginx.org/nginx/rev/c6cc8db553eb branches: changeset: 7187:c6cc8db553eb user: Maxim Dounin date: Thu Jan 11 21:43:24 2018 +0300 description: Year 2018. diffstat: docs/text/LICENSE | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (12 lines): diff --git a/docs/text/LICENSE b/docs/text/LICENSE --- a/docs/text/LICENSE +++ b/docs/text/LICENSE @@ -1,6 +1,6 @@ /* - * Copyright (C) 2002-2017 Igor Sysoev - * Copyright (C) 2011-2017 Nginx, Inc. + * Copyright (C) 2002-2018 Igor Sysoev + * Copyright (C) 2011-2018 Nginx, Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without From mdounin at mdounin.ru Thu Jan 11 19:39:04 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Jan 2018 19:39:04 +0000 Subject: [nginx] Upstream: fixed "header already sent" alerts on backend errors. 
Message-ID: details: http://hg.nginx.org/nginx/rev/93abb5a855d6 branches: changeset: 7188:93abb5a855d6 user: Maxim Dounin date: Thu Jan 11 21:43:49 2018 +0300 description: Upstream: fixed "header already sent" alerts on backend errors. Following ad3f342f14ba046c (1.9.13), it is possible that a request where header was already sent will be finalized with NGX_HTTP_BAD_GATEWAY, triggering an attempt to return additional error response and the "header already sent" alert as a result. In particular, it is trivial to reproduce the problem with a HEAD request and caching enabled. With caching enabled nginx will change HEAD to GET and will set u->pipe->downstream_error to suppress sending the response body to the client. When a backend-related error occurs (for example, proxy_read_timeout expires), ngx_http_finalize_upstream_request() will be called with NGX_HTTP_BAD_GATEWAY. After ad3f342f14ba046c this will result in ngx_http_finalize_request(NGX_HTTP_BAD_GATEWAY). Fix is to move u->pipe->downstream_error handling to a later point, where all special response codes are changed to NGX_ERROR. Reported by Jan Prachar, http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010737.html. diffstat: src/http/ngx_http_upstream.c | 7 ++++--- 1 files changed, 4 insertions(+), 3 deletions(-) diffs (24 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -4374,8 +4374,7 @@ ngx_http_upstream_finalize_request(ngx_h if (!u->header_sent || rc == NGX_HTTP_REQUEST_TIME_OUT - || rc == NGX_HTTP_CLIENT_CLOSED_REQUEST - || (u->pipe && u->pipe->downstream_error)) + || rc == NGX_HTTP_CLIENT_CLOSED_REQUEST) { ngx_http_finalize_request(r, rc); return; @@ -4388,7 +4387,9 @@ ngx_http_upstream_finalize_request(ngx_h flush = 1; } - if (r->header_only) { + if (r->header_only + || (u->pipe && u->pipe->downstream_error)) + { ngx_http_finalize_request(r, rc); return; } From vadimjunk at gmail.com Thu Jan 11 23:41:28 2018 From: vadimjunk at gmail.com (Vadim Fedorenko) Date: Fri, 12 Jan 2018 02:41:28 +0300 Subject: [PATCH v2] Add preadv2 support with RWF_NOWAIT flag Message-ID: # HG changeset patch # User Vadim Fedorenko # Date 1515713238 -10800 # Fri Jan 12 02:27:18 2018 +0300 # Node ID fbf6a421212b291cbacfcfc503173c0168449165 # Parent 93abb5a855d6534f0356882f45be49f8c6a95a8b Add preadv2 support with RWF_NOWAIT flag Introduction of thread pools is really good thing, but it adds overhead to reading files which are already in page cache in linux. With preadv2 (introduced in Linux 4.6) and RWF_NOWAIT flag (introduced in Linux 4.14) we can eliminate this overhead. Needs glibc >= 2.26 This is v2 patch with code style fixes. Feature renamed to NGX_HAVE_PREADV2_NOWAIT, call to preadv2() moved to ngx_thread_read(), that's why it became simpler. diff -r 93abb5a855d6 -r fbf6a421212b auto/unix --- a/auto/unix Thu Jan 11 21:43:49 2018 +0300 +++ b/auto/unix Fri Jan 12 02:27:18 2018 +0300 @@ -727,6 +727,23 @@ . auto/feature +# preadv2() was introduced in Linux 4.6, glibc 2.26 +# RWF_NOWAIT flag was introduced in Linux 4.14 + +ngx_feature="preadv2()" +ngx_feature_name="NGX_HAVE_PREADV2_NOWAIT" +ngx_feature_run=no +ngx_feature_incs='#include ' +ngx_feature_path= +ngx_feature_libs= +ngx_feature_test="char buf[1]; struct iovec vec[1]; ssize_t n; + vec[0].iov_base = buf; + vec[0].iov_len = 1; + n = preadv2(0, vec, 1, 0, RWF_NOWAIT); + if (n == -1) return 1" +. 
auto/feature + + ngx_feature="sys_nerr" ngx_feature_name="NGX_SYS_NERR" ngx_feature_run=value diff -r 93abb5a855d6 -r fbf6a421212b src/os/unix/ngx_files.c --- a/src/os/unix/ngx_files.c Thu Jan 11 21:43:49 2018 +0300 +++ b/src/os/unix/ngx_files.c Fri Jan 12 02:27:18 2018 +0300 @@ -26,6 +26,61 @@ #endif +#if (NGX_THREADS) && (NGX_HAVE_PREADV2_NOWAIT) + +ngx_uint_t ngx_preadv2_nowait = 1; + + +ssize_t +ngx_preadv2_file(ngx_file_t *file, u_char *buf, size_t size, off_t offset) +{ + ssize_t n; + struct iovec iovs[1]; + + if (!ngx_preadv2_nowait) { + return NGX_AGAIN; + } + + iovs[0].iov_base = buf; + iovs[0].iov_len = size; + + n = preadv2(file->fd, iovs, 1, offset, RWF_NOWAIT); + + if (n == -1) { /* let's analyze the return code */ + switch (ngx_errno) { + case EAGAIN: + ngx_log_debug(NGX_LOG_DEBUG_CORE, file->log, 0, + "preadv2() will block on \"%s\"", + file->name.data); + return NGX_AGAIN; + case EINVAL: + /* Most possible case - not supported RWF_NOWAIT */ + ngx_log_error(NGX_LOG_ERR, file->log, ngx_errno, + "preadv2() \"%s\" failed RWF_NOWAIT", + file->name.data); + ngx_preadv2_nowait = 0; + return NGX_AGAIN; + default: + return NGX_AGAIN; + + } + } + + /* Check if we read partial file */ + if (((size_t)n < size) && (n < file->info.st_size)) { + /* blocked on partial read */ + ngx_log_debug2(NGX_LOG_DEBUG_CORE, file->log, 0, + "preadv2() blocked partial on \"%s\" " + "with read size %uz", file->name.data, n); + return NGX_AGAIN; + } + + file->offset += n; + + return n; +} +#endif + ssize_t ngx_read_file(ngx_file_t *file, u_char *buf, size_t size, off_t offset) @@ -97,6 +152,9 @@ { ngx_thread_task_t *task; ngx_thread_file_ctx_t *ctx; +#if (NGX_HAVE_PREADV2_NOWAIT) + ssize_t n; +#endif ngx_log_debug4(NGX_LOG_DEBUG_CORE, file->log, 0, "thread read: %d, %p, %uz, %O", @@ -105,6 +163,15 @@ task = file->thread_task; if (task == NULL) { +#if (NGX_HAVE_PREADV2_NOWAIT) + n = ngx_preadv2_file(file, buf, size, offset); + if (n != NGX_AGAIN) { + ngx_log_debug2(NGX_LOG_DEBUG_CORE, file->log, 0, + "preadv2 non blocking: \"%s\" - %uz", + file->name.data, n); + return n; + } +#endif task = ngx_thread_task_alloc(pool, sizeof(ngx_thread_file_ctx_t)); if (task == NULL) { return NGX_ERROR; -------------- next part -------------- An HTML attachment was scrubbed... URL: From jk at ip-clear.de Fri Jan 12 08:59:37 2018 From: jk at ip-clear.de (=?utf-8?q?J=C3=B6rg?= Kost) Date: Fri, 12 Jan 2018 09:59:37 +0100 Subject: [PATCH] -h output note addition: Testing of configuration can lead to a correction of path/file permissions In-Reply-To: <20180110163919.GF34136@mdounin.ru> References: <55B0A36B-745D-400D-AB81-AEF35494D06E@ip-clear.de> <20180110163919.GF34136@mdounin.ru> Message-ID: <8151906F-2378-4D3A-8B0C-4633F5A93735@ip-clear.de> Hi Maxim, still feel, that the -t switch behavior might be not so good documented. In some border cases this might not be the intention of running a test. Currently it will log this things to the error file only on failure, but I am thinking of a more general approach -> always log? Regards J?rg On 10 Jan 2018, at 17:39, Maxim Dounin wrote: > Hello! > > On Wed, Jan 10, 2018 at 04:09:11PM +0100, J?rg Kost wrote: > > I don't think this belongs to the "nginx -h" output. The "nginx > -h" purpose is to provide short and readable description on what > each option does, and "test configuration and exit" is a good > enough description. 
>
> What "test configuration" implies and which side-effects it may
> have is a completely different question, and trying to answer it
> here will certainly hurt readability. Also note that the
> description you've suggested is certainly incomplete - for
> example, testing configuration also creates various files and
> directories.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

From mdounin at mdounin.ru Fri Jan 12 14:41:59 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 12 Jan 2018 17:41:59 +0300
Subject: [PATCH] -h output note addition: Testing of configuration can lead
 to a correction of path/file permissions
In-Reply-To: <8151906F-2378-4D3A-8B0C-4633F5A93735@ip-clear.de>
References: <55B0A36B-745D-400D-AB81-AEF35494D06E@ip-clear.de>
 <20180110163919.GF34136@mdounin.ru>
 <8151906F-2378-4D3A-8B0C-4633F5A93735@ip-clear.de>
Message-ID: <20180112144159.GP34136@mdounin.ru>

Hello!

On Fri, Jan 12, 2018 at 09:59:37AM +0100, Jörg Kost wrote:

> I still feel that the -t switch behavior might not be so well documented.
> In some border cases this might not be the intention of running a test.

While it might not be the intention, certainly a configuration test
is not the same as a syntax check, and it never will be. The idea
behind the configuration test is to be able to check that nginx
will be able to start with the configuration in question, but
without actually starting it - to avoid interference with an
already running nginx, if any.

Configuration testing may have side effects, this is expected.
Some of these side effects are documented at
http://nginx.org/en/docs/switches.html, though certainly there are
others. If you think that the documentation is insufficient -
consider submitting a patch to improve it. If you think that
certain side effects should be avoided - feel free to submit a
patch to avoid them.

> Currently it will log these things to the error file only on failure, but
> I am thinking of a more general approach -> always log?

Not sure it's a good idea.

--
Maxim Dounin
http://mdounin.ru/

From zchao1995 at gmail.com Tue Jan 16 05:59:53 2018
From: zchao1995 at gmail.com (tokers)
Date: Mon, 15 Jan 2018 21:59:53 -0800
Subject: yield 499 while reading client body and client prematurely closed
 connection
Message-ID:

# HG changeset patch
# User Alex Zhang
# Date 1516079440 -28800
#      Tue Jan 16 13:10:40 2018 +0800
# Node ID 9ca5af970d2296a02acefb3070237c5f52119708
# Parent 93abb5a855d6534f0356882f45be49f8c6a95a8b
yield 499 while reading client body and client prematurely closed
connection.

The function ngx_http_do_read_client_request_body returns
NGX_HTTP_BAD_REQUEST (client prematurely closed connection),
while the 400 status code cannot reflect that client closed connection
prematurely. It should return code 499(NGX_HTTP_CLIENT_CLOSED_REQUEST)
and it is helpful to troubleshoot some relevant problems.
Signed-off-by: Alex Zhang diff -r 93abb5a855d6 -r 9ca5af970d22 src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c Thu Jan 11 21:43:49 2018 +0300 +++ b/src/http/ngx_http_request_body.c Tue Jan 16 13:10:40 2018 +0800 @@ -342,14 +342,17 @@ break; } - if (n == 0) { + if (n == 0 || n == NGX_ERROR) { + c->error = 1; + + if (n == 0) { + return NGX_HTTP_BAD_REQUEST; + } + ngx_log_error(NGX_LOG_INFO, c->log, 0, "client prematurely closed connection"); - } - if (n == 0 || n == NGX_ERROR) { - c->error = 1; - return NGX_HTTP_BAD_REQUEST; + return NGX_HTTP_CLIENT_CLOSED_REQUEST; } rb->buf->last += n; -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotrsikora at google.com Tue Jan 16 06:57:27 2018 From: piotrsikora at google.com (Piotr Sikora) Date: Mon, 15 Jan 2018 22:57:27 -0800 Subject: yield 499 while reading client body and client prematurely closed connection In-Reply-To: References: Message-ID: Hi Alex, On Mon, Jan 15, 2018 at 9:59 PM, tokers wrote: > # HG changeset patch > # User Alex Zhang > # Date 1516079440 -28800 > # Tue Jan 16 13:10:40 2018 +0800 > # Node ID 9ca5af970d2296a02acefb3070237c5f52119708 > # Parent 93abb5a855d6534f0356882f45be49f8c6a95a8b > yield 499 while reading client body and client prematurely closed > connection. > > The function ngx_http_do_read_client_request_body returns > NGX_HTTP_BAD_REQUEST (client prematurely closed connection), > while the 400 status code cannot reflect that client closed connection > prematurely. It should return code 499(NGX_HTTP_CLIENT_CLOSED_REQUEST) > and it is helpful to troubleshoot some relevant problems. > > Signed-off-by: Alex Zhang > > diff -r 93abb5a855d6 -r 9ca5af970d22 src/http/ngx_http_request_body.c > --- a/src/http/ngx_http_request_body.c Thu Jan 11 21:43:49 2018 +0300 > +++ b/src/http/ngx_http_request_body.c Tue Jan 16 13:10:40 2018 +0800 > @@ -342,14 +342,17 @@ > break; > } > > - if (n == 0) { > + if (n == 0 || n == NGX_ERROR) { > + c->error = 1; > + > + if (n == 0) { > + return NGX_HTTP_BAD_REQUEST; > + } > + > ngx_log_error(NGX_LOG_INFO, c->log, 0, > "client prematurely closed connection"); > - } > > - if (n == 0 || n == NGX_ERROR) { > - c->error = 1; > - return NGX_HTTP_BAD_REQUEST; > + return NGX_HTTP_CLIENT_CLOSED_REQUEST; > } > > rb->buf->last += n; I agree with this change (in fact, I have similar code in my local tree), but something like this is probably more readable: diff -r 93abb5a855d6 src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -345,9 +345,11 @@ ngx_http_do_read_client_request_body(ngx if (n == 0) { ngx_log_error(NGX_LOG_INFO, c->log, 0, "client prematurely closed connection"); + c->error = 1; + return NGX_HTTP_CLIENT_CLOSED_REQUEST; } - if (n == 0 || n == NGX_ERROR) { + if (n == NGX_ERROR) { c->error = 1; return NGX_HTTP_BAD_REQUEST; } Having said that, handing of client errors before request body is fully received is pretty inconsistent in NGINX, especially between HTTP/1.1 and HTTP/2, so this is only partial fix. 
Best regards, Piotr Sikora From vadimjunk at gmail.com Tue Jan 16 11:57:40 2018 From: vadimjunk at gmail.com (Vadim Fedorenko) Date: Tue, 16 Jan 2018 14:57:40 +0300 Subject: Fix "header too long" error Message-ID: # HG changeset patch # User Vadim Fedorenko # Date 1516103689 -10800 # Tue Jan 16 14:54:49 2018 +0300 # Node ID deaa364977488f3390d48306c34dc80961e54e14 # Parent fbf6a421212b291cbacfcfc503173c0168449165 Fix "header too long" error This error occurs in rare cases when cached file with different "Vary" header value have headers length more than main cache file and main cache file can be used without revalidation (ngx_file_cache_exists finds file node and rewrites c->body_start on first read). Fix saves buffer size derived from proxy_buffer parameter. diff -r fbf6a421212b -r deaa36497748 src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Fri Jan 12 02:27:18 2018 +0300 +++ b/src/http/ngx_http_file_cache.c Tue Jan 16 14:54:49 2018 +0300 @@ -271,6 +271,7 @@ ngx_open_file_info_t of; ngx_http_file_cache_t *cache; ngx_http_core_loc_conf_t *clcf; + size_t buffer_size; c = r->cache; @@ -294,6 +295,12 @@ cln->data = c; } + /* save buffer_size because ngx_http_file_cache_exists + * can overwrite c->body_start + */ + + buffer_size = c->body_start; + rc = ngx_http_file_cache_exists(cache, c); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, @@ -382,7 +389,7 @@ c->length = of.size; c->fs_size = (of.fs_size + cache->bsize - 1) / cache->bsize; - c->buf = ngx_create_temp_buf(r->pool, c->body_start); + c->buf = ngx_create_temp_buf(r->pool, buffer_size); if (c->buf == NULL) { return NGX_ERROR; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jan 16 12:23:00 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jan 2018 15:23:00 +0300 Subject: yield 499 while reading client body and client prematurely closed connection In-Reply-To: References: Message-ID: <20180116122300.GR34136@mdounin.ru> Hello! On Mon, Jan 15, 2018 at 09:59:53PM -0800, tokers wrote: > # HG changeset patch > # User Alex Zhang > # Date 1516079440 -28800 > # Tue Jan 16 13:10:40 2018 +0800 > # Node ID 9ca5af970d2296a02acefb3070237c5f52119708 > # Parent 93abb5a855d6534f0356882f45be49f8c6a95a8b > yield 499 while reading client body and client prematurely closed > connection. > > The function ngx_http_do_read_client_request_body returns > NGX_HTTP_BAD_REQUEST (client prematurely closed connection), > while the 400 status code cannot reflect that client closed connection > prematurely. It should return code 499(NGX_HTTP_CLIENT_CLOSED_REQUEST) > and it is helpful to troubleshoot some relevant problems. > > Signed-off-by: Alex Zhang The 499 code means that client closed a request before nginx was able to generate a response code and was waiting for something external - for example, a response from an upstream server. It is never meant to indicate a connection close by a client at some arbitrary time. If a client fails to provide full request with correct syntax, this is indicated by 400 with appropriate error message logged at the info level. For example, the same behaviour can be seen in the ngx_http_read_request_header() function. I don't think that changing the meaning of the code is a good idea, especially given that suggested meaning as seen from the patch is inconsistent across reading different parts of the request. 
-- Maxim Dounin http://mdounin.ru/ From cbranch at cloudflare.com Tue Jan 16 14:14:03 2018 From: cbranch at cloudflare.com (Chris Branch) Date: Tue, 16 Jan 2018 14:14:03 +0000 Subject: [PATCH] Added the $proxy_protocol_server_{addr,port} variables Message-ID: <5cd4799781372a9d405b.1516112043@cbranch-vm.localdomain> # HG changeset patch # User Chris Branch # Date 1516111627 0 # Tue Jan 16 14:07:07 2018 +0000 # Node ID 5cd4799781372a9d405b3d8e62f39ca3c76720c6 # Parent 93abb5a855d6534f0356882f45be49f8c6a95a8b Added the $proxy_protocol_server_{addr,port} variables. diff -r 93abb5a855d6 -r 5cd479978137 src/core/ngx_connection.h --- a/src/core/ngx_connection.h Thu Jan 11 21:43:49 2018 +0300 +++ b/src/core/ngx_connection.h Tue Jan 16 14:07:07 2018 +0000 @@ -146,6 +146,8 @@ ngx_str_t proxy_protocol_addr; in_port_t proxy_protocol_port; + ngx_str_t proxy_protocol_server_addr; + in_port_t proxy_protocol_server_port; #if (NGX_SSL || NGX_COMPAT) ngx_ssl_connection_t *ssl; diff -r 93abb5a855d6 -r 5cd479978137 src/core/ngx_proxy_protocol.c --- a/src/core/ngx_proxy_protocol.c Thu Jan 11 21:43:49 2018 +0300 +++ b/src/core/ngx_proxy_protocol.c Tue Jan 16 14:07:07 2018 +0000 @@ -40,6 +40,7 @@ } p += 5; + /* copy source address */ addr = p; for ( ;; ) { @@ -72,16 +73,40 @@ ngx_memcpy(c->proxy_protocol_addr.data, addr, len); c->proxy_protocol_addr.len = len; + /* copy destination address */ + addr = p; + for ( ;; ) { if (p == last) { goto invalid; } - if (*p++ == ' ') { + ch = *p++; + + if (ch == ' ') { break; } + + if (ch != ':' && ch != '.' + && (ch < 'a' || ch > 'f') + && (ch < 'A' || ch > 'F') + && (ch < '0' || ch > '9')) + { + goto invalid; + } } + len = p - addr - 1; + c->proxy_protocol_server_addr.data = ngx_pnalloc(c->pool, len); + + if (c->proxy_protocol_server_addr.data == NULL) { + return NULL; + } + + ngx_memcpy(c->proxy_protocol_server_addr.data, addr, len); + c->proxy_protocol_server_addr.len = len; + + /* parse source port */ port = p; for ( ;; ) { @@ -104,6 +129,31 @@ c->proxy_protocol_port = (in_port_t) n; + /* parse destination port */ + port = p; + + for ( ;; ) { + if (p == last) { + goto invalid; + } + + if (*p++ == CR) { + break; + } + } + + /* p now points to LF; step back to allow the skip loop to terminate */ + p--; + len = p - port; + + n = ngx_atoi(port, len); + + if (n < 0 || n > 65535) { + goto invalid; + } + + c->proxy_protocol_server_port = (in_port_t) n; + ngx_log_debug2(NGX_LOG_DEBUG_CORE, c->log, 0, "PROXY protocol address: %V %i", &c->proxy_protocol_addr, n); diff -r 93abb5a855d6 -r 5cd479978137 src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Thu Jan 11 21:43:49 2018 +0300 +++ b/src/http/ngx_http_variables.c Tue Jan 16 14:07:07 2018 +0000 @@ -65,6 +65,10 @@ ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_proxy_protocol_port(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_proxy_protocol_server_addr( + ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_proxy_protocol_server_port( + ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_server_addr(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_server_port(ngx_http_request_t *r, @@ -204,6 +208,12 @@ { ngx_string("proxy_protocol_port"), NULL, ngx_http_variable_proxy_protocol_port, 0, 0, 0 }, + { ngx_string("proxy_protocol_server_addr"), NULL, + 
ngx_http_variable_proxy_protocol_server_addr, 0, 0, 0 }, + + { ngx_string("proxy_protocol_server_port"), NULL, + ngx_http_variable_proxy_protocol_server_port, 0, 0, 0 }, + { ngx_string("server_addr"), NULL, ngx_http_variable_server_addr, 0, 0, 0 }, { ngx_string("server_port"), NULL, ngx_http_variable_server_port, 0, 0, 0 }, @@ -1349,6 +1359,46 @@ static ngx_int_t +ngx_http_variable_proxy_protocol_server_addr(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + v->len = r->connection->proxy_protocol_server_addr.len; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = r->connection->proxy_protocol_server_addr.data; + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_variable_proxy_protocol_server_port(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + ngx_uint_t port; + + v->len = 0; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + + v->data = ngx_pnalloc(r->pool, sizeof("65535") - 1); + if (v->data == NULL) { + return NGX_ERROR; + } + + port = r->connection->proxy_protocol_server_port; + + if (port > 0 && port < 65536) { + v->len = ngx_sprintf(v->data, "%ui", port) - v->data; + } + + return NGX_OK; +} + + +static ngx_int_t ngx_http_variable_server_addr(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { From cbranch at cloudflare.com Tue Jan 16 14:27:20 2018 From: cbranch at cloudflare.com (Chris Branch) Date: Tue, 16 Jan 2018 14:27:20 +0000 Subject: [PATCH] Tests: added tests for proxy_protocol_server_{addr, port} variables Message-ID: <1c5977148c79eba0f14e.1516112840@cbranch-vm.localdomain> # HG changeset patch # User Chris Branch # Date 1516111868 0 # Tue Jan 16 14:11:08 2018 +0000 # Node ID 1c5977148c79eba0f14e60df134a7cace77d3207 # Parent 6ca8b38f63b61c6c00ac4c9e355a7909dbb6aeb2 Tests: added tests for proxy_protocol_server_{addr,port} variables. diff -r 6ca8b38f63b6 -r 1c5977148c79 proxy_protocol_server.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/proxy_protocol_server.t Tue Jan 16 14:11:08 2018 +0000 @@ -0,0 +1,108 @@ +#!/usr/bin/perl + +# Tests for proxy_protocol_server_{addr,port} variables. + +############################################################################### + +use warnings; +use strict; + +use Test::More; + +use Socket qw/ CRLF /; + +BEGIN { use FindBin; chdir($FindBin::Bin); } + +use lib 'lib'; +use Test::Nginx; + +############################################################################### + +select STDERR; $| = 1; +select STDOUT; $| = 1; + +my $t = Test::Nginx->new()->has(qw/http/) + ->write_file_expand('nginx.conf', <<'EOF'); + +%%TEST_GLOBALS%% + +daemon off; + +events { +} + +http { + %%TEST_GLOBALS_HTTP%% + + log_format port $proxy_protocol_server_port; + + server { + listen 127.0.0.1:8080 proxy_protocol; + server_name localhost; + + add_header X-PP-Server-Addr $proxy_protocol_server_addr; + add_header X-PP-Server-Port $proxy_protocol_server_port; + add_header X-Remote-Addr $server_addr; + add_header X-Remote-Port $server_port; + + location /log { + access_log %%TESTDIR%%/port.log port; + } + } +} + +EOF + +$t->write_file('t1', 'SEE-THIS'); +$t->run()->plan(13); + +############################################################################### + +my $tcp4 = 'PROXY TCP4 192.0.2.1 192.0.2.2 123 5678' . CRLF; +my $tcp6 = 'PROXY TCP6 2001:Db8::1 2001:Db8::2 123 5678' . CRLF; +my $unk = 'PROXY UNKNOWN 1 2 3 4 5 6' . 
CRLF;
my $r;

# PROXY header parsing

$r = pp_get('/t1', $tcp4);
like($r, qr/SEE-THIS/, 'tcp4 request');
like($r, qr/X-PP-Server-Addr: 192.0.2.2/, 'tcp4 pp server addr');
like($r, qr/X-PP-Server-Port: 5678/, 'tcp4 pp server port');
like($r, qr/X-Remote-Port: 8080/, 'tcp4 real server port');

$r = pp_get('/t1', $tcp6);
like($r, qr/SEE-THIS/, 'tcp6 request');
like($r, qr/X-PP-Server-Addr: 2001:DB8::2/i, 'tcp6 pp server addr');
like($r, qr/X-PP-Server-Port: 5678/, 'tcp6 pp server port');
like($r, qr/X-Remote-Port: 8080/, 'tcp6 real server port');

$r = pp_get('/t1', $unk);
like($r, qr/SEE-THIS/, 'unknown request');
unlike($r, qr/X-PP-Server-Addr/, 'unknown pp server addr');
unlike($r, qr/X-PP-Server-Port/, 'unknown pp server port');
like($r, qr/X-Remote-Port: 8080/, 'unknown real server port');

# log

pp_get('/log', $tcp4);

$t->stop();

my $log = $t->read_file('/port.log');
chomp $log;

is($log, 5678, 'pp port log');

###############################################################################

sub pp_get {
	my ($url, $proxy) = @_;
	return http($proxy . <<EOF);
GET $url HTTP/1.0
Host: localhost

EOF
}

From ru at nginx.com Tue Jan 16 13:52:03 2018
From: ru at nginx.com (Ruslan Ermilov)
Subject: [nginx] Fixed --test-build-eventport on macOS 10.12 and later.
Message-ID:

details: http://hg.nginx.org/nginx/rev/cbf59d483c9c
branches:
changeset: 7189:cbf59d483c9c
user: Ruslan Ermilov
date: Tue Jan 16 13:52:03 2018 +0300
description:
Fixed --test-build-eventport on macOS 10.12 and later.

In macOS 10.12, CLOCK_REALTIME and clockid_t were added, but not timer_t.

diffstat:

 src/event/modules/ngx_eventport_module.c | 2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diffs (12 lines):

diff -r 93abb5a855d6 -r cbf59d483c9c src/event/modules/ngx_eventport_module.c
--- a/src/event/modules/ngx_eventport_module.c	Thu Jan 11 21:43:49 2018 +0300
+++ b/src/event/modules/ngx_eventport_module.c	Tue Jan 16 13:52:03 2018 +0300
@@ -19,6 +19,8 @@
 #define CLOCK_REALTIME 0
 typedef int clockid_t;
 typedef void * timer_t;
+#elif (NGX_DARWIN)
+typedef void * timer_t;
 #endif

 /* Solaris declarations */

From imhongxiaolong at gmail.com Fri Jan 19 12:43:06 2018
From: imhongxiaolong at gmail.com (xiaolong hong)
Date: Fri, 19 Jan 2018 20:43:06 +0800
Subject: Fixed upstream->read timer when downstream->write not ready
Message-ID:

# HG changeset patch
# User Xiaolong Hong
# Date 1516354115 -28800
#      Fri Jan 19 17:28:35 2018 +0800
# Node ID f017b8c1a99433cc3321475968556aee50609145
# Parent 93abb5a855d6534f0356882f45be49f8c6a95a8b
Fixed upstream->read timer when downstream->write not ready.

diff -r 93abb5a855d6 -r f017b8c1a994 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c	Thu Jan 11 21:43:49 2018 +0300
+++ b/src/http/ngx_http_upstream.c	Fri Jan 19 17:28:35 2018 +0800
@@ -3625,7 +3625,9 @@ ngx_http_upstream_process_non_buffered_r
         return;
     }

-    if (upstream->read->active && !upstream->read->ready) {
+    if (upstream->read->active && !upstream->read->ready
+        && !(u->length == 0 || (upstream->read->eof && u->length == -1)))
+    {
         ngx_add_timer(upstream->read, u->conf->read_timeout);

     } else if (upstream->read->timer_set) {

--------

When the downstream hung and nginx received the last buffer from upstream,
both the downstream->write timer and the upstream->read timer would be added
at the same time, because downstream->write->ready and upstream->read->ready
would both happen to be 0.

Actually, if u->conf->read_timeout is less than clcf->send_timeout, the
upstream->read timer would be woken before the downstream->write timer,
which mistakenly reports the "upstream timed out" error, deviating from
the fact that the upstream worked normally but the downstream hung.
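To make the race concrete, here is a rough sketch of the two timers involved
(simplified from the logic around ngx_http_upstream_process_non_buffered_request();
illustrative only, not a patch):

/* after the last upstream buffer arrives, neither event is "ready",
 * so both timers get armed */

if (downstream->write->active && !downstream->write->ready) {
    ngx_add_timer(downstream->write, clcf->send_timeout);      /* 77s here */
}

if (upstream->read->active && !upstream->read->ready) {
    /* armed although upstream already delivered the whole response;
     * with read_timeout (22s) < send_timeout (77s) this one fires
     * first and is reported as "upstream timed out" */
    ngx_add_timer(upstream->read, u->conf->read_timeout);
}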
This problem could be fixed to check upstream eof when trying to add upstream->read timer. -------- We got debug logs as follows: 2018/01/19 17:16:44 [debug] 19674#0: *5 http write filter: l:0 f:1 s:510767 2018/01/19 17:16:44 [debug] 19674#0: *5 http write filter limit 0 2018/01/19 17:16:44 [debug] 19674#0: *5 http write filter 000000010080E068 2018/01/19 17:16:44 [debug] 19674#0: *5 http copy filter: -2 "/t?" 2018/01/19 17:16:44 [debug] 19674#0: *5 event timer del: 7: 1516353481686 2018/01/19 17:16:44 [debug] 19674#0: *5 event timer add: 7: 77000:1516353481990 (add downstream->write timer) 2018/01/19 17:16:44 [debug] 19674#0: *5 event timer: 8, old: 1516353426958, new: 1516353426990 2018/01/19 17:16:44 [debug] 19674#0: timer delta: 16 2018/01/19 17:16:44 [debug] 19674#0: worker cycle 2018/01/19 17:16:44 [debug] 19674#0: kevent timer: 21968, changes: 0 (add upstream->read timer) 2018/01/19 17:16:45 [debug] 19674#0: kevent events: 1 2018/01/19 17:16:45 [debug] 19674#0: kevent: 8: ft:-1 fl:0025 ff:00000000 d:3068 ud:000000010180C6D0 2018/01/19 17:16:45 [debug] 19674#0: *5 http upstream request: "/t?" 2018/01/19 17:16:45 [debug] 19674#0: *5 http upstream process non buffered upstream 2018/01/19 17:16:45 [debug] 19674#0: *5 recv: eof:0, avail:3068, err:0 2018/01/19 17:16:45 [debug] 19674#0: *5 recv: fd:8 3080 of 4734892 2018/01/19 17:16:45 [debug] 19674#0: *5 posix_memalign: 000000010100AE00:4096 @16 2018/01/19 17:16:45 [debug] 19674#0: *5 http output filter "/t?" 2018/01/19 17:16:45 [debug] 19674#0: *5 http copy filter: "/t?" 2018/01/19 17:16:45 [debug] 19674#0: *5 http postpone filter "/t?" 000000010100AE20 2018/01/19 17:16:45 [debug] 19674#0: *5 http chunk: 3080 2018/01/19 17:17:06 [debug] 19674#0: kevent events: 0 2018/01/19 17:17:06 [debug] 19674#0: timer delta: 21739 2018/01/19 17:17:06 [debug] 19674#0: *5 event timer del: 8: 1516353426958 2018/01/19 17:17:06 [debug] 19674#0: *5 http upstream request: "/t?" 2018/01/19 17:17:06 [debug] 19674#0: *5 http upstream process non buffered upstream 2018/01/19 17:17:06 [error] 19674#0: *5 upstream timed out (60: Operation timed out) while reading upstream, client: 127.0.0.1, server: localhost, request: "GET /t HTTP/1.1", upstream: "http://127.0.0.1:9 090/t", host: "t.taobao.com" 2018/01/19 17:17:06 [debug] 19674#0: *5 finalize http upstream request: 504 2018/01/19 17:17:06 [debug] 19674#0: *5 finalize http proxy request 2018/01/19 17:17:06 [debug] 19674#0: *5 free rr peer 1 0 2018/01/19 17:17:06 [debug] 19674#0: *5 close http upstream connection: 8 2018/01/19 17:17:06 [debug] 19674#0: *5 free: 0000000100300220, unused: 48 2018/01/19 17:17:06 [debug] 19674#0: *5 reusable connection: 0 2018/01/19 17:17:06 [debug] 19674#0: *5 http output filter "/t?" 2018/01/19 17:17:06 [debug] 19674#0: *5 http copy filter: "/t?" 2018/01/19 17:17:06 [debug] 19674#0: *5 http postpone filter "/t?" 00007FFEEFBFF1B0 2018/01/19 17:17:06 [debug] 19674#0: *5 http chunk: 0 2018/01/19 17:18:23 [debug] 19674#0: kevent events: 0 2018/01/19 17:18:23 [debug] 19674#0: timer delta: 77012 2018/01/19 17:18:23 [debug] 19674#0: *5 event timer del: 7: 1516353503980 2018/01/19 17:18:23 [debug] 19674#0: *5 http run request: "/t?" 2018/01/19 17:18:23 [debug] 19674#0: *5 http writer handler: "/t?" 
2018/01/19 17:18:23 [info] 19674#0: *5 client timed out (60: Operation timed out) while sending to client, client: 127.0.0.1, server: localhost, request: "GET /t HTTP/1.1", upstream: "http://127.0.0.1:9090/t", host: "t.taobao.com"
2018/01/19 17:18:23 [debug] 19674#0: *5 http finalize request: 408, "/t?" a:1, c:1
2018/01/19 17:18:23 [debug] 19674#0: *5 http terminate request count:1
2018/01/19 17:18:23 [debug] 19674#0: *5 http terminate cleanup count:1 blk:0
2018/01/19 17:18:23 [debug] 19674#0: *5 http posted request: "/t?"
2018/01/19 17:18:23 [debug] 19674#0: *5 http terminate handler count:1
2018/01/19 17:18:23 [debug] 19674#0: *5 http request count:1 blk:0
2018/01/19 17:18:23 [debug] 19674#0: *5 http close request
2018/01/19 17:18:23 [debug] 19674#0: *5 http log handler

--------

We could reproduce this problem by:

1. Configuration as follows:

```
http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    proxy_connect_timeout 60s;
    proxy_send_timeout 33s;
    proxy_read_timeout 22s;
    proxy_max_temp_file_size 0;
    proxy_busy_buffers_size 5120k;
    proxy_buffer_size 5120k;
    proxy_buffers 8 5120k;
    proxy_buffering off;
    client_header_timeout 55s;
    client_body_timeout 66s;
    send_timeout 77s;
```

2. Create an HTTP server as upstream to produce a 1 MB response.

3. Create a downstream client that only does connect()->send()->sleep()
before recv().

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Mon Jan 22 16:34:15 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 22 Jan 2018 19:34:15 +0300
Subject: Fixed upstream->read timer when downstream->write not ready
In-Reply-To:
References:
Message-ID: <20180122163415.GE34136@mdounin.ru>

Hello!

On Fri, Jan 19, 2018 at 08:43:06PM +0800, xiaolong hong wrote:

> # HG changeset patch
> # User Xiaolong Hong
> # Date 1516354115 -28800
> #      Fri Jan 19 17:28:35 2018 +0800
> # Node ID f017b8c1a99433cc3321475968556aee50609145
> # Parent 93abb5a855d6534f0356882f45be49f8c6a95a8b
> Fixed upstream->read timer when downstream->write not ready.
>
> diff -r 93abb5a855d6 -r f017b8c1a994 src/http/ngx_http_upstream.c
> --- a/src/http/ngx_http_upstream.c	Thu Jan 11 21:43:49 2018 +0300
> +++ b/src/http/ngx_http_upstream.c	Fri Jan 19 17:28:35 2018 +0800
> @@ -3625,7 +3625,9 @@ ngx_http_upstream_process_non_buffered_r
>          return;
>      }
>
> -    if (upstream->read->active && !upstream->read->ready) {
> +    if (upstream->read->active && !upstream->read->ready
> +        && !(u->length == 0 || (upstream->read->eof && u->length == -1)))
> +    {
>          ngx_add_timer(upstream->read, u->conf->read_timeout);
>
>      } else if (upstream->read->timer_set) {
>
> --------
>
> When the downstream hung and nginx received the last buffer from upstream,
> both the downstream->write timer and the upstream->read timer would be added
> at the same time, because downstream->write->ready and upstream->read->ready
> would both happen to be 0.
>
> Actually, if u->conf->read_timeout is less than clcf->send_timeout, the
> upstream->read timer would be woken before the downstream->write timer,
> which mistakenly reports the "upstream timed out" error, deviating from
> the fact that the upstream worked normally but the downstream hung.
>
> This problem could be fixed to check upstream eof when trying to add
> upstream->read timer.
Thank you for the patch.

The problem looks valid - indeed, if in non-buffered proxy mode
writing to the client blocked, it is possible that we'll continue
reading from upstream after the response was fully received, and
will keep the upstream read timer set. This in turn can result in
proxy_read_timeout being triggered if it is smaller than
send_timeout.

I can't say I like the suggested change though. The resulting
conditions look overcomplicated. Also, we likely have a similar
problem in ngx_http_upstream_process_upgraded(), and it probably
needs fixing too. Also, the suggested condition is incorrect, as
it will keep the timer if upstream->read->eof is set, but u->length
isn't -1, or if upstream->read->error is set.

In the case of ngx_http_upstream_process_non_buffered_request(), a
better approach might be not to try to do any processing in the
proxy module after the response is received from upstream, but
rather finalize the upstream connection and pass responsibility to
ngx_http_writer() instead, as we do in the normal (aka buffered)
case. For ngx_http_upstream_process_upgraded(), additional
->eof / ->error checks are probably needed.

--
Maxim Dounin
http://mdounin.ru/

From gmm at csdoc.com Mon Jan 22 19:43:42 2018
From: gmm at csdoc.com (Gena Makhomed)
Date: Mon, 22 Jan 2018 21:43:42 +0200
Subject: [PATCH] Upstream: fastcgi_cache_convert_head directive.
Message-ID: <441b2bd5-054b-cfae-8ef7-ebbd54032c67@csdoc.com>

# HG changeset patch
# User Gena Makhomed
# Date 1516650013 -7200
#      Mon Jan 22 21:40:13 2018 +0200
# Node ID 4f635c5c8da929eb1e25bc8fbce7d7d5726468cf
# Parent cbf59d483c9cd94dc0fb05f1978601d02af69c20
Upstream: fastcgi_cache_convert_head directive.

The directive toggles conversion of HEAD to GET for cacheable fastcgi
requests.
When disabled, $request_method must be added to cache key for consistency.
By default, HEAD is converted to GET as before.

After previous patch http://hg.nginx.org/nginx/rev/4d5ac1a31d44
HEAD is not converted to GET as before for cacheable fastcgi requests.
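As an illustration only (this fragment is neither part of the patch nor
taken from the official documentation), a configuration with the
conversion disabled would then carry the request method in the key, along
these lines:

    fastcgi_cache_convert_head off;
    fastcgi_cache_key "$request_method$host$request_uri";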
This patch fixes fastcgi cache regression introduced by patch http://hg.nginx.org/nginx/rev/4d5ac1a31d44 diff -r cbf59d483c9c -r 4f635c5c8da9 src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c Tue Jan 16 13:52:03 2018 +0300 +++ b/src/http/modules/ngx_http_fastcgi_module.c Mon Jan 22 21:40:13 2018 +0200 @@ -470,6 +470,13 @@ offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_revalidate), NULL }, + { ngx_string("fastcgi_cache_convert_head"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_convert_head), + NULL }, + { ngx_string("fastcgi_cache_background_update"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -2781,6 +2788,7 @@ conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC; conf->upstream.cache_revalidate = NGX_CONF_UNSET; + conf->upstream.cache_convert_head = NGX_CONF_UNSET; conf->upstream.cache_background_update = NGX_CONF_UNSET; #endif @@ -3074,6 +3082,9 @@ ngx_conf_merge_value(conf->upstream.cache_revalidate, prev->upstream.cache_revalidate, 0); + ngx_conf_merge_value(conf->upstream.cache_convert_head, + prev->upstream.cache_convert_head, 1); + ngx_conf_merge_value(conf->upstream.cache_background_update, prev->upstream.cache_background_update, 0); From mdounin at mdounin.ru Tue Jan 23 12:32:49 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Jan 2018 15:32:49 +0300 Subject: [PATCH] Upstream: fastcgi_cache_convert_head directive. In-Reply-To: <441b2bd5-054b-cfae-8ef7-ebbd54032c67@csdoc.com> References: <441b2bd5-054b-cfae-8ef7-ebbd54032c67@csdoc.com> Message-ID: <20180123123249.GG34136@mdounin.ru> Hello! On Mon, Jan 22, 2018 at 09:43:42PM +0200, Gena Makhomed wrote: > # HG changeset patch > # User Gena Makhomed > # Date 1516650013 -7200 > # Mon Jan 22 21:40:13 2018 +0200 > # Node ID 4f635c5c8da929eb1e25bc8fbce7d7d5726468cf > # Parent cbf59d483c9cd94dc0fb05f1978601d02af69c20 > Upstream: fastcgi_cache_convert_head directive. > > The directive toggles conversion of HEAD to GET for cacheable fastcgi > requests. > When disabled, $request_method must be added to cache key for consistency. > By default, HEAD is converted to GET as before. > > After previous patch http://hg.nginx.org/nginx/rev/4d5ac1a31d44 > HEAD is not converted to GET as before for cacheable fastcgi requests. > > This patch fixes fastcgi cache regression introduced > by patch http://hg.nginx.org/nginx/rev/4d5ac1a31d44 Please elaborate. We aren't aware of any cache regressions introduced by 4d5ac1a31d44. Also, I don't see how the change in question can introduce one, or the suggested patch can fix it. -- Maxim Dounin http://mdounin.ru/ From gmm at csdoc.com Tue Jan 23 13:27:23 2018 From: gmm at csdoc.com (Gena Makhomed) Date: Tue, 23 Jan 2018 15:27:23 +0200 Subject: [PATCH] Upstream: fastcgi_cache_convert_head directive. In-Reply-To: <20180123123249.GG34136@mdounin.ru> References: <441b2bd5-054b-cfae-8ef7-ebbd54032c67@csdoc.com> <20180123123249.GG34136@mdounin.ru> Message-ID: <8505ce9a-8e77-b0b8-53f6-1ace10624481@csdoc.com> On 23.01.2018 14:32, Maxim Dounin wrote: >> # HG changeset patch >> # User Gena Makhomed >> # Date 1516650013 -7200 >> # Mon Jan 22 21:40:13 2018 +0200 >> # Node ID 4f635c5c8da929eb1e25bc8fbce7d7d5726468cf >> # Parent cbf59d483c9cd94dc0fb05f1978601d02af69c20 >> Upstream: fastcgi_cache_convert_head directive. 
>>
>> The directive toggles conversion of HEAD to GET for cacheable fastcgi
>> requests.
>> When disabled, $request_method must be added to cache key for consistency.
>> By default, HEAD is converted to GET as before.
>>
>> After previous patch http://hg.nginx.org/nginx/rev/4d5ac1a31d44
>> HEAD is not converted to GET as before for cacheable fastcgi requests.
>>
>> This patch fixes fastcgi cache regression introduced
>> by patch http://hg.nginx.org/nginx/rev/4d5ac1a31d44
>
> Please elaborate. We aren't aware of any cache regressions
> introduced by 4d5ac1a31d44. Also, I don't see how the change in
> question can introduce one, or the suggested patch can fix it.

By default, HEAD is converted to GET only for cacheable proxy requests.

For cacheable fastcgi requests no such conversion is done, and this means
that the example for fastcgi_cache_key in the documentation
https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_key
is invalid:

fastcgi_cache_key localhost:9000$request_uri;

If the first request to a cacheable resource was HEAD, the nginx cache
will be populated with an empty response, and all subsequent GET requests
will return an empty page to the client.

This is a bug. Probably the nginx documentation should be fixed:
$request_method must be included in the fastcgi_cache_key example,
and the documentation should explicitly state that $request_method
must always be added to the fastcgi cache key for consistency.

The second approach is to make the fastcgi cache work like the proxy
cache, which is what I tried to do with my patch. But as I realized
later, my patch is not complete and HEAD is not converted to GET for
cacheable fastcgi requests. Probably I also need to switch the request
method in the ngx_http_fastcgi_create_request() function in the fastcgi
module.

I think the fastcgi cache and proxy cache should work uniformly,
by default converting HEAD to GET for cacheable fastcgi requests too.

--
Best regards,
 Gena

From mdounin at mdounin.ru Tue Jan 23 13:52:10 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 23 Jan 2018 16:52:10 +0300
Subject: [PATCH] Upstream: fastcgi_cache_convert_head directive.
In-Reply-To: <8505ce9a-8e77-b0b8-53f6-1ace10624481@csdoc.com>
References: <441b2bd5-054b-cfae-8ef7-ebbd54032c67@csdoc.com>
 <20180123123249.GG34136@mdounin.ru>
 <8505ce9a-8e77-b0b8-53f6-1ace10624481@csdoc.com>
Message-ID: <20180123135209.GH34136@mdounin.ru>

Hello!

On Tue, Jan 23, 2018 at 03:27:23PM +0200, Gena Makhomed wrote:

> On 23.01.2018 14:32, Maxim Dounin wrote:
>
> >> # HG changeset patch
> >> # User Gena Makhomed
> >> # Date 1516650013 -7200
> >> #      Mon Jan 22 21:40:13 2018 +0200
> >> # Node ID 4f635c5c8da929eb1e25bc8fbce7d7d5726468cf
> >> # Parent cbf59d483c9cd94dc0fb05f1978601d02af69c20
> >> Upstream: fastcgi_cache_convert_head directive.
>
> For cacheable fastcgi requests no such conversion is done,
> and this means that the example for fastcgi_cache_key in the documentation
> https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_key
> is invalid:
>
> fastcgi_cache_key localhost:9000$request_uri;
>
> If the first request to a cacheable resource was HEAD,
> the nginx cache will be populated with an empty response,
> and all subsequent GET requests will return an empty page to the client.
>
> This is a bug. Probably the nginx documentation should be fixed:
> $request_method must be included in the fastcgi_cache_key example,
> and the documentation should explicitly state that $request_method
> must always be added to the fastcgi cache key for consistency.

The documentation provides an example. Whether this example is
correct for a particular script or not - depends on the script and
other configuration. Most [Fast]CGI scripts don't care about
request method and always return response with a body, hence the
example.

Note well that fastcgi_cache_key is not set by default. This is
because proper key depends on the configuration, and constructing
appropriate cache key is dedicated to the administrator.

> The second approach is to make the fastcgi cache work like the proxy
> cache, which is what I tried to do with my patch. But as I realized
> later, my patch is not complete and HEAD is not converted to GET for
> cacheable fastcgi requests. Probably I also need to switch the request
> method in the ngx_http_fastcgi_create_request() function in the fastcgi
> module.

Ok, so you've already realized that your patch does nothing, and
the patch description is simply wrong.

> I think the fastcgi cache and proxy cache should work uniformly,
> by default converting HEAD to GET for cacheable fastcgi requests too.

While making fastcgi cache and proxy cache identical is certainly
a good goal, it would be something not trivial to achieve without
various major changes.

--
Maxim Dounin
http://mdounin.ru/

From fx.juhel at free.fr Wed Jan 24 13:27:18 2018
From: fx.juhel at free.fr (fx.juhel at free.fr)
Date: Wed, 24 Jan 2018 14:27:18 +0100 (CET)
Subject: Fwd: nginx.conf + Location + regular expression
In-Reply-To: <272391657.829924611.1516800129056.JavaMail.root@zimbra58-e10.priv.proxad.net>
Message-ID: <1962154784.829941159.1516800438601.JavaMail.root@zimbra58-e10.priv.proxad.net>

Hello nginx DEV TEAM! How are you?

I would like to know how I can detect this type of directory name with a
regexp (GUID 8-4-4-4-12): "be93d4d0-b25b-de94-fcbb-463e6c0fe9cc"
How can I use $reg_exp in a location? I do this:

set $reg_exp "[0-9a-fA-F]{8}?-[0-9a-fA-F]{4}?-[0-9a-fA-F]{4}?-[0-9a-fA-F]{4}?-[0-9a-fA-F]{12}";

location ~ ^/locale/Project/Documents/(reg_exp)/(?.*)$ {

}

Thank you very much for your reply

François - Xavier JUHEL

From mdounin at mdounin.ru Wed Jan 24 13:56:12 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 24 Jan 2018 16:56:12 +0300
Subject: Fwd: nginx.conf + Location + regular expression
In-Reply-To: <1962154784.829941159.1516800438601.JavaMail.root@zimbra58-e10.priv.proxad.net>
References: <272391657.829924611.1516800129056.JavaMail.root@zimbra58-e10.priv.proxad.net>
 <1962154784.829941159.1516800438601.JavaMail.root@zimbra58-e10.priv.proxad.net>
Message-ID: <20180124135611.GI34136@mdounin.ru>

Hello!

On Wed, Jan 24, 2018 at 02:27:18PM +0100, fx.juhel at free.fr wrote:

> Hello nginx DEV TEAM! How are you?
>
> I would like to know how I can detect this type of
> directory name with a regexp (GUID 8-4-4-4-12):
> "be93d4d0-b25b-de94-fcbb-463e6c0fe9cc"

This is a mailing list for nginx developers. For user-level
questions, please use the nginx@ mailing list instead. Thank
you.

--
Maxim Dounin
http://mdounin.ru/

From serg.brester at sebres.de Wed Jan 24 14:00:37 2018
From: serg.brester at sebres.de (Sergey Brester)
Date: Wed, 24 Jan 2018 15:00:37 +0100
Subject: Fwd: nginx.conf + Location + regular expression
In-Reply-To: <272391657.829924611.1516800129056.JavaMail.root@zimbra58-e10.priv.proxad.net>
References: <272391657.829924611.1516800129056.JavaMail.root@zimbra58-e10.priv.proxad.net>
Message-ID: <94bd6857176ff514c1551d6550f6b8d2@sebres.de>

Although you've used the wrong list for your question (this is the
development mailing list, not a forum for howtos), because I have a free
minute at the moment, here you go:

You CANNOT use variables in an nginx location [2] URI pattern (no matter
whether it is a regexp or a static URI), so the following code:

```
location ~ ^.../$reg_exp/...$ {...
```

does not substitute the variable (the char $ is just a dollar character
here).

The same is valid for the directive if [3], so `if ($uri ~*
".../$reg_exp/...") { ...` does not interpolate `$reg_exp` as a variable.

So what you are trying to do is impossible in nginx. Just write it
directly in the location regex (without a variable), as sketched below
after the links, or use some macros or a custom module that could do it.

In addition please note this. [4]

Regards,
sebres.

On 24.01.2018 14:22, fx.juhel at free.fr wrote:

> Hello nginx DEV TEAM! How are you?
>
> I would like to know how I can detect this type of directory name with a regexp (GUID 8-4-4-4-12): "be93d4d0-b25b-de94-fcbb-463e6c0fe9cc"
>
> How can I use $reg_exp in a location?
> I do this:
>
> set $reg_exp "[0-9a-fA-F]{8}?-[0-9a-fA-F]{4}?-[0-9a-fA-F]{4}?-[0-9a-fA-F]{4}?-[0-9a-fA-F]{12}";
>
> location ~ ^/locale/Project/Documents/(reg_exp)/(?.*)$ {
>
> }
>
> Thank you very much for your reply
>
> François - Xavier JUHEL
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]

Links:
------
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
[2] http://nginx.org/en/docs/http/ngx_http_core_module.html#location
[3] http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if
[4] http://nginx.org/en/docs/faq/variables_in_config.html
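For illustration, the inline form suggested above could look like the
following untested sketch (the regex must be quoted because it contains
braces, and the capture name "document" is made up for this example):

```
location ~ "^/locale/Project/Documents/([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})/(?<document>.*)$" {
    # $1 holds the GUID, $document the rest of the path
}
```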
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pluknet at nginx.com Thu Jan 25 13:13:05 2018
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 25 Jan 2018 16:13:05 +0300
Subject: Fix "header too long" error
In-Reply-To:
References:
Message-ID: <4162A3A6-F434-4021-9B30-79A9BE47DB40@nginx.com>

> On 16 Jan 2018, at 14:57, Vadim Fedorenko wrote:
>
> # HG changeset patch
> # User Vadim Fedorenko
> # Date 1516103689 -10800
> #      Tue Jan 16 14:54:49 2018 +0300
> # Node ID deaa364977488f3390d48306c34dc80961e54e14
> # Parent fbf6a421212b291cbacfcfc503173c0168449165
> Fix "header too long" error

Style: missing "Cache: " prefix and a dot after the sentence.
diff --git a/src/http/ngx_http_cache.h b/src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h +++ b/src/http/ngx_http_cache.h @@ -80,6 +80,7 @@ struct ngx_http_cache_s { ngx_str_t vary; u_char variant[NGX_HTTP_CACHE_KEY_LEN]; + size_t buffer_size; size_t header_start; size_t body_start; off_t length; diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -294,6 +294,8 @@ ngx_http_file_cache_open(ngx_http_reques cln->data = c; } + c->buffer_size = c->body_start; + rc = ngx_http_file_cache_exists(cache, c); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, @@ -1230,7 +1232,7 @@ ngx_http_file_cache_reopen(ngx_http_requ c->secondary = 1; c->file.name.len = 0; - c->body_start = c->buf->end - c->buf->start; + c->body_start = c->buffer_size; ngx_memcpy(c->key, c->variant, NGX_HTTP_CACHE_KEY_LEN); -- Sergey Kandaurov From ru at nginx.com Tue Jan 30 12:26:27 2018 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 30 Jan 2018 12:26:27 +0000 Subject: [nginx] HTTP/2: handle duplicate INITIAL_WINDOW_SIZE settings. Message-ID: details: http://hg.nginx.org/nginx/rev/e11a0679d349 branches: changeset: 7190:e11a0679d349 user: Ruslan Ermilov date: Mon Jan 29 15:54:36 2018 +0300 description: HTTP/2: handle duplicate INITIAL_WINDOW_SIZE settings. diffstat: src/http/v2/ngx_http_v2.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (21 lines): diff -r cbf59d483c9c -r e11a0679d349 src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Tue Jan 16 13:52:03 2018 +0300 +++ b/src/http/v2/ngx_http_v2.c Mon Jan 29 15:54:36 2018 +0300 @@ -2000,8 +2000,6 @@ ngx_http_v2_state_settings_params(ngx_ht } window_delta = value - h2c->init_window; - - h2c->init_window = value; break; case NGX_HTTP_V2_MAX_FRAME_SIZE_SETTING: @@ -2037,6 +2035,8 @@ ngx_http_v2_state_settings_params(ngx_ht ngx_http_v2_queue_ordered_frame(h2c, frame); if (window_delta) { + h2c->init_window += window_delta; + if (ngx_http_v2_adjust_windows(h2c, window_delta) != NGX_OK) { return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_INTERNAL_ERROR); From ru at nginx.com Tue Jan 30 12:26:28 2018 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 30 Jan 2018 12:26:28 +0000 Subject: [nginx] HTTP/2: more style, comments, and debugging. Message-ID: details: http://hg.nginx.org/nginx/rev/61d276dcd493 branches: changeset: 7191:61d276dcd493 user: Ruslan Ermilov date: Mon Jan 29 16:06:33 2018 +0300 description: HTTP/2: more style, comments, and debugging. 
diffstat: src/http/v2/ngx_http_v2.c | 33 ++- src/http/v2/ngx_http_v2.h | 3 +- src/http/v2/ngx_http_v2_filter_module.c | 290 ++++++++++++++++--------------- 3 files changed, 168 insertions(+), 158 deletions(-) diffs (452 lines): diff -r e11a0679d349 -r 61d276dcd493 src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Mon Jan 29 15:54:36 2018 +0300 +++ b/src/http/v2/ngx_http_v2.c Mon Jan 29 16:06:33 2018 +0300 @@ -185,16 +185,16 @@ static void ngx_http_v2_pool_cleanup(voi static ngx_http_v2_handler_pt ngx_http_v2_frame_states[] = { - ngx_http_v2_state_data, - ngx_http_v2_state_headers, - ngx_http_v2_state_priority, - ngx_http_v2_state_rst_stream, - ngx_http_v2_state_settings, - ngx_http_v2_state_push_promise, - ngx_http_v2_state_ping, - ngx_http_v2_state_goaway, - ngx_http_v2_state_window_update, - ngx_http_v2_state_continuation + ngx_http_v2_state_data, /* NGX_HTTP_V2_DATA_FRAME */ + ngx_http_v2_state_headers, /* NGX_HTTP_V2_HEADERS_FRAME */ + ngx_http_v2_state_priority, /* NGX_HTTP_V2_PRIORITY_FRAME */ + ngx_http_v2_state_rst_stream, /* NGX_HTTP_V2_RST_STREAM_FRAME */ + ngx_http_v2_state_settings, /* NGX_HTTP_V2_SETTINGS_FRAME */ + ngx_http_v2_state_push_promise, /* NGX_HTTP_V2_PUSH_PROMISE_FRAME */ + ngx_http_v2_state_ping, /* NGX_HTTP_V2_PING_FRAME */ + ngx_http_v2_state_goaway, /* NGX_HTTP_V2_GOAWAY_FRAME */ + ngx_http_v2_state_window_update, /* NGX_HTTP_V2_WINDOW_UPDATE_FRAME */ + ngx_http_v2_state_continuation /* NGX_HTTP_V2_CONTINUATION_FRAME */ }; #define NGX_HTTP_V2_FRAME_STATES \ @@ -1046,7 +1046,7 @@ ngx_http_v2_state_headers(ngx_http_v2_co depend = 0; excl = 0; - weight = 16; + weight = NGX_HTTP_V2_DEFAULT_WEIGHT; if (priority) { dependency = ngx_http_v2_parse_uint32(pos); @@ -1059,7 +1059,8 @@ ngx_http_v2_state_headers(ngx_http_v2_co } ngx_log_debug4(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0, - "http2 HEADERS frame sid:%ui on %ui excl:%ui weight:%ui", + "http2 HEADERS frame sid:%ui " + "depends on %ui excl:%ui weight:%ui", h2c->state.sid, depend, excl, weight); if (h2c->state.sid % 2 == 0 || h2c->state.sid <= h2c->last_sid) { @@ -1788,7 +1789,8 @@ ngx_http_v2_state_priority(ngx_http_v2_c pos += NGX_HTTP_V2_PRIORITY_SIZE; ngx_log_debug4(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0, - "http2 PRIORITY frame sid:%ui on %ui excl:%ui weight:%ui", + "http2 PRIORITY frame sid:%ui " + "depends on %ui excl:%ui weight:%ui", h2c->state.sid, depend, excl, weight); if (h2c->state.sid == 0) { @@ -1986,6 +1988,9 @@ ngx_http_v2_state_settings_params(ngx_ht id = ngx_http_v2_parse_uint16(pos); value = ngx_http_v2_parse_uint32(&pos[2]); + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0, + "http2 setting %ui:%ui", id, value); + switch (id) { case NGX_HTTP_V2_INIT_WINDOW_SIZE_SETTING: @@ -3343,7 +3348,7 @@ ngx_http_v2_construct_request_line(ngx_h } else if (r->schema_start == NULL) { ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, - "client sent no :schema header"); + "client sent no :scheme header"); } else { ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, diff -r e11a0679d349 -r 61d276dcd493 src/http/v2/ngx_http_v2.h --- a/src/http/v2/ngx_http_v2.h Mon Jan 29 15:54:36 2018 +0300 +++ b/src/http/v2/ngx_http_v2.h Mon Jan 29 16:06:33 2018 +0300 @@ -49,6 +49,8 @@ #define NGX_HTTP_V2_MAX_WINDOW ((1U << 31) - 1) #define NGX_HTTP_V2_DEFAULT_WINDOW 65535 +#define NGX_HTTP_V2_DEFAULT_WEIGHT 16 + typedef struct ngx_http_v2_connection_s ngx_http_v2_connection_t; typedef struct ngx_http_v2_node_s ngx_http_v2_node_t; @@ -272,7 +274,6 @@ ngx_http_v2_queue_ordered_frame(ngx_http void 
ngx_http_v2_init(ngx_event_t *rev); -void ngx_http_v2_request_headers_init(void); ngx_int_t ngx_http_v2_read_request_body(ngx_http_request_t *r); ngx_int_t ngx_http_v2_read_unbuffered_request_body(ngx_http_request_t *r); diff -r e11a0679d349 -r 61d276dcd493 src/http/v2/ngx_http_v2_filter_module.c --- a/src/http/v2/ngx_http_v2_filter_module.c Mon Jan 29 15:54:36 2018 +0300 +++ b/src/http/v2/ngx_http_v2_filter_module.c Mon Jan 29 16:06:33 2018 +0300 @@ -138,6 +138,7 @@ ngx_http_v2_header_filter(ngx_http_reque ngx_table_elt_t *header; ngx_connection_t *fc; ngx_http_cleanup_t *cln; + ngx_http_v2_stream_t *stream; ngx_http_v2_out_frame_t *frame; ngx_http_v2_connection_t *h2c; ngx_http_core_loc_conf_t *clcf; @@ -157,7 +158,9 @@ ngx_http_v2_header_filter(ngx_http_reque ngx_http_v2_literal_size(NGINX_VER_BUILD); static u_char nginx_ver_build[ngx_http_v2_literal_size(NGINX_VER_BUILD)]; - if (!r->stream) { + stream = r->stream; + + if (!stream) { return ngx_http_next_header_filter(r); } @@ -236,7 +239,7 @@ ngx_http_v2_header_filter(ngx_http_reque } } - h2c = r->stream->connection; + h2c = stream->connection; len = h2c->table_update ? 1 : 0; @@ -633,9 +636,9 @@ ngx_http_v2_header_filter(ngx_http_reque return NGX_ERROR; } - ngx_http_v2_queue_blocked_frame(r->stream->connection, frame); + ngx_http_v2_queue_blocked_frame(h2c, frame); - r->stream->queued = 1; + stream->queued = 1; cln = ngx_http_cleanup_add(r, 0); if (cln == NULL) { @@ -643,124 +646,12 @@ ngx_http_v2_header_filter(ngx_http_reque } cln->handler = ngx_http_v2_filter_cleanup; - cln->data = r->stream; + cln->data = stream; fc->send_chain = ngx_http_v2_send_chain; fc->need_last_buf = 1; - return ngx_http_v2_filter_send(fc, r->stream); -} - - -static ngx_http_v2_out_frame_t * -ngx_http_v2_create_trailers_frame(ngx_http_request_t *r) -{ - u_char *pos, *start, *tmp; - size_t len, tmp_len; - ngx_uint_t i; - ngx_list_part_t *part; - ngx_table_elt_t *header; - - len = 0; - tmp_len = 0; - - part = &r->headers_out.trailers.part; - header = part->elts; - - for (i = 0; /* void */; i++) { - - if (i >= part->nelts) { - if (part->next == NULL) { - break; - } - - part = part->next; - header = part->elts; - i = 0; - } - - if (header[i].hash == 0) { - continue; - } - - if (header[i].key.len > NGX_HTTP_V2_MAX_FIELD) { - ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, - "too long response trailer name: \"%V\"", - &header[i].key); - return NULL; - } - - if (header[i].value.len > NGX_HTTP_V2_MAX_FIELD) { - ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, - "too long response trailer value: \"%V: %V\"", - &header[i].key, &header[i].value); - return NULL; - } - - len += 1 + NGX_HTTP_V2_INT_OCTETS + header[i].key.len - + NGX_HTTP_V2_INT_OCTETS + header[i].value.len; - - if (header[i].key.len > tmp_len) { - tmp_len = header[i].key.len; - } - - if (header[i].value.len > tmp_len) { - tmp_len = header[i].value.len; - } - } - - if (len == 0) { - return NGX_HTTP_V2_NO_TRAILERS; - } - - tmp = ngx_palloc(r->pool, tmp_len); - pos = ngx_pnalloc(r->pool, len); - - if (pos == NULL || tmp == NULL) { - return NULL; - } - - start = pos; - - part = &r->headers_out.trailers.part; - header = part->elts; - - for (i = 0; /* void */; i++) { - - if (i >= part->nelts) { - if (part->next == NULL) { - break; - } - - part = part->next; - header = part->elts; - i = 0; - } - - if (header[i].hash == 0) { - continue; - } - -#if (NGX_DEBUG) - if (r->connection->log->log_level & NGX_LOG_DEBUG_HTTP) { - ngx_strlow(tmp, header[i].key.data, header[i].key.len); - - 
ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, - "http2 output trailer: \"%*s: %V\"", - header[i].key.len, tmp, &header[i].value); - } -#endif - - *pos++ = 0; - - pos = ngx_http_v2_write_name(pos, header[i].key.data, - header[i].key.len, tmp); - - pos = ngx_http_v2_write_value(pos, header[i].value.data, - header[i].value.len, tmp); - } - - return ngx_http_v2_create_headers_frame(r, start, pos, 1); + return ngx_http_v2_filter_send(fc, stream); } @@ -917,6 +808,120 @@ ngx_http_v2_create_headers_frame(ngx_htt } +static ngx_http_v2_out_frame_t * +ngx_http_v2_create_trailers_frame(ngx_http_request_t *r) +{ + u_char *pos, *start, *tmp; + size_t len, tmp_len; + ngx_uint_t i; + ngx_list_part_t *part; + ngx_table_elt_t *header; + ngx_connection_t *fc; + + fc = r->connection; + len = 0; + tmp_len = 0; + + part = &r->headers_out.trailers.part; + header = part->elts; + + for (i = 0; /* void */; i++) { + + if (i >= part->nelts) { + if (part->next == NULL) { + break; + } + + part = part->next; + header = part->elts; + i = 0; + } + + if (header[i].hash == 0) { + continue; + } + + if (header[i].key.len > NGX_HTTP_V2_MAX_FIELD) { + ngx_log_error(NGX_LOG_CRIT, fc->log, 0, + "too long response trailer name: \"%V\"", + &header[i].key); + return NULL; + } + + if (header[i].value.len > NGX_HTTP_V2_MAX_FIELD) { + ngx_log_error(NGX_LOG_CRIT, fc->log, 0, + "too long response trailer value: \"%V: %V\"", + &header[i].key, &header[i].value); + return NULL; + } + + len += 1 + NGX_HTTP_V2_INT_OCTETS + header[i].key.len + + NGX_HTTP_V2_INT_OCTETS + header[i].value.len; + + if (header[i].key.len > tmp_len) { + tmp_len = header[i].key.len; + } + + if (header[i].value.len > tmp_len) { + tmp_len = header[i].value.len; + } + } + + if (len == 0) { + return NGX_HTTP_V2_NO_TRAILERS; + } + + tmp = ngx_palloc(r->pool, tmp_len); + pos = ngx_pnalloc(r->pool, len); + + if (pos == NULL || tmp == NULL) { + return NULL; + } + + start = pos; + + part = &r->headers_out.trailers.part; + header = part->elts; + + for (i = 0; /* void */; i++) { + + if (i >= part->nelts) { + if (part->next == NULL) { + break; + } + + part = part->next; + header = part->elts; + i = 0; + } + + if (header[i].hash == 0) { + continue; + } + +#if (NGX_DEBUG) + if (fc->log->log_level & NGX_LOG_DEBUG_HTTP) { + ngx_strlow(tmp, header[i].key.data, header[i].key.len); + + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, fc->log, 0, + "http2 output trailer: \"%*s: %V\"", + header[i].key.len, tmp, &header[i].value); + } +#endif + + *pos++ = 0; + + pos = ngx_http_v2_write_name(pos, header[i].key.data, + header[i].key.len, tmp); + + pos = ngx_http_v2_write_value(pos, header[i].value.data, + header[i].value.len, tmp); + } + + return ngx_http_v2_create_headers_frame(r, start, pos, 1); +} + + static ngx_chain_t * ngx_http_v2_send_chain(ngx_connection_t *fc, ngx_chain_t *in, off_t limit) { @@ -1240,31 +1245,6 @@ ngx_http_v2_filter_get_data_frame(ngx_ht static ngx_inline ngx_int_t -ngx_http_v2_filter_send(ngx_connection_t *fc, ngx_http_v2_stream_t *stream) -{ - stream->blocked = 1; - - if (ngx_http_v2_send_output_queue(stream->connection) == NGX_ERROR) { - fc->error = 1; - return NGX_ERROR; - } - - stream->blocked = 0; - - if (stream->queued) { - fc->buffered |= NGX_HTTP_V2_BUFFERED; - fc->write->active = 1; - fc->write->ready = 0; - return NGX_AGAIN; - } - - fc->buffered &= ~NGX_HTTP_V2_BUFFERED; - - return NGX_OK; -} - - -static ngx_inline ngx_int_t ngx_http_v2_flow_control(ngx_http_v2_connection_t *h2c, ngx_http_v2_stream_t *stream) { @@ -1317,6 +1297,30 @@ 
ngx_http_v2_waiting_queue(ngx_http_v2_co
 }
 
 
+static ngx_inline ngx_int_t
+ngx_http_v2_filter_send(ngx_connection_t *fc, ngx_http_v2_stream_t *stream)
+{
+    stream->blocked = 1;
+
+    if (ngx_http_v2_send_output_queue(stream->connection) == NGX_ERROR) {
+        fc->error = 1;
+        return NGX_ERROR;
+    }
+
+    stream->blocked = 0;
+
+    if (stream->queued) {
+        fc->buffered |= NGX_HTTP_V2_BUFFERED;
+        fc->write->active = 1;
+        fc->write->ready = 0;
+        return NGX_AGAIN;
+    }
+
+    fc->buffered &= ~NGX_HTTP_V2_BUFFERED;
+
+    return NGX_OK;
+}
+
 
 static ngx_int_t
 ngx_http_v2_headers_frame_handler(ngx_http_v2_connection_t *h2c,

From ru at nginx.com  Tue Jan 30 12:26:30 2018
From: ru at nginx.com (Ruslan Ermilov)
Date: Tue, 30 Jan 2018 12:26:30 +0000
Subject: [nginx] HTTP/2: finalize request as bad if parsing of pseudo-headers fails.
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/d5a535774861
branches:  
changeset: 7192:d5a535774861
user:      Ruslan Ermilov
date:      Tue Jan 30 14:44:31 2018 +0300
description:
HTTP/2: finalize request as bad if parsing of pseudo-headers fails.

This is in line with the handling of missing required pseudo-headers,
and avoids spurious zero statuses in access.log.

diffstat:

 src/http/v2/ngx_http_v2.c |  9 +--------
 1 files changed, 1 insertions(+), 8 deletions(-)

diffs (19 lines):

diff -r 61d276dcd493 -r d5a535774861 src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c	Mon Jan 29 16:06:33 2018 +0300
+++ b/src/http/v2/ngx_http_v2.c	Tue Jan 30 14:44:31 2018 +0300
@@ -1583,14 +1583,7 @@ ngx_http_v2_state_process_header(ngx_htt
     }
 
     if (rc == NGX_DECLINED) {
-        if (ngx_http_v2_terminate_stream(h2c, h2c->state.stream,
-                                         NGX_HTTP_V2_PROTOCOL_ERROR)
-            == NGX_ERROR)
-        {
-            return ngx_http_v2_connection_error(h2c,
-                                                NGX_HTTP_V2_INTERNAL_ERROR);
-        }
-
+        ngx_http_finalize_request(r, NGX_HTTP_BAD_REQUEST);
         goto error;
     }
 

From pluknet at nginx.com  Tue Jan 30 16:10:52 2018
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Tue, 30 Jan 2018 16:10:52 +0000
Subject: [nginx] SSL: using default server context in session remove (closes #1464).
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/9d14931cec8c
branches:  
changeset: 7193:9d14931cec8c
user:      Sergey Kandaurov
date:      Tue Jan 30 17:46:31 2018 +0300
description:
SSL: using default server context in session remove (closes #1464).

This fixes a segfault in configurations with multiple virtual servers
sharing the same port, where a non-default virtual server block is
missing a certificate.
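The invariant behind this fix can be illustrated outside of nginx with
plain OpenSSL calls (a minimal standalone sketch; drop_session() and
handshake_ctx are names made up here, with handshake_ctx standing in
for what nginx keeps as c->ssl->session_ctx): a session is cached under
the SSL_CTX the connection was originally accepted with, i.e. the
default server's context, so it has to be removed through that same
context; the per-virtual-server sscf->ssl.ctx is not initialized at
all when the server block has no certificate.

    #include <openssl/ssl.h>

    /* Standalone sketch, not nginx code: drop a session from the
     * cache of the context the handshake actually ran under. */
    static void
    drop_session(SSL *ssl, SSL_CTX *handshake_ctx)
    {
        SSL_SESSION  *sess;

        sess = SSL_get0_session(ssl);    /* borrowed reference */

        if (sess != NULL) {
            /* removing through any other SSL_CTX misses the cache
             * entry; dereferencing a NULL per-server ctx crashes */
            (void) SSL_CTX_remove_session(handshake_ctx, sess);
        }
    }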
diffstat:

 src/http/ngx_http_request.c        |  4 ++--
 src/mail/ngx_mail_handler.c        |  4 ++--
 src/stream/ngx_stream_ssl_module.c |  4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diffs (63 lines):

diff -r d5a535774861 -r 9d14931cec8c src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c	Tue Jan 30 14:44:31 2018 +0300
+++ b/src/http/ngx_http_request.c	Tue Jan 30 17:46:31 2018 +0300
@@ -1902,7 +1902,7 @@ ngx_http_process_request(ngx_http_reques
                           "client SSL certificate verify error: (%l:%s)",
                           rc, X509_verify_cert_error_string(rc));
 
-            ngx_ssl_remove_cached_session(sscf->ssl.ctx,
+            ngx_ssl_remove_cached_session(c->ssl->session_ctx,
                                (SSL_get0_session(c->ssl->connection)));
 
             ngx_http_finalize_request(r, NGX_HTTPS_CERT_ERROR);
@@ -1916,7 +1916,7 @@ ngx_http_process_request(ngx_http_reques
                 ngx_log_error(NGX_LOG_INFO, c->log, 0,
                               "client sent no required SSL certificate");
 
-                ngx_ssl_remove_cached_session(sscf->ssl.ctx,
+                ngx_ssl_remove_cached_session(c->ssl->session_ctx,
                                    (SSL_get0_session(c->ssl->connection)));
 
                 ngx_http_finalize_request(r, NGX_HTTPS_NO_CERT);
diff -r d5a535774861 -r 9d14931cec8c src/mail/ngx_mail_handler.c
--- a/src/mail/ngx_mail_handler.c	Tue Jan 30 14:44:31 2018 +0300
+++ b/src/mail/ngx_mail_handler.c	Tue Jan 30 17:46:31 2018 +0300
@@ -302,7 +302,7 @@ ngx_mail_verify_cert(ngx_mail_session_t
                       "client SSL certificate verify error: (%l:%s)",
                       rc, X509_verify_cert_error_string(rc));
 
-        ngx_ssl_remove_cached_session(sslcf->ssl.ctx,
+        ngx_ssl_remove_cached_session(c->ssl->session_ctx,
                            (SSL_get0_session(c->ssl->connection)));
 
         cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module);
@@ -323,7 +323,7 @@ ngx_mail_verify_cert(ngx_mail_session_t
             ngx_log_error(NGX_LOG_INFO, c->log, 0,
                           "client sent no required SSL certificate");
 
-            ngx_ssl_remove_cached_session(sslcf->ssl.ctx,
+            ngx_ssl_remove_cached_session(c->ssl->session_ctx,
                                (SSL_get0_session(c->ssl->connection)));
 
             cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module);
diff -r d5a535774861 -r 9d14931cec8c src/stream/ngx_stream_ssl_module.c
--- a/src/stream/ngx_stream_ssl_module.c	Tue Jan 30 14:44:31 2018 +0300
+++ b/src/stream/ngx_stream_ssl_module.c	Tue Jan 30 17:46:31 2018 +0300
@@ -328,7 +328,7 @@ ngx_stream_ssl_handler(ngx_stream_sessio
                           "client SSL certificate verify error: (%l:%s)",
                           rc, X509_verify_cert_error_string(rc));
 
-            ngx_ssl_remove_cached_session(sslcf->ssl.ctx,
+            ngx_ssl_remove_cached_session(c->ssl->session_ctx,
                                (SSL_get0_session(c->ssl->connection)));
             return NGX_ERROR;
         }
@@ -340,7 +340,7 @@ ngx_stream_ssl_handler(ngx_stream_sessio
                 ngx_log_error(NGX_LOG_INFO, c->log, 0,
                               "client sent no required SSL certificate");
 
-                ngx_ssl_remove_cached_session(sslcf->ssl.ctx,
+                ngx_ssl_remove_cached_session(c->ssl->session_ctx,
                                    (SSL_get0_session(c->ssl->connection)));
                 return NGX_ERROR;
             }

From ru at nginx.com  Tue Jan 30 19:25:28 2018
From: ru at nginx.com (Ruslan Ermilov)
Date: Tue, 30 Jan 2018 19:25:28 +0000
Subject: [nginx] Upstream: removed X-Powered-By from the list of special headers.
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/0b72d545f098
branches:  
changeset: 7194:0b72d545f098
user:      Ruslan Ermilov
date:      Tue Jan 30 22:23:58 2018 +0300
description:
Upstream: removed X-Powered-By from the list of special headers.

After 1e720b0be7ec, it's neither specially processed nor copied
when redirecting with X-Accel-Redirect.
diffstat:

 src/http/ngx_http_upstream.c |  4 ----
 1 files changed, 0 insertions(+), 4 deletions(-)

diffs (14 lines):

diff -r 9d14931cec8c -r 0b72d545f098 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c	Tue Jan 30 17:46:31 2018 +0300
+++ b/src/http/ngx_http_upstream.c	Tue Jan 30 22:23:58 2018 +0300
@@ -284,10 +284,6 @@ static ngx_http_upstream_header_t  ngx_h
                  ngx_http_upstream_process_vary, 0,
                  ngx_http_upstream_copy_header_line, 0, 0 },
 
-    { ngx_string("X-Powered-By"),
-                 ngx_http_upstream_ignore_header_line, 0,
-                 ngx_http_upstream_copy_header_line, 0, 0 },
-
     { ngx_string("X-Accel-Expires"),
                  ngx_http_upstream_process_accel_expires, 0,
                  ngx_http_upstream_copy_header_line, 0, 0 },
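For reference, the shape of an entry in this table — a sketch of the
ngx_http_upstream_header_t declaration from src/http/ngx_http_upstream.h;
the comments are an interpretation added here, not part of the source:

    typedef struct {
        ngx_str_t                            name;          /* header name */
        ngx_http_upstream_header_handler_pt  handler;       /* runs while the
                                                               upstream response
                                                               is parsed */
        ngx_uint_t                           offset;
        ngx_http_upstream_header_handler_pt  copy_handler;  /* copies the header
                                                               into r->headers_out */
        ngx_uint_t                           conf;
        ngx_uint_t                           redirect;      /* handling when an
                                                               X-Accel-Redirect
                                                               is followed */
    } ngx_http_upstream_header_t;

The removed entry paired ngx_http_upstream_ignore_header_line, which
simply returns NGX_OK, with the generic ngx_http_upstream_copy_header_line
and a zero redirect flag; unlisted headers are copied to the client the
same way, so dropping the entry should not change observable behaviour.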