From maxim at nginx.com Mon Nov 2 10:06:31 2015
From: maxim at nginx.com (Maxim Konovalov)
Date: Mon, 2 Nov 2015 13:06:31 +0300
Subject: Processing responses of unbounded sizes from upstream servers
In-Reply-To: 
References: <20151027162516.GL48365@mdounin.ru>
Message-ID: <563735A7.5070404@nginx.com>

Hi Maxime,

On 10/27/15 8:04 PM, Maxime Henrion wrote:
> Hello Maxim,
>
> Thanks for your input, it *is* appreciated.
>
[...]

We understand your disappointment -- that's share that nginx still
doesn't have a proper and up-to-date documentation suite for
developers. This is something that we are not proud of.

We are still trying to figure out how to fix that.

-- 
Maxim Konovalov

From maxim at nginx.com Mon Nov 2 10:48:27 2015
From: maxim at nginx.com (Maxim Konovalov)
Date: Mon, 2 Nov 2015 13:48:27 +0300
Subject: Processing responses of unbounded sizes from upstream servers
In-Reply-To: <563735A7.5070404@nginx.com>
References: <20151027162516.GL48365@mdounin.ru> <563735A7.5070404@nginx.com>
Message-ID: <56373F7B.8010302@nginx.com>

On 11/2/15 1:06 PM, Maxim Konovalov wrote:
> Hi Maxime,
>
> On 10/27/15 8:04 PM, Maxime Henrion wrote:
>> Hello Maxim,
>>
>> Thanks for your input, it *is* appreciated.
>>
> [...]
>
> We understand your disappointment -- that's share that nginx still

s/share/shame/

> doesn't have a proper and up-to-date documentation suite for
> developers. This is something that we are not proud of.
>
> We are still trying to figure out how to fix that.
>

-- 
Maxim Konovalov

From hungnv at opensource.com.vn Mon Nov 2 10:55:32 2015
From: hungnv at opensource.com.vn (Hung Nguyen)
Date: Mon, 2 Nov 2015 17:55:32 +0700
Subject: Processing responses of unbounded sizes from upstream servers
In-Reply-To: <563735A7.5070404@nginx.com>
References: <20151027162516.GL48365@mdounin.ru> <563735A7.5070404@nginx.com>
Message-ID: 

Hello Maxim,

+1. At least there is someone who understands the community's needs. Most
open source nginx modules were built by reading the nginx source code and
other open source modules, not from Nginx documentation -- which should be
available for open source developers to use and understand, since Nginx is
an open source project. People like Evan Miller did a really nice job
writing their own documents for nginx module development, but we still
need a little bit more :).

--
Hùng

> On Nov 2, 2015, at 5:06 PM, Maxim Konovalov wrote:
>
> Hi Maxime,
>
> On 10/27/15 8:04 PM, Maxime Henrion wrote:
>> Hello Maxim,
>>
>> Thanks for your input, it *is* appreciated.
>>
> [...]
>
> We understand your disappointment -- that's share that nginx still
> doesn't have a proper and up-to-date documentation suite for
> developers. This is something that we are not proud of.
>
> We are still trying to figure out how to fix that.
>
> --
> Maxim Konovalov
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

From ahutchings at nginx.com Wed Nov 4 20:05:21 2015
From: ahutchings at nginx.com (Andrew Hutchings)
Date: Wed, 4 Nov 2015 20:05:21 +0000
Subject: Opera issue
In-Reply-To: <787c0d6e-a9c2-4463-ad13-bec1660d3973@typeapp.com>
References: <787c0d6e-a9c2-4463-ad13-bec1660d3973@typeapp.com>
Message-ID: <563A6501.2020309@nginx.com>

Hi Mateusz,

First of all, please see the caveats section of this post; you may need
to alter your SSL configuration: https://www.nginx.com/blog/nginx-1-9-5/

If this doesn't fix it, then it may be that Opera doesn't support the NPN
negotiation that is in OpenSSL 1.0.1 for HTTP/2.
If you recompile with OpenSSL 1.0.2 it will use ALPN, which is the actual
standard for HTTP/2.

For now most browsers support both negotiation types, but this may change
in the future.
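You can check what actually gets negotiated from the command line -- a
minimal sketch, assuming an OpenSSL 1.0.2 client and substituting your own
host name for example.com:

    # ALPN (OpenSSL 1.0.2+): look for "ALPN protocol: h2" in the output
    openssl s_client -alpn h2 -connect example.com:443 < /dev/null

    # NPN (OpenSSL 1.0.1): look for "Next protocol: (1) h2"
    openssl s_client -nextprotoneg h2 -connect example.com:443 < /dev/null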
Kind Regards
Andrew

On 31/10/15 22:51, Mateusz Gruszczyński wrote:
> Hi everyone, I have a little trouble with HTTP/2 in the Opera browser.
>
> In Mozilla FF and Chrome all is OK. ( http://prntscr.com/8xlrln ;)
>
> In Opera I get an error along the lines of "not received data."
> -> http://prntscr.com/8xlr5j
>
> VHost -> http://pastebin.com/CqJqnvmG
> Server CNF -> http://pastebin.com/QMtHN4a3
> NGINX packets -> http://pastebin.com/MDKrErmH
>
> I tested with a verified certificate and it is the same thing.
>
> Version: Opera 32.0 (Windows 8.1)
>
> Does anyone have a solution?
>
> Is it possible to disable HTTP/2 for Opera and keep it enabled for the
> rest?
>
> --
>
> Pozdrawiam || Best regards
>
> Mateusz Gruszczyński
> linuxiarz.pl
>
>
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>

-- 
Andrew Hutchings (LinuxJedi)
Senior Developer Advocate, Nginx Inc.

From gru at linuxiarz.pl Wed Nov 4 21:13:47 2015
From: gru at linuxiarz.pl (=?UTF-8?Q?Mateusz_Gruszczy=C5=84ski?=)
Date: Wed, 04 Nov 2015 22:13:47 +0100
Subject: Opera issue
In-Reply-To: <563A6501.2020309@nginx.com>
References: <787c0d6e-a9c2-4463-ad13-bec1660d3973@typeapp.com> <563A6501.2020309@nginx.com>
Message-ID: <15a00182-357e-490e-8f6f-4a15e0ff1f95@typeapp.com>

I fixed this problem by deleting Windows from my hard drive and replacing
it with Arch Linux. :)

I compiled a new NGINX 1.9.6 following my how-to:
http://linuxiarz.pl/1618/ubuntu-serwer-www-w-pigulce/

When I now access the web site it is OK -- I see the site without errors.
I compiled nginx with the new OpenSSL and tried it on Ubuntu, Arch and
Debian.

So I think Opera has a problem supporting HTTP/2 on nginx, but works well
with different servers, e.g. Google.com.

Sent from TypeMail

On 4 Nov 2015, at 21:05, Andrew Hutchings wrote:
>Hi Mateusz,
>
>First of all, please see the caveats section of this post; you may need
>to alter your SSL configuration:
>https://www.nginx.com/blog/nginx-1-9-5/
>
>If this doesn't fix it, then it may be that Opera doesn't support the NPN
>negotiation that is in OpenSSL 1.0.1 for HTTP/2. If you recompile with
>OpenSSL 1.0.2 it will use ALPN, which is the actual standard for HTTP/2.
>
>For now most browsers support both negotiation types, but this may
>change in the future.
>
>Kind Regards
>Andrew
>
>On 31/10/15 22:51, Mateusz Gruszczyński wrote:
>> Hi everyone, I have a little trouble with HTTP/2 in the Opera browser.
>>
>> In Mozilla FF and Chrome all is OK. ( http://prntscr.com/8xlrln ;)
>>
>> In Opera I get an error along the lines of "not received data."
>> -> http://prntscr.com/8xlr5j
>>
>> VHost -> http://pastebin.com/CqJqnvmG
>> Server CNF -> http://pastebin.com/QMtHN4a3
>> NGINX packets -> http://pastebin.com/MDKrErmH
>>
>> I tested with a verified certificate and it is the same thing.
>>
>> Version: Opera 32.0 (Windows 8.1)
>>
>> Does anyone have a solution?
>>
>> Is it possible to disable HTTP/2 for Opera and keep it enabled for the
>> rest?
>>
>> --
>>
>> Pozdrawiam || Best regards
>>
>> Mateusz Gruszczyński
>> linuxiarz.pl
>>
>>
>>
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>
>
>-- 
>Andrew Hutchings (LinuxJedi)
>Senior Developer Advocate, Nginx Inc.
>
>_______________________________________________
>nginx-devel mailing list
>nginx-devel at nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vbart at nginx.com Thu Nov 5 11:59:24 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 05 Nov 2015 14:59:24 +0300
Subject: Opera issue
In-Reply-To: <563A6501.2020309@nginx.com>
References: <787c0d6e-a9c2-4463-ad13-bec1660d3973@typeapp.com> <563A6501.2020309@nginx.com>
Message-ID: <2801316.rvUcJL6kKR@vbart-workstation>

On Wednesday 04 November 2015 20:05:21 Andrew Hutchings wrote:
> Hi Mateusz,
>
> First of all, please see the caveats section of this post; you may need
> to alter your SSL configuration: https://www.nginx.com/blog/nginx-1-9-5/
>
> If this doesn't fix it, then it may be that Opera doesn't support the NPN
> negotiation that is in OpenSSL 1.0.1 for HTTP/2. If you recompile with
> OpenSSL 1.0.2 it will use ALPN, which is the actual standard for HTTP/2.
>
> For now most browsers support both negotiation types, but this may
> change in the future.
>
[..]

Missing negotiation causes a fallback to HTTPS, not errors.

wbr, Valentin V. Bartenev

From vbart at nginx.com Thu Nov 5 12:01:45 2015
From: vbart at nginx.com (Valentin Bartenev)
Date: Thu, 05 Nov 2015 12:01:45 +0000
Subject: [nginx] HTTP/2: backed out 16905ecbb49e (ticket #822).
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/0f4b7800e681
branches:  
changeset: 6288:0f4b7800e681
user:      Valentin Bartenev
date:      Thu Nov 05 15:01:01 2015 +0300
description:
HTTP/2: backed out 16905ecbb49e (ticket #822).

It caused an inconsistency between setting the "in_closed" flag and the
moment when the last DATA frame was actually read. As a result, the body
buffer might not be initialized properly in ngx_http_v2_init_request_body(),
which led to a segmentation fault in ngx_http_v2_state_read_data(). It
might also cause processing of an incomplete body to start.

This issue could be triggered when the processing of a request was delayed,
e.g. in the limit_req or auth_request modules.
diffstat: src/http/v2/ngx_http_v2.c | 8 +++++--- 1 files changed, 5 insertions(+), 3 deletions(-) diffs (32 lines): diff -r 4ccb37b04454 -r 0f4b7800e681 src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Fri Oct 30 21:43:30 2015 +0300 +++ b/src/http/v2/ngx_http_v2.c Thu Nov 05 15:01:01 2015 +0300 @@ -870,8 +870,6 @@ ngx_http_v2_state_data(ngx_http_v2_conne return ngx_http_v2_state_skip_padded(h2c, pos, end); } - stream->in_closed = h2c->state.flags & NGX_HTTP_V2_END_STREAM_FLAG; - h2c->state.stream = stream; return ngx_http_v2_state_read_data(h2c, pos, end); @@ -899,6 +897,8 @@ ngx_http_v2_state_read_data(ngx_http_v2_ } if (stream->skip_data) { + stream->in_closed = h2c->state.flags & NGX_HTTP_V2_END_STREAM_FLAG; + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0, "skipping http2 DATA frame, reason: %d", stream->skip_data); @@ -988,7 +988,9 @@ ngx_http_v2_state_read_data(ngx_http_v2_ ngx_http_v2_state_read_data); } - if (stream->in_closed) { + if (h2c->state.flags & NGX_HTTP_V2_END_STREAM_FLAG) { + stream->in_closed = 1; + if (r->headers_in.content_length_n < 0) { r->headers_in.content_length_n = rb->rest; From vbart at nginx.com Thu Nov 5 12:01:48 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 05 Nov 2015 12:01:48 +0000 Subject: [nginx] SSL: only select HTTP/2 using NPN if "http2" is enabled. Message-ID: details: http://hg.nginx.org/nginx/rev/909b5b191f25 branches: changeset: 6289:909b5b191f25 user: Valentin Bartenev date: Thu Nov 05 15:01:09 2015 +0300 description: SSL: only select HTTP/2 using NPN if "http2" is enabled. OpenSSL doesn't check if the negotiated protocol has been announced. As a result, the client might force using HTTP/2 even if it wasn't enabled in configuration. diffstat: src/http/ngx_http_request.c | 30 ++++++++++++++++++------------ 1 files changed, 18 insertions(+), 12 deletions(-) diffs (47 lines): diff -r 0f4b7800e681 -r 909b5b191f25 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Thu Nov 05 15:01:01 2015 +0300 +++ b/src/http/ngx_http_request.c Thu Nov 05 15:01:09 2015 +0300 @@ -768,25 +768,31 @@ ngx_http_ssl_handshake_handler(ngx_conne && (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ || defined TLSEXT_TYPE_next_proto_neg)) { - unsigned int len; - const unsigned char *data; + unsigned int len; + const unsigned char *data; + ngx_http_connection_t *hc; + + hc = c->data; + + if (hc->addr_conf->http2) { #ifdef TLSEXT_TYPE_application_layer_protocol_negotiation - SSL_get0_alpn_selected(c->ssl->connection, &data, &len); + SSL_get0_alpn_selected(c->ssl->connection, &data, &len); #ifdef TLSEXT_TYPE_next_proto_neg - if (len == 0) { + if (len == 0) { + SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); + } +#endif + +#else /* TLSEXT_TYPE_next_proto_neg */ SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); - } #endif -#else /* TLSEXT_TYPE_next_proto_neg */ - SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); -#endif - - if (len == 2 && data[0] == 'h' && data[1] == '2') { - ngx_http_v2_init(c->read); - return; + if (len == 2 && data[0] == 'h' && data[1] == '2') { + ngx_http_v2_init(c->read); + return; + } } } #endif From junpei.yoshino at gmail.com Thu Nov 5 15:45:49 2015 From: junpei.yoshino at gmail.com (=?UTF-8?B?5ZCJ6YeO57SU5bmz?=) Date: Fri, 6 Nov 2015 00:45:49 +0900 Subject: [PATCH]add proxy_protocol_port variable for rfc6302 Message-ID: # HG changeset patch # User Junpei Yoshino # Date 1446723407 -32400 # Thu Nov 05 20:36:47 2015 +0900 # Node ID 
59cadccedf402ec325b078cb72a284465639e0fe # Parent 4ccb37b04454dec6afb9476d085c06aea00adaa0 Http: add proxy_protocol_port variable for rfc6302 Logging source port is recommended in rfc6302. use case logging sending information by http request headers diff -r 4ccb37b04454 -r 59cadccedf40 src/core/ngx_connection.h --- a/src/core/ngx_connection.h Fri Oct 30 21:43:30 2015 +0300 +++ b/src/core/ngx_connection.h Thu Nov 05 20:36:47 2015 +0900 @@ -146,6 +146,7 @@ ngx_str_t addr_text; ngx_str_t proxy_protocol_addr; + ngx_str_t proxy_protocol_port; #if (NGX_SSL) ngx_ssl_connection_t *ssl; diff -r 4ccb37b04454 -r 59cadccedf40 src/core/ngx_proxy_protocol.c --- a/src/core/ngx_proxy_protocol.c Fri Oct 30 21:43:30 2015 +0300 +++ b/src/core/ngx_proxy_protocol.c Thu Nov 05 20:36:47 2015 +0900 @@ -13,7 +13,7 @@ ngx_proxy_protocol_read(ngx_connection_t *c, u_char *buf, u_char *last) { size_t len; - u_char ch, *p, *addr; + u_char ch, *p, *addr, *port; p = buf; len = last - buf; @@ -71,8 +71,56 @@ ngx_memcpy(c->proxy_protocol_addr.data, addr, len); c->proxy_protocol_addr.len = len; + for ( ;; ) { + if (p == last) { + goto invalid; + } + + ch = *p++; + + if (ch == ' ') { + break; + } + + if (ch != ':' && ch != '.' + && (ch < 'a' || ch > 'f') + && (ch < 'A' || ch > 'F') + && (ch < '0' || ch > '9')) + { + goto invalid; + } + } + port = p; + for ( ;; ) { + if (p == last) { + goto invalid; + } + + ch = *p++; + + if (ch == ' ') { + break; + } + + if (ch < '0' || ch > '9') + { + goto invalid; + } + } + len = p - port - 1; + c->proxy_protocol_port.data = ngx_pnalloc(c->pool, len); + + if (c->proxy_protocol_port.data == NULL) { + return NULL; + } + + ngx_memcpy(c->proxy_protocol_port.data, port, len); + c->proxy_protocol_port.len = len; + ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, "PROXY protocol address: \"%V\"", &c->proxy_protocol_addr); + ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, + "PROXY protocol port: \"%V\"", &c->proxy_protocol_port); skip: diff -r 4ccb37b04454 -r 59cadccedf40 src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Fri Oct 30 21:43:30 2015 +0300 +++ b/src/http/ngx_http_variables.c Thu Nov 05 20:36:47 2015 +0900 @@ -58,6 +58,8 @@ ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_proxy_protocol_addr(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_proxy_protocol_port(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_server_addr(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_server_port(ngx_http_request_t *r, @@ -192,6 +194,9 @@ { ngx_string("proxy_protocol_addr"), NULL, ngx_http_variable_proxy_protocol_addr, 0, 0, 0 }, + { ngx_string("proxy_protocol_port"), NULL, + ngx_http_variable_proxy_protocol_port, 0, 0, 0 }, + { ngx_string("server_addr"), NULL, ngx_http_variable_server_addr, 0, 0, 0 }, { ngx_string("server_port"), NULL, ngx_http_variable_server_port, 0, 0, 0 }, @@ -1250,6 +1255,20 @@ static ngx_int_t +ngx_http_variable_proxy_protocol_port(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + v->len = r->connection->proxy_protocol_port.len; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = r->connection->proxy_protocol_port.data; + + return NGX_OK; +} + + +static ngx_int_t ngx_http_variable_server_addr(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { From Lukas.Prettenthaler at bwinparty.com Fri Nov 6 20:34:11 
2015 From: Lukas.Prettenthaler at bwinparty.com (Lukas Prettenthaler) Date: Fri, 6 Nov 2015 20:34:11 +0000 Subject: [PATCH] Stream: client SSL certificate support Message-ID: <8378ea1a8dd348688eda0bd408350003@AT0P0WIMXS002.icepor.com> # HG changeset patch # User Lukas Prettenthaler # Date 1446841055 -3600 # Fri Nov 06 21:17:35 2015 +0100 # Node ID e34abe30ed6d749deb12b768c15059165b56f9c5 # Parent 909b5b191f25d0f9e03667a10d23f6ef27d014a3 Stream: client SSL certificate support The "ssl_verify_client", "ssl_verify_depth", "ssl_client_certificate", "ssl_trusted_certificate", and "ssl_crl" directives introduced to control SSL client certificate verification in the stream module. If there is no required certificate provided during an SSL handshake or certificate verification fails then the connection is simply closed. diff -r 909b5b191f25 -r e34abe30ed6d src/stream/ngx_stream_handler.c --- a/src/stream/ngx_stream_handler.c Thu Nov 05 15:01:09 2015 +0300 +++ b/src/stream/ngx_stream_handler.c Fri Nov 06 21:17:35 2015 +0100 @@ -17,6 +17,8 @@ #if (NGX_STREAM_SSL) static void ngx_stream_ssl_init_connection(ngx_ssl_t *ssl, ngx_connection_t *c); static void ngx_stream_ssl_handshake_handler(ngx_connection_t *c); +static ngx_int_t ngx_stream_verify_cert(ngx_stream_session_t *s, + ngx_connection_t *c); #endif @@ -265,11 +267,20 @@ static void ngx_stream_ssl_handshake_handler(ngx_connection_t *c) { + ngx_stream_session_t *s; + if (!c->ssl->handshaked) { ngx_stream_close_connection(c); return; } + s = c->data; + + if (ngx_stream_verify_cert(s, c) != NGX_OK) { + ngx_stream_close_connection(c); + return; + } + if (c->read->timer_set) { ngx_del_timer(c->read); } @@ -277,6 +288,53 @@ ngx_stream_init_session(c); } +static ngx_int_t +ngx_stream_verify_cert(ngx_stream_session_t *s, ngx_connection_t *c) +{ + long rc; + X509 *cert; + ngx_stream_ssl_conf_t *sslcf; + + sslcf = ngx_stream_get_module_srv_conf(s, ngx_stream_ssl_module); + + if (!sslcf->verify) { + return NGX_OK; + } + + rc = SSL_get_verify_result(c->ssl->connection); + + if (rc != X509_V_OK + && (sslcf->verify != 3 || !ngx_ssl_verify_error_optional(rc))) + { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client SSL certificate verify error: (%l:%s)", + rc, X509_verify_cert_error_string(rc)); + + ngx_ssl_remove_cached_session(sslcf->ssl.ctx, + (SSL_get0_session(c->ssl->connection))); + + return NGX_ERROR; + } + + if (sslcf->verify == 1) { + cert = SSL_get_peer_certificate(c->ssl->connection); + + if (cert == NULL) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client sent no required SSL certificate"); + + ngx_ssl_remove_cached_session(sslcf->ssl.ctx, + (SSL_get0_session(c->ssl->connection))); + + return NGX_ERROR; + } + + X509_free(cert); + } + + return NGX_OK; +} + #endif diff -r 909b5b191f25 -r e34abe30ed6d src/stream/ngx_stream_ssl_module.c --- a/src/stream/ngx_stream_ssl_module.c Thu Nov 05 15:01:09 2015 +0300 +++ b/src/stream/ngx_stream_ssl_module.c Fri Nov 06 21:17:35 2015 +0100 @@ -33,6 +33,13 @@ { ngx_null_string, 0 } }; +static ngx_conf_enum_t ngx_stream_ssl_verify[] = { + { ngx_string("off"), 0 }, + { ngx_string("on"), 1 }, + { ngx_string("optional"), 2 }, + { ngx_string("optional_no_ca"), 3 }, + { ngx_null_string, 0 } +}; static ngx_command_t ngx_stream_ssl_commands[] = { @@ -127,6 +134,41 @@ offsetof(ngx_stream_ssl_conf_t, session_timeout), NULL }, + { ngx_string("ssl_verify_client"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_enum_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_ssl_conf_t, verify), + 
&ngx_stream_ssl_verify }, + + { ngx_string("ssl_verify_depth"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_ssl_conf_t, verify_depth), + NULL }, + + { ngx_string("ssl_client_certificate"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_ssl_conf_t, client_certificate), + NULL }, + + { ngx_string("ssl_trusted_certificate"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_ssl_conf_t, trusted_certificate), + NULL }, + + { ngx_string("ssl_crl"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_ssl_conf_t, crl), + NULL }, + ngx_null_command }; @@ -179,6 +221,9 @@ * scf->certificate_key = { 0, NULL }; * scf->dhparam = { 0, NULL }; * scf->ecdh_curve = { 0, NULL }; + * scf->client_certificate = { 0, NULL }; + * scf->trusted_certificate = { 0, NULL }; + * scf->crl = { 0, NULL }; * scf->ciphers = { 0, NULL }; * scf->shm_zone = NULL; */ @@ -186,6 +231,8 @@ scf->handshake_timeout = NGX_CONF_UNSET_MSEC; scf->passwords = NGX_CONF_UNSET_PTR; scf->prefer_server_ciphers = NGX_CONF_UNSET; + scf->verify = NGX_CONF_UNSET_UINT; + scf->verify_depth = NGX_CONF_UNSET_UINT; scf->builtin_session_cache = NGX_CONF_UNSET; scf->session_timeout = NGX_CONF_UNSET; scf->session_tickets = NGX_CONF_UNSET; @@ -216,6 +263,9 @@ (NGX_CONF_BITMASK_SET|NGX_SSL_TLSv1 |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); + ngx_conf_merge_uint_value(conf->verify, prev->verify, 0); + ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); + ngx_conf_merge_str_value(conf->certificate, prev->certificate, ""); ngx_conf_merge_str_value(conf->certificate_key, prev->certificate_key, ""); @@ -226,6 +276,12 @@ ngx_conf_merge_str_value(conf->ecdh_curve, prev->ecdh_curve, NGX_DEFAULT_ECDH_CURVE); + ngx_conf_merge_str_value(conf->client_certificate, + prev->client_certificate, ""); + ngx_conf_merge_str_value(conf->trusted_certificate, + prev->trusted_certificate, ""); + ngx_conf_merge_str_value(conf->crl, prev->crl, ""); + ngx_conf_merge_str_value(conf->ciphers, prev->ciphers, NGX_DEFAULT_CIPHERS); @@ -262,6 +318,35 @@ return NGX_CONF_ERROR; } + if (conf->verify) { + + if (conf->client_certificate.len == 0 && conf->verify != 3) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "no ssl_client_certificate for ssl_client_verify"); + return NGX_CONF_ERROR; + } + + if (ngx_ssl_client_certificate(cf, &conf->ssl, + &conf->client_certificate, + conf->verify_depth) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + + if (ngx_ssl_trusted_certificate(cf, &conf->ssl, + &conf->trusted_certificate, + conf->verify_depth) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + + if (ngx_ssl_crl(cf, &conf->ssl, &conf->crl) != NGX_OK) { + return NGX_CONF_ERROR; + } + } + if (SSL_CTX_set_cipher_list(conf->ssl.ctx, (const char *) conf->ciphers.data) == 0) diff -r 909b5b191f25 -r e34abe30ed6d src/stream/ngx_stream_ssl_module.h --- a/src/stream/ngx_stream_ssl_module.h Thu Nov 05 15:01:09 2015 +0300 +++ b/src/stream/ngx_stream_ssl_module.h Fri Nov 06 21:17:35 2015 +0100 @@ -23,6 +23,9 @@ ngx_uint_t protocols; + ngx_uint_t verify; + ngx_uint_t verify_depth; + ssize_t builtin_session_cache; time_t session_timeout; @@ -31,6 +34,9 @@ ngx_str_t certificate_key; ngx_str_t dhparam; ngx_str_t ecdh_curve; + ngx_str_t client_certificate; + 
ngx_str_t       trusted_certificate;
+    ngx_str_t       crl;
     ngx_str_t       ciphers;

From piotrsikora at google.com Sat Nov 7 01:45:16 2015
From: piotrsikora at google.com (Piotr Sikora)
Date: Fri, 6 Nov 2015 17:45:16 -0800
Subject: [PATCH] Configure: remove redundant NGX_OPENSSL
In-Reply-To: <20151027014058.GB48365@mdounin.ru>
References: <7435401242d6dbfdb4b4.1445729176@piotrsikora.sfo.corp.google.com> <20151026125825.GU48365@mdounin.ru> <20151027014058.GB48365@mdounin.ru>
Message-ID: 

Hey Maxim,

> If you want to improve ngx_ssl_* abstraction layer there are lots
> of places to work on.

Agreed, and ./configure / #ifdef cleanup is as good a place to start as
any... ;)

Best regards,
Piotr Sikora

From piotrsikora at google.com Sat Nov 7 01:47:50 2015
From: piotrsikora at google.com (Piotr Sikora)
Date: Fri, 6 Nov 2015 17:47:50 -0800
Subject: [PATCH] Configure: remove redundant NGX_OPENSSL_MD5
In-Reply-To: <20151027020927.GC48365@mdounin.ru>
References: <84b5500257121c6bd3e1.1445729182@piotrsikora.sfo.corp.google.com> <20151026125025.GS48365@mdounin.ru> <20151027020927.GC48365@mdounin.ru>
Message-ID: 

Hey Maxim,

> (I personally think that --with-md5 and --with-sha1 aren't really
> useful at all, as well as auto/lib/{md5,sha1}. We have good
> enough internal md5 implementation now (and I have a patch for
> sha1 as well), so there is no real need to use external libraries.

Agreed.

> On the other hand, I don't think that removing --with-md5 and
> --with-sha1 is a good idea either.)

They add complexity to the code base, so if there is no real need for
those libraries, then they should be removed, IMHO.

Best regards,
Piotr Sikora

From piotrsikora at google.com Sat Nov 7 02:43:14 2015
From: piotrsikora at google.com (Piotr Sikora)
Date: Fri, 06 Nov 2015 18:43:14 -0800
Subject: [PATCH] Configure: always respect C compiler options
Message-ID: <22f0e600de213b579ca9.1446864194@piotrsikora.sfo.corp.google.com>

# HG changeset patch
# User Piotr Sikora
# Date 1446864006 28800
#      Fri Nov 06 18:40:06 2015 -0800
# Node ID 22f0e600de213b579ca921cce8f1a50b0a5c454e
# Parent  909b5b191f25d0f9e03667a10d23f6ef27d014a3
Configure: always respect C compiler options.

Previously, auto/cc/* and auto/include didn't respect C compiler options
provided via --with-cc-opt and/or CFLAGS, which resulted in bogus errors
when path to system headers and libraries was defined via --sysroot.

While there, retain working GCC's -pipe for autotests.

Signed-off-by: Piotr Sikora 

diff -r 909b5b191f25 -r 22f0e600de21 auto/cc/acc
--- a/auto/cc/acc
+++ b/auto/cc/acc
@@ -8,7 +8,7 @@
 # C89 mode
 
 CFLAGS="$CFLAGS -Ae"
-CC_TEST_FLAGS="-Ae"
+CC_TEST_FLAGS="$CC_TEST_FLAGS -Ae"
 
 PCRE_OPT="$PCRE_OPT -Ae"
 ZLIB_OPT="$ZLIB_OPT -Ae"
diff -r 909b5b191f25 -r 22f0e600de21 auto/cc/clang
--- a/auto/cc/clang
+++ b/auto/cc/clang
@@ -13,7 +13,7 @@ echo " + clang version: $NGX_CLANG_VER"
 
 have=NGX_COMPILER value="\"clang $NGX_CLANG_VER\"" . auto/define
 
-CC_TEST_FLAGS="-pipe"
+CC_TEST_FLAGS="$CC_TEST_FLAGS -pipe"
 
 # optimizations
diff -r 909b5b191f25 -r 22f0e600de21 auto/cc/conf
--- a/auto/cc/conf
+++ b/auto/cc/conf
@@ -29,12 +29,12 @@ ngx_spacer=
 
 ngx_long_regex_cont=$ngx_regex_cont
 ngx_long_cont=$ngx_cont
 
+CC_TEST_FLAGS="$CFLAGS $NGX_CC_OPT"
+
 .
auto/cc/name if test -n "$CFLAGS"; then - CC_TEST_FLAGS="$CFLAGS $NGX_CC_OPT" - case $NGX_CC_NAME in ccc) @@ -129,8 +129,6 @@ else esac - CC_TEST_FLAGS="$CC_TEST_FLAGS $NGX_CC_OPT" - fi CFLAGS="$CFLAGS $NGX_CC_OPT" diff -r 909b5b191f25 -r 22f0e600de21 auto/cc/gcc --- a/auto/cc/gcc +++ b/auto/cc/gcc @@ -18,7 +18,7 @@ have=NGX_COMPILER value="\"gcc $NGX_GCC_ # Solaris 7's /usr/ccs/bin/as does not support "-pipe" -CC_TEST_FLAGS="-pipe" +CC_TEST_FLAGS="$CC_TEST_FLAGS -pipe" ngx_feature="gcc -pipe switch" ngx_feature_name= @@ -29,10 +29,10 @@ ngx_feature_libs= ngx_feature_test= . auto/feature -CC_TEST_FLAGS= - if [ $ngx_found = yes ]; then PIPE="-pipe" +else + CC_TEST_FLAGS="$CFLAGS $NGX_CC_OPT" fi diff -r 909b5b191f25 -r 22f0e600de21 auto/include --- a/auto/include +++ b/auto/include @@ -27,7 +27,8 @@ int main() { END -ngx_test="$CC -o $NGX_AUTOTEST $NGX_AUTOTEST.c" +ngx_test="$CC $CC_TEST_FLAGS $CC_AUX_FLAGS \ + -o $NGX_AUTOTEST $NGX_AUTOTEST.c $NGX_LD_OPT" eval "$ngx_test >> $NGX_AUTOCONF_ERR 2>&1" From piotrsikora at google.com Sat Nov 7 02:43:24 2015 From: piotrsikora at google.com (Piotr Sikora) Date: Fri, 06 Nov 2015 18:43:24 -0800 Subject: [PATCH] SSL: guard use of SSL_R_BLOCK_CIPHER_PAD_IS_WRONG Message-ID: <8aef9afa46e31a112fa1.1446864204@piotrsikora.sfo.corp.google.com> # HG changeset patch # User Piotr Sikora # Date 1446864006 28800 # Fri Nov 06 18:40:06 2015 -0800 # Node ID 8aef9afa46e31a112fa1ceaffaefbc5990dbde22 # Parent bfd17e00b5cf13df79c4212a1fca6a1bedd66168 SSL: guard use of SSL_R_BLOCK_CIPHER_PAD_IS_WRONG. This error was removed from BoringSSL. Signed-off-by: Piotr Sikora diff -r bfd17e00b5cf -r 8aef9afa46e3 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -1909,7 +1909,9 @@ ngx_ssl_connection_error(ngx_connection_ /* handshake failures */ if (n == SSL_R_BAD_CHANGE_CIPHER_SPEC /* 103 */ +#ifdef SSL_R_BLOCK_CIPHER_PAD_IS_WRONG || n == SSL_R_BLOCK_CIPHER_PAD_IS_WRONG /* 129 */ +#endif || n == SSL_R_DIGEST_CHECK_FAILED /* 149 */ || n == SSL_R_ERROR_IN_RECEIVED_CIPHER_LIST /* 151 */ || n == SSL_R_EXCESSIVE_MESSAGE_SIZE /* 152 */ From piotrsikora at google.com Sat Nov 7 02:43:28 2015 From: piotrsikora at google.com (Piotr Sikora) Date: Fri, 06 Nov 2015 18:43:28 -0800 Subject: [PATCH] SSL: cast hostname in SSL_set_tlsext_host_name() Message-ID: <9716b76675442d78d750.1446864208@piotrsikora.sfo.corp.google.com> # HG changeset patch # User Piotr Sikora # Date 1446864006 28800 # Fri Nov 06 18:40:06 2015 -0800 # Node ID 9716b76675442d78d750ee542e4c80fa86d9b355 # Parent 8aef9afa46e31a112fa1ceaffaefbc5990dbde22 SSL: cast hostname in SSL_set_tlsext_host_name(). BoringSSL promoted this macro to a proper function, so it requires parameters with correct types now. 
Signed-off-by: Piotr Sikora diff -r 8aef9afa46e3 -r 9716b7667544 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1660,7 +1660,9 @@ ngx_http_upstream_ssl_name(ngx_http_requ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "upstream SSL server name: \"%s\"", name.data); - if (SSL_set_tlsext_host_name(c->ssl->connection, name.data) == 0) { + if (SSL_set_tlsext_host_name(c->ssl->connection, (const char *) name.data) + == 0) + { ngx_ssl_error(NGX_LOG_ERR, r->connection->log, 0, "SSL_set_tlsext_host_name(\"%s\") failed", name.data); return NGX_ERROR; diff -r 8aef9afa46e3 -r 9716b7667544 src/stream/ngx_stream_proxy_module.c --- a/src/stream/ngx_stream_proxy_module.c +++ b/src/stream/ngx_stream_proxy_module.c @@ -851,7 +851,8 @@ ngx_stream_proxy_ssl_name(ngx_stream_ses ngx_log_debug1(NGX_LOG_DEBUG_STREAM, s->connection->log, 0, "upstream SSL server name: \"%s\"", name.data); - if (SSL_set_tlsext_host_name(u->peer.connection->ssl->connection, name.data) + if (SSL_set_tlsext_host_name(u->peer.connection->ssl->connection, + (const char *) name.data) == 0) { ngx_ssl_error(NGX_LOG_ERR, s->connection->log, 0, From piotrsikora at google.com Sat Nov 7 02:43:20 2015 From: piotrsikora at google.com (Piotr Sikora) Date: Fri, 06 Nov 2015 18:43:20 -0800 Subject: [PATCH] MD5: NGX_HAVE_OPENSSL_MD5_H implies OpenSSL-style function names Message-ID: # HG changeset patch # User Piotr Sikora # Date 1446864006 28800 # Fri Nov 06 18:40:06 2015 -0800 # Node ID bfd17e00b5cf13df79c4212a1fca6a1bedd66168 # Parent 22f0e600de213b579ca921cce8f1a50b0a5c454e MD5: NGX_HAVE_OPENSSL_MD5_H implies OpenSSL-style function names. Signed-off-by: Piotr Sikora diff -r 22f0e600de21 -r bfd17e00b5cf src/core/ngx_md5.h --- a/src/core/ngx_md5.h +++ b/src/core/ngx_md5.h @@ -25,7 +25,7 @@ typedef MD5_CTX ngx_md5_t; -#if (NGX_OPENSSL_MD5) +#if (NGX_HAVE_OPENSSL_MD5_H || NGX_OPENSSL_MD5) #define ngx_md5_init MD5_Init #define ngx_md5_update MD5_Update From mdounin at mdounin.ru Sat Nov 7 04:09:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 7 Nov 2015 07:09:29 +0300 Subject: [PATCH] MD5: NGX_HAVE_OPENSSL_MD5_H implies OpenSSL-style function names In-Reply-To: References: Message-ID: <20151107040929.GN74233@mdounin.ru> Hello! On Fri, Nov 06, 2015 at 06:43:20PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1446864006 28800 > # Fri Nov 06 18:40:06 2015 -0800 > # Node ID bfd17e00b5cf13df79c4212a1fca6a1bedd66168 > # Parent 22f0e600de213b579ca921cce8f1a50b0a5c454e > MD5: NGX_HAVE_OPENSSL_MD5_H implies OpenSSL-style function names. As of now this implication is localized in auto/lib/md5/conf, and I see no reasons to propagate this knowledge into the code. 
>
> Signed-off-by: Piotr Sikora
>
> diff -r 22f0e600de21 -r bfd17e00b5cf src/core/ngx_md5.h
> --- a/src/core/ngx_md5.h
> +++ b/src/core/ngx_md5.h
> @@ -25,7 +25,7 @@
>  typedef MD5_CTX  ngx_md5_t;
>
>
> -#if (NGX_OPENSSL_MD5)
> +#if (NGX_HAVE_OPENSSL_MD5_H || NGX_OPENSSL_MD5)
>
>  #define ngx_md5_init    MD5_Init
>  #define ngx_md5_update  MD5_Update
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

-- 
Maxim Dounin
http://nginx.org/

From maksim.yevmenkin at gmail.com Mon Nov 9 18:35:23 2015
From: maksim.yevmenkin at gmail.com (Maksim Yevmenkin)
Date: Mon, 9 Nov 2015 10:35:23 -0800
Subject: streaming large nginx generated page
Message-ID: 

hello,

suppose I need to export a large amount of nginx-generated data (a static
page). I have used a content handler (or a content phase handler) to create
the whole in-memory chain and fed it to ngx_http_output_filter(). however,
I would very much like to avoid allocating a large chunk of memory to hold
all the data before sending. is there a way to stream a generated page
(potentially applying a different transfer encoding)?

thanks!
max
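p.s. to make the idea concrete: instead of one big chain, I picture sending
the data piece by piece from the handler -- an untested sketch (the handler
name is made up; a real handler would also have to deal with NGX_AGAIN and
write events for backpressure):

    static ngx_int_t
    my_streaming_handler(ngx_http_request_t *r)  /* hypothetical name */
    {
        ngx_int_t    rc;
        ngx_buf_t   *b;
        ngx_chain_t  out;

        r->headers_out.status = NGX_HTTP_OK;
        r->headers_out.content_length_n = -1;  /* unknown length, so
                                                  chunked can be used */

        rc = ngx_http_send_header(r);
        if (rc == NGX_ERROR || rc > NGX_OK) {
            return rc;
        }

        /* repeat this block for every generated piece instead of
           building one big chain up front */
        b = ngx_create_temp_buf(r->pool, 1024);
        if (b == NULL) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }

        b->last = ngx_cpymem(b->pos, "piece of data\n", 14);
        b->flush = 1;     /* push this piece to the client now */
        b->last_buf = 1;  /* set on the final piece only */

        out.buf = b;
        out.next = NULL;

        return ngx_http_output_filter(r, &out);
    }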
From Julien.FROMENT at sagemcom.com Tue Nov 10 18:51:24 2015
From: Julien.FROMENT at sagemcom.com (Julien FROMENT)
Date: Tue, 10 Nov 2015 13:51:24 -0500
Subject: Tracking sent responses
Message-ID: <1BD844147F06C444BE95E92F0CE2B2D5076F75DD@ares.INTERSTARINC.COM>

Hello,

We would like to use Nginx to keep track of exactly what part of an
upstream server's response was sent over a socket. Nginx could call an API
asynchronously with the number of bytes sent over the socket for a given
request.

Here is the pseudo code:

-- Client sends a request
-- Nginx processes the request and sends it to the upstream
...
-- The upstream returns the response
-- Nginx sends the response to the client
-- Nginx calls the async API with the number of bytes sent

I read a little bit of "Emiller's Guide To Nginx Module Development", and I
think we could write a handler that provides some tracking information. But
I am unsure if it is possible to hook it in at a low enough level for our
needs.

Are there any experts on this mailing list who could provide us consulting
services and guide us through the development of such functionality?

Thanks in advance!

Julien

#
" Ce courriel et les documents qui lui sont joints peuvent contenir des
informations confidentielles ou ayant un caractère privé. S'ils ne vous sont
pas destinés, nous vous signalons qu'il est strictement interdit de les
divulguer, de les reproduire ou d'en utiliser de quelque manière que ce
soit le contenu. Si ce message vous a été transmis par erreur, merci d'en
informer l'expéditeur et de supprimer immédiatement de votre système
informatique ce courriel ainsi que tous les documents qui y sont attachés."

******

" This e-mail and any attached documents may contain confidential or
proprietary information. If you are not the intended recipient, you are
notified that any dissemination, copying of this e-mail and any attachments
thereto or use of their contents by any means whatsoever is strictly
prohibited. If you have received this e-mail in error, please advise the
sender immediately and delete this e-mail and all attached documents
from your computer system."
#
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From serg.brester at sebres.de Tue Nov 10 19:29:36 2015
From: serg.brester at sebres.de (Sergey Brester)
Date: Tue, 10 Nov 2015 20:29:36 +0100
Subject: Tracking sent responses
In-Reply-To: <1BD844147F06C444BE95E92F0CE2B2D5076F75DD@ares.INTERSTARINC.COM>
References: <1BD844147F06C444BE95E92F0CE2B2D5076F75DD@ares.INTERSTARINC.COM>
Message-ID: <444e81941f9187c7e1a56a74647e0b94@sebres.de>

Hi,

I'm sure you can do that using the on-board "equipment" of nginx, without
deep integration into nginx (without writing your own module).

You can use a "post_action" for this, something like:

post_action @after_request_location;
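and then a location that reports what was actually sent, for example (only
a sketch -- the tracking backend and its query arguments are made up here;
$body_bytes_sent and $status are standard nginx variables):

    location @after_request_location {
        # hypothetical tracking API; $body_bytes_sent holds the number of
        # body bytes nginx wrote to the socket for this request
        proxy_pass http://127.0.0.1:8080/track?bytes=$body_bytes_sent&status=$status;
    }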
But (there is always a "but"), to my last knowledge:

- the "post_action" feature is asynchronous;
- the feature is not documented (and possibly not recommended to use);
- if the location "executed" in post_action uses upstreams (fcgi,
proxy_pass, etc.), it always breaks the keepalive connection to the
upstream channel (possibly fixed meanwhile, but I may have missed it).

Regards,
sebres.

On 10.11.2015 19:51, Julien FROMENT wrote:

> Hello,
>
> We would like to use Nginx to keep track of exactly what part of an
> upstream server's response was sent over a socket. Nginx could call an
> API asynchronously with the number of bytes sent over the socket for a
> given request.
>
> Here is the pseudo code:
>
> -- Client sends a request
> -- Nginx processes the request and sends it to the upstream
> ...
> -- The upstream returns the response
> -- Nginx sends the response to the client
> -- Nginx calls the async API with the number of bytes sent
>
> I read a little bit of "Emiller's Guide To Nginx Module Development", and
> I think we could write a handler that provides some tracking information.
> But I am unsure if it is possible to hook it in at a low enough level for
> our needs.
>
> Are there any experts on this mailing list who could provide us
> consulting services and guide us through the development of such
> functionality?
>
> Thanks in advance!
>
> Julien
>
> #
> " Ce courriel et les documents qui lui sont joints peuvent contenir des
> informations confidentielles ou ayant un caractère privé. S'ils ne vous
> sont pas destinés, nous vous signalons qu'il est strictement interdit de
> les divulguer, de les reproduire ou d'en utiliser de quelque manière que
> ce soit le contenu. Si ce message vous a été transmis par erreur, merci
> d'en informer l'expéditeur et de supprimer immédiatement de votre système
> informatique ce courriel ainsi que tous les documents qui y sont
> attachés."
>
> ******
>
> " This e-mail and any attached documents may contain confidential or
> proprietary information. If you are not the intended recipient, you are
> notified that any dissemination, copying of this e-mail and any
> attachments thereto or use of their contents by any means whatsoever is
> strictly prohibited. If you have received this e-mail in error, please
> advise the sender immediately and delete this e-mail and all attached
> documents from your computer system."
> #
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]

Links:
------
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arut at nginx.com Wed Nov 11 12:51:33 2015
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 11 Nov 2015 12:51:33 +0000
Subject: [nginx] Upstream: proxy_cache_convert_head directive.
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/4d5ac1a31d44
branches:  
changeset: 6290:4d5ac1a31d44
user:      Roman Arutyunyan
date:      Wed Nov 11 15:47:30 2015 +0300
description:
Upstream: proxy_cache_convert_head directive.

The directive toggles conversion of HEAD to GET for cacheable proxy
requests. When disabled, $request_method must be added to the cache key
for consistency. By default, HEAD is converted to GET as before.

diffstat:

 src/http/modules/ngx_http_proxy_module.c |  11 +++++++++++
 src/http/ngx_http_upstream.c             |   2 +-
 src/http/ngx_http_upstream.h             |   1 +
 3 files changed, 13 insertions(+), 1 deletions(-)

diffs (58 lines):

diff -r 909b5b191f25 -r 4d5ac1a31d44 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c Thu Nov 05 15:01:09 2015 +0300
+++ b/src/http/modules/ngx_http_proxy_module.c Wed Nov 11 15:47:30 2015 +0300
@@ -533,6 +533,13 @@ static ngx_command_t ngx_http_proxy_com
       offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_revalidate),
       NULL },
 
+    { ngx_string("proxy_cache_convert_head"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
+      ngx_conf_set_flag_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_convert_head),
+      NULL },
+
 #endif
 
     { ngx_string("proxy_temp_path"),
@@ -2845,6 +2852,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_
     conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC;
     conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC;
     conf->upstream.cache_revalidate = NGX_CONF_UNSET;
+    conf->upstream.cache_convert_head = NGX_CONF_UNSET;
 
 #endif
 
     conf->upstream.hide_headers = NGX_CONF_UNSET_PTR;
@@ -3143,6 +3151,9 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t
     ngx_conf_merge_value(conf->upstream.cache_revalidate,
                          prev->upstream.cache_revalidate, 0);
 
+    ngx_conf_merge_value(conf->upstream.cache_convert_head,
+                         prev->upstream.cache_convert_head, 1);
+
 #endif
 
     ngx_conf_merge_str_value(conf->method, prev->method, "");

diff -r 909b5b191f25 -r 4d5ac1a31d44 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c Thu Nov 05 15:01:09 2015 +0300
+++ b/src/http/ngx_http_upstream.c Wed Nov 11 15:47:30 2015 +0300
@@ -764,7 +764,7 @@ ngx_http_upstream_cache(ngx_http_request
         return rc;
     }
 
-    if (r->method & NGX_HTTP_HEAD) {
+    if ((r->method & NGX_HTTP_HEAD) && u->conf->cache_convert_head) {
         u->method = ngx_http_core_get_method;
     }

diff -r 909b5b191f25 -r 4d5ac1a31d44 src/http/ngx_http_upstream.h
--- a/src/http/ngx_http_upstream.h Thu Nov 05 15:01:09 2015 +0300
+++ b/src/http/ngx_http_upstream.h Wed Nov 11 15:47:30 2015 +0300
@@ -193,6 +193,7 @@ typedef struct {
     ngx_msec_t                       cache_lock_age;
 
     ngx_flag_t                       cache_revalidate;
+    ngx_flag_t                       cache_convert_head;
 
     ngx_array_t                     *cache_valid;
     ngx_array_t                     *cache_bypass;
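In configuration terms, turning the conversion off then looks something
like this (a sketch; the key shown is just the documented default
proxy_cache_key with $request_method prepended, not a prescribed value):

    proxy_cache_convert_head off;
    proxy_cache_key "$request_method$scheme$proxy_host$request_uri";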
From Julien.FROMENT at sagemcom.com Wed Nov 11 18:17:47 2015
From: Julien.FROMENT at sagemcom.com (Julien FROMENT)
Date: Wed, 11 Nov 2015 13:17:47 -0500
Subject: Tracking sent responses
In-Reply-To: <444e81941f9187c7e1a56a74647e0b94@sebres.de>
References: <1BD844147F06C444BE95E92F0CE2B2D5076F75DD@ares.INTERSTARINC.COM> <444e81941f9187c7e1a56a74647e0b94@sebres.de>
Message-ID: <1BD844147F06C444BE95E92F0CE2B2D5076F7910@ares.INTERSTARINC.COM>

Thanks for the reply,

Using post_action could work, if we can send enough reliable information
to the @after_request_location. Can we use all the variables documented in
the ngx_http_core_module
(http://nginx.org/en/docs/http/ngx_http_core_module.html#variables)?
Are there any other variables that we could use?

Although, I am a bit concerned by your comment "possibly not recommended
to use" -- could we clarify what you mean, or what led you to think it is
not recommended?

Regards,

Julien

From: Sergey Brester [mailto:serg.brester at sebres.de]
Sent: Tuesday, November 10, 2015 2:30 PM
To: nginx-devel at nginx.org
Cc: Julien FROMENT
Subject: Re: Tracking sent responses

Hi,

I'm sure you can do that using the on-board "equipment" of nginx, without
deep integration into nginx (without writing your own module).

You can use a "post_action" for this, something like:

post_action @after_request_location;

But (there is always a "but"), to my last knowledge:

- the "post_action" feature is asynchronous;
- the feature is not documented (and possibly not recommended to use);
- if the location "executed" in post_action uses upstreams (fcgi,
proxy_pass, etc.), it always breaks the keepalive connection to the
upstream channel (possibly fixed meanwhile, but I may have missed it).

Regards,
sebres.

On 10.11.2015 19:51, Julien FROMENT wrote:

Hello,

We would like to use Nginx to keep track of exactly what part of an
upstream server's response was sent over a socket. Nginx could call an API
asynchronously with the number of bytes sent over the socket for a given
request.

Here is the pseudo code:

-- Client sends a request
-- Nginx processes the request and sends it to the upstream
...
-- The upstream returns the response
-- Nginx sends the response to the client
-- Nginx calls the async API with the number of bytes sent

I read a little bit of "Emiller's Guide To Nginx Module Development", and I
think we could write a handler that provides some tracking information. But
I am unsure if it is possible to hook it in at a low enough level for our
needs.

Are there any experts on this mailing list who could provide us consulting
services and guide us through the development of such functionality?

Thanks in advance!

Julien

#
" Ce courriel et les documents qui lui sont joints peuvent contenir des
informations confidentielles ou ayant un caractère privé. S'ils ne vous sont
pas destinés, nous vous signalons qu'il est strictement interdit de les
divulguer, de les reproduire ou d'en utiliser de quelque manière que ce
soit le contenu. Si ce message vous a été transmis par erreur, merci d'en
informer l'expéditeur et de supprimer immédiatement de votre système
informatique ce courriel ainsi que tous les documents qui y sont attachés."
****** " This e-mail and any attached documents may contain confidential or proprietary information. If you are not the intended recipient, you are notified that any dissemination, copying of this e-mail and any attachments thereto or use of their contents by any means whatsoever is strictly prohibited. If you have received this e-mail in error, please advise the sender immediately and delete this e-mail and all attached documents from your computer system." # -------------- next part -------------- An HTML attachment was scrubbed... URL: From screeley at redhat.com Wed Nov 11 18:23:38 2015 From: screeley at redhat.com (Scott Creeley) Date: Wed, 11 Nov 2015 13:23:38 -0500 (EST) Subject: Fwd: openshift-nginx docker image running as non-root In-Reply-To: <1206832728.7518114.1447262029592.JavaMail.zimbra@redhat.com> References: <1206832728.7518114.1447262029592.JavaMail.zimbra@redhat.com> Message-ID: <703386880.7557572.1447266218497.JavaMail.zimbra@redhat.com> ----- Forwarded Message ----- From: "Scott Creeley" To: nginx-devel at nginx.org Sent: Wednesday, November 11, 2015 12:13:49 PM Subject: openshift-nginx docker image running as non-root Hi, Been playing around with the https://github.com/nginxinc/openshift-nginx dockerfile and trying to find a way to run run nginx as non-root with openshift/k8/docker. Not having much luck, if I pass in a user or specify a user in the nginx.con or Dockerfile or via openshift/k8 runAsUser I always get some form permission errors. Is there a way to do this or am I wasting my time messing with this? nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied) 2015/11/10 14:40:40 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2 2015/11/10 14:40:40 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied) From serg.brester at sebres.de Wed Nov 11 19:09:39 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Wed, 11 Nov 2015 20:09:39 +0100 Subject: Tracking sent responses In-Reply-To: <1BD844147F06C444BE95E92F0CE2B2D5076F7910@ares.INTERSTARINC.COM> References: <1BD844147F06C444BE95E92F0CE2B2D5076F75DD@ares.INTERSTARINC.COM> <444e81941f9187c7e1a56a74647e0b94@sebres.de> <1BD844147F06C444BE95E92F0CE2B2D5076F7910@ares.INTERSTARINC.COM> Message-ID: 11.11.2015 19:17, Julien FROMENT wrote: > Thanks for the reply, Welcome :) > Using post_action could work, if we can sent to the @after_request_location enough reliable information. > > Can we use the all the variable documented in the ngx_http_core_module (http://nginx.org/en/docs /http/ngx_http_core_module.html#variables [1]) ? Are there any other variables that we could use? Yes, and your own specified also (or some from custom modules). Here is a more large list of all variables - http://nginx.org/en/docs/varindex.html And if you want to use some values returned from upstream you should get a variables beginning with "upstream_...". For example, if you need a http-header "X_MY_VAR", you should get $upstream_http_x_my_var. If you need a cookie value "example", you can get $upstream_cookie_example etc. For http status of response use $upstream_status. Here is the list of it all - http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_addr > Although, I am a bit concerned by your comment "possibly not recommended to use", could we clarify what you mean or what lead you to think it is not recommended? 
Well, you can read my small discussion with unambiguous answer about this from a nginx developer - https://www.mail-archive.com/nginx-devel at nginx.org/msg03680.html I will keep this feature in my own bundles (and in my own forks) - no matter what some nginx developers say about this. But ... it is my decision about. In any case, I believe it is not very complex to create a similar functionality as (replacement) module, if "post_action" will be removed later from nginx standard bundle. > Rergard, > > Julien Regards, Serg G. Brester (sebres) > FROM: Sergey Brester [mailto:serg.brester at sebres.de] > SENT: Tuesday, November 10, 2015 2:30 PM > TO: nginx-devel at nginx.org > CC: Julien FROMENT > SUBJECT: Re: Tracking sent responses > > Hi, > > I'm sure you can do that using on-board "equipment" of nginx, without deep integrating to the nginx (without write of own module). > > You can use for this a "post_action", something like: > > post_action @after_request_location; > > But (There is always a "but":), according to my last known stand: > > - the feature "post_action" is asynchronously; > - the feature is not documentated (and possibly not recommended to use);- if location "executed" in post_action uses upstreams (fcgi, proxy_pass, etc.), it will always breaks a keepalive connection to the upstream channel (possibly fixed, but I've missed). > > Regards, > sebres. > > Am 10.11.2015 19:51, schrieb Julien FROMENT: > >> Hello, >> >> We would like to use Nginx to keep track of exactly what part of an upstream's server response was sent over a socket. Nginx could call an API asynchronously with the number of bytes sent over the socket for a given request. >> >> &nbs p; >> >> Here is the pseudo code: >> >> -- Client send a request >> >> -- Nginx processes the request and send it to the upstream >> >> ... >> >> -- The upstream returns the response >> >> -- Nginx sends the response to the client >> >> -- Nginx calls Async API with the number of bytes sent >> >> I read a little bit of "Emiller's Guide To Nginx Module Development", and I think we could write a Handler that provide some tracking information. But I am unsure if it is possible to hook it at a low enough level for our needs. >> >> Are there any expert on this mailing list that could provide us consulting services and guide us through the development of such functionality? >> >> Thanks in advance! >> >> Julien >> >> # >> >> " Ce courriel et les documents qui lui sont joints peuvent contenir des >> >> informations confidentielles ou ayant un caract?? priv?(c)S'ils ne vous sont >> >> pas destin?(c) nous vous signalons qu'il est strictement interdit de les >> >> divulguer, de les reproduire ou d'en utiliser de quelque mani?? que ce >> >> soit le contenu. Si ce message vous a ?(c) transmis par erreur, merci d'en >> >> informer l'exp?(c)teur et de supprimer imm?(c)atement de votre syst?? >> >> informatique ce courriel ainsi que tous les documents qui y sont attach?(c)" >> >> ****** >> >> " This e-mail and any attached documents may contain confidential or >> >> proprietary information. If you are not the intended recipient, you are >> >> notified that any dissemination, copying of this e-mail and any attachments >> >> thereto or use of their contents by any means whatsoever is strictly >> >> prohibited. If you have received this e-mail in error, please advise the >> >> sender immediately and delete this e-mail and all attached documents >> >> from your computer system." 
>> >> # >> >> _______________________________________________ >> >> nginx-devel mailing list >> >> nginx-devel at nginx.org >> >> http://mailman.nginx.org/mailman/listinfo/nginx-devel [2] > > # > " Ce courriel et les documents qui lui sont joints peuvent contenir des > informations confidentielles ou ayant un caract? priv?S'ils ne vous sont > pas destin? nous vous signalons qu'il est strictement interdit de les > divulguer, de les reproduire ou d'en utiliser de quelque mani? que ce > soit le contenu. Si ce message vous a ? transmis par erreur, merci d'en > informer l'exp?teur et de supprimer imm?atement de votre syst? > informatique ce courriel ainsi que tous les documents qui y sont attach?" > > ****** > > " This e-mail and any attached documents may contain confidential or > proprietary information. If you are not the intended recipient, you are > notified that any dissemination, copying of this e-mail and any attachments > thereto or use of their contents by any means whatsoever is strictly > prohibited. If you have received this e-mail in error, please advise the > sender immediately and delete this e-mail and all attached documents > from your computer system." > # Links: ------ [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#variables [2] http://mailman.nginx. org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Wed Nov 11 20:10:44 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 11 Nov 2015 21:10:44 +0100 Subject: Fwd: openshift-nginx docker image running as non-root In-Reply-To: <703386880.7557572.1447266218497.JavaMail.zimbra@redhat.com> References: <1206832728.7518114.1447262029592.JavaMail.zimbra@redhat.com> <703386880.7557572.1447266218497.JavaMail.zimbra@redhat.com> Message-ID: <05e647760123aeff9b81815b952676ec@none.at> Dear Scott. I think this is not a devel question so I answer primarly to nginx list. Am 11-11-2015 19:23, schrieb Scott Creeley: > ----- Forwarded Message ----- > From: "Scott Creeley" > To: nginx-devel at nginx.org > Sent: Wednesday, November 11, 2015 12:13:49 PM > Subject: openshift-nginx docker image running as non-root > > Hi, > Been playing around with the > https://github.com/nginxinc/openshift-nginx dockerfile and trying to > find a way to run run nginx as non-root with openshift/k8/docker. Not > having much luck, if I pass in a user or specify a user in the > nginx.con or Dockerfile or via openshift/k8 runAsUser I always get > some form permission errors. Is there a way to do this or am I > wasting my time messing with this? > > nginx: [alert] could not open error log file: open() > "/var/log/nginx/error.log" failed (13: Permission denied) > 2015/11/10 14:40:40 [warn] 1#1: the "user" directive makes sense only > if the master process runs with super-user privileges, ignored in > /etc/nginx/nginx.conf:2 > 2015/11/10 14:40:40 [emerg] 1#1: mkdir() > "/var/cache/nginx/client_temp" failed (13: Permission denied) We had the same problem. tl;dr Add this to the dockerfile. RUN .... && chmod -R 777 /var/log/nginx /var/cache/nginx/ \ && chmod 644 /etc/nginx/* Longer explanation. Openshift v3 uses a randomly User inside the container. This makes the user and group setting in the most Dockerfile and app not very helpfully. You can take a look into the node-js example container oc exec nodejs-example-1-qerx1 -it bash ###### bash-4.2$ ps aafxu USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND 1000100+ 19 0.0 0.0 11740 1840 ? 
Ss 14:58 0:00 bash 1000100+ 34 0.0 0.0 19764 1204 ? R+ 14:58 0:00 \_ ps aafxu 1000100+ 1 0.0 0.0 863264 26216 ? Ssl Nov09 0:00 npm 1000100+ 17 0.0 0.0 701120 25892 ? Sl Nov09 0:00 node server.js ####### The reason why the most of the programs have this user & group stuff is a security reason. Due to the fact that almost all Containers in Openshift v3 runs under a dedicated user (e.g.: 1000100+) you don't need and not allowed to change to a dedicated user. Please take a look into this docs. Due to the fact that I don't know if you use Openshift Enterprise (OSE) or Openshift origin I post the doc links from the origin ;-) https://docs.openshift.org/latest/architecture/index.html https://docs.openshift.org/latest/creating_images/guidelines.html https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile https://docs.openshift.org/latest/using_images/docker_images/index.html https://docs.openshift.org/latest/architecture/core_concepts/pods_and_services.html https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints Please give you some time to learn the Openshift ecosystem it's not like a 'docker run ...' on any machine ;-) BR Aleks From vbart at nginx.com Thu Nov 12 19:29:54 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 12 Nov 2015 22:29:54 +0300 Subject: Tracking sent responses In-Reply-To: <1BD844147F06C444BE95E92F0CE2B2D5076F7910@ares.INTERSTARINC.COM> References: <1BD844147F06C444BE95E92F0CE2B2D5076F75DD@ares.INTERSTARINC.COM> <444e81941f9187c7e1a56a74647e0b94@sebres.de> <1BD844147F06C444BE95E92F0CE2B2D5076F7910@ares.INTERSTARINC.COM> Message-ID: <1765928.8lpQjaPBB8@vbart-workstation> On Wednesday 11 November 2015 13:17:47 Julien FROMENT wrote: > Using post_action could work, if we can sent to the @after_request_location enough reliable information. [..] Using "access_log syslog" for this purpose will be much better solution. See the http_log module documentation for details: http://nginx.org/en/docs/http/ngx_http_log_module.html wbr, Valentin V. Bartenev From screeley at redhat.com Thu Nov 12 19:34:26 2015 From: screeley at redhat.com (Scott Creeley) Date: Thu, 12 Nov 2015 14:34:26 -0500 (EST) Subject: openshift-nginx docker image running as non-root In-Reply-To: <05e647760123aeff9b81815b952676ec@none.at> References: <1206832728.7518114.1447262029592.JavaMail.zimbra@redhat.com> <703386880.7557572.1447266218497.JavaMail.zimbra@redhat.com> <05e647760123aeff9b81815b952676ec@none.at> Message-ID: <530289079.8714637.1447356866374.JavaMail.zimbra@redhat.com> Thanks Aleks, I got it to work with a combo of what you provided and I also had to chmod /var/run or I would get a permission error on the /var/run/nginx.pid and it wouldn't start. thanks, Scott ----- Original Message ----- From: "Aleksandar Lazic" To: nginx at nginx.org Cc: "Scott Creeley" , nginx-devel at nginx.org Sent: Wednesday, November 11, 2015 3:10:44 PM Subject: Re: Fwd: openshift-nginx docker image running as non-root Dear Scott. I think this is not a devel question so I answer primarly to nginx list. 
wbr, Valentin V. Bartenev

From screeley at redhat.com Thu Nov 12 19:34:26 2015
From: screeley at redhat.com (Scott Creeley)
Date: Thu, 12 Nov 2015 14:34:26 -0500 (EST)
Subject: openshift-nginx docker image running as non-root
In-Reply-To: <05e647760123aeff9b81815b952676ec@none.at>
References: <1206832728.7518114.1447262029592.JavaMail.zimbra@redhat.com>
 <703386880.7557572.1447266218497.JavaMail.zimbra@redhat.com>
 <05e647760123aeff9b81815b952676ec@none.at>
Message-ID: <530289079.8714637.1447356866374.JavaMail.zimbra@redhat.com>

Thanks Aleks,
I got it to work with a combination of what you provided; I also had
to chmod /var/run, or I would get a permission error on
/var/run/nginx.pid and nginx wouldn't start.

thanks,
Scott

----- Original Message -----
From: "Aleksandar Lazic"
To: nginx at nginx.org
Cc: "Scott Creeley" , nginx-devel at nginx.org
Sent: Wednesday, November 11, 2015 3:10:44 PM
Subject: Re: Fwd: openshift-nginx docker image running as non-root

[...]

From vbart at nginx.com Fri Nov 13 17:11:51 2015
From: vbart at nginx.com (Valentin Bartenev)
Date: Fri, 13 Nov 2015 17:11:51 +0000
Subject: [nginx] HTTP/2: fixed invalid headers handling (ticket #831).
Message-ID:

details:   http://hg.nginx.org/nginx/rev/932a465537ef
branches:
changeset: 6291:932a465537ef
user:      Valentin Bartenev
date:      Fri Nov 13 20:10:50 2015 +0300
description:
HTTP/2: fixed invalid headers handling (ticket #831).

The r->invalid_header flag wasn't reset once an invalid header
appeared in a request, resulting in all subsequent headers in the
request also being marked as invalid.
diffstat: src/http/v2/ngx_http_v2.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (12 lines): diff -r 4d5ac1a31d44 -r 932a465537ef src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Wed Nov 11 15:47:30 2015 +0300 +++ b/src/http/v2/ngx_http_v2.c Fri Nov 13 20:10:50 2015 +0300 @@ -2949,6 +2949,8 @@ ngx_http_v2_validate_header(ngx_http_req return NGX_ERROR; } + r->invalid_header = 0; + cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module); for (i = (header->name.data[0] == ':'); i != header->name.len; i++) { From vbart at nginx.com Fri Nov 13 17:11:53 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Fri, 13 Nov 2015 17:11:53 +0000 Subject: [nginx] HTTP/2: fixed handling of output HEADERS frames. Message-ID: details: http://hg.nginx.org/nginx/rev/f72d3129cd35 branches: changeset: 6292:f72d3129cd35 user: Valentin Bartenev date: Fri Nov 13 20:10:50 2015 +0300 description: HTTP/2: fixed handling of output HEADERS frames. The HEADERS frame is always represented by more than one buffer since b930e598a199, but the handling code hasn't been adjusted. Only the first buffer of HEADERS frame was checked and if it had been sent while others had not, the rest of the frame was dropped, resulting in broken connection. Before b930e598a199, the problem could only be seen in case of HEADERS frame with CONTINUATION. diffstat: src/http/v2/ngx_http_v2_filter_module.c | 25 +++++++++++++++++++------ 1 files changed, 19 insertions(+), 6 deletions(-) diffs (40 lines): diff -r 932a465537ef -r f72d3129cd35 src/http/v2/ngx_http_v2_filter_module.c --- a/src/http/v2/ngx_http_v2_filter_module.c Fri Nov 13 20:10:50 2015 +0300 +++ b/src/http/v2/ngx_http_v2_filter_module.c Fri Nov 13 20:10:50 2015 +0300 @@ -1054,17 +1054,30 @@ static ngx_int_t ngx_http_v2_headers_frame_handler(ngx_http_v2_connection_t *h2c, ngx_http_v2_out_frame_t *frame) { - ngx_buf_t *buf; + ngx_chain_t *cl; ngx_http_v2_stream_t *stream; - buf = frame->first->buf; + stream = frame->stream; + cl = frame->first; - if (buf->pos != buf->last) { - return NGX_AGAIN; + for ( ;; ) { + if (cl->buf->pos != cl->buf->last) { + frame->first = cl; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0, + "http2:%ui HEADERS frame %p was sent partially", + stream->node->id, frame); + + return NGX_AGAIN; + } + + if (cl == frame->last) { + break; + } + + cl = cl->next; } - stream = frame->stream; - ngx_log_debug2(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0, "http2:%ui HEADERS frame %p was sent", stream->node->id, frame); From vbart at nginx.com Fri Nov 13 17:11:56 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Fri, 13 Nov 2015 17:11:56 +0000 Subject: [nginx] HTTP/2: reused HEADERS and CONTINUATION frames buffers. Message-ID: details: http://hg.nginx.org/nginx/rev/ec6b07be88a5 branches: changeset: 6293:ec6b07be88a5 user: Valentin Bartenev date: Fri Nov 13 20:10:50 2015 +0300 description: HTTP/2: reused HEADERS and CONTINUATION frames buffers. 
diffstat: src/http/v2/ngx_http_v2.h | 2 +- src/http/v2/ngx_http_v2_filter_module.c | 29 ++++++++++++++++++++--------- 2 files changed, 21 insertions(+), 10 deletions(-) diffs (103 lines): diff -r f72d3129cd35 -r ec6b07be88a5 src/http/v2/ngx_http_v2.h --- a/src/http/v2/ngx_http_v2.h Fri Nov 13 20:10:50 2015 +0300 +++ b/src/http/v2/ngx_http_v2.h Fri Nov 13 20:10:50 2015 +0300 @@ -178,7 +178,7 @@ struct ngx_http_v2_stream_s { size_t recv_window; ngx_http_v2_out_frame_t *free_frames; - ngx_chain_t *free_data_headers; + ngx_chain_t *free_frame_headers; ngx_chain_t *free_bufs; ngx_queue_t queue; diff -r f72d3129cd35 -r ec6b07be88a5 src/http/v2/ngx_http_v2_filter_module.c --- a/src/http/v2/ngx_http_v2_filter_module.c Fri Nov 13 20:10:50 2015 +0300 +++ b/src/http/v2/ngx_http_v2_filter_module.c Fri Nov 13 20:10:50 2015 +0300 @@ -624,6 +624,8 @@ ngx_http_v2_create_headers_frame(ngx_htt *b->last++ = flags; b->last = ngx_http_v2_write_sid(b->last, stream->node->id); + b->tag = (ngx_buf_tag_t) &ngx_http_v2_module; + cl = ngx_alloc_chain_link(r->pool); if (cl == NULL) { return NULL; @@ -929,7 +931,7 @@ ngx_http_v2_filter_get_data_frame(ngx_ht stream->node->id, frame, len, (ngx_uint_t) flags); cl = ngx_chain_get_free_buf(stream->request->pool, - &stream->free_data_headers); + &stream->free_frame_headers); if (cl == NULL) { return NULL; } @@ -946,7 +948,7 @@ ngx_http_v2_filter_get_data_frame(ngx_ht buf->end = buf->start + NGX_HTTP_V2_FRAME_HEADER_SIZE; buf->last = buf->end; - buf->tag = (ngx_buf_tag_t) &ngx_http_v2_filter_get_data_frame; + buf->tag = (ngx_buf_tag_t) &ngx_http_v2_module; buf->memory = 1; } @@ -1054,7 +1056,7 @@ static ngx_int_t ngx_http_v2_headers_frame_handler(ngx_http_v2_connection_t *h2c, ngx_http_v2_out_frame_t *frame) { - ngx_chain_t *cl; + ngx_chain_t *cl, *ln; ngx_http_v2_stream_t *stream; stream = frame->stream; @@ -1071,19 +1073,28 @@ ngx_http_v2_headers_frame_handler(ngx_ht return NGX_AGAIN; } + ln = cl->next; + + if (cl->buf->tag == (ngx_buf_tag_t) &ngx_http_v2_module) { + cl->next = stream->free_frame_headers; + stream->free_frame_headers = cl; + + } else { + cl->next = stream->free_bufs; + stream->free_bufs = cl; + } + if (cl == frame->last) { break; } - cl = cl->next; + cl = ln; } ngx_log_debug2(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0, "http2:%ui HEADERS frame %p was sent", stream->node->id, frame); - ngx_free_chain(stream->request->pool, frame->first); - ngx_http_v2_handle_frame(stream, frame); ngx_http_v2_handle_stream(h2c, stream); @@ -1104,7 +1115,7 @@ ngx_http_v2_data_frame_handler(ngx_http_ cl = frame->first; - if (cl->buf->tag == (ngx_buf_tag_t) &ngx_http_v2_filter_get_data_frame) { + if (cl->buf->tag == (ngx_buf_tag_t) &ngx_http_v2_module) { if (cl->buf->pos != cl->buf->last) { ngx_log_debug2(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0, @@ -1116,8 +1127,8 @@ ngx_http_v2_data_frame_handler(ngx_http_ ln = cl->next; - cl->next = stream->free_data_headers; - stream->free_data_headers = cl; + cl->next = stream->free_frame_headers; + stream->free_frame_headers = cl; if (cl == frame->last) { goto done; From donatas.abraitis at gmail.com Sat Nov 14 10:09:00 2015 From: donatas.abraitis at gmail.com (Donatas Abraitis) Date: Sat, 14 Nov 2015 12:09:00 +0200 Subject: $msec variable in proxy_set_header Message-ID: Hi there! I want to know when $msec is used in proxy_set_header directive? I set: location / { proxy_set_header X-Queue-Start $msec; } But it's set with the same value as $r->start_sec + $r->start_msec. 
Is it true that $msec == ("%d.%d", $r->start_sec, $r->start_msec)? Or
is it really generated separately, on demand?

As I see in the backtrace (hooked on ngx_http_variable_msec()), it
should be generated normally, but it isn't (I'm getting $msec ==
$r...):

~$ stap -e 'probe process("/usr/sbin/nginx").function("ngx_http_variable_msec") { print_ubacktrace(); }'
0x446c74 : ngx_http_variable_msec+0x4/0x80 [/usr/sbin/nginx]
0x4464f8 : ngx_http_get_indexed_variable+0x78/0x100 [/usr/sbin/nginx]
0x449381 : ngx_http_script_copy_var_len_code+0x21/0x60 [/usr/sbin/nginx]
0x47a005 : ngx_http_proxy_create_request+0x1b5/0xa60 [/usr/sbin/nginx]

The problem is that we are getting very high latency when using
proxy_pass, for example:

X-Queue-Start: 1447494119.609 (nginx X-Queue-Start header $msec value)
X-Queue-Start: 1447494119.678595 (Ruby Time.now.to_f)
X-Queue-Start: 1447494121.709 (the same as above)
X-Queue-Start: 1447494121.7741442 (the same as above)

And this information distorts the real picture when calculating
request queuing (NewRelic).

Thank you very much!

--
Donatas
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From donatas.abraitis at gmail.com Sat Nov 14 22:07:42 2015
From: donatas.abraitis at gmail.com (Donatas Abraitis)
Date: Sun, 15 Nov 2015 00:07:42 +0200
Subject: $msec variable in proxy_set_header
In-Reply-To:
References:
Message-ID:

Already found the problem: there was timer_resolution set to 100ms,
which distorted these latencies.

On Sat, Nov 14, 2015 at 12:09 PM, Donatas Abraitis <
donatas.abraitis at gmail.com> wrote:

> [...]
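For anyone hitting this later, a minimal sketch of the interaction
(100ms is the value from my setup; the backend address is a
placeholder):

    # main context: with timer_resolution, nginx updates its cached
    # time only once per interval instead of before every event, so
    # time-based variables such as $msec can lag real time by up to
    # the interval
    timer_resolution 100ms;

    # inside a server block:
    location / {
        proxy_set_header X-Queue-Start $msec;
        proxy_pass http://127.0.0.1:8080;  # placeholder backend
    }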
--
Donatas
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From awalgarg at gmail.com Sun Nov 15 08:44:25 2015
From: awalgarg at gmail.com (Awal Garg)
Date: Sun, 15 Nov 2015 14:14:25 +0530
Subject: Question regarding syntax of nginx configs
Message-ID:

Heyo!

I am writing a configuration parser for nginx (in Python). It appears
as if the following constructs are dismissed as invalid by
https://github.com/nginx/nginx/blob/master/src/core/ngx_conf_file.c#l636:

```
... ;
```
and
```
... ;
```

(STRING_TOKEN is any token like `server` or `listen`. I don't really
know what to call them here :/)

IOW, it seems that for every directive, a leaf representing a block
can only come at the end of a directive and must not be followed by a
statement-terminator.

Am I correct in inferring this? Does this mean there isn't any
directive possible which takes more than one block at once?

Thanks a ton!

Regards,
Awal

From sorin.v.manole at gmail.com Sun Nov 15 11:55:22 2015
From: sorin.v.manole at gmail.com (Sorin Manole)
Date: Sun, 15 Nov 2015 13:55:22 +0200
Subject: streaming large nginx generated page
In-Reply-To:
References:
Message-ID:

If the data is generated during the request and there is no way to
precompute it, I guess your best bet would be to generate and feed the
data from a filter handler.

Basically, the content handler for a particular location could be the
empty_gif module, and while the response is sent to the client you
ignore the data generated by empty_gif and send your stream data to
the client instead. And when all the generated data is sent to the
client, you mark the empty_gif buffers as read. empty_gif can be
replaced with static file serving or anything, really.

I'm not sure this would work at all, or how you would deal with the
headers, but I'm looking forward to seeing your progress.

Also, why not consider generating this data in a separate daemon and
just using the proxy module or something?

2015-11-09 20:35 GMT+02:00 Maksim Yevmenkin:

> hello,
>
> suppose i need to export a large amount of nginx generated data
> (static page). i have used a content handler (or content phase
> handler) to create all the in-memory chain and fed it to
> ngx_http_output_filter(). however, i would very much like to avoid
> allocating a large chunk of memory to keep all the data before
> sending.
>
> is there a way to stream a generated page (potentially applying
> different transfer encoding)?
>
> thanks!
> max
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bacek at bacek.com Sun Nov 15 22:38:41 2015
From: bacek at bacek.com (Vasily Chekalkin)
Date: Mon, 16 Nov 2015 09:38:41 +1100
Subject: [PATCH] Provide support for CXXFLAGS for c++ modules
Message-ID:

# HG changeset patch
# User Vasily Chekalkin
# Date 1447626235 -39600
#      Mon Nov 16 09:23:55 2015 +1100
# Node ID 07ad1f9b4307940ecd0b952badba80fab7caba4b
# Parent  ec6b07be88a5108d5b48386e06abe1d1bf975ab3
Provide support for CXXFLAGS for c++ modules.

When you want to implement a module in modern C++ (e.g. C++11) you
have to specify --std=c++11 on the compiler command line. In this case
GCC will complain about using this flag for non-C++ sources. To avoid
unnecessary clutter we separate CFLAGS and CXXFLAGS.
diff -r ec6b07be88a5 -r 07ad1f9b4307 auto/make --- a/auto/make Fri Nov 13 20:10:50 2015 +0300 +++ b/auto/make Mon Nov 16 09:23:55 2015 +1100 @@ -22,6 +22,7 @@ CC = $CC CFLAGS = $CFLAGS +CXXFLAGS = $CXXFLAGS CPP = $CPP LINK = $LINK @@ -410,10 +411,16 @@ ngx_src=`echo $ngx_src | sed -e "s/\//$ngx_regex_dirsep/g"` + # Append CXXFLAGS iff source is c++ + ngx_cpp=`echo $ngx_src \ + | sed -e "s#^.*\.cpp\\$# \\$(CXXFLAGS)#" \ + -e "s#^.*\.cc\\$# \\$(CXXFLAGS)#" \ + -e "s#^$ngx_src\\$##g"` + cat << END >> $NGX_MAKEFILE $ngx_obj: \$(ADDON_DEPS)$ngx_cont$ngx_src - $ngx_cc$ngx_tab$ngx_objout$ngx_obj$ngx_tab$ngx_src$NGX_AUX + $ngx_cc$ngx_cpp$ngx_tab$ngx_objout$ngx_obj$ngx_tab$ngx_src$NGX_AUX END done From mdounin at mdounin.ru Mon Nov 16 14:24:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Nov 2015 17:24:58 +0300 Subject: Question regarding syntax of nginx configs In-Reply-To: References: Message-ID: <20151116142458.GV74233@mdounin.ru> Hello! On Sun, Nov 15, 2015 at 02:14:25PM +0530, Awal Garg wrote: > Heyo! > > I am writing a configuration parser for nginx (in Python). It appears > as if the following constructs are dismissed as invalid by > https://github.com/nginx/nginx/blob/master/src/core/ngx_conf_file.c#l636: > > ``` > ... ; > ``` > and > ``` > ... ; > ``` > > (STRING_TOKEN is any token like `server` or `listen`. I don't really > know what to call them here :/) > > IOW, it seems that for every directive, a leaf representing a block > can only come at the end of a directive and must not be followed by a > statement-terminator. > > Am I correct in inferring this? Yes. > Does this mean there isn't any > directive possible which takes more than one block at once? Yes. -- Maxim Dounin http://nginx.org/ From vl at nginx.com Mon Nov 16 15:06:25 2015 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 16 Nov 2015 15:06:25 +0000 Subject: [nginx] Realip: the $realip_remote_addr variable. Message-ID: details: http://hg.nginx.org/nginx/rev/cebe43bace93 branches: changeset: 6294:cebe43bace93 user: Ruslan Ermilov date: Mon Nov 16 16:02:02 2015 +0300 description: Realip: the $realip_remote_addr variable. 
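As a usage sketch, the new variable lets a log line carry both the
address the connection actually came from and the client address
recovered by the realip module (the trusted range and the log format
name are made-up examples, not part of the commit):

    set_real_ip_from 192.0.2.0/24;        # example trusted proxy range
    real_ip_header   X-Forwarded-For;

    log_format realip '$realip_remote_addr -> $remote_addr "$request"';
    access_log /var/log/nginx/access.log realip;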
diffstat: src/http/modules/ngx_http_realip_module.c | 72 ++++++++++++++++++++++++++++++- 1 files changed, 71 insertions(+), 1 deletions(-) diffs (110 lines): diff -r ec6b07be88a5 -r cebe43bace93 src/http/modules/ngx_http_realip_module.c --- a/src/http/modules/ngx_http_realip_module.c Fri Nov 13 20:10:50 2015 +0300 +++ b/src/http/modules/ngx_http_realip_module.c Mon Nov 16 16:02:02 2015 +0300 @@ -43,9 +43,14 @@ static char *ngx_http_realip(ngx_conf_t static void *ngx_http_realip_create_loc_conf(ngx_conf_t *cf); static char *ngx_http_realip_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); +static ngx_int_t ngx_http_realip_add_variables(ngx_conf_t *cf); static ngx_int_t ngx_http_realip_init(ngx_conf_t *cf); +static ngx_int_t ngx_http_realip_remote_addr_variable(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); + + static ngx_command_t ngx_http_realip_commands[] = { { ngx_string("set_real_ip_from"), @@ -75,7 +80,7 @@ static ngx_command_t ngx_http_realip_co static ngx_http_module_t ngx_http_realip_module_ctx = { - NULL, /* preconfiguration */ + ngx_http_realip_add_variables, /* preconfiguration */ ngx_http_realip_init, /* postconfiguration */ NULL, /* create main configuration */ @@ -105,6 +110,15 @@ ngx_module_t ngx_http_realip_module = { }; +static ngx_http_variable_t ngx_http_realip_vars[] = { + + { ngx_string("realip_remote_addr"), NULL, + ngx_http_realip_remote_addr_variable, 0, 0, 0 }, + + { ngx_null_string, NULL, NULL, 0, 0, 0 } +}; + + static ngx_int_t ngx_http_realip_handler(ngx_http_request_t *r) { @@ -417,6 +431,25 @@ ngx_http_realip_merge_loc_conf(ngx_conf_ static ngx_int_t +ngx_http_realip_add_variables(ngx_conf_t *cf) +{ + ngx_http_variable_t *var, *v; + + for (v = ngx_http_realip_vars; v->name.len; v++) { + var = ngx_http_add_variable(cf, &v->name, v->flags); + if (var == NULL) { + return NGX_ERROR; + } + + var->get_handler = v->get_handler; + var->data = v->data; + } + + return NGX_OK; +} + + +static ngx_int_t ngx_http_realip_init(ngx_conf_t *cf) { ngx_http_handler_pt *h; @@ -440,3 +473,40 @@ ngx_http_realip_init(ngx_conf_t *cf) return NGX_OK; } + + +static ngx_int_t +ngx_http_realip_remote_addr_variable(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + ngx_str_t *addr_text; + ngx_pool_cleanup_t *cln; + ngx_http_realip_ctx_t *ctx; + + ctx = ngx_http_get_module_ctx(r, ngx_http_realip_module); + + if (ctx == NULL && (r->internal || r->filter_finalize)) { + + /* + * if module context was reset, the original address + * can still be found in the cleanup handler + */ + + for (cln = r->pool->cleanup; cln; cln = cln->next) { + if (cln->handler == ngx_http_realip_cleanup) { + ctx = cln->data; + break; + } + } + } + + addr_text = ctx ? &ctx->addr_text : &r->connection->addr_text; + + v->len = addr_text->len; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = addr_text->data; + + return NGX_OK; +} From mdounin at mdounin.ru Mon Nov 16 15:29:47 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Nov 2015 18:29:47 +0300 Subject: [PATCH] Provide support for CXXFLAGS for c++ modules In-Reply-To: References: Message-ID: <20151116152947.GZ74233@mdounin.ru> Hello! On Mon, Nov 16, 2015 at 09:38:41AM +1100, Vasily Chekalkin wrote: > # HG changeset patch > # User Vasily Chekalkin > # Date 1447626235 -39600 > # Mon Nov 16 09:23:55 2015 +1100 > # Node ID 07ad1f9b4307940ecd0b952badba80fab7caba4b > # Parent ec6b07be88a5108d5b48386e06abe1d1bf975ab3 > Provide support for CXXFLAGS for c++ modules. 
> > When you want to implement module in modern C++ (e.g. c++11, etc) you have to > specify --std=c++11 in compiler command line. In this case GCC will complain > about using this flag for non-c++ sources. To avoid unnecessary clutter we > separate CFLAGS and CXXFLAGS. > > diff -r ec6b07be88a5 -r 07ad1f9b4307 auto/make > --- a/auto/make Fri Nov 13 20:10:50 2015 +0300 > +++ b/auto/make Mon Nov 16 09:23:55 2015 +1100 > @@ -22,6 +22,7 @@ > > CC = $CC > CFLAGS = $CFLAGS > +CXXFLAGS = $CXXFLAGS > CPP = $CPP > LINK = $LINK > > @@ -410,10 +411,16 @@ > > ngx_src=`echo $ngx_src | sed -e "s/\//$ngx_regex_dirsep/g"` > > + # Append CXXFLAGS iff source is c++ > + ngx_cpp=`echo $ngx_src \ > + | sed -e "s#^.*\.cpp\\$# \\$(CXXFLAGS)#" \ > + -e "s#^.*\.cc\\$# \\$(CXXFLAGS)#" \ > + -e "s#^$ngx_src\\$##g"` > + > cat << END >> $NGX_MAKEFILE > > $ngx_obj: \$(ADDON_DEPS)$ngx_cont$ngx_src > - $ngx_cc$ngx_tab$ngx_objout$ngx_obj$ngx_tab$ngx_src$NGX_AUX > + $ngx_cc$ngx_cpp$ngx_tab$ngx_objout$ngx_obj$ngx_tab$ngx_src$NGX_AUX This way CFLAGS and CXXFLAGS are not separated, but rather CXXFLAGS is expected to complement CFLAGS. This is not how CXXFLAGS are expected to work. While CXXFLAGS are not specified by any standard, AFAIK, at least GNU catalogue of built-in rules says, https://www.gnu.org/software/make/manual/html_node/Catalogue-of-Rules.html#Catalogue-of-Rules: : Compiling C programs : n.o is made automatically from n.c with a recipe of the form : ?$(CC) $(CPPFLAGS) $(CFLAGS) -c?. : Compiling C++ programs : n.o is made automatically from n.cc, n.cpp, or n.C : with a recipe of the form ?$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c?. We : encourage you to use the suffix ?.cc? for C++ source files instead : of ?.C?. That is, CXXFLAGS is expected to be a replacement of CFLAGS, not an addition to CFLAGS. As far as I understand, correct support for CXXFLAGS will require much more changes, and I'm not sure it worth the effort. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Nov 17 14:56:25 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Nov 2015 14:56:25 +0000 Subject: [nginx] nginx-1.9.7-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/54117529e40b branches: changeset: 6295:54117529e40b user: Maxim Dounin date: Tue Nov 17 17:50:56 2015 +0300 description: nginx-1.9.7-RELEASE diffstat: docs/xml/nginx/changes.xml | 76 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 76 insertions(+), 0 deletions(-) diffs (86 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,82 @@ + + + + +???????? nohostname ???????????? ? syslog. + + +the "nohostname" parameter of logging to syslog. + + + + + +????????? proxy_cache_convert_head. + + +the "proxy_cache_convert_head" directive. + + + + + +?????????? $realip_remote_addr ? ?????? ngx_http_realip_module. + + +the $realip_remote_addr in the ngx_http_realip_module. + + + + + +????????? expires ????? ?? ??????????? ??? ????????????? ??????????. + + +the "expires" directive might not work when using variables. + + + + + +??? ????????????? HTTP/2 +? ??????? ???????? ??? ????????? segmentation fault; +?????? ????????? ? 1.9.6. + + +a segmentation fault might occur in a worker process +when using HTTP/2; +the bug had appeared in 1.9.6. + + + + + +???? nginx ??? ?????? ? ??????? ngx_http_v2_module, +???????? HTTP/2 ??? ???? ??????????? ????????, +???? ???? ?? ??? ?????? ???????? http2 ????????? listen. 
+ + +if nginx was built with the ngx_http_v2_module +it was possible to use the HTTP/2 protocol +even if the "http2" parameter of the "listen" directive was not specified. + + + + + +? ?????? ngx_http_v2_module. + + +in the ngx_http_v2_module. + + + + + + From mdounin at mdounin.ru Tue Nov 17 14:56:27 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Nov 2015 14:56:27 +0000 Subject: [nginx] release-1.9.7 tag Message-ID: details: http://hg.nginx.org/nginx/rev/4221623f2e46 branches: changeset: 6296:4221623f2e46 user: Maxim Dounin date: Tue Nov 17 17:50:57 2015 +0300 description: release-1.9.7 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -389,3 +389,4 @@ e27a215601292872f545a733859e06d01af1017d 5cb7e2eed2031e32d2e5422caf9402758c38a6ad release-1.9.4 942475e10cb47654205ede7ccbe7d568698e665b release-1.9.5 b78018cfaa2f0ec20494fccb16252daa87c48a31 release-1.9.6 +54117529e40b988590ea2d38aae909b0b191663f release-1.9.7 From vbart at nginx.com Tue Nov 17 16:02:13 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 17 Nov 2015 16:02:13 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/92482faf5d8a branches: changeset: 6297:92482faf5d8a user: Valentin Bartenev date: Tue Nov 17 19:01:41 2015 +0300 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 4221623f2e46 -r 92482faf5d8a src/core/nginx.h --- a/src/core/nginx.h Tue Nov 17 17:50:57 2015 +0300 +++ b/src/core/nginx.h Tue Nov 17 19:01:41 2015 +0300 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1009007 -#define NGINX_VERSION "1.9.7" +#define nginx_version 1009008 +#define NGINX_VERSION "1.9.8" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From vbart at nginx.com Tue Nov 17 16:02:16 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 17 Nov 2015 16:02:16 +0000 Subject: [nginx] Adjusted file->sys_offset after the write() syscall. Message-ID: details: http://hg.nginx.org/nginx/rev/8f6d753c1953 branches: changeset: 6298:8f6d753c1953 user: Valentin Bartenev date: Tue Nov 17 19:01:41 2015 +0300 description: Adjusted file->sys_offset after the write() syscall. This fixes suboptimal behavior caused by surplus lseek() for sequential writes on systems without pwrite(). A consecutive read after write might result in an error on systems without pread() and pwrite(). Fortunately, at the moment there are no widely used systems without these syscalls. diffstat: src/os/unix/ngx_files.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r 92482faf5d8a -r 8f6d753c1953 src/os/unix/ngx_files.c --- a/src/os/unix/ngx_files.c Tue Nov 17 19:01:41 2015 +0300 +++ b/src/os/unix/ngx_files.c Tue Nov 17 19:01:41 2015 +0300 @@ -226,6 +226,7 @@ ngx_write_file(ngx_file_t *file, u_char return NGX_ERROR; } + file->sys_offset += n; file->offset += n; written += n; From vbart at nginx.com Tue Nov 17 16:02:18 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 17 Nov 2015 16:02:18 +0000 Subject: [nginx] Handled EINTR from write() and pwrite() syscalls. Message-ID: details: http://hg.nginx.org/nginx/rev/5170c3040ce1 branches: changeset: 6299:5170c3040ce1 user: Valentin Bartenev date: Tue Nov 17 19:01:41 2015 +0300 description: Handled EINTR from write() and pwrite() syscalls. This is in addition to 6fce16b1fc10. 
diffstat: src/os/unix/ngx_files.c | 23 ++++++++++++++++++++--- 1 files changed, 20 insertions(+), 3 deletions(-) diffs (47 lines): diff -r 8f6d753c1953 -r 5170c3040ce1 src/os/unix/ngx_files.c --- a/src/os/unix/ngx_files.c Tue Nov 17 19:01:41 2015 +0300 +++ b/src/os/unix/ngx_files.c Tue Nov 17 19:01:41 2015 +0300 @@ -176,7 +176,8 @@ ngx_thread_read_handler(void *data, ngx_ ssize_t ngx_write_file(ngx_file_t *file, u_char *buf, size_t size, off_t offset) { - ssize_t n, written; + ssize_t n, written; + ngx_err_t err; ngx_log_debug4(NGX_LOG_DEBUG_CORE, file->log, 0, "write: %d, %p, %uz, %O", file->fd, buf, size, offset); @@ -189,7 +190,15 @@ ngx_write_file(ngx_file_t *file, u_char n = pwrite(file->fd, buf + written, size, offset); if (n == -1) { - ngx_log_error(NGX_LOG_CRIT, file->log, ngx_errno, + err = ngx_errno; + + if (err == NGX_EINTR) { + ngx_log_debug0(NGX_LOG_DEBUG_CORE, file->log, err, + "pwrite() was interrupted"); + continue; + } + + ngx_log_error(NGX_LOG_CRIT, file->log, err, "pwrite() \"%s\" failed", file->name.data); return NGX_ERROR; } @@ -221,7 +230,15 @@ ngx_write_file(ngx_file_t *file, u_char n = write(file->fd, buf + written, size); if (n == -1) { - ngx_log_error(NGX_LOG_CRIT, file->log, ngx_errno, + err = ngx_errno; + + if (err == NGX_EINTR) { + ngx_log_debug0(NGX_LOG_DEBUG_CORE, file->log, err, + "write() was interrupted"); + continue; + } + + ngx_log_error(NGX_LOG_CRIT, file->log, err, "write() \"%s\" failed", file->name.data); return NGX_ERROR; } From vbart at nginx.com Tue Nov 17 16:02:21 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 17 Nov 2015 16:02:21 +0000 Subject: [nginx] Moved file writev() handling code to a separate function. Message-ID: details: http://hg.nginx.org/nginx/rev/be6af0906a4d branches: changeset: 6300:be6af0906a4d user: Valentin Bartenev date: Tue Nov 17 19:01:41 2015 +0300 description: Moved file writev() handling code to a separate function. No functional changes. 
diffstat: src/os/unix/ngx_files.c | 95 +++++++++++++++++++++++++++++------------------- 1 files changed, 57 insertions(+), 38 deletions(-) diffs (129 lines): diff -r 5170c3040ce1 -r be6af0906a4d src/os/unix/ngx_files.c --- a/src/os/unix/ngx_files.c Tue Nov 17 19:01:41 2015 +0300 +++ b/src/os/unix/ngx_files.c Tue Nov 17 19:01:41 2015 +0300 @@ -14,6 +14,9 @@ static void ngx_thread_read_handler(void *data, ngx_log_t *log); #endif +static ssize_t ngx_writev_file(ngx_file_t *file, ngx_array_t *vec, size_t size, + off_t offset); + #if (NGX_HAVE_FILE_AIO) @@ -282,7 +285,6 @@ ngx_write_chain_to_file(ngx_file_t *file u_char *prev; size_t size; ssize_t total, n; - ngx_err_t err; ngx_array_t vec; struct iovec *iov, iovs[NGX_IOVS]; @@ -344,46 +346,12 @@ ngx_write_chain_to_file(ngx_file_t *file return total + n; } - if (file->sys_offset != offset) { - if (lseek(file->fd, offset, SEEK_SET) == -1) { - ngx_log_error(NGX_LOG_CRIT, file->log, ngx_errno, - "lseek() \"%s\" failed", file->name.data); - return NGX_ERROR; - } + n = ngx_writev_file(file, &vec, size, offset); - file->sys_offset = offset; + if (n == NGX_ERROR) { + return n; } -eintr: - - n = writev(file->fd, vec.elts, vec.nelts); - - if (n == -1) { - err = ngx_errno; - - if (err == NGX_EINTR) { - ngx_log_debug0(NGX_LOG_DEBUG_CORE, file->log, err, - "writev() was interrupted"); - goto eintr; - } - - ngx_log_error(NGX_LOG_CRIT, file->log, err, - "writev() \"%s\" failed", file->name.data); - return NGX_ERROR; - } - - if ((size_t) n != size) { - ngx_log_error(NGX_LOG_CRIT, file->log, 0, - "writev() \"%s\" has written only %z of %uz", - file->name.data, n, size); - return NGX_ERROR; - } - - ngx_log_debug2(NGX_LOG_DEBUG_CORE, file->log, 0, - "writev: %d, %z", file->fd, n); - - file->sys_offset += n; - file->offset += n; offset += n; total += n; @@ -393,6 +361,57 @@ eintr: } +static ssize_t +ngx_writev_file(ngx_file_t *file, ngx_array_t *vec, size_t size, off_t offset) +{ + ssize_t n; + ngx_err_t err; + + if (file->sys_offset != offset) { + if (lseek(file->fd, offset, SEEK_SET) == -1) { + ngx_log_error(NGX_LOG_CRIT, file->log, ngx_errno, + "lseek() \"%s\" failed", file->name.data); + return NGX_ERROR; + } + + file->sys_offset = offset; + } + +eintr: + + n = writev(file->fd, vec->elts, vec->nelts); + + if (n == -1) { + err = ngx_errno; + + if (err == NGX_EINTR) { + ngx_log_debug0(NGX_LOG_DEBUG_CORE, file->log, err, + "writev() was interrupted"); + goto eintr; + } + + ngx_log_error(NGX_LOG_CRIT, file->log, err, + "writev() \"%s\" failed", file->name.data); + return NGX_ERROR; + } + + if ((size_t) n != size) { + ngx_log_error(NGX_LOG_CRIT, file->log, 0, + "writev() \"%s\" has written only %z of %uz", + file->name.data, n, size); + return NGX_ERROR; + } + + ngx_log_debug2(NGX_LOG_DEBUG_CORE, file->log, 0, + "writev: %d, %z", file->fd, n); + + file->sys_offset += n; + file->offset += n; + + return n; +} + + ngx_int_t ngx_set_file_time(u_char *name, ngx_fd_t fd, time_t s) { From vbart at nginx.com Tue Nov 17 16:02:23 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 17 Nov 2015 16:02:23 +0000 Subject: [nginx] Used the pwritev() syscall for writing files where possi... Message-ID: details: http://hg.nginx.org/nginx/rev/b5a87b51be24 branches: changeset: 6301:b5a87b51be24 user: Valentin Bartenev date: Tue Nov 17 19:01:41 2015 +0300 description: Used the pwritev() syscall for writing files where possible. It is more effective, because it doesn't require a separate lseek(). 
diffstat: auto/unix | 16 ++++++++++++++++ src/os/unix/ngx_files.c | 38 +++++++++++++++++++++++++++++++++++--- 2 files changed, 51 insertions(+), 3 deletions(-) diffs (82 lines): diff -r be6af0906a4d -r b5a87b51be24 auto/unix --- a/auto/unix Tue Nov 17 19:01:41 2015 +0300 +++ b/auto/unix Tue Nov 17 19:01:41 2015 +0300 @@ -589,6 +589,22 @@ ngx_feature_test="char buf[1]; ssize_t n . auto/feature +# pwritev() was introduced in FreeBSD 6 and Linux 2.6.30, glibc 2.10 + +ngx_feature="pwritev()" +ngx_feature_name="NGX_HAVE_PWRITEV" +ngx_feature_run=no +ngx_feature_incs='#include ' +ngx_feature_path= +ngx_feature_libs= +ngx_feature_test="char buf[1]; struct iovec vec[1]; ssize_t n; + vec[0].iov_base = buf; + vec[0].iov_len = 1; + n = pwritev(1, vec, 1, 0); + if (n == -1) return 1" +. auto/feature + + ngx_feature="sys_nerr" ngx_feature_name="NGX_SYS_NERR" ngx_feature_run=value diff -r be6af0906a4d -r b5a87b51be24 src/os/unix/ngx_files.c --- a/src/os/unix/ngx_files.c Tue Nov 17 19:01:41 2015 +0300 +++ b/src/os/unix/ngx_files.c Tue Nov 17 19:01:41 2015 +0300 @@ -367,6 +367,38 @@ ngx_writev_file(ngx_file_t *file, ngx_ar ssize_t n; ngx_err_t err; + ngx_log_debug3(NGX_LOG_DEBUG_CORE, file->log, 0, + "writev: %d, %uz, %O", file->fd, size, offset); + +#if (NGX_HAVE_PWRITEV) + +eintr: + + n = pwritev(file->fd, vec->elts, vec->nelts, offset); + + if (n == -1) { + err = ngx_errno; + + if (err == NGX_EINTR) { + ngx_log_debug0(NGX_LOG_DEBUG_CORE, file->log, err, + "pwritev() was interrupted"); + goto eintr; + } + + ngx_log_error(NGX_LOG_CRIT, file->log, err, + "pwritev() \"%s\" failed", file->name.data); + return NGX_ERROR; + } + + if ((size_t) n != size) { + ngx_log_error(NGX_LOG_CRIT, file->log, 0, + "pwritev() \"%s\" has written only %z of %uz", + file->name.data, n, size); + return NGX_ERROR; + } + +#else + if (file->sys_offset != offset) { if (lseek(file->fd, offset, SEEK_SET) == -1) { ngx_log_error(NGX_LOG_CRIT, file->log, ngx_errno, @@ -402,10 +434,10 @@ eintr: return NGX_ERROR; } - ngx_log_debug2(NGX_LOG_DEBUG_CORE, file->log, 0, - "writev: %d, %z", file->fd, n); + file->sys_offset += n; - file->sys_offset += n; +#endif + file->offset += n; return n; From mdounin at mdounin.ru Tue Nov 17 16:42:28 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Nov 2015 16:42:28 +0000 Subject: [nginx] Missing "variable" word added. Message-ID: details: http://hg.nginx.org/nginx/rev/bec5b3093337 branches: changeset: 6302:bec5b3093337 user: Maxim Dounin date: Tue Nov 17 19:41:39 2015 +0300 description: Missing "variable" word added. diffstat: docs/xml/nginx/changes.xml | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -30,7 +30,7 @@ the "proxy_cache_convert_head" directive ?????????? $realip_remote_addr ? ?????? ngx_http_realip_module. -the $realip_remote_addr in the ngx_http_realip_module. +the $realip_remote_addr variable in the ngx_http_realip_module. From rmind at noxt.eu Tue Nov 17 17:25:30 2015 From: rmind at noxt.eu (Mindaugas Rasiukevicius) Date: Tue, 17 Nov 2015 17:25:30 +0000 Subject: Mark stale cache content as "invalid" on non-cacheable responses Message-ID: <20151117172530.2b69f1b105b92e3a4a6ed376@noxt.eu> Hi, Context: consider nginx used as a cache with proxy_cache_use_stale set to 'http_500' and the 'updating' parameter set i.e. it caches errors and serves the stale content while updating. 
Suppose the upstream temporarily responds with HTTP 504 and Cache-Control being max-age=3. The error gets cached, but after 3 seconds it expires. At this point, let's say the upstream server starts serving HTTP 200 responses, but with Cache-Control set to 'no-cache'. The cache manager will not LRU the expired content immediately; it will stay in the EXPIRED state while subsequent requests will result in 200s. Problem: if there are multiple processes racing, the ones in the UPDATING state will serve stale 504s. That results in sporadic errors, e.g.: 200 EXPIRED 504 UPDATING 200 EXPIRED ... At the very least, I think the stale cache content should be marked as "invalid" after the no-cache response (with the possibility to become valid again if it becomes cacheable). Whether the object should be kept at all is something to debate. Please find the preliminary patch attached. -- Mindaugas -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ngx_stale_cache_inv.patch URL: From rmind at noxt.eu Tue Nov 17 17:26:52 2015 From: rmind at noxt.eu (Mindaugas Rasiukevicius) Date: Tue, 17 Nov 2015 17:26:52 +0000 Subject: ngx_ext_rename_file: remove the target file if ngx_copy_file() fails In-Reply-To: <20150803095205.GM19190@mdounin.ru> References: <20150709141048.f42bc4b73ec7edcd661207c4@noxt.eu> <20150803095205.GM19190@mdounin.ru> Message-ID: <20151117172652.23f32c65adc41932bbe4dd66@noxt.eu> Hi, Sorry for late response. Maxim Dounin wrote: > ... > > By calling ngx_delete_file() at this point, you do this for all > errors returned by ngx_copy_file(), including cases when it wasn't > able to open a destination file for some reason. This will result > in additional confusing errors, and looks like a wrong approach. > > If you want nginx to remove a destination file, this should be > done in ngx_copy_file(). > Good point. Please find the new patch attached. -- Mindaugas -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ngx_copy_file.patch URL: From richard at fussenegger.info Tue Nov 17 18:15:43 2015 From: richard at fussenegger.info (Richard Fussenegger) Date: Tue, 17 Nov 2015 19:15:43 +0100 Subject: [nginx_gzip_static] Necessity to create empty file with always option. Message-ID: <564B6ECF.4060009@fussenegger.info> Hi guys! I have the following weird situation: Several files with .gz extension are on disk and I have a location were requests are processed that do not include it, so I set the option gzip_static to always and also installed the gunzip module. The problem is, I still need to create EMPTY files without the .gz extension on disk for everything to work as expected. Expected is that gunzip extracts the archives if no GZIP support is announced by the client and nginx directly streams the response if the client did. The configuration is fairly easy: location /var/files { internal; gunzip on; gzip_proxied any; gzip_static always; aio on; sendfile off; tcp_nodelay on; tcp_nopush off; try_files $uri =404; } location / { location ~ '^/[a-z0-9]{40}\.[a-z0-9-\.]+$' { include php.ngx; try_files /validate-token.php =404; } return 444; } The PHP logic is simple: References: <20151117172530.2b69f1b105b92e3a4a6ed376@noxt.eu> Message-ID: <20151117182500.GQ74233@mdounin.ru> Hello! On Tue, Nov 17, 2015 at 05:25:30PM +0000, Mindaugas Rasiukevicius wrote: > Hi, > > Context: consider nginx used as a cache with proxy_cache_use_stale set > to 'http_500' and the 'updating' parameter set i.e. 
> it caches errors and serves the stale content while updating. Suppose
> the upstream temporarily responds with HTTP 504 and Cache-Control
> being max-age=3. [...]

I don't see how a response with "no-cache" is any different from an
earlier error. Consider a slightly different scenario:

- a response is cached and then expires,

- an attempt to fetch a new response results in a non-cacheable
  error.

In such a case, removing the previously cached response is the worst
thing we can possibly do. We are expected to return previously cached
stale responses in all cases we are configured to do so.

The change you've proposed completely rules out the possibility of
correct handling of this scenario.

Trivial solutions to the problem you've described would be to disable
use of stale responses completely (which is the default), or use
"proxy_cache_use_stale http_504", or to avoid caching of 504 errors
(and the latter is something the RFC suggests to do by default with
any errors).

And while I agree that it would be good to behave better in the
scenario you've described, I tend to disagree with the change
suggested, and I'm not even sure a good solution exists.

--
Maxim Dounin
http://nginx.org/

From vbart at nginx.com Tue Nov 17 18:29:00 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 17 Nov 2015 21:29 +0300
Subject: [nginx_gzip_static] Necessity to create empty file with always option.
In-Reply-To: <564B6ECF.4060009@fussenegger.info>
References: <564B6ECF.4060009@fussenegger.info>
Message-ID: <68945185.SmVcZVqrS9@vbart-workstation>

On Tuesday 17 November 2015 19:15:43 Richard Fussenegger wrote:
> Hi guys!
>
> I have the following weird situation: several files with a .gz
> extension are on disk [...]
>
> The configuration is fairly easy:
>
> location /var/files {
>     internal;
>
>     gunzip on;
>     gzip_proxied any;
>     gzip_static always;
>
>     aio on;
>     sendfile off;
>     tcp_nodelay on;
>     tcp_nopush off;
>
>     try_files $uri =404;
> }
[..]

Why do you have this useless "try_files $uri =404;" directive here?
It causes your problem.

Please note that this mailing list is for developers. You should ask
questions on the users' list:
http://mailman.nginx.org/mailman/listinfo/nginx
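For clarity, the same location without the offending directive -- a
minimal sketch based on the configuration quoted above. With "always",
gzip_static serves the .gz file without checking for an uncompressed
twin on disk, so the empty placeholder files become unnecessary:

    location /var/files {
        internal;

        gunzip      on;      # decompress for clients without gzip support
        gzip_static always;  # always serve the .gz file from disk
    }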
wbr, Valentin V. Bartenev

From richard at fussenegger.info Tue Nov 17 18:31:40 2015
From: richard at fussenegger.info (Richard Fussenegger)
Date: Tue, 17 Nov 2015 19:31:40 +0100
Subject: [nginx_gzip_static] Necessity to create empty file with always option.
In-Reply-To: <68945185.SmVcZVqrS9@vbart-workstation>
References: <564B6ECF.4060009@fussenegger.info>
 <68945185.SmVcZVqrS9@vbart-workstation>
Message-ID: <564B728C.5070203@fussenegger.info>

Thanks for the answer and solution! Sorry for using the wrong mailing
list; it will not happen again.

Richard

On 11/17/2015 7:29 PM, Valentin V. Bartenev wrote:
> [...]

--
Kind regards,

Richard Fussenegger, MSc
Web · Mail · Skype · Phone (+49 176 4245 3664)
Tückingstr. 50, 41460 Neuss, NRW, Deutschland

I prefer encrypted e-mails. The fingerprint of my public key is
|917D AF3F 5A0A AE6C 8661 2330 C24B E2A6 A907 11B9|. Learn how you,
too, can encrypt your e-mails: Email Self-Defense.

From rmind at noxt.eu Tue Nov 17 22:22:00 2015
From: rmind at noxt.eu (Mindaugas Rasiukevicius)
Date: Tue, 17 Nov 2015 22:22:00 +0000
Subject: Mark stale cache content as "invalid" on non-cacheable responses
In-Reply-To: <20151117182500.GQ74233@mdounin.ru>
References: <20151117172530.2b69f1b105b92e3a4a6ed376@noxt.eu>
 <20151117182500.GQ74233@mdounin.ru>
Message-ID: <20151117222200.011e1e0df0a15dd6d3969ff1@noxt.eu>

Maxim Dounin wrote:
> > Context: consider nginx used as a cache with proxy_cache_use_stale
> > set to 'http_500' and the 'updating' parameter set i.e. it caches
> > errors and serves the stale content while updating. [...]
> >
> > Problem: if there are multiple processes racing, the ones in the
> > UPDATING state will serve stale 504s.
> > That results in sporadic errors, e.g.:
> >
> > 200 EXPIRED
> > 504 UPDATING
> > 200 EXPIRED
> > [...]
>
> I don't see how a response with "no-cache" is any different from an
> earlier error. Consider a slightly different scenario:
>
> - a response is cached and then expires,
>
> - an attempt to fetch a new response results in a non-cacheable
>   error.
>
> In such a case, removing the previously cached response is the worst
> thing we can possibly do. We are expected to return previously cached
> stale responses in all cases we are configured to do so.
>
> The change you've proposed completely rules out the possibility of
> correct handling of this scenario.
>

In your scenario, the upstream server requested such behaviour; it is
a transition point. The "worst thing" also happens if the response
would result in a temporary cacheable error. This is primarily a
question of trusting/calibrating your upstream server (i.e. setting
the Cache-Control headers) vs deliberately overriding it. There is no
"correct" handling in a general sense here, because this really
depends on the caching layers you build or integrate with.

Also, I would argue that the expectation is to serve the stale content
while the new content and its parameters are *unknown* (say, because,
for instance, it is still being fetched). The point here is that the
upstream server has made it known by serving a 200 and indicating the
desire for it not to be cached. Let me put it this way: how else could
the upstream server tell the cache in front that it has to exit the
serve-stale state? Currently, nginx gets stuck -- the only way to
eliminate those sporadic errors is to manually purge those stale
files.

> Trivial solutions to the problem you've described would be to disable
> use of stale responses completely (which is the default), or use
> "proxy_cache_use_stale http_504", or to avoid caching of 504 errors
> (and the latter is something the RFC suggests to do by default with
> any errors).
>
> And while I agree that it would be good to behave better in the
> scenario you've described, I tend to disagree with the change
> suggested, and I'm not even sure a good solution exists.

Right, whether 504s specifically (and other timeouts) should be cached
is something that can be debated. The real question here is what the
users want to achieve with proxy_cache_use_stale. It is a mechanism
provided to avoid redundant requests to the upstream server, right?
And one aspect in particular is caching errors for a very short time
to defend a struggling or failing upstream server. I hope we can agree
that it is rather practical to recover from such a state.

Sporadically serving errors makes users unhappy. However, it is not
even about the errors here. You can also reproduce the problem with
different content, i.e. if the upstream server serves a cacheable HTTP
200 (call it A) and then a non-cacheable HTTP 200 (call it B). Some
clients will get A and some will get B (depending on who is winning
the update race). Hence the real problem is that nginx is not
consistent: it serves different content based on a *race condition*.
How exactly is this beneficial or desirable?
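For the record, a minimal configuration sketch of the kind of setup in
which we observe this (zone name, sizes and the upstream address are
placeholders, not our production values):

    proxy_cache_path /var/cache/nginx keys_zone=demo:10m;

    server {
        location / {
            proxy_pass http://127.0.0.1:8081;  # placeholder upstream
            proxy_cache demo;

            # serve stale content for upstream 500s and while a new
            # copy is being fetched -- the combination discussed above
            proxy_cache_use_stale updating http_500;
        }
    }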
--
Mindaugas

From mdounin at mdounin.ru Wed Nov 18 17:15:04 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 18 Nov 2015 20:15:04 +0300
Subject: Mark stale cache content as "invalid" on non-cacheable responses
In-Reply-To: <20151117222200.011e1e0df0a15dd6d3969ff1@noxt.eu>
References: <20151117172530.2b69f1b105b92e3a4a6ed376@noxt.eu>
 <20151117182500.GQ74233@mdounin.ru>
 <20151117222200.011e1e0df0a15dd6d3969ff1@noxt.eu>
Message-ID: <20151118171504.GU74233@mdounin.ru>

Hello!

On Tue, Nov 17, 2015 at 10:22:00PM +0000, Mindaugas Rasiukevicius wrote:

> Maxim Dounin wrote:
> > > Context: consider nginx used as a cache with proxy_cache_use_stale
> > > set to 'http_500' and the 'updating' parameter set i.e. it caches
> > > errors and serves the stale content while updating. [...]
> > >
> > > Problem: if there are multiple processes racing, the ones in the
> > > UPDATING state will serve stale 504s.
The point here is that the upstream > server has made it known by serving a 200 and indicating the desire for it > to not be cached. Let me put it this way: how else the upstream server > could tell the cache in front that it has to exit the serve-stale state? > Currently, nginx gets stuck -- the only way to eliminate those sporadic > errors is to manually purge those stale files. As of now, there is no way how upstream server can control how previously cached responses will be used to serve stale responses (if nginx is configured to do so). You suggest to address it by making 200 + no-cache to be special and mean something "please remove anything cached". This disagree with the code you've provided though, as it makes any non-cacheable response special. Additionally, this disagree with various use cases when a non-cacheable response doesn't mean anything special, but rather an error, even if returned with status 200. Or, in some more complicated setups, it may be just a user-specific response (which shouldn't be cached, in contrast to generic responses to the same resource). > > Trivial solutions to the problem you've described would be to > > disable use of stale responses completely (which is the default), > > or use "proxy_cache_use_stale http_504", or to avoid caching of > > 504 errors (and the later is something RFC suggests to do by > > default with any errors). > > > > And while I agree that it would be good to behave better in the > > scenario you've described, I tend to disagree with the change > > suggested, and I'm not even sure a good solution exists. > > Right, whether 504s specifically (and other timeouts) should be cached is > something what can be debated. The real question here is what the users > want to achieve with proxy_cache_use_stale. It is a mechanism provided > to avoid the redundant requests to the upstream server, right? And one > aspect in particular is caching the errors for very short time to defend > a struggling or failing upstream server. It hope we can agree that it is > rather practical to recover from such state. Caching errors is not something proxy_cache_use_stale was introduced for. And this case rather contradicts proxy_cache_use_stale assumptions about upstream server behaviour. That is, two basic options are to either change the behaviour, or to avoid using "proxy_cache_use_stale updating". > Sporadically serving errors makes users unhappy. However, it is not even > about the errors here. You can also reproduce the problem with different > content i.e. if the upstream server serves cacheable HTTP 200 (call it A) > and then non-cacheable HTTP 200 (call it B). Some clients will get A and > some will get B (depending on who is winning the update race). Hence the > real problem is that nginx is not consistent: it serves different content > based on a *race condition*. How exactly is this beneficial or desirable? This example is basically the same, so see above. Again, I don't say current behaviour is good. It has an obvious limitation, and it would be good to resolve this limitation. But the solution proposed doesn't look like a good one either. 
--
Maxim Dounin
http://nginx.org/

From rmind at noxt.eu  Wed Nov 18 18:56:38 2015
From: rmind at noxt.eu (Mindaugas Rasiukevicius)
Date: Wed, 18 Nov 2015 18:56:38 +0000
Subject: Mark stale cache content as "invalid" on non-cacheable responses
In-Reply-To: <20151118171504.GU74233@mdounin.ru>
References: <20151117172530.2b69f1b105b92e3a4a6ed376@noxt.eu> <20151117182500.GQ74233@mdounin.ru> <20151117222200.011e1e0df0a15dd6d3969ff1@noxt.eu> <20151118171504.GU74233@mdounin.ru>
Message-ID: <20151118185638.13b072bc53455b083e267689@noxt.eu>

Maxim Dounin wrote:
> <...>
> >
> > In your scenario, the upstream server requested such behaviour; it is a
> > transition point.
>
> It didn't request anything.  It merely returned an error.
>

I am afraid I cannot agree with this.  Cache-Control is a directive which requests certain behaviour from a cache.  Think of 'no-cache' as a barrier marking the necessary transition point.  RFC 7234 section 4.2.4 ("Serving Stale Responses") seems to be clear on the stale case too (section 4 also makes an obvious point that the most recent response should be obeyed):

   A cache MUST NOT generate a stale response if it is prohibited by an
   explicit in-protocol directive (e.g., by a "no-store" or "no-cache"
   cache directive, a "must-revalidate" cache-response-directive, or an
   applicable "s-maxage" or "proxy-revalidate" cache-response-directive;
   see Section 5.2.2).

> > The "worst thing" also happens if the response would
> > result in a temporary cacheable error.
>
> And that's why returning a "temporary cacheable error" is a bad
> idea if you are using proxy_cache_use_stale.
>
> > This is primarily a question of
> > trusting/calibrating your upstream server (i.e. setting the
> > Cache-Control headers) vs deliberately overriding it.  There is no
> > "correct" handling in a general sense here, because this really depends
> > on the caching layers you build or integrate with.
>
> I agree: there is no correct handling if you don't know your
> upstream server behaviour.  By enabling use of stale responses you
> agree that your upstream server will behave accordingly.  In your
> scenario, the upstream server misbehaves, and this (expectedly)
> causes the problem.

Why is temporarily caching an error a bad idea?  The upstream server in my example had such a configuration deliberately; it did not misbehave.  For the given URI it does serve dynamic content which must never be cached.  However, it has a more general policy asking to cache the errors for 3 seconds.  This is to defend the potentially struggling or failing origin.  It seems like quite a practical reason; I think it is something used quite commonly in the industry.

> > Also, I would argue that the expectation is to serve the stale content
> > while the new content and its parameters are *unknown* (say, because,
> > for instance, it is still being fetched).  The point here is that the
> > upstream server has made it known by serving a 200 and indicating the
> > desire for it to not be cached.  Let me put it this way: how else the
> > upstream server could tell the cache in front that it has to exit the
> > serve-stale state?  Currently, nginx gets stuck -- the only way to
> > eliminate those sporadic errors is to manually purge those stale files.
>
> As of now, there is no way for the upstream server to control how
> previously cached responses will be used to serve stale responses
> (if nginx is configured to do so).

Again, the way I interpret the RFC is that the Cache-Control header *is* the way.
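Schematically, the sequence that triggers the problem looks like this on the wire (illustrative messages reconstructed from the scenario above, not captured traffic):

    HTTP/1.1 504 Gateway Timeout
    Cache-Control: max-age=3

    ... 3 seconds pass; the cached error expires but stays on disk ...

    HTTP/1.1 200 OK
    Cache-Control: no-cache

After the second response, workers winning the update race serve fresh 200s, while workers in the UPDATING state may still serve the stale 504.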
> You suggest addressing it by making 200 + no-cache special, meaning
> something like "please remove anything cached".  This disagrees
> with the code you've provided though, as it makes any non-cacheable
> response special.  Additionally, this disagrees with various use
> cases where a non-cacheable response doesn't mean anything special,
> but is rather an error, even if returned with status 200.  Or, in
> some more complicated setups, it may be just a user-specific
> response (which shouldn't be cached, in contrast to generic
> responses to the same resource).

In the original case, nginx sporadically throws errors at users when there is no real error, while temporarily caching errors when they indeed happen is a beneficial and desired feature.  However, I do not think it really matters whether one of the responses is an error or not.  Let's talk about the generic case.  If we have a sequence of cacheable responses and then a response with the Cache-Control header set to 'no-cache', then I believe the cache must invalidate that content.  Because otherwise it does not obey the upstream server and does not preserve the consistency of the content.

Let's put it this way: what is your use case, i.e. when is such behaviour problematic?  If you have a location (object or page) where the upstream server constantly mixes "cache me" and "don't cache me", then there is no point in caching it (i.e. it is inherently not cacheable content which just busts your cache anyway).

> > Right, whether 504s specifically (and other timeouts) should be cached
> > is something what can be debated.  The real question here is what the
> > users want to achieve with proxy_cache_use_stale.  It is a mechanism
> > provided to avoid the redundant requests to the upstream server,
> > right?  And one aspect in particular is caching the errors for very
> > short time to defend a struggling or failing upstream server.  It hope
> > we can agree that it is rather practical to recover from such state.
>
> Caching errors is not something proxy_cache_use_stale was
> introduced for.  And this case rather contradicts
> proxy_cache_use_stale assumptions about upstream server behaviour.
> That is, two basic options are to either change the behaviour, or
> to avoid using "proxy_cache_use_stale updating".

Perhaps it was not, but it provides such an option and the option is used in the wild.  Again, the presence of an error here does not matter much, as the real problem is obeying the upstream server directives and preserving the consistency.

> > Sporadically serving errors makes users unhappy.  However, it is not
> > even about the errors here.  You can also reproduce the problem with
> > different content i.e. if the upstream server serves cacheable HTTP 200
> > (call it A) and then non-cacheable HTTP 200 (call it B).  Some clients
> > will get A and some will get B (depending on who is winning the update
> > race).  Hence the real problem is that nginx is not consistent: it
> > serves different content based on a *race condition*.  How exactly is
> > this beneficial or desirable?
>
> This example is basically the same, so see above.
>

Right, it is just a good illustration of the consistency problem.  I do not really see a conceptual difference between the current nginx behaviour and a database sporadically returning the result of some old transaction.  It's broken.

> Again, I don't say current behaviour is good.  It has an obvious
> limitation, and it would be good to resolve this limitation.  But
> the solution proposed doesn't look like a good one either.
Okay, so what solution do you propose? -- Mindaugas From mdounin at mdounin.ru Wed Nov 18 20:25:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Nov 2015 23:25:58 +0300 Subject: Mark stale cache content as "invalid" on non-cacheable responses In-Reply-To: <20151118185638.13b072bc53455b083e267689@noxt.eu> References: <20151117172530.2b69f1b105b92e3a4a6ed376@noxt.eu> <20151117182500.GQ74233@mdounin.ru> <20151117222200.011e1e0df0a15dd6d3969ff1@noxt.eu> <20151118171504.GU74233@mdounin.ru> <20151118185638.13b072bc53455b083e267689@noxt.eu> Message-ID: <20151118202558.GX74233@mdounin.ru> Hello! On Wed, Nov 18, 2015 at 06:56:38PM +0000, Mindaugas Rasiukevicius wrote: > Maxim Dounin wrote: > > <...> > > > > > > In your scenario, the upstream server requested such behaviour; it is a > > > transition point. > > > > It didn't requested anything. It merely returned an error. > > > > I am afraid I cannot agree with this. Cache-Control is a directive which > requests certain behaviour from a cache. Think of 'no-cache' as a barrier > marking the necessary transition point. RFC 7234 section 4.2.4 ("Serving > Stale Responses") seems to be clear on the stale case too (section 4 also > makes an obvious point that the most recent response should be obeyed): > > A cache MUST NOT generate a stale response if it is prohibited by an > explicit in-protocol directive (e.g., by a "no-store" or "no-cache" > cache directive, a "must-revalidate" cache-response-directive, or an > applicable "s-maxage" or "proxy-revalidate" cache-response-directive; > see Section 5.2.2). The response stored in cache doesn't have "no-cache" nor any other directives in it, and this "MUST NOT" certainly doesn't apply to it. In the scenario I've described, the response in cache is a correct (but stale) response, as returned by an upstream server when it was up and running normally. In the scenario you've described, the response in cache is a "temporary cacheable error", and it doesn't have any directives attached to it either. > > > The "worst thing" also happens if the response would > > > result in a temporary cacheable error. > > > > And that's why returning a "temporary cacheable error" is a bad > > idea if you are using proxy_cache_use_stale. > > > > > This is primarily a question of > > > trusting/calibrating your upstream server (i.e. setting the > > > Cache-Control headers) vs deliberately overriding it. There is no > > > "correct" handling in a general sense here, because this really depends > > > on the caching layers you build or integrate with. > > > > I agree: there is no correct handling if you don't know your > > upstream server behaviour. By enabling use of stale responses you > > agree that your upstream server will behave accordingly. In your > > scenario, the upstream server misbehaves, and this (expectedly) > > causes the problem. > > Why temporary caching of an error is a bad idea? The upstream server > in my example had such configuration deliberately, it did not misbehave. > For the given URI it does serve the dynamic content which must never be > cached. However, it has a more general policy asking to cache the errors > for 3 seconds. This is to defend the potentially struggling or failing > origin. It seems like a quite practical reason; I think it is something > used quite commonly in the industry. The problem is that such a configuration isn't compatible with "proxy_cache_use_stale updating" assumptions about the upstream behaviour. 
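For contrast, an upstream that stays within those assumptions would mark its errors explicitly non-cacheable rather than giving them a short positive lifetime.  If the origin were itself nginx, that could look like the following (a sketch with hypothetical names, not a configuration from this thread):

    # on the origin: never let intermediaries cache error pages
    error_page 504 /errors/504.html;

    location /errors/ {
        internal;
        # "always" is needed so the header is also added to
        # non-2xx/3xx responses
        add_header Cache-Control "no-cache" always;
    }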
> > > Also, I would argue that the expectation is to serve the stale content
> > > while the new content and its parameters are *unknown* (say, because,
> > > for instance, it is still being fetched).  The point here is that the
> > > upstream server has made it known by serving a 200 and indicating the
> > > desire for it to not be cached.  Let me put it this way: how else the
> > > upstream server could tell the cache in front that it has to exit the
> > > serve-stale state?  Currently, nginx gets stuck -- the only way to
> > > eliminate those sporadic errors is to manually purge those stale files.
> >
> > As of now, there is no way for the upstream server to control how
> > previously cached responses will be used to serve stale responses
> > (if nginx is configured to do so).
>
> Again, the way I interpret the RFC is that the Cache-Control header *is*
> the way.

The Cache-Control header allows you to control cacheability of a particular response, and returning 504 errors with "Cache-Control: no-cache" will resolve the problem in your scenario.  Though I see no reason why Cache-Control on another response should be applicable to a previously stored response - in general it's not possible at all, as a response may be returned to a different client.

> > You suggest addressing it by making 200 + no-cache special, meaning
> > something like "please remove anything cached".  This disagrees
> > with the code you've provided though, as it makes any non-cacheable
> > response special.  Additionally, this disagrees with various use
> > cases where a non-cacheable response doesn't mean anything special,
> > but is rather an error, even if returned with status 200.  Or, in
> > some more complicated setups, it may be just a user-specific
> > response (which shouldn't be cached, in contrast to generic
> > responses to the same resource).
>
> In the original case, nginx sporadically throws errors at users when there
> is no real error, while temporarily caching errors when they indeed happen
> is a beneficial and desired feature.  However, I do not think it really
> matters whether one of the responses is an error or not.  Let's talk about
> the generic case.  If we have a sequence of cacheable responses and then a
> response with the Cache-Control header set to 'no-cache', then I believe
> the cache must invalidate that content.  Because otherwise it does not obey
> the upstream server and does not preserve the consistency of the content.

As explained, the upstream server has no way to say something additional about a response it returned previously.

> Let's put it this way: what is your use case, i.e. when is such behaviour
> problematic?  If you have a location (object or page) where the upstream
> server constantly mixes "cache me" and "don't cache me", then there is no
> point in caching it (i.e. it is inherently not cacheable content which just
> busts your cache anyway).

I've already described at least 2 use cases where current behaviour works fine, and the one you suggest is problematic.  Again:

Use case 1, a cache with possible errors:

A high traffic resource, which normally can be cached for a long time, but takes a long time to generate.  A response is stored in the cache, and "proxy_cache_use_stale updating" is used to prevent multiple clients from updating the cache at the same time.  If at some point a request to update the cache fails / times out, a "degraded" version is returned with caching disabled (this can be an error, or a normal response without some data).
The response previously stored in the cache is preserved and will be returned to other clients while we try to update the cache again.

Use case 2, a cache with non-cacheable private responses:

A resource has two versions: one is "general" and can/should be cached (e.g., "guest user" version of a page), and another one is private and should not be cached by nginx ("logged in user" version).  The "proxy_cache_bypass" directive is used to determine if a cached version can be returned, or a request to an upstream server is needed.  "Logged in" responses are returned with disabled caching, while "guest user" responses are cacheable.

Both use cases are real.  The first one is basically the use case "proxy_cache_use_stale updating" was originally introduced for.  The second one is something often seen in the mailing list as configured by various nginx users.  Both will be broken by your patch.

> > Right, whether 504s specifically (and other timeouts) should be cached
> > is something what can be debated.  The real question here is what the
> > users want to achieve with proxy_cache_use_stale.  It is a mechanism
> > provided to avoid the redundant requests to the upstream server,
> > right?  And one aspect in particular is caching the errors for very
> > short time to defend a struggling or failing upstream server.  It hope
> > we can agree that it is rather practical to recover from such state.
> >
> > Caching errors is not something proxy_cache_use_stale was
> > introduced for.  And this case rather contradicts
> > proxy_cache_use_stale assumptions about upstream server behaviour.
> > That is, two basic options are to either change the behaviour, or
> > to avoid using "proxy_cache_use_stale updating".
>
> Perhaps it was not, but it provides such an option and the option is used in
> the wild.  Again, the presence of an error here does not matter much, as
> the real problem is obeying the upstream server directives and preserving
> the consistency.

The two options suggested still apply: either change the upstream server behaviour to match "proxy_cache_use_stale updating" assumptions (basically, don't try to convert a cacheable resource to a non-cacheable one), or switch it off.

> > Sporadically serving errors makes users unhappy.  However, it is not
> > even about the errors here.  You can also reproduce the problem with
> > different content i.e. if the upstream server serves cacheable HTTP 200
> > (call it A) and then non-cacheable HTTP 200 (call it B).  Some clients
> > will get A and some will get B (depending on who is winning the update
> > race).  Hence the real problem is that nginx is not consistent: it
> > serves different content based on a *race condition*.  How exactly is
> > this beneficial or desirable?
> >
> > This example is basically the same, so see above.
>
> Right, it is just a good illustration of the consistency problem.  I do not
> really see a conceptual difference between the current nginx behaviour and
> a database sporadically returning the result of some old transaction.
> It's broken.

See above.  It's expected to be broken if you try to use it in conditions it's not designed for.

> > Again, I don't say current behaviour is good.  It has an obvious
> > limitation, and it would be good to resolve this limitation.  But
> > the solution proposed doesn't look like a good one either.
>
> Okay, so what solution do you propose?

As I already wrote in the very first reply, I'm not even sure a good solution exists.
Maybe some timeouts like the ones proposed by RFC 5861 will work (though this will limit various "use stale" cases considerably with low timeouts, and won't help much with high ones).  Or maybe we can introduce some counters/heuristics to detect cacheable->uncacheable transitions.  Maybe just enforcing "inactive" time on such resources regardless of actual requests will work (but unlikely, as an upstream server can be down for a considerable time in some cases).

--
Maxim Dounin
http://nginx.org/

From rmind at noxt.eu  Wed Nov 18 22:47:18 2015
From: rmind at noxt.eu (Mindaugas Rasiukevicius)
Date: Wed, 18 Nov 2015 22:47:18 +0000
Subject: Mark stale cache content as "invalid" on non-cacheable responses
In-Reply-To: <20151118202558.GX74233@mdounin.ru>
References: <20151117172530.2b69f1b105b92e3a4a6ed376@noxt.eu> <20151117182500.GQ74233@mdounin.ru> <20151117222200.011e1e0df0a15dd6d3969ff1@noxt.eu> <20151118171504.GU74233@mdounin.ru> <20151118185638.13b072bc53455b083e267689@noxt.eu> <20151118202558.GX74233@mdounin.ru>
Message-ID: <20151118224718.924017f46fb77aced83efad6@noxt.eu>

Maxim Dounin wrote:
> >
> >    A cache MUST NOT generate a stale response if it is prohibited by an
> >    explicit in-protocol directive (e.g., by a "no-store" or "no-cache"
> >    cache directive, a "must-revalidate" cache-response-directive, or an
> >    applicable "s-maxage" or "proxy-revalidate" cache-response-directive;
> >    see Section 5.2.2).
>
> The response stored in cache doesn't have "no-cache" nor any other
> directives in it, and this "MUST NOT" certainly doesn't apply to
> it.
>
> In the scenario I've described, the response in cache is a correct
> (but stale) response, as returned by an upstream server when it
> was up and running normally.
>
> In the scenario you've described, the response in cache is a
> "temporary cacheable error", and it doesn't have any directives
> attached to it either.

It does not, but the *subsequent* request does and the cache should obey the most recent one.  I am not sure why you are focusing on the per-request narrative when the cache is inherently about the state.  The Cache-Control header is a way to control that state.  Again, the RFC seems to be fairly clear: the "cache MUST NOT reuse a stored response" (note the word "reuse"), unless, as described in the last bullet point of section 4, the stored response is either:

   *  fresh (see Section 4.2), or
   *  allowed to be served stale (see Section 4.2.4), or
   *  successfully validated (see Section 4.3).

The race conditions we are talking about happen *after* the upstream server advertises 'no-cache', therefore the second point is no longer satisfied (and, of course, neither are the other two).  And further:

   When more than one suitable response is stored, a cache MUST use the
   most recent response (as determined by the Date header field).  It
   can also forward the request with "Cache-Control: max-age=0" or
   "Cache-Control: no-cache" to disambiguate which response to use.

> Use case 1, a cache with possible errors:
>
> A high traffic resource, which normally can be cached for a long
> time, but takes a long time to generate.  A response is stored in
> the cache, and "proxy_cache_use_stale updating" is used to prevent
> multiple clients from updating the cache at the same time.  If at
> some point a request to update the cache fails / times out, a
> "degraded" version is returned with caching disabled (this can be
> an error, or a normal response without some data).
> The response previously stored in the cache is preserved and will be
> returned to other clients while we try to update the cache again.

The cache can still legitimately serve the content while an update is in progress and if the cache itself experienced a timeout while fetching from the upstream server (because the state is still "unknown" for the cache).  However, if the upstream server sent a response with 'no-cache', then as undesirable as it sounds, I think the correct thing to do is to invalidate the existing stale object.  Simply because the cache cannot know whether it is a temporary error or a deliberate change of state into an error.  The invalidation is inefficient, but it ensures correctness.

I agree it is a real problem, though.  It seems that the 'stale-if-error' extension proposed in RFC 5861, which you mentioned, was suggested exactly for this purpose.  On the other hand, if you do not have control over the upstream server and it responds with a cacheable error (say max-age=3 as in the previous example), then that will also nuke your stale cache object.

> Use case 2, a cache with non-cacheable private responses:
>
> A resource has two versions: one is "general" and can/should be
> cached (e.g., "guest user" version of a page), and another one
> is private and should not be cached by nginx ("logged in user"
> version).  The "proxy_cache_bypass" directive is used to determine
> if a cached version can be returned, or a request to an upstream
> server is needed.  "Logged in" responses are returned with disabled
> caching, while "guest user" responses are cacheable.

Fair point, but in this case the cache invalidation logic should take into account the proxy_cache_bypass condition.  My patch simply did not address that.

> > > Okay, so what solution do you propose?
> >
> > As I already wrote in the very first reply, I'm not even sure a
> > good solution exists.  Maybe some timeouts like the ones proposed by
> > RFC 5861 will work (though this will limit various "use stale" cases
> > considerably with low timeouts, and won't help much with high
> > ones).  Or maybe we can introduce some counters/heuristics to
> > detect cacheable->uncacheable transitions.  Maybe just enforcing
> > "inactive" time on such resources regardless of actual requests
> > will work (but unlikely, as an upstream server can be down for a
> > considerable time in some cases).
>

I would say the right way would be to invalidate the object on 'no-cache', but provide an nginx option equivalent to the 'stale-if-error' logic (or even better -- a generic directive to override the Cache-Control value for a given HTTP code range).  I understand that breaking the existing configs would be undesirable.  How about introducing an inverted logic option, e.g. extending proxy_cache_use_stale with 'invalidate-on-nocache'?

--
Mindaugas

From awalgarg at gmail.com  Thu Nov 19 12:15:18 2015
From: awalgarg at gmail.com (Awal Garg)
Date: Thu, 19 Nov 2015 17:45:18 +0530
Subject: Question regarding syntax of nginx configs
In-Reply-To:
References: <4229AAD4-05E6-44DB-998A-A1D05A294F4B@lindenbergsoftware.com> <365D5343-4045-4562-B3B4-BFB4C8B23A38@gmail.com> <1FB2C62C-3D76-457C-B143-2E78B65C12A5@gmail.com>
Message-ID: <1447935318.6800.6.camel@gmail.com>

Ah, thanks for the info Maxim!

This probably doesn't suit the thread subject very well, but might I also know if there is a test suite, or even just a set of configuration files which should be accepted by a parser for nginx configurations?
Regards
Awal

From mdounin at mdounin.ru  Thu Nov 19 13:35:04 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 19 Nov 2015 16:35:04 +0300
Subject: Question regarding syntax of nginx configs
In-Reply-To: <1447935318.6800.6.camel@gmail.com>
References: <4229AAD4-05E6-44DB-998A-A1D05A294F4B@lindenbergsoftware.com> <365D5343-4045-4562-B3B4-BFB4C8B23A38@gmail.com> <1FB2C62C-3D76-457C-B143-2E78B65C12A5@gmail.com> <1447935318.6800.6.camel@gmail.com>
Message-ID: <20151119133504.GE74233@mdounin.ru>

Hello!

On Thu, Nov 19, 2015 at 05:45:18PM +0530, Awal Garg wrote:

> Ah, thanks for the info Maxim!
>
> This probably doesn't suit the thread subject very well, but might I
> also know if there is a test suite, or even just a set of configuration
> files which should be accepted by a parser for nginx configurations?

Our test suite is available here:

http://hg.nginx.org/nginx-tests

It doesn't test configuration parsing very hard, though.

--
Maxim Dounin
http://nginx.org/

From alessandro at cloudflare.com  Fri Nov 20 09:45:53 2015
From: alessandro at cloudflare.com (Alessandro Ghedini)
Date: Fri, 20 Nov 2015 09:45:53 +0000
Subject: [PATCH] HTTP: implement 'connect' and 'close' phases
Message-ID: <20151120094553.GA849@mandy.local>

# HG changeset patch
# User Alessandro Ghedini
# Date 1447956026 0
#      Thu Nov 19 18:00:26 2015 +0000
# Node ID 9d265c320050a00ff24fa8d84371701e46147e8a
# Parent  bec5b3093337708cbdb59f9fc253f8e1cd6d7848
HTTP: implement 'connect' and 'close' phases

This patch adds the NGX_HTTP_CONNECT_PHASE and NGX_HTTP_CLOSE_PHASE phases.

Handlers for these phases are called when a connection is established and when it is closed, and they take an ngx_connection_t as an argument instead of an ngx_http_request_t like the other phase handlers.

These can be useful for keeping track of TCP connections for debugging, monitoring and logging purposes, and can also be used to apply custom configurations (e.g. socket options).

This patch also adds a "ctx" field to ngx_connection_t, to be used by modules to store their own context structures (just like the ctx field in ngx_http_request_t).
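As a usage illustration (a hypothetical module written against the proposed API; the names below are not part of the patch), a handler would be registered from a module's postconfiguration callback and would receive the connection rather than a request:

    static ngx_int_t
    ngx_http_example_connect_handler(ngx_connection_t *c)
    {
        /* runs once per accepted connection */
        ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0,
                       "example: connection established, fd:%d", c->fd);
        return NGX_OK;
    }

    static ngx_int_t
    ngx_http_example_init(ngx_conf_t *cf)
    {
        ngx_http_handler_pt        *h;
        ngx_http_core_main_conf_t  *cmcf;

        cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);

        /*
         * the phase array is declared with ngx_http_handler_pt elements,
         * so the connection handler is stored with a cast, mirroring the
         * cast done in ngx_http_run_conn_phases()
         */
        h = ngx_array_push(&cmcf->phases[NGX_HTTP_CONNECT_PHASE].handlers);
        if (h == NULL) {
            return NGX_ERROR;
        }

        *h = (ngx_http_handler_pt) ngx_http_example_connect_handler;

        return NGX_OK;
    }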
diff -r bec5b3093337 -r 9d265c320050 src/core/ngx_connection.h --- a/src/core/ngx_connection.h Tue Nov 17 19:41:39 2015 +0300 +++ b/src/core/ngx_connection.h Thu Nov 19 18:00:26 2015 +0000 @@ -36,6 +36,7 @@ /* handler of accepted connection */ ngx_connection_handler_pt handler; + void *handler_data; void *servers; /* array of ngx_http_in_addr_t, for example */ @@ -133,6 +134,8 @@ ngx_recv_chain_pt recv_chain; ngx_send_chain_pt send_chain; + void **ctx; + ngx_listening_t *listening; off_t sent; diff -r bec5b3093337 -r 9d265c320050 src/http/ngx_http.c --- a/src/http/ngx_http.c Tue Nov 17 19:41:39 2015 +0300 +++ b/src/http/ngx_http.c Thu Nov 19 18:00:26 2015 +0000 @@ -396,6 +396,20 @@ return NGX_ERROR; } + if (ngx_array_init(&cmcf->phases[NGX_HTTP_CONNECT_PHASE].handlers, + cf->pool, 1, sizeof(ngx_http_handler_pt)) + != NGX_OK) + { + return NGX_ERROR; + } + + if (ngx_array_init(&cmcf->phases[NGX_HTTP_CLOSE_PHASE].handlers, + cf->pool, 1, sizeof(ngx_http_handler_pt)) + != NGX_OK) + { + return NGX_ERROR; + } + if (ngx_array_init(&cmcf->phases[NGX_HTTP_LOG_PHASE].handlers, cf->pool, 1, sizeof(ngx_http_handler_pt)) != NGX_OK) @@ -1776,6 +1790,8 @@ ls->pool_size = cscf->connection_pool_size; ls->post_accept_timeout = cscf->client_header_timeout; + ls->handler_data = cscf->ctx; + clcf = cscf->ctx->loc_conf[ngx_http_core_module.ctx_index]; ls->logp = clcf->error_log; diff -r bec5b3093337 -r 9d265c320050 src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h Tue Nov 17 19:41:39 2015 +0300 +++ b/src/http/ngx_http_core_module.h Thu Nov 19 18:00:26 2015 +0000 @@ -134,6 +134,9 @@ NGX_HTTP_TRY_FILES_PHASE, NGX_HTTP_CONTENT_PHASE, + NGX_HTTP_CONNECT_PHASE, + NGX_HTTP_CLOSE_PHASE, + NGX_HTTP_LOG_PHASE } ngx_http_phases; diff -r bec5b3093337 -r 9d265c320050 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Tue Nov 17 19:41:39 2015 +0300 +++ b/src/http/ngx_http_request.c Thu Nov 19 18:00:26 2015 +0000 @@ -37,6 +37,8 @@ static ngx_int_t ngx_http_find_virtual_server(ngx_connection_t *c, ngx_http_virtual_names_t *virtual_names, ngx_str_t *host, ngx_http_request_t *r, ngx_http_core_srv_conf_t **cscfp); +static ngx_int_t ngx_http_run_conn_phases(ngx_connection_t *c, + ngx_http_phases p); static void ngx_http_request_handler(ngx_event_t *ev); static void ngx_http_terminate_request(ngx_http_request_t *r, ngx_int_t rc); @@ -214,6 +216,12 @@ c->data = hc; + c->ctx = ngx_pcalloc(c->pool, sizeof(void *) * ngx_http_max_module); + if (c->ctx == NULL) { + ngx_http_close_connection(c); + return; + } + /* find the server configuration for the address:port */ port = c->listening->servers; @@ -318,6 +326,8 @@ } #endif + ngx_http_run_conn_phases(c, NGX_HTTP_CONNECT_PHASE); + #if (NGX_HTTP_SSL) { ngx_http_ssl_srv_conf_t *sscf; @@ -3522,6 +3532,31 @@ } +ngx_int_t +ngx_http_run_conn_phases(ngx_connection_t *c, ngx_http_phases p) +{ + ngx_uint_t i, n; + ngx_http_conf_ctx_t *ctx; + ngx_http_conn_handler_pt *handler; + ngx_http_core_main_conf_t *cmcf; + + ctx = c->listening->handler_data; + + cmcf = ngx_http_get_module_main_conf(ctx, ngx_http_core_module); + + if (c->fd != (ngx_socket_t) -1) { + handler = cmcf->phases[p].handlers.elts; + n = cmcf->phases[p].handlers.nelts; + + for (i = 0; i < n; i++) { + handler[i](c); + } + } + + return NGX_OK; +} + + void ngx_http_close_connection(ngx_connection_t *c) { @@ -3530,6 +3565,8 @@ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "close http connection: %d", c->fd); + ngx_http_run_conn_phases(c, NGX_HTTP_CLOSE_PHASE); + #if (NGX_HTTP_SSL) if (c->ssl) { diff -r 
bec5b3093337 -r 9d265c320050 src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h Tue Nov 17 19:41:39 2015 +0300 +++ b/src/http/ngx_http_request.h Thu Nov 19 18:00:26 2015 +0000 @@ -354,6 +354,7 @@ typedef ngx_int_t (*ngx_http_handler_pt)(ngx_http_request_t *r); +typedef ngx_int_t (*ngx_http_conn_handler_pt)(ngx_connection_t *c); typedef void (*ngx_http_event_handler_pt)(ngx_http_request_t *r); From carlos-eduardo-rodrigues at telecom.pt Fri Nov 20 12:28:29 2015 From: carlos-eduardo-rodrigues at telecom.pt (Carlos Eduardo Ferreira Rodrigues) Date: Fri, 20 Nov 2015 12:28:29 +0000 Subject: [PATCH] HTTP: implement 'connect' and 'close' phases In-Reply-To: <20151120094553.GA849@mandy.local> References: <20151120094553.GA849@mandy.local> Message-ID: On 20-11-2015 09:45, Alessandro Ghedini wrote: > [...] > These can be useful for keeping track of TCP connections for debugging, > monitoring and logging purposes, and can also be used to apply custom > configurations (e.g. socket options). > [...] How feasible would it be to have a similar phase for new requests inside the same (keep-alive) connection? For me, the interesting use case here would be marking connections with IP_TOS/SO_MARK for traffic-shaping/routing purposes and being able to change marks while the connection is still open depending on the specific content that's being transferred for that request. Best regards, -- Carlos Rodrigues From alessandro at cloudflare.com Fri Nov 20 12:49:48 2015 From: alessandro at cloudflare.com (Alessandro Ghedini) Date: Fri, 20 Nov 2015 12:49:48 +0000 Subject: [PATCH] HTTP: implement 'connect' and 'close' phases In-Reply-To: References: <20151120094553.GA849@mandy.local> Message-ID: <20151120124948.GA11865@mandy.local> On Fri, Nov 20, 2015 at 12:28:29pm +0000, Carlos Eduardo Ferreira Rodrigues wrote: > On 20-11-2015 09:45, Alessandro Ghedini wrote: > > [...] > > These can be useful for keeping track of TCP connections for debugging, > > monitoring and logging purposes, and can also be used to apply custom > > configurations (e.g. socket options). > > [...] > > How feasible would it be to have a similar phase for new requests inside > the same (keep-alive) connection? Not sure I follow... > For me, the interesting use case here would be marking connections with > IP_TOS/SO_MARK for traffic-shaping/routing purposes and being able to > change marks while the connection is still open depending on the > specific content that's being transferred for that request. You can set an initial mark in the NGX_HTTP_CONNECT_PHASE, and then update it in any of the content phases, no? How does the keep-alive figure in this? Cheers From mdounin at mdounin.ru Fri Nov 20 14:37:07 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Nov 2015 17:37:07 +0300 Subject: [PATCH] HTTP: implement 'connect' and 'close' phases In-Reply-To: References: <20151120094553.GA849@mandy.local> Message-ID: <20151120143707.GL74233@mdounin.ru> Hello! On Fri, Nov 20, 2015 at 12:28:29PM +0000, Carlos Eduardo Ferreira Rodrigues wrote: > On 20-11-2015 09:45, Alessandro Ghedini wrote: > > [...] > > These can be useful for keeping track of TCP connections for debugging, > > monitoring and logging purposes, and can also be used to apply custom > > configurations (e.g. socket options). > > [...] > > How feasible would it be to have a similar phase for new requests inside > the same (keep-alive) connection? 
>
> For me, the interesting use case here would be marking connections with
> IP_TOS/SO_MARK for traffic-shaping/routing purposes and being able to
> change marks while the connection is still open depending on the
> specific content that's being transferred for that request.

For IP_TOS, consider marking the connection during actual request processing.  This will also allow different marking based on the location configuration; see here for an example:

http://mdounin.ru/hg/ngx_http_ip_tos_filter_module/

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru  Fri Nov 20 14:46:12 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 20 Nov 2015 17:46:12 +0300
Subject: [PATCH] HTTP: implement 'connect' and 'close' phases
In-Reply-To: <20151120094553.GA849@mandy.local>
References: <20151120094553.GA849@mandy.local>
Message-ID: <20151120144612.GM74233@mdounin.ru>

Hello!

On Fri, Nov 20, 2015 at 09:45:53AM +0000, Alessandro Ghedini wrote:

> # HG changeset patch
> # User Alessandro Ghedini
> # Date 1447956026 0
> #      Thu Nov 19 18:00:26 2015 +0000
> # Node ID 9d265c320050a00ff24fa8d84371701e46147e8a
> # Parent  bec5b3093337708cbdb59f9fc253f8e1cd6d7848
> HTTP: implement 'connect' and 'close' phases
>
> This patch adds the NGX_HTTP_CONNECT_PHASE and NGX_HTTP_CLOSE_PHASE
> phases.
>
> Handlers for these phases are called when a connection is established
> and when it is closed, and they take an ngx_connection_t as an argument
> instead of an ngx_http_request_t like the other phase handlers.
> > > > These can be useful for keeping track of TCP connections for debugging, > > monitoring and logging purposes, and can also be used to apply custom > > configurations (e.g. socket options). > > > > This patch also adds a "ctx" field to ngx_connection_t, to be used by > > modules to store their own context structures (just like the ctx field > > in ngx_http_request_t). > > Phases are request processing phases, and what you are trying to > do doesn't looks like request processing. Well, I guess not. But processing at the connection level could still be useful for request processing as well. > Additionally, ctx in ngx_connection_t implies noticeable memory overhead for > keepalive connections. > > Instead, consider: > > - starting your processing at any request processing stage as > needed; > > - using a connection pool cleanup handler if you want to track > connection termination; > > - searching though connection pool cleanups if you want to > preserve some connection-specific data. > > [...] My intention is to expose this functionality to the lua-nginx module to do my application-specific processing in Lua, and leave the nginx changes as general-purpose as possible. AFAICT none of the methods above allows me to do that, not if I want to do the processing only once at connection creation or closing, and I really can't think of any alternative. I'm very much open to suggestions though. Cheers From ru at nginx.com Sat Nov 21 07:45:07 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Sat, 21 Nov 2015 07:45:07 +0000 Subject: [nginx] Upstream: fixed "no port" detection in evaluated upstreams. Message-ID: details: http://hg.nginx.org/nginx/rev/a93345ee8f52 branches: changeset: 6303:a93345ee8f52 user: Ruslan Ermilov date: Sat Nov 21 10:44:07 2015 +0300 description: Upstream: fixed "no port" detection in evaluated upstreams. If an upstream with variables evaluated to address without a port, then instead of a "no port in upstream" error an attempt was made to connect() which failed with EADDRNOTAVAIL. diffstat: src/http/modules/ngx_http_fastcgi_module.c | 5 +++-- src/http/modules/ngx_http_proxy_module.c | 5 +++-- src/http/modules/ngx_http_scgi_module.c | 5 +++-- src/http/modules/ngx_http_uwsgi_module.c | 5 +++-- src/http/ngx_http_upstream.c | 12 ++++++++++-- 5 files changed, 22 insertions(+), 10 deletions(-) diffs (99 lines): diff -r bec5b3093337 -r a93345ee8f52 src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c Tue Nov 17 19:41:39 2015 +0300 +++ b/src/http/modules/ngx_http_fastcgi_module.c Sat Nov 21 10:44:07 2015 +0300 @@ -773,10 +773,11 @@ ngx_http_fastcgi_eval(ngx_http_request_t } else { u->resolved->host = url.host; - u->resolved->port = url.port; - u->resolved->no_port = url.no_port; } + u->resolved->port = url.port; + u->resolved->no_port = url.no_port; + return NGX_OK; } diff -r bec5b3093337 -r a93345ee8f52 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Tue Nov 17 19:41:39 2015 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Sat Nov 21 10:44:07 2015 +0300 @@ -1015,10 +1015,11 @@ ngx_http_proxy_eval(ngx_http_request_t * } else { u->resolved->host = url.host; - u->resolved->port = (in_port_t) (url.no_port ? port : url.port); - u->resolved->no_port = url.no_port; } + u->resolved->port = (in_port_t) (url.no_port ? 
port : url.port); + u->resolved->no_port = url.no_port; + return NGX_OK; } diff -r bec5b3093337 -r a93345ee8f52 src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c Tue Nov 17 19:41:39 2015 +0300 +++ b/src/http/modules/ngx_http_scgi_module.c Sat Nov 21 10:44:07 2015 +0300 @@ -569,10 +569,11 @@ ngx_http_scgi_eval(ngx_http_request_t *r } else { u->resolved->host = url.host; - u->resolved->port = url.port; - u->resolved->no_port = url.no_port; } + u->resolved->port = url.port; + u->resolved->no_port = url.no_port; + return NGX_OK; } diff -r bec5b3093337 -r a93345ee8f52 src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c Tue Nov 17 19:41:39 2015 +0300 +++ b/src/http/modules/ngx_http_uwsgi_module.c Sat Nov 21 10:44:07 2015 +0300 @@ -771,10 +771,11 @@ ngx_http_uwsgi_eval(ngx_http_request_t * } else { u->resolved->host = url.host; - u->resolved->port = url.port; - u->resolved->no_port = url.no_port; } + u->resolved->port = url.port; + u->resolved->no_port = url.no_port; + return NGX_OK; } diff -r bec5b3093337 -r a93345ee8f52 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Tue Nov 17 19:41:39 2015 +0300 +++ b/src/http/ngx_http_upstream.c Sat Nov 21 10:44:07 2015 +0300 @@ -633,8 +633,18 @@ ngx_http_upstream_init_request(ngx_http_ u->ssl_name = u->resolved->host; #endif + host = &u->resolved->host; + if (u->resolved->sockaddr) { + if (u->resolved->port == 0) { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "no port in upstream \"%V\"", host); + ngx_http_upstream_finalize_request(r, u, + NGX_HTTP_INTERNAL_SERVER_ERROR); + return; + } + if (ngx_http_upstream_create_round_robin_peer(r, u->resolved) != NGX_OK) { @@ -648,8 +658,6 @@ ngx_http_upstream_init_request(ngx_http_ return; } - host = &u->resolved->host; - umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module); uscfp = umcf->upstreams.elts; From ritesh.jha at hotmail.com Sat Nov 21 21:40:57 2015 From: ritesh.jha at hotmail.com (Ritesh Jha) Date: Sat, 21 Nov 2015 21:40:57 +0000 Subject: Unit testing approach for nginx modules Message-ID: Hello everyone, We are developing nginx modules to implement few usecases in our product. Most of the other usecases cases have been implemented using Java. At my office we follow TDD for Java development. TDD for Java development is easy due to availability of unit-testing and mocking frameworks. We are wondering if we can follow TDD for development of nginx modules as well. We have tried couple of unit-testing frameworks for C (Unity and CMocka) but we have found it very difficult to write useful testcases using these frameworks. Can you please suggest a suitable approach? Also if unit testing is not the way to go, then what should be the approach for developing, testing and maintaining large nginx modules? Thanks & Regards, Ritesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 23 00:42:19 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Nov 2015 03:42:19 +0300 Subject: Unit testing approach for nginx modules In-Reply-To: References: Message-ID: <20151123004219.GP74233@mdounin.ru> Hello! On Sat, Nov 21, 2015 at 09:40:57PM +0000, Ritesh Jha wrote: > Hello everyone, > We are developing nginx modules to implement few usecases in our > product. Most of the other usecases cases have been implemented > using Java. At my office we follow TDD for Java development. 
TDD > for Java development is easy due to availability of unit-testing > and mocking frameworks. We are wondering if we can follow TDD > for development of nginx modules as well. We have tried couple > of unit-testing frameworks for C (Unity and CMocka) but we have > found it very difficult to write useful testcases using these > frameworks. > > Can you please suggest a suitable approach? Also if unit testing > is not the way to go, then what should be the approach for > developing, testing and maintaining large nginx modules? We have a test suite we use in nginx development, available here: http://hg.nginx.org/nginx-tests It was originally written by me for testing both nginx itself and my modules for nginx. And example use in a module can be seen here: http://mdounin.ru/hg/ngx_http_bytes_filter_module/file/57365655ee44/t -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 23 01:17:10 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Nov 2015 04:17:10 +0300 Subject: Mark stale cache content as "invalid" on non-cacheable responses In-Reply-To: <20151118224718.924017f46fb77aced83efad6@noxt.eu> References: <20151117172530.2b69f1b105b92e3a4a6ed376@noxt.eu> <20151117182500.GQ74233@mdounin.ru> <20151117222200.011e1e0df0a15dd6d3969ff1@noxt.eu> <20151118171504.GU74233@mdounin.ru> <20151118185638.13b072bc53455b083e267689@noxt.eu> <20151118202558.GX74233@mdounin.ru> <20151118224718.924017f46fb77aced83efad6@noxt.eu> Message-ID: <20151123011710.GQ74233@mdounin.ru> Hello! On Wed, Nov 18, 2015 at 10:47:18PM +0000, Mindaugas Rasiukevicius wrote: > Maxim Dounin wrote: > > > > > > A cache MUST NOT generate a stale response if it is prohibited by an > > > explicit in-protocol directive (e.g., by a "no-store" or "no-cache" > > > cache directive, a "must-revalidate" cache-response-directive, or an > > > applicable "s-maxage" or "proxy-revalidate" cache-response-directive; > > > see Section 5.2.2). > > > > The response stored in cache doesn't have "no-cache" nor any other > > directives in it, and this "MUST NOT" certainly doesn't apply to > > it. > > > > In the scenario I've described, the response in cache is a correct > > (but stale) response, as returned by an upstream server when it > > was up and running normally. > > > > In the scenario you've described, the response in cache is a > > "temporary cacheable error", and it doesn't have any directives > > attached to it either. > > It does not, but the *subsequent* request does and the cache should obey > the most recent one. I am not sure why are you focusing on the > per-request narrative when the cache is inherently about the state. The > Cache-Control header is a way to control that state. Again, RFC seems to > be fairly clear: the "cache MUST NOT reuse a stored response" (note the > word "reuse"), unless as described in the last bullet point of section 4, > the stored response is either: > > * fresh (see Section 4.2), or > * allowed to be served stale (see Section 4.2.4), or > * successfully validated (see Section 4.3). > > The race conditions we are talking about happen *after* the upstream > server advertises 'no-cache', therefore the second point is no longer > satisfied (and, of course, neither are the other two). And further: > > When more than one suitable response is stored, a cache MUST use the > most recent response (as determined by the Date header field). It > can also forward the request with "Cache-Control: max-age=0" or > "Cache-Control: no-cache" to disambiguate which response to use. 
No claims here suggest that cache may not use a previously stored response if some other response was received with "Cache-Control: no-cache" (and thus not stored by cache). Anyway, RFC more or less completely rules out returning stale content anyway. And this is also not something nginx does by default. For nginx to return stale content you have to explicitly configure it to do so. > > Use case 1, a cache with possible errors: > > > > A high traffic resource, which normally can be cached for a long > > time, but takes a long time to generate. A response is stored in > > the cache, and "proxy_cache_use_stale updating" is used to prevent > > multiple clients from updating the cache at the same time. If at > > some point a request to update the cache fails / times out, an > > "degraded" version is returned with caching disabled (this can be > > an error, or normal response without some data). The response > > previously stored in the cache is preserved and will be returned > > to other clients while we'll try to update the cache again. > > The cache can still legitimately serve the content while update is in > progress and if the cache itself experienced a timeout while fetching > from the upstream server (because the state is still "unknown" for the > cache). However, if the upstream server sent a response with 'no-cache', > then as undesirable as it sounds, I think the correct thing to do is to > invalidate the existing stale object. Simply because the cache cannot > know whether it is a temporary error or a deliberate change of state into > an error. The invalidation is inefficient, but it ensures correctness. > > I agree it is real a problem, though. It seems that the 'stale-if-error' > proposed in RFC 5861 you mentioned was suggested exactly for this purpose. > On the other hand, if you do not have the control over the upstream server > and it responds with a cacheable error (say max-age=3 as in the previous > example), then that will also nuke your stale cache object. Most trivial behaviour to ensure correctness is to don't use stale content at all. And this is what's done by default. > > Use case 2, a cache with non-cacheable private responses: > > > > A resource has two versions: one is "general" and can/should be > > cached (e.g., "guest user" version of a page), and another one > > is private and should not be cached by nginx ("logged in user" > > version). The "proxy_cache_bypass" directive is used to determine > > if a cached version can be returned, or a request to an upstream > > server is needed. "Logged in" responses are returned with disabled > > caching, while "guest user" responses are cacheable. > > Fair point, but in this case the cache invalidation logic should take > into account the proxy_cache_bypass condition. My patch simply did not > address that. It simply breaks this use case and the previous one. And that's why the patch is rejected. > > > Okay, so what solution do you propose? > > > > As I already wrote in the very first reply, I'm not even sure a > > good solution exists. May be some timeouts like ones proposed by > > rfc5861 will work (though this will limit various "use stale" cases > > considerably with low timeouts, and won't help much with high > > ones). Or may be we can introduce some counters/heuristics to > > detect cacheable->uncacheable transitions. May be just enforcing > > "inactive" time on such resources regardless of actual requests > > will work (but unlikely, as an upstream server can be down for a > > considerable time in some cases). 
> > > > I would say the right way would be to invalidate the object on 'no-cache', > but provide an nginx option equivalent to 'stale-if-error' logic (or even > better -- a generic directive to override Cache-Control value for a given > HTTP code range). I understand that breaking the existing configs would > be undesirable. How about introducing the inverted logic option, e.g. > extending proxy_cache_use_stale with 'invalidate-on-nocache'? As I previously said, I don't see a good solution. -- Maxim Dounin http://nginx.org/ From a.marinov at ucdn.com Mon Nov 23 14:47:33 2015 From: a.marinov at ucdn.com (Anatoli Marinov) Date: Mon, 23 Nov 2015 16:47:33 +0200 Subject: Unit testing approach for nginx modules In-Reply-To: <20151123004219.GP74233@mdounin.ru> References: <20151123004219.GP74233@mdounin.ru> Message-ID: Hi Maxim, How these tests could be run? Do I need something special installed? On Mon, Nov 23, 2015 at 2:42 AM, Maxim Dounin wrote: > Hello! > > On Sat, Nov 21, 2015 at 09:40:57PM +0000, Ritesh Jha wrote: > > > Hello everyone, > > We are developing nginx modules to implement few usecases in our > > product. Most of the other usecases cases have been implemented > > using Java. At my office we follow TDD for Java development. TDD > > for Java development is easy due to availability of unit-testing > > and mocking frameworks. We are wondering if we can follow TDD > > for development of nginx modules as well. We have tried couple > > of unit-testing frameworks for C (Unity and CMocka) but we have > > found it very difficult to write useful testcases using these > > frameworks. > > > > Can you please suggest a suitable approach? Also if unit testing > > is not the way to go, then what should be the approach for > > developing, testing and maintaining large nginx modules? > > We have a test suite we use in nginx development, available here: > > http://hg.nginx.org/nginx-tests > > It was originally written by me for testing both nginx itself and > my modules for nginx. And example use in a module can be seen > here: > > http://mdounin.ru/hg/ngx_http_bytes_filter_module/file/57365655ee44/t > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alessandro at cloudflare.com Mon Nov 23 15:05:07 2015 From: alessandro at cloudflare.com (Alessandro Ghedini) Date: Mon, 23 Nov 2015 15:05:07 +0000 Subject: [PATCH] HTTP: implement 'connect' and 'close' phases In-Reply-To: <20151120144612.GM74233@mdounin.ru> References: <20151120094553.GA849@mandy.local> <20151120144612.GM74233@mdounin.ru> Message-ID: <20151123150507.GA14983@mandy.local> On Fri, Nov 20, 2015 at 05:46:12pm +0300, Maxim Dounin wrote: > Hello! Hi again, > On Fri, Nov 20, 2015 at 09:45:53AM +0000, Alessandro Ghedini wrote: > > > # HG changeset patch > > # User Alessandro Ghedini > > # Date 1447956026 0 > > # Thu Nov 19 18:00:26 2015 +0000 > > # Node ID 9d265c320050a00ff24fa8d84371701e46147e8a > > # Parent bec5b3093337708cbdb59f9fc253f8e1cd6d7848 > > HTTP: implement 'connect' and 'close' phases > > > > This patch adds the NGX_HTTP_CONNECT_PHASE and NGX_HTTP_CLOSE_PHASE > > phases. > > > > Handlers for these phases are called when a connection is estabilished > > and when it is closed, and they take a ngx_connection_t as argument > > instead of ngx_http_request_t like the other phase handlers. 
> > > > These can be useful for keeping track of TCP connections for debugging, > > monitoring and logging purposes, and can also be used to apply custom > > configurations (e.g. socket options). > > > > This patch also adds a "ctx" field to ngx_connection_t, to be used by > > modules to store their own context structures (just like the ctx field > > in ngx_http_request_t). > > Phases are request processing phases, and what you are trying to > do doesn't looks like request processing. Additionally, ctx in > ngx_connection_t implies noticeable memory overhead for keepalive > connections. I think I can do without the ctx field. Would removing that help in getting this functionality (or something similar) merged? > Instead, consider: > > - starting your processing at any request processing stage as > needed; > > - using a connection pool cleanup handler if you want to track > connection termination; The problem with the above is that the pool cleanup callbacks seem to be called too late, after the connection has already been closed, so I get an invalid socket (basically I need the socket to retrieve some information using getsockopt()). Thanks for your help. Cheers From mdounin at mdounin.ru Mon Nov 23 15:25:30 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Nov 2015 18:25:30 +0300 Subject: [PATCH] HTTP: implement 'connect' and 'close' phases In-Reply-To: <20151123150507.GA14983@mandy.local> References: <20151120094553.GA849@mandy.local> <20151120144612.GM74233@mdounin.ru> <20151123150507.GA14983@mandy.local> Message-ID: <20151123152530.GU74233@mdounin.ru> Hello! On Mon, Nov 23, 2015 at 03:05:07PM +0000, Alessandro Ghedini wrote: > On Fri, Nov 20, 2015 at 05:46:12pm +0300, Maxim Dounin wrote: > > Hello! > > Hi again, > > > On Fri, Nov 20, 2015 at 09:45:53AM +0000, Alessandro Ghedini wrote: > > > > > # HG changeset patch > > > # User Alessandro Ghedini > > > # Date 1447956026 0 > > > # Thu Nov 19 18:00:26 2015 +0000 > > > # Node ID 9d265c320050a00ff24fa8d84371701e46147e8a > > > # Parent bec5b3093337708cbdb59f9fc253f8e1cd6d7848 > > > HTTP: implement 'connect' and 'close' phases > > > > > > This patch adds the NGX_HTTP_CONNECT_PHASE and NGX_HTTP_CLOSE_PHASE > > > phases. > > > > > > Handlers for these phases are called when a connection is estabilished > > > and when it is closed, and they take a ngx_connection_t as argument > > > instead of ngx_http_request_t like the other phase handlers. > > > > > > These can be useful for keeping track of TCP connections for debugging, > > > monitoring and logging purposes, and can also be used to apply custom > > > configurations (e.g. socket options). > > > > > > This patch also adds a "ctx" field to ngx_connection_t, to be used by > > > modules to store their own context structures (just like the ctx field > > > in ngx_http_request_t). > > > > Phases are request processing phases, and what you are trying to > > do doesn't looks like request processing. Additionally, ctx in > > ngx_connection_t implies noticeable memory overhead for keepalive > > connections. > > I think I can do without the ctx field. Would removing that help in getting > this functionality (or something similar) merged? Unlikely. Though it's certainly a prerequisite. 
> > Instead, consider: > > - starting your processing at any request processing stage as > > needed; > > - using a connection pool cleanup handler if you want to track > > connection termination; > > The problem with the above is that the pool cleanup callbacks seem to be > called too late, after the connection has already been closed, so I get an > invalid socket (basically I need the socket to retrieve some information > using getsockopt()). If you want to collect something using getsockopt(), consider doing this in the log phase. Instead of trying to introduce a generic mechanism you may want to focus on what you want to do and on whether it's possible to do this with already existing mechanisms. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 23 19:49:10 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Nov 2015 19:49:10 +0000 Subject: [nginx] Configure: fixed using OpenSSL include paths. Message-ID: details: http://hg.nginx.org/nginx/rev/520ec1917f1d branches: changeset: 6304:520ec1917f1d user: Maxim Dounin date: Mon Nov 23 22:48:31 2015 +0300 description: Configure: fixed using OpenSSL include paths. diffstat: auto/lib/openssl/conf | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/auto/lib/openssl/conf b/auto/lib/openssl/conf --- a/auto/lib/openssl/conf +++ b/auto/lib/openssl/conf @@ -105,6 +105,7 @@ else if [ $ngx_found = yes ]; then have=NGX_SSL . auto/have + CORE_INCS="$CORE_INCS $ngx_feature_path" CORE_LIBS="$CORE_LIBS $ngx_feature_libs $NGX_LIBDL" OPENSSL=YES fi From ru at nginx.com Mon Nov 23 21:41:39 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 23 Nov 2015 21:41:39 +0000 Subject: [nginx] Core: enabled "include" inside http upstreams (ticket #6... Message-ID: details: http://hg.nginx.org/nginx/rev/18428f775b2c branches: changeset: 6305:18428f775b2c user: Ruslan Ermilov date: Mon Nov 23 12:40:19 2015 +0300 description: Core: enabled "include" inside http upstreams (ticket #635). The directive already works inside stream upstream blocks. diffstat: src/core/ngx_conf_file.h | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 520ec1917f1d -r 18428f775b2c src/core/ngx_conf_file.h --- a/src/core/ngx_conf_file.h Mon Nov 23 22:48:31 2015 +0300 +++ b/src/core/ngx_conf_file.h Mon Nov 23 12:40:19 2015 +0300 @@ -50,7 +50,7 @@ #define NGX_DIRECT_CONF 0x00010000 #define NGX_MAIN_CONF 0x01000000 -#define NGX_ANY_CONF 0x0F000000 +#define NGX_ANY_CONF 0x1F000000 From mat999 at gmail.com Tue Nov 24 13:25:19 2015 From: mat999 at gmail.com (SplitIce) Date: Wed, 25 Nov 2015 00:25:19 +1100 Subject: SO_REUSEPORT Message-ID: Hi all, I couldn't find anything in the mailing list about this issue, surely we are not the only one? When activating reuseport I am seeing all requests be served from a single nginx process. All others are just idling (SIGALARM interruption of epoll_wait / epoll_wait timeout according to strace). Process 442 attached - interrupt to quit epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system call) --- SIGALRM (Alarm clock) @ 0 (0) --- rt_sigreturn(0xe) = -1 EINTR (Interrupted system call) epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system call) --- SIGALRM (Alarm clock) @ 0 (0) --- This only occurs with reuseport, as soon as it is disabled the load is correctly distributed again. Configuration: worker_processes 12; # 2x8 cores on server multiple server blocks on different IP's and ports with reuseport. 
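A minimal sketch of this kind of layout (purely illustrative; the addresses, ports and names below are placeholders, not the real production config):

worker_processes 12;

events { }

http {
    server {
        # one reuseport listener per address:port pair
        listen 192.0.2.10:80 reuseport;
        server_name a.example.com;
    }

    server {
        listen 192.0.2.11:443 ssl reuseport;
        server_name b.example.com;
    }
}
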
Linux kernel: 3.18.20 Server nic has interrupts over all cores: # sudo ethtool -S eth0 |grep rx | grep pack rx_packets: 11244443305 rx_queue_0_packets: 1381842455 rx_queue_1_packets: 1373383493 rx_queue_2_packets: 1490287703 rx_queue_3_packets: 1440591930 rx_queue_4_packets: 1378550073 rx_queue_5_packets: 1373473609 rx_queue_6_packets: 1437806438 We have also experimented with disabling iptables and anything else on the server that could be interfering. I have also loaded it onto three other fresh servers with the same kernel (same OS image), but with different nic cards (with and without multiple rx queues) with no changes. This has me stumped. Ideas? Regards, Mathew -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Nov 24 13:42:22 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 24 Nov 2015 16:42:22 +0300 Subject: SO_REUSEPORT In-Reply-To: References: Message-ID: <24551737.WtuzNpBEqe@vbart-workstation> On Wednesday 25 November 2015 00:25:19 SplitIce wrote: > Hi all, > > I couldn't find anything in the mailing list about this issue, surely we > are not the only one? > > When activating reuseport I am seeing all requests be served from a single > nginx process. All others are just idling (SIGALARM interruption of > epoll_wait / epoll_wait timeout according to strace). > > Process 442 attached - interrupt to quit > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system > call) > --- SIGALRM (Alarm clock) @ 0 (0) --- > rt_sigreturn(0xe) = -1 EINTR (Interrupted system call) > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system > call) > --- SIGALRM (Alarm clock) @ 0 (0) --- > > > > This only occurs with reuseport, as soon as it is disabled the load is > correctly distributed again. > > Configuration: > worker_processes 12; # 2x8 cores on server > multiple server blocks on different IP's and ports with reuseport. > Linux kernel: 3.18.20 > > Server nic has interrupts over all cores: > > # sudo ethtool -S eth0 |grep rx | grep pack > rx_packets: 11244443305 > rx_queue_0_packets: 1381842455 > rx_queue_1_packets: 1373383493 > rx_queue_2_packets: 1490287703 > rx_queue_3_packets: 1440591930 > rx_queue_4_packets: 1378550073 > rx_queue_5_packets: 1373473609 > rx_queue_6_packets: 1437806438 > > > We have also experimented with disabling iptables and anything else on the > server that could be interfering. I have also loaded it onto three other > fresh servers with the same kernel (same OS image), but with different nic > cards (with and without multiple rx queues) with no changes. > > This has me stumped. Ideas? > > You should try another kernel. > > wbr, Valentin V. Bartenev From mat999 at gmail.com Tue Nov 24 14:01:07 2015 From: mat999 at gmail.com (SplitIce) Date: Wed, 25 Nov 2015 01:01:07 +1100 Subject: SO_REUSEPORT In-Reply-To: <24551737.WtuzNpBEqe@vbart-workstation> References: <24551737.WtuzNpBEqe@vbart-workstation> Message-ID: Ok, I have a virtual machine with a residual kernel of 3.16.0-0.bpo.4 (Debian wheezy-backports) available. I confirmed the issue is reproducible on that machine too. The testing methodology is using ab from 12 other remote servers (hopefully in order to prevent collisions) at a fairly low rate of 1r/s (the virtual machine with this older kernel is not particularly large). During this testing zero requests were received at one sampled process, while the first process showed approximately 10% CPU usage. Regards, Mathew On Wed, Nov 25, 2015 at 12:42 AM, Valentin V.
Bartenev wrote: > On Wednesday 25 November 2015 00:25:19 SplitIce wrote: > > Hi all, > > > > I couldn't find anything in the mailing list about this issue, surely we > > are not the only one? > > > > When activating reuseport I am seeing all requests be served from a > single > > nginx process. All others are just idling (SIGALARM interruption of > > epoll_wait / epoll_wait timeout according to strace). > > > > Process 442 attached - interrupt to quit > > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system > > call) > > --- SIGALRM (Alarm clock) @ 0 (0) --- > > rt_sigreturn(0xe) = -1 EINTR (Interrupted system > call) > > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system > > call) > > --- SIGALRM (Alarm clock) @ 0 (0) --- > > > > > > > > This only occurs with reuseport, as soon as it is disabled the load is > > correctly distributed again. > > > > Configuration: > > worker_processes 12; # 2x8 cores on server > > multiple server blocks on different IP's and ports with reuseport. > > Linux kernel: 3.18.20 > > > > Server nic has interrupts over all cores: > > > > # sudo ethtool -S eth0 |grep rx | grep pack > > rx_packets: 11244443305 > > rx_queue_0_packets: 1381842455 > > rx_queue_1_packets: 1373383493 > > rx_queue_2_packets: 1490287703 > > rx_queue_3_packets: 1440591930 > > rx_queue_4_packets: 1378550073 > > rx_queue_5_packets: 1373473609 > > rx_queue_6_packets: 1437806438 > > > > > > We have also experimented with disabling iptables and anything else on the > > server that could be interfering. I have also loaded it onto three other > > fresh servers with the same kernel (same OS image), but with different nic > > cards (with and without multiple rx queues) with no changes. > > > > This has me stumped. Ideas? > > > > You should try another kernel. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mat999 at gmail.com Tue Nov 24 14:02:53 2015 From: mat999 at gmail.com (SplitIce) Date: Wed, 25 Nov 2015 01:02:53 +1100 Subject: SO_REUSEPORT In-Reply-To: <24551737.WtuzNpBEqe@vbart-workstation> References: <24551737.WtuzNpBEqe@vbart-workstation> Message-ID: Note: In all cases the nginx binary deployed is the exact same; next I will be disabling the lua module and srcache in the unlikely event they are at fault and try to replicate this issue on a 100% clean binary. I don't see how either module can interact at this level myself. On Wed, Nov 25, 2015 at 12:42 AM, Valentin V. Bartenev wrote: > On Wednesday 25 November 2015 00:25:19 SplitIce wrote: > > Hi all, > > > > I couldn't find anything in the mailing list about this issue, surely we > > are not the only one? > > > > When activating reuseport I am seeing all requests be served from a > single > > nginx process. All others are just idling (SIGALARM interruption of > > epoll_wait / epoll_wait timeout according to strace). 
> > > > Process 442 attached - interrupt to quit > > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system > > call) > > --- SIGALRM (Alarm clock) @ 0 (0) --- > > rt_sigreturn(0xe) = -1 EINTR (Interrupted system > call) > > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system > > call) > > --- SIGALRM (Alarm clock) @ 0 (0) --- > > > > > > > > This only occurs with reuseport, as soon as it is disabled the load is > > correctly distributed again. > > > > Configuration: > > worker_processes 12; # 2x8 cores on server > > multiple server blocks on different IP's and ports with reuseport. > > Linux kernel: 3.18.20 > > > > Server nic has interrupts over all cores: > > > > # sudo ethtool -S eth0 |grep rx | grep pack > > rx_packets: 11244443305 > > rx_queue_0_packets: 1381842455 > > rx_queue_1_packets: 1373383493 > > rx_queue_2_packets: 1490287703 > > rx_queue_3_packets: 1440591930 > > rx_queue_4_packets: 1378550073 > > rx_queue_5_packets: 1373473609 > > rx_queue_6_packets: 1437806438 > > > > > > We have also experimented with disabling iptables and anything else on > the > > server that could be interfering. I have also loaded it onto three other > > fresh servers with the same kernel (same OS image), but with different > nic > > cards (with and without multiple rx queues) with no changes. > > > > This has me stumped. Ideas? > > > > You should try another kernel. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bartw at xs4all.nl Tue Nov 24 14:10:12 2015 From: bartw at xs4all.nl (Bart Warmerdam) Date: Tue, 24 Nov 2015 15:10:12 +0100 Subject: Unexpected behaviour "aio threads" option Message-ID: <1448374212.1723.27.camel@xs4all.nl> Hello, On a system with a load of about 500-600 URI/sec I see some unexpected behaviour when using "aio threads" option in the configuration. System setup: The system runs on RHEL6.6 with 3 workers running nginx 1.9.6 with thread support. Content is cached and populated by a proxied-upstream. The cache location is a tmpfs file system with more than enough space at all times. Proxy buffer size 8k. The output buffer is default (no config item, so two 32k buffers). Keepalive timeout 75s. Sendfile is enabled. Seen behaviour: On the WAF in front of this system I see occasional hangs on resources (mainly larger files like js, jpeg, ..). Seen in the WAF log is that this WAF waits for the transfer to be completed until nginx closes the connection at the keepalive time of 75s. In the nginx access.log I see the entry served from cache (upstream server '-') with the correct content length. In the tcp dump I see the response of this call to contain a content-length header with the correct length, a server time header over 1 minute older than the tcpdump timestamp (all servers are ntp-connected). The served jpeg is half-way in its cache lifetime at that time and there are previously served entries from cache without incomplete transfers. In the tcp dump the jpeg file starts to differ from the original after 32168 bytes and misses 8192 bytes after which the remaining content is served (which is identical to the original). From the tcpdump I can extract the file which is missing 8192 bytes. We also have a dump in which this same behaviour was seen during the proxied call. The upstream call is started to get a jpeg from the origin. 
After a few packets the data is sent to the WAF. The complete upstream file is retrieved (can be validated in the tcpdump that the jpeg is complete and correctly retrieved), but not all the data is sent to the listening socket to the WAF. If I change the setup to "aio on" or "aio off" this behaviour is not seen. This is the only change in the configuration between the tests. It looks like this behaviour only affects bigger files. I have not seen this effect on small files or proxied responses. Does anyone have the same experience with this option? And what is the best way to proceed in tracing this? Regards, B. From agentzh at gmail.com Tue Nov 24 14:10:31 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 24 Nov 2015 22:10:31 +0800 Subject: Unit testing approach for nginx modules In-Reply-To: References: Message-ID: Hello! On Sun, Nov 22, 2015 at 5:40 AM, Ritesh Jha wrote: > Hello everyone, > We are developing nginx modules to implement a few use cases in our product. > Most of the other use cases have been implemented using Java. At my > office we follow TDD for Java development. TDD for Java development is easy > due to the availability of unit-testing and mocking frameworks. We are wondering > if we can follow TDD for development of nginx modules as well. We have tried > a couple of unit-testing frameworks for C (Unity and CMocka) but we have found > it very difficult to write useful test cases using these frameworks. > > Can you please suggest a suitable approach? Also, if unit testing is not the > way to go, then what should be the approach for developing, testing and > maintaining large nginx modules? > I've been using my Test::Nginx module on CPAN for all my nginx modules' test suites for years: https://metacpan.org/pod/Test::Nginx::Socket We've also been using subclasses of this test framework to drive CloudFlare's Lua business systems' test suites :) Almost all the test suites of the NGINX components listed on the following page, my Amazon EC2 test cluster's test report, are driven by Test::Nginx: http://qa.openresty.org All these open-source modules' test suites can serve as live examples. Oh yeah, we definitely need more hands-on tutorials to explain all the powerful features of this test framework. I'll write up something soon :) Best regards, -agentzh From vbart at nginx.com Tue Nov 24 15:32:11 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 24 Nov 2015 18:32:11 +0300 Subject: Unexpected behaviour "aio threads" option In-Reply-To: <1448374212.1723.27.camel@xs4all.nl> References: <1448374212.1723.27.camel@xs4all.nl> Message-ID: <2245122.KWr9x6AL8q@vbart-workstation> On Tuesday 24 November 2015 15:10:12 Bart Warmerdam wrote: > > Hello, > > On a system with a load of about 500-600 URI/sec I see some unexpected > behaviour when using "aio threads" option in the configuration. > > System setup: > The system runs on RHEL6.6 with 3 workers running nginx 1.9.6 with > thread support. Content is cached and populated by a proxied-upstream. > The cache location is a tmpfs file system with more than enough space > at all times. Proxy buffer size 8k. The output buffer is default (no > config item, so two 32k buffers). Keepalive timeout 75s. Sendfile is enabled. > > Seen behaviour: > On the WAF in front of this system I see occasional hangs on resources > (mainly larger files like js, jpeg, ..). Seen in the WAF log is that > this WAF waits for the transfer to be completed until nginx closes the > connection at the keepalive time of 75s. 
In the nginx access.log I see > the entry served from cache (upstream server '-') with the correct > content length. In the tcp dump I see the response of this call to > contain a content-length header with the correct length, a server time > header over 1 minute older than the tcpdump timestamp (all servers are > ntp-connected). The served jpeg is half-way in its cache lifetime at > that time and there are previously served entries from cache without > incomplete transfers. In the tcp dump the jpeg file starts to differ > from the original after 32168 bytes and misses 8192 bytes after which > the remaining content is served (which is identical to the original). From > the tcpdump I can extract the file which is missing 8192 bytes. > > We also have a dump in which this same behaviour > was seen during the proxied call. The upstream call is started to get a jpeg from the origin. > After a few packets the data is sent to the WAF. The complete upstream > file is retrieved (can be validated in the tcpdump that the jpeg is > complete and correctly retrieved), but not all the data is sent to the > listening socket to the WAF. > > > If I change the setup to "aio on" or "aio off" this behaviour is not > seen. This is the only change in the configuration between the tests. > It looks like this behaviour only affects bigger files. I have not seen > this effect on small files or proxied responses. > > > Does anyone have the same experience with this option? And what is the > best way to proceed in tracing this? > [..] Could you provide the debug log? http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev From ru at nginx.com Tue Nov 24 20:41:32 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 24 Nov 2015 20:41:32 +0000 Subject: [nginx] Style: unified request method checks. Message-ID: details: http://hg.nginx.org/nginx/rev/b1858fc47e3b branches: changeset: 6306:b1858fc47e3b user: Ruslan Ermilov date: Fri Nov 06 15:22:43 2015 +0300 description: Style: unified request method checks. 
diffstat: src/http/modules/ngx_http_chunked_filter_module.c | 2 +- src/http/modules/ngx_http_static_module.c | 2 +- src/http/modules/ngx_http_stub_status_module.c | 2 +- src/http/ngx_http_request.c | 2 +- src/http/ngx_http_upstream.c | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diffs (60 lines): diff -r 18428f775b2c -r b1858fc47e3b src/http/modules/ngx_http_chunked_filter_module.c --- a/src/http/modules/ngx_http_chunked_filter_module.c Mon Nov 23 12:40:19 2015 +0300 +++ b/src/http/modules/ngx_http_chunked_filter_module.c Fri Nov 06 15:22:43 2015 +0300 @@ -64,7 +64,7 @@ ngx_http_chunked_header_filter(ngx_http_ || r->headers_out.status == NGX_HTTP_NO_CONTENT || r->headers_out.status < NGX_HTTP_OK || r != r->main - || (r->method & NGX_HTTP_HEAD)) + || r->method == NGX_HTTP_HEAD) { return ngx_http_next_header_filter(r); } diff -r 18428f775b2c -r b1858fc47e3b src/http/modules/ngx_http_static_module.c --- a/src/http/modules/ngx_http_static_module.c Mon Nov 23 12:40:19 2015 +0300 +++ b/src/http/modules/ngx_http_static_module.c Fri Nov 06 15:22:43 2015 +0300 @@ -204,7 +204,7 @@ ngx_http_static_handler(ngx_http_request #endif - if (r->method & NGX_HTTP_POST) { + if (r->method == NGX_HTTP_POST) { return NGX_HTTP_NOT_ALLOWED; } diff -r 18428f775b2c -r b1858fc47e3b src/http/modules/ngx_http_stub_status_module.c --- a/src/http/modules/ngx_http_stub_status_module.c Mon Nov 23 12:40:19 2015 +0300 +++ b/src/http/modules/ngx_http_stub_status_module.c Fri Nov 06 15:22:43 2015 +0300 @@ -89,7 +89,7 @@ ngx_http_stub_status_handler(ngx_http_re ngx_chain_t out; ngx_atomic_int_t ap, hn, ac, rq, rd, wr, wa; - if (r->method != NGX_HTTP_GET && r->method != NGX_HTTP_HEAD) { + if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { return NGX_HTTP_NOT_ALLOWED; } diff -r 18428f775b2c -r b1858fc47e3b src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Mon Nov 23 12:40:19 2015 +0300 +++ b/src/http/ngx_http_request.c Fri Nov 06 15:22:43 2015 +0300 @@ -1788,7 +1788,7 @@ ngx_http_process_request_header(ngx_http } } - if (r->method & NGX_HTTP_TRACE) { + if (r->method == NGX_HTTP_TRACE) { ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, "client sent TRACE method"); ngx_http_finalize_request(r, NGX_HTTP_NOT_ALLOWED); diff -r 18428f775b2c -r b1858fc47e3b src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Mon Nov 23 12:40:19 2015 +0300 +++ b/src/http/ngx_http_upstream.c Fri Nov 06 15:22:43 2015 +0300 @@ -772,7 +772,7 @@ ngx_http_upstream_cache(ngx_http_request return rc; } - if ((r->method & NGX_HTTP_HEAD) && u->conf->cache_convert_head) { + if (r->method == NGX_HTTP_HEAD && u->conf->cache_convert_head) { u->method = ngx_http_core_get_method; } From sorin.v.manole at gmail.com Tue Nov 24 21:31:16 2015 From: sorin.v.manole at gmail.com (Sorin Manole) Date: Tue, 24 Nov 2015 23:31:16 +0200 Subject: [nginx] Style: unified request method checks. In-Reply-To: References: Message-ID: 2015-11-24 22:41 GMT+02:00 Ruslan Ermilov : > details: http://hg.nginx.org/nginx/rev/b1858fc47e3b > branches: > changeset: 6306:b1858fc47e3b > user: Ruslan Ermilov > date: Fri Nov 06 15:22:43 2015 +0300 > description: > Style: unified request method checks. 
> > diffstat: > > src/http/modules/ngx_http_chunked_filter_module.c | 2 +- > src/http/modules/ngx_http_static_module.c | 2 +- > src/http/modules/ngx_http_stub_status_module.c | 2 +- > src/http/ngx_http_request.c | 2 +- > src/http/ngx_http_upstream.c | 2 +- > 5 files changed, 5 insertions(+), 5 deletions(-) > > diffs (60 lines): > > diff -r 18428f775b2c -r b1858fc47e3b > src/http/modules/ngx_http_chunked_filter_module.c > --- a/src/http/modules/ngx_http_chunked_filter_module.c Mon Nov 23 > 12:40:19 2015 +0300 > +++ b/src/http/modules/ngx_http_chunked_filter_module.c Fri Nov 06 > 15:22:43 2015 +0300 > @@ -64,7 +64,7 @@ ngx_http_chunked_header_filter(ngx_http_ > || r->headers_out.status == NGX_HTTP_NO_CONTENT > || r->headers_out.status < NGX_HTTP_OK > || r != r->main > - || (r->method & NGX_HTTP_HEAD)) > + || r->method == NGX_HTTP_HEAD) > { > return ngx_http_next_header_filter(r); > } > diff -r 18428f775b2c -r b1858fc47e3b > src/http/modules/ngx_http_static_module.c > --- a/src/http/modules/ngx_http_static_module.c Mon Nov 23 12:40:19 2015 > +0300 > +++ b/src/http/modules/ngx_http_static_module.c Fri Nov 06 15:22:43 2015 > +0300 > @@ -204,7 +204,7 @@ ngx_http_static_handler(ngx_http_request > > #endif > > - if (r->method & NGX_HTTP_POST) { > + if (r->method == NGX_HTTP_POST) { > return NGX_HTTP_NOT_ALLOWED; > } > > diff -r 18428f775b2c -r b1858fc47e3b > src/http/modules/ngx_http_stub_status_module.c > --- a/src/http/modules/ngx_http_stub_status_module.c Mon Nov 23 > 12:40:19 2015 +0300 > +++ b/src/http/modules/ngx_http_stub_status_module.c Fri Nov 06 > 15:22:43 2015 +0300 > @@ -89,7 +89,7 @@ ngx_http_stub_status_handler(ngx_http_re > ngx_chain_t out; > ngx_atomic_int_t ap, hn, ac, rq, rd, wr, wa; > > - if (r->method != NGX_HTTP_GET && r->method != NGX_HTTP_HEAD) { > + if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > Since it's about the style, not really an unification I would say. > return NGX_HTTP_NOT_ALLOWED; > } > > diff -r 18428f775b2c -r b1858fc47e3b src/http/ngx_http_request.c > --- a/src/http/ngx_http_request.c Mon Nov 23 12:40:19 2015 +0300 > +++ b/src/http/ngx_http_request.c Fri Nov 06 15:22:43 2015 +0300 > @@ -1788,7 +1788,7 @@ ngx_http_process_request_header(ngx_http > } > } > > - if (r->method & NGX_HTTP_TRACE) { > + if (r->method == NGX_HTTP_TRACE) { > ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, > "client sent TRACE method"); > ngx_http_finalize_request(r, NGX_HTTP_NOT_ALLOWED); > diff -r 18428f775b2c -r b1858fc47e3b src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Mon Nov 23 12:40:19 2015 +0300 > +++ b/src/http/ngx_http_upstream.c Fri Nov 06 15:22:43 2015 +0300 > @@ -772,7 +772,7 @@ ngx_http_upstream_cache(ngx_http_request > return rc; > } > > - if ((r->method & NGX_HTTP_HEAD) && u->conf->cache_convert_head) { > + if (r->method == NGX_HTTP_HEAD && u->conf->cache_convert_head) { > u->method = ngx_http_core_get_method; > } > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mat999 at gmail.com Tue Nov 24 22:38:22 2015 From: mat999 at gmail.com (SplitIce) Date: Wed, 25 Nov 2015 09:38:22 +1100 Subject: SO_REUSEPORT In-Reply-To: <24551737.WtuzNpBEqe@vbart-workstation> References: <24551737.WtuzNpBEqe@vbart-workstation> Message-ID: Ok, I have found it to be a configuration bug. 
With a fresh configuration (default configuration from package, with reuseport added) and reuseport enabled the feature works (same 3.18 kernel, same nginx binary). Now I just need to identify which line of our production configuration creates this bug. I will update when I know more. On Wed, Nov 25, 2015 at 12:42 AM, Valentin V. Bartenev wrote: > On Wednesday 25 November 2015 00:25:19 SplitIce wrote: > > Hi all, > > > > I couldn't find anything in the mailing list about this issue, surely we > > are not the only one? > > > > When activating reuseport I am seeing all requests be served from a > single > > nginx process. All others are just idling (SIGALARM interruption of > > epoll_wait / epoll_wait timeout according to strace). > > > > Process 442 attached - interrupt to quit > > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system > > call) > > --- SIGALRM (Alarm clock) @ 0 (0) --- > > rt_sigreturn(0xe) = -1 EINTR (Interrupted system > call) > > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system > > call) > > --- SIGALRM (Alarm clock) @ 0 (0) --- > > > > > > > > This only occurs with reuseport, as soon as it is disabled the load is > > correctly distributed again. > > > > Configuration: > > worker_processes 12; # 2x8 cores on server > > multiple server blocks on different IP's and ports with reuseaddr. > > Linux kernel: 3.18.20 > > > > Server nic has interrupts over all cores: > > > > # sudo ethtool -S eth0 |grep rx | grep pack > > rx_packets: 11244443305 > > rx_queue_0_packets: 1381842455 > > rx_queue_1_packets: 1373383493 > > rx_queue_2_packets: 1490287703 > > rx_queue_3_packets: 1440591930 > > rx_queue_4_packets: 1378550073 > > rx_queue_5_packets: 1373473609 > > rx_queue_6_packets: 1437806438 > > > > > > We have also experimented with disabling iptables and anything else on > the > > server that could be interfering. I have also loaded it onto three other > > fresh servers with the same kernel (same OS image), but with different > nic > > cards (with and without multiple rx queues) with no changes. > > > > This has me stumped. Ideas? > > > > You should try another kernel. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mat999 at gmail.com Tue Nov 24 22:43:44 2015 From: mat999 at gmail.com (SplitIce) Date: Wed, 25 Nov 2015 09:43:44 +1100 Subject: SO_REUSEPORT In-Reply-To: References: <24551737.WtuzNpBEqe@vbart-workstation> Message-ID: Issue found. If worker_processes is set at the start of the config file the feature works fine, if it is set at the end of the config file it does not. On Wed, Nov 25, 2015 at 9:38 AM, SplitIce wrote: > Ok, > > I have found it to be a configuration bug. With a fresh configuration > (default configuration from package, with reuseport added) and reuseport > enabled the feature works (same 3.18 kernel, same nginx binary). > > Now I just need to identify which line of our production configuration > creates this bug. > > I will update when I know more. > > On Wed, Nov 25, 2015 at 12:42 AM, Valentin V. Bartenev > wrote: > >> On Wednesday 25 November 2015 00:25:19 SplitIce wrote: >> > Hi all, >> > >> > I couldn't find anything in the mailing list about this issue, surely we >> > are not the only one? 
>> > >> > When activating reuseport I am seeing all requests be served from a >> single >> > nginx process. All others are just idling (SIGALARM interruption of >> > epoll_wait / epoll_wait timeout according to strace). >> > >> > Process 442 attached - interrupt to quit >> > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system >> > call) >> > --- SIGALRM (Alarm clock) @ 0 (0) --- >> > rt_sigreturn(0xe) = -1 EINTR (Interrupted system >> call) >> > epoll_wait(60, 8225010, 512, 4294967295) = -1 EINTR (Interrupted system >> > call) >> > --- SIGALRM (Alarm clock) @ 0 (0) --- >> > >> > >> > >> > This only occurs with reuseport, as soon as it is disabled the load is >> > correctly distributed again. >> > >> > Configuration: >> > worker_processes 12; # 2x8 cores on server >> > multiple server blocks on different IP's and ports with reuseaddr. >> > Linux kernel: 3.18.20 >> > >> > Server nic has interrupts over all cores: >> > >> > # sudo ethtool -S eth0 |grep rx | grep pack >> > rx_packets: 11244443305 >> > rx_queue_0_packets: 1381842455 >> > rx_queue_1_packets: 1373383493 >> > rx_queue_2_packets: 1490287703 >> > rx_queue_3_packets: 1440591930 >> > rx_queue_4_packets: 1378550073 >> > rx_queue_5_packets: 1373473609 >> > rx_queue_6_packets: 1437806438 >> > >> > >> > We have also experimented with disabling iptables and anything else on >> the >> > server that could be interfering. I have also loaded it onto three other >> > fresh servers with the same kernel (same OS image), but with different >> nic >> > cards (with and without multiple rx queues) with no changes. >> > >> > This has me stumped. Ideas? >> > >> >> You should try another kernel. >> >> wbr, Valentin V. Bartenev >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Wed Nov 25 11:07:34 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 25 Nov 2015 14:07:34 +0300 Subject: [nginx] Style: unified request method checks. In-Reply-To: References: Message-ID: <20151125110734.GH54949@lo0.su> On Tue, Nov 24, 2015 at 11:31:16PM +0200, Sorin Manole wrote: > 2015-11-24 22:41 GMT+02:00 Ruslan Ermilov : > > > details: http://hg.nginx.org/nginx/rev/b1858fc47e3b > > branches: > > changeset: 6306:b1858fc47e3b > > user: Ruslan Ermilov > > date: Fri Nov 06 15:22:43 2015 +0300 > > description: > > Style: unified request method checks. 
> > > > diffstat: > > > > src/http/modules/ngx_http_chunked_filter_module.c | 2 +- > > src/http/modules/ngx_http_static_module.c | 2 +- > > src/http/modules/ngx_http_stub_status_module.c | 2 +- > > src/http/ngx_http_request.c | 2 +- > > src/http/ngx_http_upstream.c | 2 +- > > 5 files changed, 5 insertions(+), 5 deletions(-) > > > > diffs (60 lines): > > > > diff -r 18428f775b2c -r b1858fc47e3b > > src/http/modules/ngx_http_chunked_filter_module.c > > --- a/src/http/modules/ngx_http_chunked_filter_module.c Mon Nov 23 > > 12:40:19 2015 +0300 > > +++ b/src/http/modules/ngx_http_chunked_filter_module.c Fri Nov 06 > > 15:22:43 2015 +0300 > > @@ -64,7 +64,7 @@ ngx_http_chunked_header_filter(ngx_http_ > > || r->headers_out.status == NGX_HTTP_NO_CONTENT > > || r->headers_out.status < NGX_HTTP_OK > > || r != r->main > > - || (r->method & NGX_HTTP_HEAD)) > > + || r->method == NGX_HTTP_HEAD) > > { > > return ngx_http_next_header_filter(r); > > } > > diff -r 18428f775b2c -r b1858fc47e3b > > src/http/modules/ngx_http_static_module.c > > --- a/src/http/modules/ngx_http_static_module.c Mon Nov 23 12:40:19 2015 > > +0300 > > +++ b/src/http/modules/ngx_http_static_module.c Fri Nov 06 15:22:43 2015 > > +0300 > > @@ -204,7 +204,7 @@ ngx_http_static_handler(ngx_http_request > > > > #endif > > > > - if (r->method & NGX_HTTP_POST) { > > + if (r->method == NGX_HTTP_POST) { > > return NGX_HTTP_NOT_ALLOWED; > > } > > > > diff -r 18428f775b2c -r b1858fc47e3b > > src/http/modules/ngx_http_stub_status_module.c > > --- a/src/http/modules/ngx_http_stub_status_module.c Mon Nov 23 > > 12:40:19 2015 +0300 > > +++ b/src/http/modules/ngx_http_stub_status_module.c Fri Nov 06 > > 15:22:43 2015 +0300 > > @@ -89,7 +89,7 @@ ngx_http_stub_status_handler(ngx_http_re > > ngx_chain_t out; > > ngx_atomic_int_t ap, hn, ac, rq, rd, wr, wa; > > > > - if (r->method != NGX_HTTP_GET && r->method != NGX_HTTP_HEAD) { > > + if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > > > Since it's about the style, not really an unification I would say. What did you mean to say, I don't quite follow? 
This particular part of the change is about unification: ngx_http_autoindex_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { ngx_http_empty_gif_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { ngx_http_flv_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { ngx_http_gzip_static_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { ngx_http_index_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD|NGX_HTTP_POST))) { ngx_http_memcached_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { ngx_http_mp4_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { ngx_http_random_index_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD|NGX_HTTP_POST))) { ngx_http_static_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD|NGX_HTTP_POST))) { ngx_http_stub_status_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > > return NGX_HTTP_NOT_ALLOWED; > > } > > > > diff -r 18428f775b2c -r b1858fc47e3b src/http/ngx_http_request.c > > --- a/src/http/ngx_http_request.c Mon Nov 23 12:40:19 2015 +0300 > > +++ b/src/http/ngx_http_request.c Fri Nov 06 15:22:43 2015 +0300 > > @@ -1788,7 +1788,7 @@ ngx_http_process_request_header(ngx_http > > } > > } > > > > - if (r->method & NGX_HTTP_TRACE) { > > + if (r->method == NGX_HTTP_TRACE) { > > ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, > > "client sent TRACE method"); > > ngx_http_finalize_request(r, NGX_HTTP_NOT_ALLOWED); > > diff -r 18428f775b2c -r b1858fc47e3b src/http/ngx_http_upstream.c > > --- a/src/http/ngx_http_upstream.c Mon Nov 23 12:40:19 2015 +0300 > > +++ b/src/http/ngx_http_upstream.c Fri Nov 06 15:22:43 2015 +0300 > > @@ -772,7 +772,7 @@ ngx_http_upstream_cache(ngx_http_request > > return rc; > > } > > > > - if ((r->method & NGX_HTTP_HEAD) && u->conf->cache_convert_head) { > > + if (r->method == NGX_HTTP_HEAD && u->conf->cache_convert_head) { > > u->method = ngx_http_core_get_method; > > } > > > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > -- Ruslan Ermilov From sorin.v.manole at gmail.com Wed Nov 25 11:19:09 2015 From: sorin.v.manole at gmail.com (Sorin Manole) Date: Wed, 25 Nov 2015 13:19:09 +0200 Subject: [nginx] Style: unified request method checks. In-Reply-To: <20151125110734.GH54949@lo0.su> References: <20151125110734.GH54949@lo0.su> Message-ID: I meant, from the diff, it looked like all single HTTP method checks were transitioned from & to ==, while that single one was changed from == to &. I understand now, the checks were unified with the ones that were not in the diff (duh, I should have thought about that). Still, as a result in some places "&" is used, and in others "==" is. So IMHO I don't see the point in using == for single method checks and & for multiple in terms of unifying usage throughout the code. But, thanks a lot for clarifying. It makes sense. 2015-11-25 13:07 GMT+02:00 Ruslan Ermilov : > On Tue, Nov 24, 2015 at 11:31:16PM +0200, Sorin Manole wrote: > > 2015-11-24 22:41 GMT+02:00 Ruslan Ermilov : > > > > > details: http://hg.nginx.org/nginx/rev/b1858fc47e3b > > > branches: > > > changeset: 6306:b1858fc47e3b > > > user: Ruslan Ermilov > > > date: Fri Nov 06 15:22:43 2015 +0300 > > > description: > > > Style: unified request method checks. 
> > > > > > diffstat: > > > > > > src/http/modules/ngx_http_chunked_filter_module.c | 2 +- > > > src/http/modules/ngx_http_static_module.c | 2 +- > > > src/http/modules/ngx_http_stub_status_module.c | 2 +- > > > src/http/ngx_http_request.c | 2 +- > > > src/http/ngx_http_upstream.c | 2 +- > > > 5 files changed, 5 insertions(+), 5 deletions(-) > > > > > > diffs (60 lines): > > > > > > diff -r 18428f775b2c -r b1858fc47e3b > > > src/http/modules/ngx_http_chunked_filter_module.c > > > --- a/src/http/modules/ngx_http_chunked_filter_module.c Mon Nov 23 > > > 12:40:19 2015 +0300 > > > +++ b/src/http/modules/ngx_http_chunked_filter_module.c Fri Nov 06 > > > 15:22:43 2015 +0300 > > > @@ -64,7 +64,7 @@ ngx_http_chunked_header_filter(ngx_http_ > > > || r->headers_out.status == NGX_HTTP_NO_CONTENT > > > || r->headers_out.status < NGX_HTTP_OK > > > || r != r->main > > > - || (r->method & NGX_HTTP_HEAD)) > > > + || r->method == NGX_HTTP_HEAD) > > > { > > > return ngx_http_next_header_filter(r); > > > } > > > diff -r 18428f775b2c -r b1858fc47e3b > > > src/http/modules/ngx_http_static_module.c > > > --- a/src/http/modules/ngx_http_static_module.c Mon Nov 23 12:40:19 > 2015 > > > +0300 > > > +++ b/src/http/modules/ngx_http_static_module.c Fri Nov 06 15:22:43 > 2015 > > > +0300 > > > @@ -204,7 +204,7 @@ ngx_http_static_handler(ngx_http_request > > > > > > #endif > > > > > > - if (r->method & NGX_HTTP_POST) { > > > + if (r->method == NGX_HTTP_POST) { > > > return NGX_HTTP_NOT_ALLOWED; > > > } > > > > > > diff -r 18428f775b2c -r b1858fc47e3b > > > src/http/modules/ngx_http_stub_status_module.c > > > --- a/src/http/modules/ngx_http_stub_status_module.c Mon Nov 23 > > > 12:40:19 2015 +0300 > > > +++ b/src/http/modules/ngx_http_stub_status_module.c Fri Nov 06 > > > 15:22:43 2015 +0300 > > > @@ -89,7 +89,7 @@ ngx_http_stub_status_handler(ngx_http_re > > > ngx_chain_t out; > > > ngx_atomic_int_t ap, hn, ac, rq, rd, wr, wa; > > > > > > - if (r->method != NGX_HTTP_GET && r->method != NGX_HTTP_HEAD) { > > > + if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > > > > > Since it's about the style, not really an unification I would say. > > What did you mean to say, I don't quite follow? 
> This particular part of the change is about unification: > ngx_http_autoindex_module.c: if (!(r->method & > (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > ngx_http_empty_gif_module.c: if (!(r->method & > (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > ngx_http_flv_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) > { > ngx_http_gzip_static_module.c: if (!(r->method & > (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > ngx_http_index_module.c: if (!(r->method & > (NGX_HTTP_GET|NGX_HTTP_HEAD|NGX_HTTP_POST))) { > ngx_http_memcached_module.c: if (!(r->method & > (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > ngx_http_mp4_module.c: if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) > { > ngx_http_random_index_module.c: if (!(r->method & > (NGX_HTTP_GET|NGX_HTTP_HEAD|NGX_HTTP_POST))) { > ngx_http_static_module.c: if (!(r->method & > (NGX_HTTP_GET|NGX_HTTP_HEAD|NGX_HTTP_POST))) { > ngx_http_stub_status_module.c: if (!(r->method & > (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > > > return NGX_HTTP_NOT_ALLOWED; > > > } > > > > > > diff -r 18428f775b2c -r b1858fc47e3b src/http/ngx_http_request.c > > > --- a/src/http/ngx_http_request.c Mon Nov 23 12:40:19 2015 +0300 > > > +++ b/src/http/ngx_http_request.c Fri Nov 06 15:22:43 2015 +0300 > > > @@ -1788,7 +1788,7 @@ ngx_http_process_request_header(ngx_http > > > } > > > } > > > > > > - if (r->method & NGX_HTTP_TRACE) { > > > + if (r->method == NGX_HTTP_TRACE) { > > > ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, > > > "client sent TRACE method"); > > > ngx_http_finalize_request(r, NGX_HTTP_NOT_ALLOWED); > > > diff -r 18428f775b2c -r b1858fc47e3b src/http/ngx_http_upstream.c > > > --- a/src/http/ngx_http_upstream.c Mon Nov 23 12:40:19 2015 +0300 > > > +++ b/src/http/ngx_http_upstream.c Fri Nov 06 15:22:43 2015 +0300 > > > @@ -772,7 +772,7 @@ ngx_http_upstream_cache(ngx_http_request > > > return rc; > > > } > > > > > > - if ((r->method & NGX_HTTP_HEAD) && > u->conf->cache_convert_head) { > > > + if (r->method == NGX_HTTP_HEAD && > u->conf->cache_convert_head) { > > > u->method = ngx_http_core_get_method; > > > } > > > > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > -- > Ruslan Ermilov > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Nov 25 13:17:10 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Nov 2015 16:17:10 +0300 Subject: SO_REUSEPORT In-Reply-To: References: <24551737.WtuzNpBEqe@vbart-workstation> Message-ID: <20151125131710.GP74233@mdounin.ru> Hello! On Wed, Nov 25, 2015 at 09:43:44AM +1100, SplitIce wrote: > Issue found. > > If worker_processes is set at the start of the config file the feature > works fine, if it is set at the end of the config file it does not. This looks like a bug, though it's unlikely to be fixed soon unless you provide a patch. A trivial workaround is to configure worker_processes before listening sockets. Note well that configuring global options after the http{} block may bite you in some other places as well - but likely only in very exotic setups.
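For illustration, the two orderings at play look like this (a sketch only, not a complete config; behaviour as described in this thread):

# works: the worker count is already known when the reuseport
# listeners are set up
worker_processes 12;
events { }
http {
    server {
        listen 80 reuseport;
    }
}

# broken in this report: worker_processes appears after the http{}
# block, and all requests end up served by a single worker
events { }
http {
    server {
        listen 80 reuseport;
    }
}
worker_processes 12;
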
-- Maxim Dounin http://nginx.org/ From bartw at xs4all.nl Fri Nov 27 08:33:04 2015 From: bartw at xs4all.nl (Bart Warmerdam) Date: Fri, 27 Nov 2015 09:33:04 +0100 Subject: Unexpected behaviour "aio threads" option In-Reply-To: <2245122.KWr9x6AL8q@vbart-workstation> References: <1448374212.1723.27.camel@xs4all.nl> <2245122.KWr9x6AL8q@vbart-workstation> Message-ID: A test with debug logging is scheduled for the upcoming Monday. I hope to be able to deliver the logs by then. Thanks, B. On November 24, 2015 4:32:11 PM GMT+01:00, "Valentin V. Bartenev" wrote: >On Tuesday 24 November 2015 15:10:12 Bart Warmerdam wrote: >> >> Hello, >> >> On a system with a load of about 500-600 URI/sec I see some >unexpected >> behaviour when using "aio threads" option in the configuration. >> >> System setup: >> The system runs on RHEL6.6 with 3 workers running nginx 1.9.6 with >> thread support. Content is cached and populated by a >proxied-upstream. >> The cache location is a tmpfs file system with more than enough space >> at all times. Proxy buffer size 8k. The output buffer is default (no >> config item, so two 32k buffers). Keepalive timeout 75s. Sendfile is enabled. >> >> Seen behaviour: >> On the WAF in front of this system I see occasional hangs on >resources >> (mainly larger files like js, jpeg, ..). Seen in the WAF log is that >> this WAF waits for the transfer to be completed until nginx closes >the >> connection at the keepalive time of 75s. In the nginx access.log I >see >> the entry served from cache (upstream server '-') with the correct >> content length. In the tcp dump I see the response of this call to >> contain a content-length header with the correct length, a server >time >> header over 1 minute older than the tcpdump timestamp (all servers >are >> ntp-connected). The served jpeg is half-way in its cache lifetime at >> that time and there are previously served entries from cache without >> incomplete transfers. In the tcp dump the jpeg file starts to differ >> from the original after 32168 bytes and misses 8192 bytes after which >> the remaining content is served (which is identical to the original). >From >> the tcpdump I can extract the file which is missing 8192 bytes. >> >> We also have a dump in which this same behaviour >> was seen during the proxied call. The upstream call is started to get a jpeg from the origin. >> After a few packets the data is sent to the WAF. The complete >upstream >> file is retrieved (can be validated in the tcpdump that the jpeg is >> complete and correctly retrieved), but not all the data is sent to >the >> listening socket to the WAF. >> >> >> If I change the setup to "aio on" or "aio off" this behaviour is not >> seen. This is the only change in the configuration between the tests. >> It looks like this behaviour only affects bigger files. I have not >seen >> this effect on small files or proxied responses. >> >> >> Does anyone have the same experience with this option? And what is >the >> best way to proceed in tracing this? >> >[..] > >Could you provide the debug log? >http://nginx.org/en/docs/debugging_log.html > > wbr, Valentin V. Bartenev > >_______________________________________________ >nginx-devel mailing list >nginx-devel at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aviram at adallom.com Mon Nov 30 13:20:02 2015 From: aviram at adallom.com (Aviram Cohen) Date: Mon, 30 Nov 2015 13:20:02 +0000 Subject: [BUG] Gunzip module may cause requests to fail Message-ID: Hello! A couple of years ago, I reported the following bug: http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004442.html Responses with empty bodies with the header "Content-Encoding: gzip" used to cause requests to hang. There has been a fix, but now it seems that the requests simply fail. Reviewing the code, it appears that the following happens: - An empty last buffer arrives into the gunzip module's body filter. - The gunzip module's ngx_http_gunzip_filter_add_data() calculates an input buffer size (it is 0), which is later fed to zlib's inflate() along with the parameter Z_FINISH. - inflate() is then called and returns Z_BUF_ERROR. This causes error handling to shut down the request and the connection. The client gets an empty response. I'm not sure what a proper fix would be, but I can suggest the following: 1. In ngx_http_gunzip_header_filter(), check the content length, and don't create a gunzip ctx if it is 0. 2. In ngx_http_gunzip_body_filter(), check if gunzip has started ("!ctx->started").
No pun intended ;) Koby N --- a/src/http/modules/ngx_http_upstream_hash_module.c 2015-07-15 00:46:06.000000000 +0800 +++ b/src/http/modules/ngx_http_upstream_hash_module.c 2015-10-11 22:26:47.952670175 +0800 @@ -23,6 +23,7 @@ typedef struct { typedef struct { + ngx_uint_t npoints; ngx_http_complex_value_t key; ngx_http_upstream_chash_points_t *points; } ngx_http_upstream_hash_srv_conf_t; @@ -66,7 +67,7 @@ static char *ngx_http_upstream_hash(ngx_ static ngx_command_t ngx_http_upstream_hash_commands[] = { { ngx_string("hash"), - NGX_HTTP_UPS_CONF|NGX_CONF_TAKE12, + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE123, ngx_http_upstream_hash, NGX_HTTP_SRV_CONF_OFFSET, 0, @@ -296,7 +297,10 @@ ngx_http_upstream_init_chash(ngx_conf_t us->peer.init = ngx_http_upstream_init_chash_peer; peers = us->peer.data; - npoints = peers->total_weight * 160; + + hcf = ngx_http_conf_upstream_srv_conf(us, ngx_http_upstream_hash_module); + + npoints = peers->total_weight * hcf->npoints; size = sizeof(ngx_http_upstream_chash_points_t) + sizeof(ngx_http_upstream_chash_point_t) * (npoints - 1); @@ -355,7 +359,7 @@ ngx_http_upstream_init_chash(ngx_conf_t ngx_crc32_update(&base_hash, port, port_len); prev_hash.value = 0; - npoints = peer->weight * 160; + npoints = peer->weight * hcf->npoints; for (j = 0; j < npoints; j++) { hash = base_hash; @@ -391,7 +395,6 @@ ngx_http_upstream_init_chash(ngx_conf_t points->number = i + 1; - hcf = ngx_http_conf_upstream_srv_conf(us, ngx_http_upstream_hash_module); hcf->points = points; return NGX_OK; @@ -657,6 +660,19 @@ ngx_http_upstream_hash(ngx_conf_t *cf, n } else if (ngx_strcmp(value[2].data, "consistent") == 0) { uscf->peer.init_upstream = ngx_http_upstream_init_chash; + if (cf->args->nelts > 3) { + hcf->npoints = ngx_atoi(value[3].data, value[3].len); + + if (hcf->npoints == (ngx_uint_t) NGX_ERROR) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid npoints parameter \"%V\"", &value[3]); + return NGX_CONF_ERROR; + } + + } else { + hcf->npoints = 160; + } + } else { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid parameter \"%V\"", &value[2]); -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Nov 30 15:16:08 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 30 Nov 2015 18:16:08 +0300 Subject: [BUG] Gunzip module may cause requests to fail In-Reply-To: References: Message-ID: <2571972.HTVaeomj3b@vbart-workstation> On Monday 30 November 2015 13:20:02 Aviram Cohen wrote: > Hello! > > A couple of years ago, I've reported the following bug: > http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004442.html > > Responses with empty bodies with the header "Content-Encoding: gzip" used to cause requests to hang. > There has been a fix, but now it seems that the requests simply fails. > > Reviewing the code, it appears that the following happens: > - An empty last buffer arrives into the gunzip module's body filter. > - The gunzip module's ngx_http_gunzip_filter_add_data() calculates and input buffer size (it is 0), and it is later in fed to zlib's inflate(), along with the paramter Z_FINISH > - inflate() is later called, and returned Z_BUF_ERROR. This causes error handling to shut down the request and the connection. The client gets an empty response. > > I'm not sure what a proper fix would be, but I can suggest the following: > 1. In ngx_http_gunzip_header_filter() check the content length, and don't create a gunzip ctx if it is 0. > 2. In ngx_http_gunzip_body_filter(), check if gunzip has started ("!ctx->started"). 
If it hasn't and the input buffer is the last one, simply jump to the next filter. This handles the case that the response with is chunked encoding. > > Would be great to hear the development team's opinion. > Why do you think that it's a bug in nginx? For me "Content-Encoding gzip" without gzip wrapper doesn't look like a valid gzip encoded response. wbr, Valentin V. Bartenev From x at chrisbranch.co.uk Mon Nov 30 15:20:17 2015 From: x at chrisbranch.co.uk (Chris Branch) Date: Mon, 30 Nov 2015 15:20:17 +0000 Subject: Closing upstream keepalive connections in an invalid state Message-ID: <7FE1A2EB-124C-4A7F-804C-FC5D55D7579F@chrisbranch.co.uk> There was a thread on the nginx mailing list last week, regarding upstream keepalive connections being placed in an invalid state due to a partially-transmitted request body. With regard to that discussion, I?m submitting two patches for your review. The first adds a test case to nginx-tests demonstrating the problem as of nginx 1.9.7. Most of the change involves extending the mock origin to consume a request body, and verify the method transmitted. Currently, nginx will reuse the upstream connection for a subsequent request and (from the point of view of an upstream client) insert some or all of a request line and headers into the previous request's body. The result is typically a 400 Bad Request error due to a malformed request. The second patch fixes this bug using the method suggested by Maxim, i.e. close the upstream connection when a response is received before the request body is completely sent. This is the behaviour suggested in RFC 2616 section 8.2.2. The relevant Trac issue is #669. -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-fix-bad-request.patch Type: application/octet-stream Size: 623 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-tests.patch Type: application/octet-stream Size: 2639 bytes Desc: not available URL: From ru at nginx.com Mon Nov 30 16:02:30 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 30 Nov 2015 16:02:30 +0000 Subject: [nginx] Configure: removed comment obsolete in 3b763d36e055. Message-ID: details: http://hg.nginx.org/nginx/rev/2c8874c22073 branches: changeset: 6307:2c8874c22073 user: Ruslan Ermilov date: Mon Nov 30 19:01:53 2015 +0300 description: Configure: removed comment obsolete in 3b763d36e055. diffstat: auto/sources | 3 --- 1 files changed, 0 insertions(+), 3 deletions(-) diffs (13 lines): diff -r b1858fc47e3b -r 2c8874c22073 auto/sources --- a/auto/sources Fri Nov 06 15:22:43 2015 +0300 +++ b/auto/sources Mon Nov 30 19:01:53 2015 +0300 @@ -254,9 +254,6 @@ NGX_WIN32_ICONS="src/os/win32/nginx.ico" NGX_WIN32_RC="src/os/win32/nginx.rc" -# the http modules that have their logging formats -# must be after ngx_http_log_module - HTTP_MODULES="ngx_http_module \ ngx_http_core_module \ ngx_http_log_module \ From aviram at adallom.com Mon Nov 30 16:29:09 2015 From: aviram at adallom.com (Aviram Cohen) Date: Mon, 30 Nov 2015 16:29:09 +0000 Subject: [BUG] Gunzip module may cause requests to fail References: <2571972.HTVaeomj3b@vbart-workstation> Message-ID: Valentin, You are right, response bodies that are empty but still "encoded as gzip" are a bit malformed. Unfortunately, sometimes we don't control the behavior of the server. And still, I think Nginx should be able to handle such responses and not disconnect the client. 
Regards -----Original Message----- From: nginx-devel [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Valentin V. Bartenev Sent: ????? 30 ?????? 2015 17:16 To: nginx-devel at nginx.org Subject: Re: [BUG] Gunzip module may cause requests to fail On Monday 30 November 2015 13:20:02 Aviram Cohen wrote: > Hello! > > A couple of years ago, I've reported the following bug: > https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmailma > n.nginx.org%2fpipermail%2fnginx-devel%2f2013-October%2f004442.html&dat > a=01%7c01%7cavcohe%40064d.mgd.microsoft.com%7cc38e39e22c5742dc11e908d2 > f999378d%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=58REjgVeya98VvYp > wf6WE3veHmoaixSkNS8neZWFgi0%3d > > Responses with empty bodies with the header "Content-Encoding: gzip" used to cause requests to hang. > There has been a fix, but now it seems that the requests simply fails. > > Reviewing the code, it appears that the following happens: > - An empty last buffer arrives into the gunzip module's body filter. > - The gunzip module's ngx_http_gunzip_filter_add_data() calculates and > input buffer size (it is 0), and it is later in fed to zlib's > inflate(), along with the paramter Z_FINISH > - inflate() is later called, and returned Z_BUF_ERROR. This causes error handling to shut down the request and the connection. The client gets an empty response. > > I'm not sure what a proper fix would be, but I can suggest the following: > 1. In ngx_http_gunzip_header_filter() check the content length, and don't create a gunzip ctx if it is 0. > 2. In ngx_http_gunzip_body_filter(), check if gunzip has started ("!ctx->started"). If it hasn't and the input buffer is the last one, simply jump to the next filter. This handles the case that the response with is chunked encoding. > > Would be great to hear the development team's opinion. > Why do you think that it's a bug in nginx? For me "Content-Encoding gzip" without gzip wrapper doesn't look like a valid gzip encoded response. wbr, Valentin V. Bartenev _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmailman.nginx.org%2fmailman%2flistinfo%2fnginx-devel&data=01%7c01%7cavcohe%40064d.mgd.microsoft.com%7cc38e39e22c5742dc11e908d2f999378d%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=EHW7aPHhYvhW92eDs4TtiH5wUhitURsOo0FD8hKsd0s%3d From mdounin at mdounin.ru Mon Nov 30 17:37:17 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Nov 2015 20:37:17 +0300 Subject: [BUG] Gunzip module may cause requests to fail In-Reply-To: References: <2571972.HTVaeomj3b@vbart-workstation> Message-ID: <20151130173717.GJ74233@mdounin.ru> Hello! On Mon, Nov 30, 2015 at 04:29:09PM +0000, Aviram Cohen wrote: > You are right, response bodies that are empty but still "encoded > as gzip" are a bit malformed. > Unfortunately, sometimes we don't control the behavior of the > server. And still, I think Nginx should be able to handle such > responses and not disconnect the client. As you said, such responses are "a bit malformed". And nginx does its best at handling such malformed responses: it logs an error and closes the connection to prevent further damage. The only potentially better option I can think of would be to don't touch such responses at all. Unfortunately, this isn't really possible as response headers are already modified and sent to the client at the point when we know the response body is malformed. 
Another obvious solution would be to instruct nginx not to try to gunzip
responses if you don't control the responses of your backend and there
are malformed ones. Actually, this is the default.

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru  Mon Nov 30 18:09:46 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 30 Nov 2015 21:09:46 +0300
Subject: Closing upstream keepalive connections in an invalid state
In-Reply-To: <7FE1A2EB-124C-4A7F-804C-FC5D55D7579F@chrisbranch.co.uk>
References: <7FE1A2EB-124C-4A7F-804C-FC5D55D7579F@chrisbranch.co.uk>
Message-ID: <20151130180946.GK74233@mdounin.ru>

Hello!

On Mon, Nov 30, 2015 at 03:20:17PM +0000, Chris Branch wrote:

> There was a thread on the nginx mailing list last week
> regarding upstream keepalive connections being placed in an
> invalid state due to a partially-transmitted request body. With
> regard to that discussion, I'm submitting two patches for your
> review.
> 
> The first adds a test case to nginx-tests demonstrating the
> problem as of nginx 1.9.7. Most of the change involves extending
> the mock origin to consume a request body and verify the method
> transmitted. Currently, nginx will reuse the upstream connection
> for a subsequent request and (from the point of view of the
> upstream server) insert some or all of a request line and
> headers into the previous request's body. The result is
> typically a 400 Bad Request error due to a malformed request.

A test case for this was already committed by Sergey Kandaurov a
couple of days ago:

http://hg.nginx.org/nginx-tests/rev/2f292082c8a0

> The second patch fixes this bug using the method suggested by
> Maxim, i.e. close the upstream connection when a response is
> received before the request body is completely sent. This is the
> behaviour suggested in RFC 2616 section 8.2.2. The relevant Trac
> issue is #669.

The patch looks incomplete to me. It doesn't seem to handle the
"next upstream" case. And the condition used looks wrong, too, as
it doesn't take into account what nginx actually tried to send.

(And please also take a look at this article:
http://nginx.org/en/docs/contributing_changes.html)

A quick and dirty patch to address this is as follows, though I
can't say I like it.

diff --git a/src/http/modules/ngx_http_upstream_keepalive_module.c b/src/http/modules/ngx_http_upstream_keepalive_module.c
--- a/src/http/modules/ngx_http_upstream_keepalive_module.c
+++ b/src/http/modules/ngx_http_upstream_keepalive_module.c
@@ -302,6 +302,10 @@ ngx_http_upstream_free_keepalive_peer(ng
         goto invalid;
     }
 
+    if (!u->request_body_sent) {
+        goto invalid;
+    }
+
     if (ngx_terminate || ngx_exiting) {
         goto invalid;
     }
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -1818,6 +1818,8 @@ ngx_http_upstream_send_request(ngx_http_
 
     /* rc == NGX_OK */
 
+    u->request_body_sent = 1;
+
     if (c->write->timer_set) {
         ngx_del_timer(c->write);
     }
diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h
--- a/src/http/ngx_http_upstream.h
+++ b/src/http/ngx_http_upstream.h
@@ -370,6 +370,7 @@ struct ngx_http_upstream_s {
     unsigned                         upgrade:1;
 
     unsigned                         request_sent:1;
+    unsigned                         request_body_sent:1;
     unsigned                         header_sent:1;
 };

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru  Mon Nov 30 19:10:06 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 30 Nov 2015 22:10:06 +0300
Subject: Closing upstream keepalive connections in an invalid state
In-Reply-To: <20151130180946.GK74233@mdounin.ru>
References: <7FE1A2EB-124C-4A7F-804C-FC5D55D7579F@chrisbranch.co.uk>
 <20151130180946.GK74233@mdounin.ru>
Message-ID: <20151130191006.GL74233@mdounin.ru>

Hello!

On Mon, Nov 30, 2015 at 09:09:46PM +0300, Maxim Dounin wrote:

> Hello!
> 
> On Mon, Nov 30, 2015 at 03:20:17PM +0000, Chris Branch wrote:
> 
> > There was a thread on the nginx mailing list last week
> > regarding upstream keepalive connections being placed in an
> > invalid state due to a partially-transmitted request body. With
> > regard to that discussion, I'm submitting two patches for your
> > review.
> > 
> > The first adds a test case to nginx-tests demonstrating the
> > problem as of nginx 1.9.7. Most of the change involves extending
> > the mock origin to consume a request body and verify the method
> > transmitted. Currently, nginx will reuse the upstream connection
> > for a subsequent request and (from the point of view of the
> > upstream server) insert some or all of a request line and
> > headers into the previous request's body. The result is
> > typically a 400 Bad Request error due to a malformed request.
> 
> A test case for this was already committed by Sergey Kandaurov a
> couple of days ago:
> 
> http://hg.nginx.org/nginx-tests/rev/2f292082c8a0
> 
> > The second patch fixes this bug using the method suggested by
> > Maxim, i.e. close the upstream connection when a response is
> > received before the request body is completely sent. This is the
> > behaviour suggested in RFC 2616 section 8.2.2. The relevant Trac
> > issue is #669.
> 
> The patch looks incomplete to me. It doesn't seem to handle the
> "next upstream" case. And the condition used looks wrong, too, as
> it doesn't take into account what nginx actually tried to send.
> 
> (And please also take a look at this article:
> http://nginx.org/en/docs/contributing_changes.html)
> 
> A quick and dirty patch to address this is as follows, though I
> can't say I like it.
> 
> diff --git a/src/http/modules/ngx_http_upstream_keepalive_module.c b/src/http/modules/ngx_http_upstream_keepalive_module.c
> --- a/src/http/modules/ngx_http_upstream_keepalive_module.c
> +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c
> @@ -302,6 +302,10 @@ ngx_http_upstream_free_keepalive_peer(ng
>          goto invalid;
>      }
>  
> +    if (!u->request_body_sent) {
> +        goto invalid;
> +    }
> +
>      if (ngx_terminate || ngx_exiting) {
>          goto invalid;
>      }
> diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
> --- a/src/http/ngx_http_upstream.c
> +++ b/src/http/ngx_http_upstream.c
> @@ -1818,6 +1818,8 @@ ngx_http_upstream_send_request(ngx_http_
>  
>      /* rc == NGX_OK */
>  
> +    u->request_body_sent = 1;
> +
>      if (c->write->timer_set) {
>          ngx_del_timer(c->write);
>      }
> diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h
> --- a/src/http/ngx_http_upstream.h
> +++ b/src/http/ngx_http_upstream.h
> @@ -370,6 +370,7 @@ struct ngx_http_upstream_s {
>      unsigned                         upgrade:1;
>  
>      unsigned                         request_sent:1;
> +    unsigned                         request_body_sent:1;
>      unsigned                         header_sent:1;
>  };
> 

And an additional part to properly reset the flag on next upstream:

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -1434,6 +1434,7 @@ ngx_http_upstream_connect(ngx_http_reque
     }
 
     u->request_sent = 0;
+    u->request_body_sent = 0;
 
     if (rc == NGX_AGAIN) {
         ngx_add_timer(c->write, u->conf->connect_timeout);

-- 
Maxim Dounin
http://nginx.org/

From ru at nginx.com  Mon Nov 30 19:37:50 2015
From: ru at nginx.com (Ruslan Ermilov)
Date: Mon, 30 Nov 2015 19:37:50 +0000
Subject: [nginx] Configure: improved workaround for system perl on OS X.
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/7e241b36819d
branches:  
changeset: 6308:7e241b36819d
user:      Ruslan Ermilov 
date:      Mon Nov 30 12:04:29 2015 +0300
description:
Configure: improved workaround for system perl on OS X.

The workaround from baf2816d556d stopped working because the order of
"-arch x86_64" and "-arch i386" changed.

diffstat:

 auto/lib/perl/conf |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 2c8874c22073 -r 7e241b36819d auto/lib/perl/conf
--- a/auto/lib/perl/conf	Mon Nov 30 19:01:53 2015 +0300
+++ b/auto/lib/perl/conf	Mon Nov 30 12:04:29 2015 +0300
@@ -57,7 +57,7 @@ if test -n "$NGX_PERL_VER"; then
         if [ "$NGX_SYSTEM" = "Darwin" ]; then
             # OS X system perl wants to link universal binaries
             ngx_perl_ldopts=`echo $ngx_perl_ldopts \
-                | sed -e 's/-arch x86_64 -arch i386//'`
+                | sed -e 's/-arch i386//' -e 's/-arch x86_64//'`
         fi
 
         CORE_LINK="$CORE_LINK $ngx_perl_ldopts"

From junpei.yoshino at gmail.com  Mon Nov 30 23:08:30 2015
From: junpei.yoshino at gmail.com (junpei yoshino)
Date: Tue, 1 Dec 2015 08:08:30 +0900
Subject: [PATCH]add proxy_protocol_port variable for rfc6302
In-Reply-To: 
References: 
Message-ID: 

Hello,

Is there anything wrong? Could you give me any advice?

Best Regards,
Junpei Yoshino

# HG changeset patch
# User Junpei Yoshino 
# Date 1446723407 -32400
#      Thu Nov 05 20:36:47 2015 +0900
# Node ID 59cadccedf402ec325b078cb72a284465639e0fe
# Parent  4ccb37b04454dec6afb9476d085c06aea00adaa0
Http: add proxy_protocol_port variable for rfc6302

Logging the source port is recommended in RFC 6302.

Use cases: logging, and passing this information to upstream servers via
HTTP request headers.

diff -r 4ccb37b04454 -r 59cadccedf40 src/core/ngx_connection.h
--- a/src/core/ngx_connection.h	Fri Oct 30 21:43:30 2015 +0300
+++ b/src/core/ngx_connection.h	Thu Nov 05 20:36:47 2015 +0900
@@ -146,6 +146,7 @@
     ngx_str_t           addr_text;
 
     ngx_str_t           proxy_protocol_addr;
+    ngx_str_t           proxy_protocol_port;
 
 #if (NGX_SSL)
     ngx_ssl_connection_t  *ssl;
diff -r 4ccb37b04454 -r 59cadccedf40 src/core/ngx_proxy_protocol.c
--- a/src/core/ngx_proxy_protocol.c	Fri Oct 30 21:43:30 2015 +0300
+++ b/src/core/ngx_proxy_protocol.c	Thu Nov 05 20:36:47 2015 +0900
@@ -13,7 +13,7 @@
 ngx_proxy_protocol_read(ngx_connection_t *c, u_char *buf, u_char *last)
 {
     size_t  len;
-    u_char  ch, *p, *addr;
+    u_char  ch, *p, *addr, *port;
 
     p = buf;
     len = last - buf;
@@ -71,8 +71,56 @@
     ngx_memcpy(c->proxy_protocol_addr.data, addr, len);
     c->proxy_protocol_addr.len = len;
 
+    for ( ;; ) {
+        if (p == last) {
+            goto invalid;
+        }
+
+        ch = *p++;
+
+        if (ch == ' ') {
+            break;
+        }
+
+        if (ch != ':' && ch != '.'
+            && (ch < 'a' || ch > 'f')
+            && (ch < 'A' || ch > 'F')
+            && (ch < '0' || ch > '9'))
+        {
+            goto invalid;
+        }
+    }
+
+    port = p;
+
+    for ( ;; ) {
+        if (p == last) {
+            goto invalid;
+        }
+
+        ch = *p++;
+
+        if (ch == ' ') {
+            break;
+        }
+
+        if (ch < '0' || ch > '9')
+        {
+            goto invalid;
+        }
+    }
+
+    len = p - port - 1;
+
+    c->proxy_protocol_port.data = ngx_pnalloc(c->pool, len);
+    if (c->proxy_protocol_port.data == NULL) {
+        return NULL;
+    }
+
+    ngx_memcpy(c->proxy_protocol_port.data, port, len);
+    c->proxy_protocol_port.len = len;
+
     ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0,
                    "PROXY protocol address: \"%V\"", &c->proxy_protocol_addr);
+    ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0,
+                   "PROXY protocol port: \"%V\"", &c->proxy_protocol_port);
 
 skip:
 
diff -r 4ccb37b04454 -r 59cadccedf40 src/http/ngx_http_variables.c
--- a/src/http/ngx_http_variables.c	Fri Oct 30 21:43:30 2015 +0300
+++ b/src/http/ngx_http_variables.c	Thu Nov 05 20:36:47 2015 +0900
@@ -58,6 +58,8 @@
     ngx_http_variable_value_t *v, uintptr_t data);
 static ngx_int_t ngx_http_variable_proxy_protocol_addr(ngx_http_request_t *r,
     ngx_http_variable_value_t *v, uintptr_t data);
+static ngx_int_t ngx_http_variable_proxy_protocol_port(ngx_http_request_t *r,
+    ngx_http_variable_value_t *v, uintptr_t data);
 static ngx_int_t ngx_http_variable_server_addr(ngx_http_request_t *r,
     ngx_http_variable_value_t *v, uintptr_t data);
 static ngx_int_t ngx_http_variable_server_port(ngx_http_request_t *r,
@@ -192,6 +194,9 @@
     { ngx_string("proxy_protocol_addr"), NULL,
       ngx_http_variable_proxy_protocol_addr, 0, 0, 0 },
 
+    { ngx_string("proxy_protocol_port"), NULL,
+      ngx_http_variable_proxy_protocol_port, 0, 0, 0 },
+
     { ngx_string("server_addr"), NULL, ngx_http_variable_server_addr, 0, 0, 0 },
 
     { ngx_string("server_port"), NULL, ngx_http_variable_server_port, 0, 0, 0 },
@@ -1250,6 +1255,20 @@
 
 static ngx_int_t
+ngx_http_variable_proxy_protocol_port(ngx_http_request_t *r,
+    ngx_http_variable_value_t *v, uintptr_t data)
+{
+    v->len = r->connection->proxy_protocol_port.len;
+    v->valid = 1;
+    v->no_cacheable = 0;
+    v->not_found = 0;
+    v->data = r->connection->proxy_protocol_port.data;
+
+    return NGX_OK;
+}
+
+
+static ngx_int_t
 ngx_http_variable_server_addr(ngx_http_request_t *r,
     ngx_http_variable_value_t *v, uintptr_t data)
 {
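
For context, a minimal configuration sketch of how the new variable
could be used once this patch is applied. The log format name "pp", the
listen port, the header name X-Proxy-Protocol-Port, the backend address,
and the file paths are illustrative, not taken from the patch:

    # Log the client address and source port taken from the PROXY
    # protocol preamble ("PROXY TCP4 <src addr> <dst addr> <src port>
    # <dst port>") that a proxy such as haproxy prepends to each
    # connection.
    log_format pp '$proxy_protocol_addr:$proxy_protocol_port '
                  '[$time_local] "$request" $status $body_bytes_sent';

    server {
        # proxy_protocol on the listening socket makes nginx parse the
        # preamble before reading the HTTP request itself.
        listen 8080 proxy_protocol;

        access_log logs/pp_access.log pp;

        # The port can also be passed upstream in a request header,
        # matching the second use case from the commit message.
        location / {
            proxy_set_header X-Proxy-Protocol-Port $proxy_protocol_port;
            proxy_pass http://127.0.0.1:9000;
        }
    }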