From mdounin at mdounin.ru Fri Nov 1 10:46:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Nov 2013 14:46:17 +0400 Subject: [PATCH] RSA+DSA+ECC bundles In-Reply-To: <5272C477.6060901@comodo.com> References: <5261BF11.7010708@comodo.com> <20131019101424.GS2144@mdounin.ru> <52659F5B.1050708@comodo.com> <20131022120939.GU7074@mdounin.ru> <52667E15.1010301@comodo.com> <20131023002525.GZ7074@mdounin.ru> <20131024002653.GK7074@mdounin.ru> <5272C477.6060901@comodo.com> Message-ID: <20131101104616.GC95765@mdounin.ru> Hello! On Thu, Oct 31, 2013 at 08:58:31PM +0000, Rob Stradling wrote: > On 24/10/13 01:26, Maxim Dounin wrote: > > >As for multiple certs per se, I don't think it should be limited > >to recent OpenSSL versions only. As far as I can tell, current > >versions of OpenSSL will work just fine (well, mostly) as long as > >both ECDSA and RSA certs use the same certificate chain. I > >believe at least some CAs issue ECDSA certs this way, and this > >should work. > > > >Limiting support for multiple certs with separate certificate > >chains to only recent OpenSSL versions seems reasonable for me, > >but if Rob wants to try to make it work with older versions - I > >don't really object. If it won't be too hacky it might worth > >supporting. > > Updated patch attached. This implements multiple certs and makes > OCSP Stapling work correctly with them. It works with all of the > active OpenSSL branches (including 0_9_8). > > I'm afraid it's a much larger patch than I anticipated it would be > when I started working on it! > > Maxim, does this patch look commit-able? It looks like it needs to be broken down into a patch series to be at least reviewable. I haven't looked into details yet, but I tend to dislike at least changing the ngx_ssl_certificate() function into a monster which configures everything. Preserving a separate call to configure stapling would be much better. Checks for extra ceritifcate chains with unsupported OpenSSL versions looks a bit too extensive. I would think of just dropping them completely. -- Maxim Dounin http://nginx.org/en/donation.html From rob.stradling at comodo.com Fri Nov 1 12:09:08 2013 From: rob.stradling at comodo.com (Rob Stradling) Date: Fri, 01 Nov 2013 12:09:08 +0000 Subject: [PATCH] RSA+DSA+ECC bundles In-Reply-To: <20131101104616.GC95765@mdounin.ru> References: <5261BF11.7010708@comodo.com> <20131019101424.GS2144@mdounin.ru> <52659F5B.1050708@comodo.com> <20131022120939.GU7074@mdounin.ru> <52667E15.1010301@comodo.com> <20131023002525.GZ7074@mdounin.ru> <20131024002653.GK7074@mdounin.ru> <5272C477.6060901@comodo.com> <20131101104616.GC95765@mdounin.ru> Message-ID: <527399E4.9090307@comodo.com> On 01/11/13 10:46, Maxim Dounin wrote: >> I'm afraid it's a much larger patch than I anticipated it would be >> when I started working on it! >> >> Maxim, does this patch look commit-able? Maxim, thanks for your initial comments. > It looks like it needs to be broken down into a patch series to > be at least reviewable. I thought you might say that. Is it acceptable for there to be compilation errors if you only apply some of the patches in a patch series? (I was assuming that would be unacceptable, hence the one large patch). > I haven't looked into details yet, but I tend to dislike at least > changing the ngx_ssl_certificate() function into a monster which > configures everything. Preserving a separate call to configure > stapling would be much better. 
I had hoped to keep those calls separate, but I couldn't see a clean way to keep track of multiple server certs plus associated issuer certs inbetween the calls to ngx_ssl_certificate() and ngx_ssl_stapling(). By combining the certificate configuration and stapling configuration functions, I made this problem go away. To preserve ngx_ssl_certificate() and ngx_ssl_stapling() as separate functions, I think I'd have to: - change ngx_ssl_certificate_index to keep an array (either ngx_array_t or STACK_OF) of server certs. - have ngx_ssl_certificate() put all of the intermediate CA certificates it encounters into a temporary cert store; have ngx_ssl_stapling() look in this temporary cert store for issuer certificates; then destroy the temporary cert store. Would that be preferable? Or do you have any better ideas? > Checks for extra ceritifcate chains with unsupported OpenSSL > versions looks a bit too extensive. I would think of just > dropping them completely. OK, (assuming you mean drop the checks, rather than drop support for those OpenSSL versions!) -- Rob Stradling Senior Research & Development Scientist COMODO - Creating Trust Online From mdounin at mdounin.ru Fri Nov 1 12:37:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 01 Nov 2013 12:37:26 +0000 Subject: [nginx] Win32: plugged memory leak. Message-ID: details: http://hg.nginx.org/nginx/rev/dea321e5c021 branches: changeset: 5437:dea321e5c021 user: Maxim Dounin date: Thu Oct 31 18:23:49 2013 +0400 description: Win32: plugged memory leak. diffstat: src/os/win32/ngx_files.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (12 lines): diff --git a/src/os/win32/ngx_files.c b/src/os/win32/ngx_files.c --- a/src/os/win32/ngx_files.c +++ b/src/os/win32/ngx_files.c @@ -753,6 +753,8 @@ ngx_win32_check_filename(u_char *name, u goto invalid; } + ngx_free(lu); + return NGX_OK; invalid: From mdounin at mdounin.ru Fri Nov 1 14:25:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Nov 2013 18:25:06 +0400 Subject: [PATCH] RSA+DSA+ECC bundles In-Reply-To: <527399E4.9090307@comodo.com> References: <52659F5B.1050708@comodo.com> <20131022120939.GU7074@mdounin.ru> <52667E15.1010301@comodo.com> <20131023002525.GZ7074@mdounin.ru> <20131024002653.GK7074@mdounin.ru> <5272C477.6060901@comodo.com> <20131101104616.GC95765@mdounin.ru> <527399E4.9090307@comodo.com> Message-ID: <20131101142506.GE95765@mdounin.ru> Hello! On Fri, Nov 01, 2013 at 12:09:08PM +0000, Rob Stradling wrote: > On 01/11/13 10:46, Maxim Dounin wrote: > > >>I'm afraid it's a much larger patch than I anticipated it would be > >>when I started working on it! > >> > >>Maxim, does this patch look commit-able? > > Maxim, thanks for your initial comments. > > >It looks like it needs to be broken down into a patch series to > >be at least reviewable. > > I thought you might say that. Is it acceptable for there to be > compilation errors if you only apply some of the patches in a patch > series? (I was assuming that would be unacceptable, hence the one > large patch). Each patch is expected to make sense by it's own, and shouldn't break anything previously working, including compilation (but may do e.g. otherwise unneeded and/or strange refactoring, or provide some incomplete functionality). > >I haven't looked into details yet, but I tend to dislike at least > >changing the ngx_ssl_certificate() function into a monster which > >configures everything. Preserving a separate call to configure > >stapling would be much better. 
> > I had hoped to keep those calls separate, but I couldn't see a clean > way to keep track of multiple server certs plus associated issuer > certs inbetween the calls to ngx_ssl_certificate() and > ngx_ssl_stapling(). > By combining the certificate configuration and stapling > configuration functions, I made this problem go away. > > To preserve ngx_ssl_certificate() and ngx_ssl_stapling() as separate > functions, I think I'd have to: > - change ngx_ssl_certificate_index to keep an array (either > ngx_array_t or STACK_OF) of server certs. > - have ngx_ssl_certificate() put all of the intermediate CA > certificates it encounters into a temporary cert store; have > ngx_ssl_stapling() look in this temporary cert store for issuer > certificates; then destroy the temporary cert store. > > Would that be preferable? Or do you have any better ideas? Given the number of things we have to store here and there, I tend to think we should eventually just add an index with some generic pointer to a struct with our data. To minimize changes in this particular case, using an array is probably good enough. > >Checks for extra ceritifcate chains with unsupported OpenSSL > >versions looks a bit too extensive. I would think of just > >dropping them completely. > > OK, (assuming you mean drop the checks, rather than drop support for > those OpenSSL versions!) Yes, I mean to drop checks. -- Maxim Dounin http://nginx.org/en/donation.html From mellery451 at gmail.com Fri Nov 1 16:41:20 2013 From: mellery451 at gmail.com (Michael Ellery) Date: Fri, 01 Nov 2013 09:41:20 -0700 Subject: how to clear a cookie value in request object Message-ID: <5273D9B0.3020004@gmail.com> devs, I'm looking for advice about how to properly clear a cookie value from the current request so that it will be omitted from the request when it goes upstream (to proxy). Here's the code I currently have: static ngx_str_t my_cookie_name = ngx_string("MyMagicCookieName"); ngx_uint_t i; ngx_table_elt_t **h; ngx_str_t null_header_value = ngx_null_string; h = r->headers_in.cookies.elts; for (i = 0; i < r->headers_in.cookies.nelts; i++) { if (h[i]->value.len > my_cookie_name.len && 0 == ngx_strncmp(h[i]->value.data, my_cookie_name.data, my_cookie_name.len)) { h[i]->value = null_header_value; break; } } my main concern is leaking memory -- will the nulling of this value cause memory to be leaked? If so, how can I fix this? A secondary concern is that I believe value can actually contain a list of comma separated of cookie name/vals, although I've not actually encountered that problem so far. What would be the right way to wipe out only PART of the value data, if that's indeed what I need to do? Thanks, Mike Ellery From mdounin at mdounin.ru Fri Nov 1 23:41:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 2 Nov 2013 03:41:26 +0400 Subject: how to clear a cookie value in request object In-Reply-To: <5273D9B0.3020004@gmail.com> References: <5273D9B0.3020004@gmail.com> Message-ID: <20131101234126.GJ95765@mdounin.ru> Hello! On Fri, Nov 01, 2013 at 09:41:20AM -0700, Michael Ellery wrote: > devs, > > I'm looking for advice about how to properly clear a cookie value from the current request so that it will be omitted > from the request when it goes upstream (to proxy). 
Here's the code I currently have: > > static ngx_str_t my_cookie_name = ngx_string("MyMagicCookieName"); > > > ngx_uint_t i; > ngx_table_elt_t **h; > ngx_str_t null_header_value = ngx_null_string; > h = r->headers_in.cookies.elts; > for (i = 0; i < r->headers_in.cookies.nelts; i++) { > if (h[i]->value.len > my_cookie_name.len && > 0 == ngx_strncmp(h[i]->value.data, my_cookie_name.data, my_cookie_name.len)) > { > h[i]->value = null_header_value; > break; > } > } > > > my main concern is leaking memory -- will the nulling of this value cause memory to be leaked? If so, how can I fix this? > > A secondary concern is that I believe value can actually contain a list of comma separated of cookie name/vals, although > I've not actually encountered that problem so far. What would be the right way to wipe out only PART of the value data, > if that's indeed what I need to do? There are at least two problems with the above code: - It tries to modify r->headers_in, which is wrong. The r->headers_in contains headers as received from a client, and they are not expected to be modified. Modifications will likely result in undefined behaviour. - As you correctly assume, there can be more than one cookie in a single Cookie header (and usually there are - as long as there are more than one cookie used for a domain). Proper solution would be to provide a new value for the Cookie header in a variable (taking into account multiple cookies in a single header), and then use proxy_set_header Cookie $your_new_cookie_value; in configuration, much like with the $proxy_add_x_forwarded_for variable as available for X-Forwarded-For header modification. -- Maxim Dounin http://nginx.org/en/donation.html From mellery451 at gmail.com Sat Nov 2 18:04:30 2013 From: mellery451 at gmail.com (Mike Ellery) Date: Sat, 2 Nov 2013 11:04:30 -0700 Subject: how to clear a cookie value in request object Message-ID: > Message: 5 > Date: Sat, 2 Nov 2013 03:41:26 +0400 > From: Maxim Dounin > To: nginx-devel at nginx.org > Subject: Re: how to clear a cookie value in request object > Message-ID: <20131101234126.GJ95765 at mdounin.ru> > Content-Type: text/plain; charset=us-ascii > > Hello! > > On Fri, Nov 01, 2013 at 09:41:20AM -0700, Michael Ellery wrote: > >> devs, >> >> I'm looking for advice about how to properly clear a cookie value from the current request so that it will be omitted >> from the request when it goes upstream (to proxy). Here's the code I currently have: >> >> static ngx_str_t my_cookie_name = ngx_string("MyMagicCookieName"); >> >> >> ngx_uint_t i; >> ngx_table_elt_t **h; >> ngx_str_t null_header_value = ngx_null_string; >> h = r->headers_in.cookies.elts; >> for (i = 0; i < r->headers_in.cookies.nelts; i++) { >> if (h[i]->value.len > my_cookie_name.len && >> 0 == ngx_strncmp(h[i]->value.data, my_cookie_name.data, my_cookie_name.len)) >> { >> h[i]->value = null_header_value; >> break; >> } >> } >> >> >> my main concern is leaking memory -- will the nulling of this value cause memory to be leaked? If so, how can I fix this? >> >> A secondary concern is that I believe value can actually contain a list of comma separated of cookie name/vals, although >> I've not actually encountered that problem so far. What would be the right way to wipe out only PART of the value data, >> if that's indeed what I need to do? > > There are at least two problems with the above code: > > - It tries to modify r->headers_in, which is wrong. The > r->headers_in contains headers as received from a client, and > they are not expected to be modified. 
Modifications will likely > result in undefined behaviour. > > - As you correctly assume, there can be more than one cookie in a > single Cookie header (and usually there are - as long as there > are more than one cookie used for a domain). > > Proper solution would be to provide a new value for the Cookie > header in a variable (taking into account multiple cookies in a > single header), and then use > > proxy_set_header Cookie $your_new_cookie_value; > > in configuration, much like with the $proxy_add_x_forwarded_for > variable as available for X-Forwarded-For header modification. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > Maxim, Thanks for your helpful response. Is there a predefined variable that I can use to store my modified cookie value or will I just need to create a new custom variable value? Thanks, Mike -- +++++++++++++++++++ Mike Ellery mellery451 at gmail.com +++++++++++++++++++ From piotr at cloudflare.com Mon Nov 4 10:27:44 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 02:27:44 -0800 Subject: [PATCH] SSL: support ALPN (IETF's successor to NPN) In-Reply-To: References: Message-ID: Hey, minor style change (moved declaration of hc back to the top). Any chances for this getting in before OpenSSL-1.0.2 is released? Best reagards, Piotr Sikora # HG changeset patch # User Piotr Sikora # Date 1383560396 28800 # Mon Nov 04 02:19:56 2013 -0800 # Node ID 78d793c51d5aa0ba8eec48340de49bfc3d17c97d # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c SSL: support ALPN (IETF's successor to NPN). Signed-off-by: Piotr Sikora diff -r dea321e5c021 -r 78d793c51d5a src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Thu Oct 31 18:23:49 2013 +0400 +++ b/src/http/modules/ngx_http_ssl_module.c Mon Nov 04 02:19:56 2013 -0800 @@ -17,6 +17,17 @@ typedef ngx_int_t (*ngx_ssl_variable_han #define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5" #define NGX_DEFAULT_ECDH_CURVE "prime256v1" +#if (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + || defined TLSEXT_TYPE_next_proto_neg) +#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1" +#endif + + +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation +static int ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, + const unsigned char **out, unsigned char *outlen, + const unsigned char *in, unsigned int inlen, void *arg); +#endif #ifdef TLSEXT_TYPE_next_proto_neg static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, @@ -274,10 +285,66 @@ static ngx_http_variable_t ngx_http_ssl static ngx_str_t ngx_http_ssl_sess_id_ctx = ngx_string("HTTP"); +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + +static int +ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, + unsigned char *outlen, const unsigned char *in, unsigned int inlen, + void *arg) +{ + unsigned int srvlen; + unsigned char *srv; +#if (NGX_DEBUG) + unsigned int i; +#endif +#if (NGX_HTTP_SPDY) + ngx_http_connection_t *hc; +#endif +#if (NGX_HTTP_SPDY || NGX_DEBUG) + ngx_connection_t *c; + + c = ngx_ssl_get_connection(ssl_conn); +#endif + +#if (NGX_DEBUG) + for (i = 0; i < inlen; i += in[i] + 1) { + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "SSL ALPN supported by client: %*s", in[i], &in[i + 1]); + } +#endif + +#if (NGX_HTTP_SPDY) + hc = c->data; + + if (hc->addr_conf->spdy) { + srv = (unsigned char *) NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; + srvlen = sizeof(NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; + + } else +#endif + { + srv = (unsigned char *) 
NGX_HTTP_NPN_ADVERTISE; + srvlen = sizeof(NGX_HTTP_NPN_ADVERTISE) - 1; + } + + if (SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen, + in, inlen) + != OPENSSL_NPN_NEGOTIATED) + { + return SSL_TLSEXT_ERR_NOACK; + } + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "SSL ALPN selected: %*s", *outlen, *out); + + return SSL_TLSEXT_ERR_OK; +} + +#endif + + #ifdef TLSEXT_TYPE_next_proto_neg -#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1" - static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, unsigned int *outlen, void *arg) @@ -542,6 +609,10 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * #endif +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + SSL_CTX_set_alpn_select_cb(conf->ssl.ctx, ngx_http_ssl_alpn_select, NULL); +#endif + #ifdef TLSEXT_TYPE_next_proto_neg SSL_CTX_set_next_protos_advertised_cb(conf->ssl.ctx, ngx_http_ssl_npn_advertised, NULL); diff -r dea321e5c021 -r 78d793c51d5a src/http/ngx_http.c --- a/src/http/ngx_http.c Thu Oct 31 18:23:49 2013 +0400 +++ b/src/http/ngx_http.c Mon Nov 04 02:19:56 2013 -0800 @@ -1349,11 +1349,12 @@ ngx_http_add_address(ngx_conf_t *cf, ngx } } -#if (NGX_HTTP_SPDY && NGX_HTTP_SSL && !defined TLSEXT_TYPE_next_proto_neg) +#if (NGX_HTTP_SPDY && NGX_HTTP_SSL && !defined TLSEXT_TYPE_next_proto_neg \ + && !defined TLSEXT_TYPE_application_layer_protocol_negotiation) if (lsopt->spdy && lsopt->ssl) { ngx_conf_log_error(NGX_LOG_WARN, cf, 0, - "nginx was built without OpenSSL NPN support, " - "SPDY is not enabled for %s", lsopt->addr); + "nginx was built without OpenSSL ALPN and NPN " + "support, SPDY is not enabled for %s", lsopt->addr); } #endif diff -r dea321e5c021 -r 78d793c51d5a src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Thu Oct 31 18:23:49 2013 +0400 +++ b/src/http/ngx_http_request.c Mon Nov 04 02:19:56 2013 -0800 @@ -728,18 +728,31 @@ ngx_http_ssl_handshake_handler(ngx_conne c->ssl->no_wait_shutdown = 1; -#if (NGX_HTTP_SPDY && defined TLSEXT_TYPE_next_proto_neg) +#if (NGX_HTTP_SPDY \ + && (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + || defined TLSEXT_TYPE_next_proto_neg)) { unsigned int len; const unsigned char *data; static const ngx_str_t spdy = ngx_string(NGX_SPDY_NPN_NEGOTIATED); - SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + SSL_get0_alpn_selected(c->ssl->connection, &data, &len); if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) { ngx_http_spdy_init(c->read); return; } +#endif + +#ifdef TLSEXT_TYPE_next_proto_neg + SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); + + if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) { + ngx_http_spdy_init(c->read); + return; + } +#endif } #endif diff -r dea321e5c021 -r 78d793c51d5a src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Thu Oct 31 18:23:49 2013 +0400 +++ b/src/http/ngx_http_spdy.h Mon Nov 04 02:19:56 2013 -0800 @@ -17,7 +17,8 @@ #define NGX_SPDY_VERSION 2 -#ifdef TLSEXT_TYPE_next_proto_neg +#if (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + || defined TLSEXT_TYPE_next_proto_neg) #define NGX_SPDY_NPN_ADVERTISE "\x06spdy/2" #define NGX_SPDY_NPN_NEGOTIATED "spdy/2" #endif From piotr at cloudflare.com Mon Nov 4 10:27:56 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 02:27:56 -0800 Subject: [PATCH] SSL: support automatic selection of ECDH temporary key parameters In-Reply-To: References: Message-ID: Hey, it looks that the new OpenSSL API 
is more powerful than I originally expected, much better patch attached. Any chances for this getting in before OpenSSL-1.0.2 is released? Best reagards, Piotr Sikora # HG changeset patch # User Piotr Sikora # Date 1383560410 28800 # Mon Nov 04 02:20:10 2013 -0800 # Node ID 3da92dd8525d7c6155e230d8f367ee9defcff01d # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c SSL: support automatic selection of ECDH temporary key parameters. The colon separated list of supported curves can be provided using either curve NIDs: ssl_ecdh_curve secp521r1:secp384r1:prime256v1; or names: ssl_ecdh_curve P-521:P-384:P-256; Signed-off-by: Piotr Sikora diff -r dea321e5c021 -r 3da92dd8525d src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Thu Oct 31 18:23:49 2013 +0400 +++ b/src/event/ngx_event_openssl.c Mon Nov 04 02:20:10 2013 -0800 @@ -679,6 +679,25 @@ ngx_ssl_ecdh_curve(ngx_conf_t *cf, ngx_s { #if OPENSSL_VERSION_NUMBER >= 0x0090800fL #ifndef OPENSSL_NO_ECDH +#ifdef SSL_CTRL_SET_ECDH_AUTO + + if (SSL_CTX_set1_curves_list(ssl->ctx, name->data) == 0) { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "Unknown curve in \"%s\"", name->data); + return NGX_ERROR; + } + + if (SSL_CTX_set_ecdh_auto(ssl->ctx, 1) == 0) { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "Unable to set automatic curve selection for \"%s\"", + name->data); + return NGX_ERROR; + } + + return NGX_OK; + +#else + int nid; EC_KEY *ecdh; @@ -708,6 +727,8 @@ ngx_ssl_ecdh_curve(ngx_conf_t *cf, ngx_s SSL_CTX_set_tmp_ecdh(ssl->ctx, ecdh); EC_KEY_free(ecdh); + +#endif #endif #endif From agentzh at gmail.com Mon Nov 4 20:55:45 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 4 Nov 2013 12:55:45 -0800 Subject: [PATCH] Cache: gracefully exit the cache manager process Message-ID: Hello! I've recently run into an issue in the nginx cache manager process which does not call ngx_worker_process_exit to gracefully exit, thus tragically skipping all my cleanup code registered by my (3rd-party) modules. Below is a patch to fix this. This fix also makes the cache manager process valgrind-clean. Thanks! -agentzh # HG changeset patch # User Yichun Zhang # Date 1383598130 28800 # Node ID f64218e1ac963337d84092536f588b8e0d99bbaa # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c Cache: gracefully exit the cache manager process. diff -r dea321e5c021 -r f64218e1ac96 src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c Thu Oct 31 18:23:49 2013 +0400 +++ b/src/os/unix/ngx_process_cycle.c Mon Nov 04 12:48:50 2013 -0800 @@ -1335,7 +1335,7 @@ if (ngx_terminate || ngx_quit) { ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "exiting"); - exit(0); + ngx_worker_process_exit(cycle); } if (ngx_reopen) { -------------- next part -------------- A non-text attachment was scrubbed... Name: cache-manager-exit.patch Type: text/x-patch Size: 688 bytes Desc: not available URL: From piotr at cloudflare.com Mon Nov 4 21:14:17 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 13:14:17 -0800 Subject: [PATCH] SSL: support automatic selection of ECDH temporary key parameters In-Reply-To: References: Message-ID: Hey, > Any chances for this getting in before OpenSSL-1.0.2 is released? Since I've been asked about this off-list: No, I don't expect OpenSSL-1.0.2 to be released any time soon, but I want to let people use those features if they are willing to compile against OpenSSL-1.0.2 from snapshots or git. 
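For reference, trying these patches before 1.0.2 ships comes down to pointing nginx's --with-openssl at an OpenSSL source checkout; a rough sketch with placeholder paths (the SPDY option and directory layout are assumptions, not part of Piotr's patches):

    # clone or unpack an OpenSSL 1.0.2 snapshot next to the nginx tree;
    # ../openssl below is a placeholder for that source directory
    ./configure --with-http_ssl_module \
                --with-http_spdy_module \
                --with-openssl=../openssl
    make

nginx configures and statically links the bundled sources itself, which is also why the "make clean only if Makefile exists" configure change posted elsewhere in this thread matters: a fresh git checkout has no Makefile to clean yet.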
Best regards, Piotr Sikora From mdounin at mdounin.ru Mon Nov 4 21:31:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 01:31:31 +0400 Subject: [PATCH] Cache: gracefully exit the cache manager process In-Reply-To: References: Message-ID: <20131104213131.GM95765@mdounin.ru> Hello! On Mon, Nov 04, 2013 at 12:55:45PM -0800, Yichun Zhang (agentzh) wrote: > Hello! > > I've recently run into an issue in the nginx cache manager process > which does not call ngx_worker_process_exit to gracefully exit, thus > tragically skipping all my cleanup code registered by my (3rd-party) > modules. > > Below is a patch to fix this. This fix also makes the cache manager > process valgrind-clean. > > Thanks! > -agentzh > > # HG changeset patch > # User Yichun Zhang > # Date 1383598130 28800 > # Node ID f64218e1ac963337d84092536f588b8e0d99bbaa > # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c > Cache: gracefully exit the cache manager process. > > diff -r dea321e5c021 -r f64218e1ac96 src/os/unix/ngx_process_cycle.c > --- a/src/os/unix/ngx_process_cycle.c Thu Oct 31 18:23:49 2013 +0400 > +++ b/src/os/unix/ngx_process_cycle.c Mon Nov 04 12:48:50 2013 -0800 > @@ -1335,7 +1335,7 @@ > > if (ngx_terminate || ngx_quit) { > ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "exiting"); > - exit(0); > + ngx_worker_process_exit(cycle); > } The cache manager process isn't worker process, so calling ngx_worker_process_exit() looks strange. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Nov 4 21:57:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 01:57:24 +0400 Subject: how to clear a cookie value in request object In-Reply-To: References: Message-ID: <20131104215724.GO95765@mdounin.ru> Hello! On Sat, Nov 02, 2013 at 11:04:30AM -0700, Mike Ellery wrote: [...] > > Proper solution would be to provide a new value for the Cookie > > header in a variable (taking into account multiple cookies in a > > single header), and then use > > > > proxy_set_header Cookie $your_new_cookie_value; > > > > in configuration, much like with the $proxy_add_x_forwarded_for > > variable as available for X-Forwarded-For header modification. > > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > > > > > Maxim, > > Thanks for your helpful response. Is there a predefined variable that > I can use to store my modified cookie value or will I just need to > create a new custom variable value? You have to create your custom variable. -- Maxim Dounin http://nginx.org/en/donation.html From agentzh at gmail.com Mon Nov 4 22:07:23 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 4 Nov 2013 14:07:23 -0800 Subject: [PATCH] Cache: gracefully exit the cache manager process In-Reply-To: <20131104213131.GM95765@mdounin.ru> References: <20131104213131.GM95765@mdounin.ru> Message-ID: Hello! On Mon, Nov 4, 2013 at 1:31 PM, Maxim Dounin wrote: > > The cache manager process isn't worker process, so calling > ngx_worker_process_exit() looks strange. > But ngx_cache_manager_process_cycle() is already calling ngx_worker_process_init(). Are you suggesting removing this call as well? Regards, -agentzh From piotr at cloudflare.com Mon Nov 4 23:41:32 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 15:41:32 -0800 Subject: [PATCH] Configure: assorted changes. Message-ID: Hey, attached patch with a few configure changes for 3rd party libs that were in my local tree. 
I didn't want to spam you with one-line commits, so I squashed them as one, hopefully that's OK as they are kind of related. Best regards, Piotr Sikora # HG changeset patch # User Piotr Sikora # Date 1383608221 28800 # Mon Nov 04 15:37:01 2013 -0800 # Node ID e204bf14905a0471f6efaeecb9690147983c3841 # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c Configure: assorted changes. Libatomic: - pass CC to the configure script, - call "make clean" before rebuild. OpenSSL: - pass CC to the configure script, - call "make clean" only if Makefile exists (allows build from git), - don't build man pages (25% faster build time). Signed-off-by: Piotr Sikora diff -r dea321e5c021 -r e204bf14905a auto/lib/libatomic/make --- a/auto/lib/libatomic/make Thu Oct 31 18:23:49 2013 +0400 +++ b/auto/lib/libatomic/make Mon Nov 04 15:37:01 2013 -0800 @@ -6,9 +6,11 @@ cat << END >> $NGX_MAKEFILE $NGX_LIBATOMIC/src/libatomic_ops.a: $NGX_LIBATOMIC/Makefile - cd $NGX_LIBATOMIC && \$(MAKE) + cd $NGX_LIBATOMIC \\ + && \$(MAKE) clean \\ + && \$(MAKE) $NGX_LIBATOMIC/Makefile: $NGX_MAKEFILE - cd $NGX_LIBATOMIC && ./configure + cd $NGX_LIBATOMIC && CC="\$(CC)" ./configure END diff -r dea321e5c021 -r e204bf14905a auto/lib/openssl/make --- a/auto/lib/openssl/make Thu Oct 31 18:23:49 2013 +0400 +++ b/auto/lib/openssl/make Mon Nov 04 15:37:01 2013 -0800 @@ -55,10 +55,10 @@ END $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE cd $OPENSSL \\ - && \$(MAKE) clean \\ - && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ + && if [ -f Makefile ]; then \$(MAKE) clean; fi \\ + && CC="\$(CC)" ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ && \$(MAKE) \\ - && \$(MAKE) install LIBDIR=lib + && \$(MAKE) install_sw LIBDIR=lib END From piotr at cloudflare.com Mon Nov 4 23:47:56 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 15:47:56 -0800 Subject: [PATCH] Configure: assorted changes. In-Reply-To: References: Message-ID: Hey, so it looks like gmail doesn't like tabs. Patch attached. Best regards, Piotr Sikora -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx__configure_assorted_changes.patch Type: application/octet-stream Size: 1698 bytes Desc: not available URL: From mdounin at mdounin.ru Tue Nov 5 00:06:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 04:06:05 +0400 Subject: [PATCH] Cache: gracefully exit the cache manager process In-Reply-To: References: <20131104213131.GM95765@mdounin.ru> Message-ID: <20131105000605.GP95765@mdounin.ru> Hello! On Mon, Nov 04, 2013 at 02:07:23PM -0800, Yichun Zhang (agentzh) wrote: > Hello! > > On Mon, Nov 4, 2013 at 1:31 PM, Maxim Dounin wrote: > > > > The cache manager process isn't worker process, so calling > > ngx_worker_process_exit() looks strange. > > > > But ngx_cache_manager_process_cycle() is already calling > ngx_worker_process_init(). Yes, and this is known to cause problems. E.g., more than one module was broken due to init_process callback being called in cache manager/loader processes which are quite different from worker processes. > Are you suggesting removing this call as well? No, but it is something we may consider. In any case, I don't think that adding calls to worker-related stuff is a good idea, at least without a thoughtful analysis of possible side effects. 
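For context, the cleanup agentzh refers to is usually installed through a module's exit_process slot, which is only invoked from ngx_worker_process_exit(); a cache manager that terminates via a bare exit(0) therefore never reaches it. A minimal sketch with purely hypothetical names (not code from the patch):

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    static void
    ngx_http_example_exit_process(ngx_cycle_t *cycle)
    {
        /* flush module state, close descriptors, release external resources */
    }

    static ngx_http_module_t  ngx_http_example_module_ctx = {
        NULL, NULL,                     /* pre/postconfiguration */
        NULL, NULL,                     /* create/init main conf */
        NULL, NULL,                     /* create/merge srv conf */
        NULL, NULL                      /* create/merge loc conf */
    };

    ngx_module_t  ngx_http_example_module = {
        NGX_MODULE_V1,
        &ngx_http_example_module_ctx,   /* module context */
        NULL,                           /* module directives */
        NGX_HTTP_MODULE,                /* module type */
        NULL,                           /* init master */
        NULL,                           /* init module */
        NULL,                           /* init process */
        NULL,                           /* init thread */
        NULL,                           /* exit thread */
        ngx_http_example_exit_process,  /* exit process */
        NULL,                           /* exit master */
        NGX_MODULE_V1_PADDING
    };

ngx_worker_process_exit() also destroys the cycle pool on its way out, which is presumably why the cache manager becomes valgrind-clean with agentzh's change.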
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Nov 5 00:33:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 04:33:09 +0400 Subject: [PATCH] Configure: assorted changes. In-Reply-To: References: Message-ID: <20131105003309.GQ95765@mdounin.ru> Hello! On Mon, Nov 04, 2013 at 03:41:32PM -0800, Piotr Sikora wrote: > Hey, > attached patch with a few configure changes for 3rd party libs > that were in my local tree. > > I didn't want to spam you with one-line commits, so I squashed > them as one, hopefully that's OK as they are kind of related. I would suggest breaking this at least into several patches. [...] > && \$(MAKE) \\ > - && \$(MAKE) install LIBDIR=lib > + && \$(MAKE) install_sw LIBDIR=lib This is not going to work as oldest version of the OpenSSL library nginx supports is 0.9.7. -- Maxim Dounin http://nginx.org/en/donation.html From piotr at cloudflare.com Tue Nov 5 00:40:23 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 16:40:23 -0800 Subject: [PATCH] Configure: assorted changes. In-Reply-To: <20131105003309.GQ95765@mdounin.ru> References: <20131105003309.GQ95765@mdounin.ru> Message-ID: Hey Maxim, > I would suggest breaking this at least into several patches. OK, will do. > This is not going to work as oldest version of the OpenSSL library > nginx supports is 0.9.7. Seriously? This is available since 0.9.7e. Do you really expect people to compile nginx against OpenSSL sources older than 9 years? Best regards, Piotr Sikora From piotr at cloudflare.com Tue Nov 5 01:06:41 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 17:06:41 -0800 Subject: [PATCH] Configure: pass CC to the configure scripts. Message-ID: # HG changeset patch # User Piotr Sikora # Date 1383613222 28800 # Mon Nov 04 17:00:22 2013 -0800 # Node ID 76f8950686ce0adc72e491c600295de8986532fb # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c Configure: pass CC to the configure scripts. Signed-off-by: Piotr Sikora diff -r dea321e5c021 -r 76f8950686ce auto/lib/libatomic/make --- a/auto/lib/libatomic/make Thu Oct 31 18:23:49 2013 +0400 +++ b/auto/lib/libatomic/make Mon Nov 04 17:00:22 2013 -0800 @@ -9,6 +9,6 @@ cd $NGX_LIBATOMIC && \$(MAKE) $NGX_LIBATOMIC/Makefile: $NGX_MAKEFILE - cd $NGX_LIBATOMIC && ./configure + cd $NGX_LIBATOMIC && CC="\$(CC)" ./configure END diff -r dea321e5c021 -r 76f8950686ce auto/lib/openssl/make --- a/auto/lib/openssl/make Thu Oct 31 18:23:49 2013 +0400 +++ b/auto/lib/openssl/make Mon Nov 04 17:00:22 2013 -0800 @@ -56,7 +56,7 @@ END $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE cd $OPENSSL \\ && \$(MAKE) clean \\ - && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ + && CC="\$(CC)" ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ && \$(MAKE) \\ && \$(MAKE) install LIBDIR=lib -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx__configure_1.patch Type: application/octet-stream Size: 1169 bytes Desc: not available URL: From piotr at cloudflare.com Tue Nov 5 01:06:53 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 17:06:53 -0800 Subject: [PATCH] Configure: call "make clean" before rebuild of libatomic. Message-ID: # HG changeset patch # User Piotr Sikora # Date 1383613223 28800 # Mon Nov 04 17:00:23 2013 -0800 # Node ID 68fefa8cd1d6c5164347bb3cfa33fc3d8f0acd14 # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c Configure: call "make clean" before rebuild of libatomic. 
Signed-off-by: Piotr Sikora diff -r dea321e5c021 -r 68fefa8cd1d6 auto/lib/libatomic/make --- a/auto/lib/libatomic/make Thu Oct 31 18:23:49 2013 +0400 +++ b/auto/lib/libatomic/make Mon Nov 04 17:00:23 2013 -0800 @@ -6,7 +6,9 @@ cat << END >> $NGX_MAKEFILE $NGX_LIBATOMIC/src/libatomic_ops.a: $NGX_LIBATOMIC/Makefile - cd $NGX_LIBATOMIC && \$(MAKE) + cd $NGX_LIBATOMIC \\ + && \$(MAKE) clean \\ + && \$(MAKE) $NGX_LIBATOMIC/Makefile: $NGX_MAKEFILE cd $NGX_LIBATOMIC && ./configure -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx__configure_2.patch Type: application/octet-stream Size: 846 bytes Desc: not available URL: From piotr at cloudflare.com Tue Nov 5 01:07:03 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 17:07:03 -0800 Subject: [PATCH] Configure: call "make clean" for OpenSSL only if Makefile exists. Message-ID: # HG changeset patch # User Piotr Sikora # Date 1383613225 28800 # Mon Nov 04 17:00:25 2013 -0800 # Node ID 6d03c58d4b1c3fdd87f42a9ceaf8daa68d11365a # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c Configure: call "make clean" for OpenSSL only if Makefile exists. This change allows to build nginx against git checkout of OpenSSL. Signed-off-by: Piotr Sikora diff -r dea321e5c021 -r 6d03c58d4b1c auto/lib/openssl/make --- a/auto/lib/openssl/make Thu Oct 31 18:23:49 2013 +0400 +++ b/auto/lib/openssl/make Mon Nov 04 17:00:25 2013 -0800 @@ -55,7 +55,7 @@ END $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE cd $OPENSSL \\ - && \$(MAKE) clean \\ + && if [ -f Makefile ]; then \$(MAKE) clean; fi \\ && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ && \$(MAKE) \\ && \$(MAKE) install LIBDIR=lib -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx__configure_3.patch Type: application/octet-stream Size: 875 bytes Desc: not available URL: From piotr at cloudflare.com Tue Nov 5 01:07:11 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 17:07:11 -0800 Subject: Configure: don't build man pages for OpenSSL. Message-ID: # HG changeset patch # User Piotr Sikora # Date 1383613226 28800 # Mon Nov 04 17:00:26 2013 -0800 # Node ID 4f53edac4459684b69ed8f90c86d8eba8bc1185d # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c Configure: don't build man pages for OpenSSL. This change speedups build time by 25%. Signed-off-by: Piotr Sikora diff -r dea321e5c021 -r 4f53edac4459 auto/lib/openssl/make --- a/auto/lib/openssl/make Thu Oct 31 18:23:49 2013 +0400 +++ b/auto/lib/openssl/make Mon Nov 04 17:00:26 2013 -0800 @@ -58,7 +58,7 @@ END && \$(MAKE) clean \\ && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ && \$(MAKE) \\ - && \$(MAKE) install LIBDIR=lib + && \$(MAKE) install_sw LIBDIR=lib END -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx__configure_4.patch Type: application/octet-stream Size: 746 bytes Desc: not available URL: From aaron.peschel at gmail.com Tue Nov 5 01:14:12 2013 From: aaron.peschel at gmail.com (Aaron Peschel) Date: Mon, 4 Nov 2013 17:14:12 -0800 Subject: Add Support for Weak ETags In-Reply-To: References: Message-ID: Hello, I reduced the scope of the changes to just the gzip and gunzip modules. -Aaron # HG changeset patch # User Aaron Peschel # Date 1383613159 28800 # Mon Nov 04 16:59:19 2013 -0800 # Node ID c0a50e6aac95feac6393dd6bff0b30bd1a05ef9e # Parent e6a1623f87bc96d5ec62b6d77356aa47dbc60756 Add Support for Weak ETags This is a response to rev 4746 which removed ETags. 
4746 removes the ETag field from the header in all instances where content is modified by the web server prior to being sent to the requesting client. This is far more stringent than required by the HTTP spec. The HTTP spec requires that strict ETags be dependent on the variant that is returned by the server. While removing all ETags from these variants meets the spec, it is a bit extreme. This commit modifies the gzip and gunzip modules to check if the ETag is marked as a weak ETag. IFF that case, the ETag is retained, and not dropped. diff -r e6a1623f87bc -r c0a50e6aac95 src/http/modules/ngx_http_gunzip_filter_module.c --- a/src/http/modules/ngx_http_gunzip_filter_module.c Mon Oct 21 18:20:32 2013 +0800 +++ b/src/http/modules/ngx_http_gunzip_filter_module.c Mon Nov 04 16:59:19 2013 -0800 @@ -165,7 +165,9 @@ ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); + if (!ngx_http_has_weak_etag(r)) { + ngx_http_clear_etag(r); + } return ngx_http_next_header_filter(r); } diff -r e6a1623f87bc -r c0a50e6aac95 src/http/modules/ngx_http_gzip_filter_module.c --- a/src/http/modules/ngx_http_gzip_filter_module.c Mon Oct 21 18:20:32 2013 +0800 +++ b/src/http/modules/ngx_http_gzip_filter_module.c Mon Nov 04 16:59:19 2013 -0800 @@ -306,7 +306,9 @@ ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); + if (!ngx_http_has_weak_etag(r)) { + ngx_http_clear_etag(r); + } return ngx_http_next_header_filter(r); } diff -r e6a1623f87bc -r c0a50e6aac95 src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h Mon Oct 21 18:20:32 2013 +0800 +++ b/src/http/ngx_http_core_module.h Mon Nov 04 16:59:19 2013 -0800 @@ -572,6 +572,11 @@ r->headers_out.location = NULL; \ } +#define ngx_http_has_weak_etag(r) \ + \ + ((r->headers_out.etag) && \ + (ngx_strncmp(r->headers_out.etag->value.data, "W/", 2))) + #define ngx_http_clear_etag(r) \ \ if (r->headers_out.etag) { \ On Tue, Oct 29, 2013 at 3:04 PM, Aaron Peschel wrote: > On Fri, Oct 25, 2013 at 2:54 PM, Piotr Sikora wrote: >> Hi Aaron, >> I disagree with your patch... While retaining weak ETags in case of >> gzip/gunzip modules is correct, other modules are modifying the >> content and weak ETags should be removed from responses processed by >> them. >> >> Best regards, >> Piotr Sikora >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > What are your thoughts on the correct way to proceed from here? Should there be > two macros, ngx_http_clear_strict_etag and ngx_http_clear_etag and then have > the gunzip and gzip only clear the strict etag? > > EG > > #define ngx_http_clear_etag(r) \ > \ > if (r->headers_out.etag) { \ > r->headers_out.etag->hash = 0; \ > r->headers_out.etag = NULL; \ > } > > #define ngx_http_clear_strict_etag(r) \ > \ > if (r->headers_out.etag) { \ > if (! ngx_strncmp(r->headers_out.etag->value.data, "W/", 2)) { \ > r->headers_out.etag->hash = 0; \ > r->headers_out.etag = NULL; \ > } \ > } > > The gzip module is the module I am most interested in providing weak ETag > support for. Please let me know what the suggested path is here, and I will put > in the work for it. > > Thank you, > > -Aaron From mdounin at mdounin.ru Tue Nov 5 01:17:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 05:17:31 +0400 Subject: [PATCH] Configure: assorted changes. 
In-Reply-To: References: <20131105003309.GQ95765@mdounin.ru> Message-ID: <20131105011731.GS95765@mdounin.ru> Hello! On Mon, Nov 04, 2013 at 04:40:23PM -0800, Piotr Sikora wrote: > > nginx supports is 0.9.7. > > Seriously? This is available since 0.9.7e. Do you really expect people > to compile nginx against OpenSSL sources older than 9 years? That's what we claim to support now (since nginx 0.8.7, see CHANGES). We'll probably switch to 0.9.8 as a minimum supported version eventually, but it's yet to happen. -- Maxim Dounin http://nginx.org/en/donation.html From piotr at cloudflare.com Tue Nov 5 01:22:29 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 17:22:29 -0800 Subject: [PATCH] Configure: assorted changes. In-Reply-To: <20131105011731.GS95765@mdounin.ru> References: <20131105003309.GQ95765@mdounin.ru> <20131105011731.GS95765@mdounin.ru> Message-ID: Hey Maxim, > That's what we claim to support now (since nginx 0.8.7, see > CHANGES). We'll probably switch to 0.9.8 as a minimum supported > version eventually, but it's yet to happen. Fair enough, but I don't think it's reasonable for anyone to compile nginx against OpenSSL _sources_ that are 9 or more years old... That just doesn't make any sense. Support for dynamically-linked OpenSSL is of course a different matter. Best regards, Piotr Sikora From mdounin at mdounin.ru Tue Nov 5 02:58:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 06:58:53 +0400 Subject: [PATCH] Configure: assorted changes. In-Reply-To: References: <20131105003309.GQ95765@mdounin.ru> <20131105011731.GS95765@mdounin.ru> Message-ID: <20131105025853.GT95765@mdounin.ru> Hello! On Mon, Nov 04, 2013 at 05:22:29PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > That's what we claim to support now (since nginx 0.8.7, see > > CHANGES). We'll probably switch to 0.9.8 as a minimum supported > > version eventually, but it's yet to happen. > > Fair enough, but I don't think it's reasonable for anyone to compile > nginx against OpenSSL _sources_ that are 9 or more years old... That > just doesn't make any sense. At least this is how compatibility with OpenSSL 0.9.7 is tested. And breaking this is not an option. -- Maxim Dounin http://nginx.org/en/donation.html From piotr at cloudflare.com Tue Nov 5 05:40:40 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 4 Nov 2013 21:40:40 -0800 Subject: [PATCH] Configure: assorted changes. In-Reply-To: <20131105025853.GT95765@mdounin.ru> References: <20131105003309.GQ95765@mdounin.ru> <20131105011731.GS95765@mdounin.ru> <20131105025853.GT95765@mdounin.ru> Message-ID: Hey Maxim, > At least this is how compatibility with OpenSSL 0.9.7 is tested. And > breaking this is not an option. Fair enough, just keep in mind that 0.9.7 branch reached EOL 5 years ago. Best regards, Piotr Sikora From faskiri.devel at gmail.com Tue Nov 5 07:50:03 2013 From: faskiri.devel at gmail.com (Fasih) Date: Tue, 5 Nov 2013 13:20:03 +0530 Subject: Question about asynchronous filter_header/body Message-ID: Hi I want to have a filter header/body that makes an asynchronous call. On success, a completion handler is called. The result of this completion handler decides the output of filter header/body. I understand subrequest can be used to do this. But are there alternatives to this? Lets say, I want to filter the body of the response to uppercase the body after 10 secs, how do I do that? This is what I tried: 1. In the body filter, create a timer, set the handler to my function, and return NGX_DONE 2. 
In the handler, I call the next_body_filter This works but there are edgecases that I am not sure how to handle, e.g. how to handle errors from next_body_filter, (specifically NGX_AGAIN) and so on. Pointers will be greatly appreciated. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 5 14:50:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 18:50:55 +0400 Subject: [PATCH] Configure: call "make clean" before rebuild of libatomic. In-Reply-To: References: Message-ID: <20131105145055.GW95765@mdounin.ru> Hello! On Mon, Nov 04, 2013 at 05:06:53PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1383613223 28800 > # Mon Nov 04 17:00:23 2013 -0800 > # Node ID 68fefa8cd1d6c5164347bb3cfa33fc3d8f0acd14 > # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c > Configure: call "make clean" before rebuild of libatomic. > > Signed-off-by: Piotr Sikora > > diff -r dea321e5c021 -r 68fefa8cd1d6 auto/lib/libatomic/make > --- a/auto/lib/libatomic/make Thu Oct 31 18:23:49 2013 +0400 > +++ b/auto/lib/libatomic/make Mon Nov 04 17:00:23 2013 -0800 > @@ -6,7 +6,9 @@ > cat << END >> $NGX_MAKEFILE > > $NGX_LIBATOMIC/src/libatomic_ops.a: $NGX_LIBATOMIC/Makefile > - cd $NGX_LIBATOMIC && \$(MAKE) > + cd $NGX_LIBATOMIC \\ > + && \$(MAKE) clean \\ > + && \$(MAKE) > > $NGX_LIBATOMIC/Makefile: $NGX_MAKEFILE > cd $NGX_LIBATOMIC && ./configure Which problem this patch tries to solve? As far as I see, libatomic properly rebuilds itself based on config.status, and an extra "make clean" shouldn't be needed. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Nov 5 16:14:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 20:14:12 +0400 Subject: [PATCH] Configure: pass CC to the configure scripts. In-Reply-To: References: Message-ID: <20131105161412.GX95765@mdounin.ru> Hello! On Mon, Nov 04, 2013 at 05:06:41PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1383613222 28800 > # Mon Nov 04 17:00:22 2013 -0800 > # Node ID 76f8950686ce0adc72e491c600295de8986532fb > # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c > Configure: pass CC to the configure scripts. > > Signed-off-by: Piotr Sikora > > diff -r dea321e5c021 -r 76f8950686ce auto/lib/libatomic/make > --- a/auto/lib/libatomic/make Thu Oct 31 18:23:49 2013 +0400 > +++ b/auto/lib/libatomic/make Mon Nov 04 17:00:22 2013 -0800 > @@ -9,6 +9,6 @@ > cd $NGX_LIBATOMIC && \$(MAKE) > > $NGX_LIBATOMIC/Makefile: $NGX_MAKEFILE > - cd $NGX_LIBATOMIC && ./configure > + cd $NGX_LIBATOMIC && CC="\$(CC)" ./configure > > END > diff -r dea321e5c021 -r 76f8950686ce auto/lib/openssl/make > --- a/auto/lib/openssl/make Thu Oct 31 18:23:49 2013 +0400 > +++ b/auto/lib/openssl/make Mon Nov 04 17:00:22 2013 -0800 > @@ -56,7 +56,7 @@ END > $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE > cd $OPENSSL \\ > && \$(MAKE) clean \\ > - && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ > + && CC="\$(CC)" ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ > && \$(MAKE) \\ > && \$(MAKE) install LIBDIR=lib OpenSSL's ./config code suggests that this may affect e.g. builds on SunOS with gcc due to GCCVER no longer being set. Have you looked into this? 
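A quick way to check that concern, assuming OpenSSL's ./config still accepts the -t (dry-run) switch on the platform in question, is to compare the Configure target picked with and without CC forced; a sketch only, not part of the patch:

    # on the SunOS/gcc machine; -t prints the Configure invocation
    # ./config would run without actually configuring anything
    cd openssl-1.0.1e          # hypothetical source directory
    ./config -t                # target chosen by the script's own gcc probe
    CC=gcc ./config -t         # target chosen with CC forced, as the patch does

If the two targets differ (e.g. solaris64-sparcv9-gcc versus solaris-sparcv9-gcc), the GCCVER concern is real on that box.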
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Nov 5 16:30:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 05 Nov 2013 16:30:46 +0000 Subject: [nginx] Configure: call "make clean" for OpenSSL only if Makefil... Message-ID: details: http://hg.nginx.org/nginx/rev/f817f9d1cded branches: changeset: 5438:f817f9d1cded user: Piotr Sikora date: Mon Nov 04 17:00:25 2013 -0800 description: Configure: call "make clean" for OpenSSL only if Makefile exists. This change allows to build nginx against git checkout of OpenSSL. Signed-off-by: Piotr Sikora diffstat: auto/lib/openssl/make | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/auto/lib/openssl/make b/auto/lib/openssl/make --- a/auto/lib/openssl/make +++ b/auto/lib/openssl/make @@ -55,7 +55,7 @@ END $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE cd $OPENSSL \\ - && \$(MAKE) clean \\ + && if [ -f Makefile ]; then \$(MAKE) clean; fi \\ && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ && \$(MAKE) \\ && \$(MAKE) install LIBDIR=lib From mdounin at mdounin.ru Tue Nov 5 16:30:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Nov 2013 20:30:53 +0400 Subject: [PATCH] Configure: call "make clean" for OpenSSL only if Makefile exists. In-Reply-To: References: Message-ID: <20131105163053.GY95765@mdounin.ru> Hello! On Mon, Nov 04, 2013 at 05:07:03PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1383613225 28800 > # Mon Nov 04 17:00:25 2013 -0800 > # Node ID 6d03c58d4b1c3fdd87f42a9ceaf8daa68d11365a > # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c > Configure: call "make clean" for OpenSSL only if Makefile exists. > > This change allows to build nginx against git checkout of OpenSSL. > > Signed-off-by: Piotr Sikora > > diff -r dea321e5c021 -r 6d03c58d4b1c auto/lib/openssl/make > --- a/auto/lib/openssl/make Thu Oct 31 18:23:49 2013 +0400 > +++ b/auto/lib/openssl/make Mon Nov 04 17:00:25 2013 -0800 > @@ -55,7 +55,7 @@ END > > $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE > cd $OPENSSL \\ > - && \$(MAKE) clean \\ > + && if [ -f Makefile ]; then \$(MAKE) clean; fi \\ > && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ > && \$(MAKE) \\ > && \$(MAKE) install LIBDIR=lib Committed, thanks. -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Tue Nov 5 17:00:08 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 5 Nov 2013 21:00:08 +0400 Subject: [PATCH] SSL: support ALPN (IETF's successor to NPN) In-Reply-To: References: Message-ID: <201311052100.08114.vbart@nginx.com> On Monday 04 November 2013 14:27:44 Piotr Sikora wrote: > Hey, > minor style change (moved declaration of hc back to the top). > > Any chances for this getting in before OpenSSL-1.0.2 is released? > [..] Sorry for the long delay. I'm going to look at this one and some other of your patches next week. wbr, Valentin V. Bartenev From piotr at cloudflare.com Tue Nov 5 19:58:38 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 5 Nov 2013 11:58:38 -0800 Subject: [PATCH] Configure: call "make clean" before rebuild of libatomic. In-Reply-To: <20131105145055.GW95765@mdounin.ru> References: <20131105145055.GW95765@mdounin.ru> Message-ID: Hey Maxim, > Which problem this patch tries to solve? As far as I see, > libatomic properly rebuilds itself based on config.status, and an > extra "make clean" shouldn't be needed. While not a problem per se, it the library wouldn't be rebuild if CC changed... 
But the change is mostly just to keep it in sync with rest of the 3rd-party libraries, which nginx always rebuilds from scratch. Best regards, Piotr Sikora From piotr at cloudflare.com Tue Nov 5 20:25:48 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 5 Nov 2013 12:25:48 -0800 Subject: [PATCH] Configure: pass CC to the configure scripts. In-Reply-To: <20131105161412.GX95765@mdounin.ru> References: <20131105161412.GX95765@mdounin.ru> Message-ID: Hey Maxim, > OpenSSL's ./config code suggests that this may affect e.g. builds > on SunOS with gcc due to GCCVER no longer being set. Have you > looked into this? Good call, I did not. It seems that it could indeed cause issues on 32-bit only Solaris on SPARCv9, since it would try to build 64-bit binary, at least that's what I think would happened... But does that configuration even exist (SPARCv9 seems to be 64-bit) and is it supposed to be supported by nginx? Best regards, Piotr Sikora From piotr at cloudflare.com Tue Nov 5 20:44:24 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 5 Nov 2013 12:44:24 -0800 Subject: [PATCH] Configure: pass CC to the configure scripts. In-Reply-To: References: <20131105161412.GX95765@mdounin.ru> Message-ID: Hey, > It seems that it could indeed cause issues on 32-bit only Solaris on > SPARCv9, since it would try to build 64-bit binary, at least that's > what I think would happened... Actually, I believe that I know how to fix this, I just need to get ahold of a Solaris box. For now please put this on hold. Best regards, Piotr Sikora From mdounin at mdounin.ru Wed Nov 6 15:31:50 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Nov 2013 19:31:50 +0400 Subject: [PATCH] Configure: call "make clean" before rebuild of libatomic. In-Reply-To: References: <20131105145055.GW95765@mdounin.ru> Message-ID: <20131106153150.GG95765@mdounin.ru> Hello! On Tue, Nov 05, 2013 at 11:58:38AM -0800, Piotr Sikora wrote: > Hey Maxim, > > > Which problem this patch tries to solve? As far as I see, > > libatomic properly rebuilds itself based on config.status, and an > > extra "make clean" shouldn't be needed. > > While not a problem per se, it the library wouldn't be rebuild if CC > changed... But the change is mostly just to keep it in sync with rest > of the 3rd-party libraries, which nginx always rebuilds from scratch. Shouldn't it be "make distclean" before configure then, like it's done with other libraries? -- Maxim Dounin http://nginx.org/en/donation.html From fasihullah.askiri at gmail.com Thu Nov 7 10:55:47 2013 From: fasihullah.askiri at gmail.com (Fasihullah Askiri) Date: Thu, 7 Nov 2013 16:25:47 +0530 Subject: Question about asynchronous filter_header/body In-Reply-To: References: Message-ID: Ping? On Tue, Nov 5, 2013 at 1:20 PM, Fasih wrote: > Hi > > I want to have a filter header/body that makes an asynchronous call. On > success, a completion handler is called. The result of this completion > handler decides the output of filter header/body. I understand subrequest > can be used to do this. But are there alternatives to this? > > Lets say, I want to filter the body of the response to uppercase the body > after 10 secs, how do I do that? > > This is what I tried: > 1. In the body filter, create a timer, set the handler to my function, and > return NGX_DONE > 2. In the handler, I call the next_body_filter > > This works but there are edgecases that I am not sure how to handle, e.g. > how to handle errors from next_body_filter, (specifically NGX_AGAIN) and so > on. 
Pointers will be greatly appreciated. > > Thanks! > -- +Fasih Life is 10% what happens to you and 90% how you react to it -------------- next part -------------- An HTML attachment was scrubbed... URL: From manlio.perillo at gmail.com Fri Nov 8 11:38:00 2013 From: manlio.perillo at gmail.com (Manlio Perillo) Date: Fri, 08 Nov 2013 12:38:00 +0100 Subject: about CPP Makefile macro Message-ID: <527CCD18.9070504@gmail.com> Hi. I have noted that the generated Makefile has a CPP macro, but it is never used. I also tried with: ./configure --with-cpp=xxx and no errors were reported (xxx is not a placeholder, but the actual non existent name I have used). CPP support was added in revision (hg) 286, but was only declared. Was the CPP variable ever used? Regards Manlio Perillo From mdounin at mdounin.ru Fri Nov 8 12:20:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 Nov 2013 16:20:48 +0400 Subject: about CPP Makefile macro In-Reply-To: <527CCD18.9070504@gmail.com> References: <527CCD18.9070504@gmail.com> Message-ID: <20131108122048.GU95765@mdounin.ru> Hello! On Fri, Nov 08, 2013 at 12:38:00PM +0100, Manlio Perillo wrote: > Hi. > > I have noted that the generated Makefile has a CPP macro, but it is > never used. > > I also tried with: > > ./configure --with-cpp=xxx > > and no errors were reported (xxx is not a placeholder, but the > actual non existent name I have used). > > CPP support was added in revision (hg) 286, but was only declared. > Was the CPP variable ever used? As far as I can see, it's still used in auto/lib/md5/make and auto/lib/sha1/make. And it's actually needed when using md5 library at least from openssl-0.9.7/crypto/md5. -- Maxim Dounin http://nginx.org/en/donation.html From manlio.perillo at gmail.com Fri Nov 8 14:28:25 2013 From: manlio.perillo at gmail.com (Manlio Perillo) Date: Fri, 08 Nov 2013 15:28:25 +0100 Subject: about CPP Makefile macro In-Reply-To: <20131108122048.GU95765@mdounin.ru> References: <527CCD18.9070504@gmail.com> <20131108122048.GU95765@mdounin.ru> Message-ID: <527CF509.5090000@gmail.com> On 08/11/2013 13:20, Maxim Dounin wrote: > Hello! > > On Fri, Nov 08, 2013 at 12:38:00PM +0100, Manlio Perillo wrote: > >> Hi. >> >> I have noted that the generated Makefile has a CPP macro, but it is >> never used. >> >> I also tried with: >> >> ./configure --with-cpp=xxx >> >> and no errors were reported (xxx is not a placeholder, but the >> actual non existent name I have used). >> >> CPP support was added in revision (hg) 286, but was only declared. >> Was the CPP variable ever used? > > As far as I can see, it's still used in auto/lib/md5/make and > auto/lib/sha1/make. And it's actually needed when using md5 > library at least from openssl-0.9.7/crypto/md5. > Right, sorry. I did not noticed that a separate make was called when building md5 and sha1. Regards Manlio Perillo From piotr at cloudflare.com Mon Nov 11 10:00:50 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 11 Nov 2013 02:00:50 -0800 Subject: [PATCH] Configure: call "make clean" before rebuild of libatomic. In-Reply-To: <20131106153150.GG95765@mdounin.ru> References: <20131105145055.GW95765@mdounin.ru> <20131106153150.GG95765@mdounin.ru> Message-ID: Hey Maxim, > Shouldn't it be "make distclean" before configure then, like it's > done with other libraries? Sure, why not. 
# HG changeset patch # User Piotr Sikora # Date 1384163987 28800 # Mon Nov 11 01:59:47 2013 -0800 # Node ID 9b3bbaddb1ef7bb52bae1e8967ad13b017ea00c4 # Parent f817f9d1cded8316dc804b50527dfab19d928834 Configure: call "make distclean" for libatomic. Signed-off-by: Piotr Sikora diff -r f817f9d1cded -r 9b3bbaddb1ef auto/lib/libatomic/make --- a/auto/lib/libatomic/make Mon Nov 04 17:00:25 2013 -0800 +++ b/auto/lib/libatomic/make Mon Nov 11 01:59:47 2013 -0800 @@ -9,6 +9,8 @@ cd $NGX_LIBATOMIC && \$(MAKE) $NGX_LIBATOMIC/Makefile: $NGX_MAKEFILE - cd $NGX_LIBATOMIC && ./configure + cd $NGX_LIBATOMIC \\ + && if [ -f Makefile ]; then \$(MAKE) distclean; fi \\ + && ./configure END Best regards, Piotr Sikora -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx__configure_2a.patch Type: application/octet-stream Size: 740 bytes Desc: not available URL: From mdounin at mdounin.ru Mon Nov 11 13:46:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Nov 2013 13:46:27 +0000 Subject: [nginx] Configure: call "make distclean" for libatomic. Message-ID: details: http://hg.nginx.org/nginx/rev/9b3bbaddb1ef branches: changeset: 5439:9b3bbaddb1ef user: Piotr Sikora date: Mon Nov 11 01:59:47 2013 -0800 description: Configure: call "make distclean" for libatomic. Signed-off-by: Piotr Sikora diffstat: auto/lib/libatomic/make | 4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diffs (13 lines): diff --git a/auto/lib/libatomic/make b/auto/lib/libatomic/make --- a/auto/lib/libatomic/make +++ b/auto/lib/libatomic/make @@ -9,6 +9,8 @@ cd $NGX_LIBATOMIC && \$(MAKE) $NGX_LIBATOMIC/Makefile: $NGX_MAKEFILE - cd $NGX_LIBATOMIC && ./configure + cd $NGX_LIBATOMIC \\ + && if [ -f Makefile ]; then \$(MAKE) distclean; fi \\ + && ./configure END From mdounin at mdounin.ru Mon Nov 11 13:46:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Nov 2013 17:46:34 +0400 Subject: [PATCH] Configure: call "make clean" before rebuild of libatomic. In-Reply-To: References: <20131105145055.GW95765@mdounin.ru> <20131106153150.GG95765@mdounin.ru> Message-ID: <20131111134634.GC95765@mdounin.ru> Hello! On Mon, Nov 11, 2013 at 02:00:50AM -0800, Piotr Sikora wrote: > Hey Maxim, > > > Shouldn't it be "make distclean" before configure then, like it's > > done with other libraries? > > Sure, why not. > > # HG changeset patch > # User Piotr Sikora > # Date 1384163987 28800 > # Mon Nov 11 01:59:47 2013 -0800 > # Node ID 9b3bbaddb1ef7bb52bae1e8967ad13b017ea00c4 > # Parent f817f9d1cded8316dc804b50527dfab19d928834 > Configure: call "make distclean" for libatomic. > > Signed-off-by: Piotr Sikora > > diff -r f817f9d1cded -r 9b3bbaddb1ef auto/lib/libatomic/make > --- a/auto/lib/libatomic/make Mon Nov 04 17:00:25 2013 -0800 > +++ b/auto/lib/libatomic/make Mon Nov 11 01:59:47 2013 -0800 > @@ -9,6 +9,8 @@ > cd $NGX_LIBATOMIC && \$(MAKE) > > $NGX_LIBATOMIC/Makefile: $NGX_MAKEFILE > - cd $NGX_LIBATOMIC && ./configure > + cd $NGX_LIBATOMIC \\ > + && if [ -f Makefile ]; then \$(MAKE) distclean; fi \\ > + && ./configure > > END Committed, thnx. -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Mon Nov 11 14:56:56 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 11 Nov 2013 14:56:56 +0000 Subject: [nginx] SPDY: fixed request hang with the auth request module. 
Message-ID: details: http://hg.nginx.org/nginx/rev/cbb9a6c7493c branches: changeset: 5440:cbb9a6c7493c user: Valentin Bartenev date: Mon Nov 11 18:49:35 2013 +0400 description: SPDY: fixed request hang with the auth request module. We should just call post_handler() when subrequest wants to read body, like it happens for HTTP since rev. f458156fd46a. An attempt to init request body for subrequests results in hang if the body was not already read. diffstat: src/http/ngx_http_request_body.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 9b3bbaddb1ef -r cbb9a6c7493c src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c Mon Nov 11 01:59:47 2013 -0800 +++ b/src/http/ngx_http_request_body.c Mon Nov 11 18:49:35 2013 +0400 @@ -43,7 +43,7 @@ ngx_http_read_client_request_body(ngx_ht r->main->count++; #if (NGX_HTTP_SPDY) - if (r->spdy_stream) { + if (r->spdy_stream && r == r->main) { rc = ngx_http_spdy_read_request_body(r, post_handler); goto done; } From ru at nginx.com Tue Nov 12 04:23:41 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 12 Nov 2013 08:23:41 +0400 Subject: IPv6 support in resolver In-Reply-To: References: <20130617153021.GH72282@mdounin.ru> <51E3F300.3070509@nginx.com> Message-ID: <20131112042341.GC5585@lo0.su> Hi, On Tue, Oct 29, 2013 at 04:06:35PM +0400, ToSHiC wrote: > Yesterday you had a talk on Highload++ and said about lack of IPv6 resolver > support. Do you have any news about my patch? This is just to let you know I've started to work on adding IPv6 support into nginx's resolver. From thierry.magnien at sfr.com Tue Nov 12 15:04:01 2013 From: thierry.magnien at sfr.com (MAGNIEN, Thierry) Date: Tue, 12 Nov 2013 15:04:01 +0000 Subject: Passing response from upstream server to another upstream server Message-ID: <5D103CE839D50E4CBC62C9FD7B83287C274C0275@EXCN015.encara.local.ads> Hi, I would like to achieve something like this: - Nginx receives request from client - Nginx forwards request to an upstream server and reads response - Nginx sends this response to another upstream server (it will perform some content modification) - Nginx gets final response and sends it over to client Is there a way to achieve this by using existing modules and an adequate configuration, or do I need to write my own "double upstream" module ? Thanks a lot, Thierry From mdounin at mdounin.ru Tue Nov 12 16:46:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Nov 2013 20:46:27 +0400 Subject: Add Support for Weak ETags In-Reply-To: References: Message-ID: <20131112164627.GO95765@mdounin.ru> Hello! On Mon, Nov 04, 2013 at 05:14:12PM -0800, Aaron Peschel wrote: > I reduced the scope of the changes to just the gzip and gunzip modules. I don't think that limiting the scope to gzip and gunzip is correct either. From cache validation point of view, weak etags are mostly identical to Last-Modified, and removing etags completely should mostly follow ngx_http_clear_last_modified(). Here is a patch, untested: # HG changeset patch # User Maxim Dounin # Date 1384274233 -14400 # Tue Nov 12 20:37:13 2013 +0400 # Node ID 2c4e71a1c9a3467ba53115b693549c41f248164e # Parent f3c95d2d7d5e2d69c4b4fd99421b3193d43adab0 Entity tags: only clear strict etags if weak ones are ok. 
diff --git a/src/http/modules/ngx_http_addition_filter_module.c b/src/http/modules/ngx_http_addition_filter_module.c --- a/src/http/modules/ngx_http_addition_filter_module.c +++ b/src/http/modules/ngx_http_addition_filter_module.c @@ -121,7 +121,7 @@ ngx_http_addition_header_filter(ngx_http ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); + ngx_http_clear_strict_etag(r); return ngx_http_next_header_filter(r); } diff --git a/src/http/modules/ngx_http_gunzip_filter_module.c b/src/http/modules/ngx_http_gunzip_filter_module.c --- a/src/http/modules/ngx_http_gunzip_filter_module.c +++ b/src/http/modules/ngx_http_gunzip_filter_module.c @@ -165,7 +165,7 @@ ngx_http_gunzip_header_filter(ngx_http_r ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); + ngx_http_clear_strict_etag(r); return ngx_http_next_header_filter(r); } diff --git a/src/http/modules/ngx_http_gzip_filter_module.c b/src/http/modules/ngx_http_gzip_filter_module.c --- a/src/http/modules/ngx_http_gzip_filter_module.c +++ b/src/http/modules/ngx_http_gzip_filter_module.c @@ -306,7 +306,7 @@ ngx_http_gzip_header_filter(ngx_http_req ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); + ngx_http_clear_strict_etag(r); return ngx_http_next_header_filter(r); } diff --git a/src/http/modules/ngx_http_ssi_filter_module.c b/src/http/modules/ngx_http_ssi_filter_module.c --- a/src/http/modules/ngx_http_ssi_filter_module.c +++ b/src/http/modules/ngx_http_ssi_filter_module.c @@ -368,10 +368,13 @@ ngx_http_ssi_header_filter(ngx_http_requ if (r == r->main) { ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); if (!slcf->last_modified) { ngx_http_clear_last_modified(r); + ngx_http_clear_etag(r); + + } else { + ngx_http_clear_strict_etag(r); } } diff --git a/src/http/modules/ngx_http_sub_filter_module.c b/src/http/modules/ngx_http_sub_filter_module.c --- a/src/http/modules/ngx_http_sub_filter_module.c +++ b/src/http/modules/ngx_http_sub_filter_module.c @@ -175,10 +175,13 @@ ngx_http_sub_header_filter(ngx_http_requ if (r == r->main) { ngx_http_clear_content_length(r); - ngx_http_clear_etag(r); if (!slcf->last_modified) { ngx_http_clear_last_modified(r); + ngx_http_clear_etag(r); + + } else { + ngx_http_clear_strict_etag(r); } } diff --git a/src/http/modules/ngx_http_xslt_filter_module.c b/src/http/modules/ngx_http_xslt_filter_module.c --- a/src/http/modules/ngx_http_xslt_filter_module.c +++ b/src/http/modules/ngx_http_xslt_filter_module.c @@ -337,12 +337,14 @@ ngx_http_xslt_send(ngx_http_request_t *r r->headers_out.content_length = NULL; } - ngx_http_clear_etag(r); - conf = ngx_http_get_module_loc_conf(r, ngx_http_xslt_filter_module); if (!conf->last_modified) { ngx_http_clear_last_modified(r); + ngx_http_clear_etag(r); + + } else { + ngx_http_clear_strict_etag(r); } } diff --git a/src/http/ngx_http_core_module.h b/src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h +++ b/src/http/ngx_http_core_module.h @@ -579,5 +579,16 @@ extern ngx_str_t ngx_http_core_get_meth r->headers_out.etag = NULL; \ } +#define ngx_http_clear_strict_etag(r) \ + \ + if (r->headers_out.etag \ + && (r->headers_out.etag->value.len < 2 \ + || r->headers_out.etag->value.data[0] != 'W' \ + || r->headers_out.etag->value.data[1] != '/')) \ + { \ + r->headers_out.etag->hash = 0; \ + r->headers_out.etag = NULL; \ + } + #endif /* _NGX_HTTP_CORE_H_INCLUDED_ */ But I don't really sure we need preserving weak 
etags approach. It requires a backend to know that weak etags can be used, while strict ones can't. This is basically what we already have - but with Last-Modified instead of etags. It could be done better - strict etags can be downgraded to weak ones if we change the entity representation (but not semantics). This way it will do its best while handling all possible responses. This will require many more changes though. > # HG changeset patch > # User Aaron Peschel > # Date 1383613159 28800 > # Mon Nov 04 16:59:19 2013 -0800 > # Node ID c0a50e6aac95feac6393dd6bff0b30bd1a05ef9e > # Parent e6a1623f87bc96d5ec62b6d77356aa47dbc60756 > Add Support for Weak ETags > > This is a response to rev 4746 which removed ETags. 4746 removes the ETag field > from the header in all instances where content is modified by the web server > prior to being sent to the requesting client. This is far more stringent than > required by the HTTP spec. Just a side note: referring to Mercurial changesets with revision numbers can be ambiguous, as different clones may have different revision numbers. It's a good idea to refer to a revision hash or a revision id and hash as shown by hg log, e.g. 4746:4a18bf1833a9. -- Maxim Dounin http://nginx.org/en/donation.html From piotr at cloudflare.com Tue Nov 12 20:24:54 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 12 Nov 2013 12:24:54 -0800 Subject: Add Support for Weak ETags In-Reply-To: <20131112164627.GO95765@mdounin.ru> References: <20131112164627.GO95765@mdounin.ru> Message-ID: Hey Maxim, > I don't think that limiting the scope to gzip and gunzip is > correct either. From cache validation point of view, weak etags > are mostly identical to Last-Modified, and removing etags > completely should mostly follow ngx_http_clear_last_modified(). I strongly disagree. Modifying content (via addition/sub/ssi modules) can change enough to consider two pages completely different (at least as a general rule), so I don't think that they should retain any ETags. I've got mixed feelings regarding xslt module. 
While I don't disagree > as much, I think it's still safer to remove weak ETags there as well. Much like with ssi, this is the default. I don't think we should remove weak etags if we are explicitly configured to preserve Last-Modified though (http://nginx.org/r/xslt_last_modified). -- Maxim Dounin http://nginx.org/en/donation.html From piotr at cloudflare.com Tue Nov 12 22:09:36 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 12 Nov 2013 14:09:36 -0800 Subject: Add Support for Weak ETags In-Reply-To: <20131112205456.GQ95765@mdounin.ru> References: <20131112164627.GO95765@mdounin.ru> <20131112205456.GQ95765@mdounin.ru> Message-ID: Hey Maxim, > The sub and ssi remove Last-Modified and all etags by default. If > needed though, they can be configured to preserve Last-Modified > (and weak etags with a patch) with ssi_last_modified directive and > friends: > > http://nginx.org/r/ssi_last_modified > > (...) > > Much like with ssi, this is the default. I don't think we should > remove weak etags if we are explicitly configured to preserve > Last-Modified though (http://nginx.org/r/xslt_last_modified). Sorry, I've completely missed the presence of _last_modified directives. You're right, your changes look better than just gzip/gunzip. Best regards, Piotr Sikora From serphen at gmail.com Wed Nov 13 16:23:58 2013 From: serphen at gmail.com (Arnaud GRANAL) Date: Wed, 13 Nov 2013 17:23:58 +0100 Subject: Passing response from upstream server to another upstream server In-Reply-To: <5D103CE839D50E4CBC62C9FD7B83287C274C0275@EXCN015.encara.local.ads> References: <5D103CE839D50E4CBC62C9FD7B83287C274C0275@EXCN015.encara.local.ads> Message-ID: Hi Thierry, This is the nginx-devel mailing-list (for development and commits), I guess you will have better luck on nginx user list http://mailman.nginx.org/mailman/listinfo/nginx nginx can be an upstream of another nginx (as reverse proxy). This means that you can chain nginx [A]->nginx [B]->nginx [C]->nginx [D] as much as you want. However the more layers you add, the more risks you have to break the chain (in reality, you end up with tons of opened sockets, issues with TIME_WAIT, tuning persistent connections between instances, etc..). Whenever possible, try to keep nginx [A] -> nginx [D] because this makes debugging and monitoring very simple. If you are trying to reach an internal network, you can for example use a GRE tunnel if your problem is that [A] can't talk directly with [D]. Arnaud. On Tue, Nov 12, 2013 at 4:04 PM, MAGNIEN, Thierry wrote: > Hi, > > I would like to achieve something like this: > - Nginx receives request from client > - Nginx forwards request to an upstream server and reads response > - Nginx sends this response to another upstream server (it will perform some content modification) > - Nginx gets final response and sends it over to client > > Is there a way to achieve this by using existing modules and an adequate configuration, or do I need to write my own "double upstream" module ? 
> > Thanks a lot, > Thierry > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From thierry.magnien at sfr.com Wed Nov 13 16:30:58 2013 From: thierry.magnien at sfr.com (MAGNIEN, Thierry) Date: Wed, 13 Nov 2013 16:30:58 +0000 Subject: Passing response from upstream server to another upstream server In-Reply-To: References: <5D103CE839D50E4CBC62C9FD7B83287C274C0275@EXCN015.encara.local.ads> Message-ID: <5D103CE839D50E4CBC62C9FD7B83287C274C1960@EXCN015.encara.local.ads> Hi Arnaud, Thanks for your response but my goal is to have only one Nginx server, not chaining them. >From what I've seen, I think I should be able to achieve this using proxy + lua module, and I'll write my own module using subrequests if performance is too low. Regards, Thierry -----Message d'origine----- De?: Arnaud GRANAL [mailto:serphen at gmail.com] Envoy??: mercredi 13 novembre 2013 17:24 ??: nginx-devel at nginx.org Cc?: MAGNIEN, Thierry Objet?: Re: Passing response from upstream server to another upstream server Hi Thierry, This is the nginx-devel mailing-list (for development and commits), I guess you will have better luck on nginx user list http://mailman.nginx.org/mailman/listinfo/nginx nginx can be an upstream of another nginx (as reverse proxy). This means that you can chain nginx [A]->nginx [B]->nginx [C]->nginx [D] as much as you want. However the more layers you add, the more risks you have to break the chain (in reality, you end up with tons of opened sockets, issues with TIME_WAIT, tuning persistent connections between instances, etc..). Whenever possible, try to keep nginx [A] -> nginx [D] because this makes debugging and monitoring very simple. If you are trying to reach an internal network, you can for example use a GRE tunnel if your problem is that [A] can't talk directly with [D]. Arnaud. On Tue, Nov 12, 2013 at 4:04 PM, MAGNIEN, Thierry wrote: > Hi, > > I would like to achieve something like this: > - Nginx receives request from client > - Nginx forwards request to an upstream server and reads response > - Nginx sends this response to another upstream server (it will perform some content modification) > - Nginx gets final response and sends it over to client > > Is there a way to achieve this by using existing modules and an adequate configuration, or do I need to write my own "double upstream" module ? > > Thanks a lot, > Thierry > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From jefftk at google.com Wed Nov 13 18:21:11 2013 From: jefftk at google.com (Jeff Kaufman) Date: Wed, 13 Nov 2013 13:21:11 -0500 Subject: Passing response from upstream server to another upstream server In-Reply-To: <5D103CE839D50E4CBC62C9FD7B83287C274C1960@EXCN015.encara.local.ads> References: <5D103CE839D50E4CBC62C9FD7B83287C274C0275@EXCN015.encara.local.ads> <5D103CE839D50E4CBC62C9FD7B83287C274C1960@EXCN015.encara.local.ads> Message-ID: I think Arnaud is suggesting that instead of a request flow like: 1. nginx[A] receives request, forwards to upstream[B] 2. upstream[B] creates response and replies to nginx[A] 3. nginx[A] forwards response to upstream[C] 4. upstream[C] modifies response and replies to nginx[A] 5. nginx[A] returns response to user You instead consider: 1. nginx[A] receives request, forwards to upstream[C] 2. upstream[C] forwards to upstream[B] 3. 
upstream[B] creates response and replies to upstream[C] 4. upstream[C] modifies request as it passes through and replies to nginx[A] 5. nginx[A] returns response to user Much more software is written expecting flows like this second one. On Wed, Nov 13, 2013 at 11:30 AM, MAGNIEN, Thierry wrote: > Hi Arnaud, > > Thanks for your response but my goal is to have only one Nginx server, not chaining them. > > From what I've seen, I think I should be able to achieve this using proxy + lua module, and I'll write my own module using subrequests if performance is too low. > > Regards, > Thierry > > -----Message d'origine----- > De : Arnaud GRANAL [mailto:serphen at gmail.com] > Envoy? : mercredi 13 novembre 2013 17:24 > ? : nginx-devel at nginx.org > Cc : MAGNIEN, Thierry > Objet : Re: Passing response from upstream server to another upstream server > > Hi Thierry, > > This is the nginx-devel mailing-list (for development and commits), I > guess you will have better luck on nginx user list > http://mailman.nginx.org/mailman/listinfo/nginx > > nginx can be an upstream of another nginx (as reverse proxy). > This means that you can chain nginx [A]->nginx [B]->nginx [C]->nginx > [D] as much as you want. However the more layers you add, the more > risks you have to break the chain (in reality, you end up with tons of > opened sockets, issues with TIME_WAIT, tuning persistent connections > between instances, etc..). > > Whenever possible, try to keep nginx [A] -> nginx [D] because this > makes debugging and monitoring very simple. > If you are trying to reach an internal network, you can for example > use a GRE tunnel if your problem is that [A] can't talk directly with > [D]. > > Arnaud. > > On Tue, Nov 12, 2013 at 4:04 PM, MAGNIEN, Thierry > wrote: >> Hi, >> >> I would like to achieve something like this: >> - Nginx receives request from client >> - Nginx forwards request to an upstream server and reads response >> - Nginx sends this response to another upstream server (it will perform some content modification) >> - Nginx gets final response and sends it over to client >> >> Is there a way to achieve this by using existing modules and an adequate configuration, or do I need to write my own "double upstream" module ? >> >> Thanks a lot, >> Thierry >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From vbart at nginx.com Wed Nov 13 20:17:02 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 14 Nov 2013 00:17:02 +0400 Subject: [PATCH] SSL: support ALPN (IETF's successor to NPN) In-Reply-To: References: Message-ID: <201311140017.02611.vbart@nginx.com> On Monday 04 November 2013 14:27:44 Piotr Sikora wrote: [..] > # HG changeset patch > # User Piotr Sikora > # Date 1383560396 28800 > # Mon Nov 04 02:19:56 2013 -0800 > # Node ID 78d793c51d5aa0ba8eec48340de49bfc3d17c97d > # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c > SSL: support ALPN (IETF's successor to NPN). I'm very unhappy with lots of #if(def)-s are introduced by the patch. Is there something can be done with that? 
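Just to illustrate what I have in mind (a rough sketch only; the NGX_HTTP_SSL_HAVE_PROTO_NEG name is made up and is not part of the patch): the repeated protocol-negotiation tests could probably be collapsed into a single feature macro defined once near the top of the module, so that most places only need to check one symbol:

#if (defined TLSEXT_TYPE_application_layer_protocol_negotiation \
     || defined TLSEXT_TYPE_next_proto_neg)
#define NGX_HTTP_SSL_HAVE_PROTO_NEG  1
#else
#define NGX_HTTP_SSL_HAVE_PROTO_NEG  0
#endif

#if (NGX_HTTP_SSL_HAVE_PROTO_NEG)
#define NGX_HTTP_NPN_ADVERTISE  "\x08http/1.1"
#endif

That would not eliminate the NGX_DEBUG and NGX_HTTP_SPDY blocks inside the callback itself, so it only helps with part of the problem.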
> > Signed-off-by: Piotr Sikora > > diff -r dea321e5c021 -r 78d793c51d5a src/http/modules/ngx_http_ssl_module.c > --- a/src/http/modules/ngx_http_ssl_module.c Thu Oct 31 18:23:49 2013 +0400 > +++ b/src/http/modules/ngx_http_ssl_module.c Mon Nov 04 02:19:56 2013 -0800 > @@ -17,6 +17,17 @@ typedef ngx_int_t (*ngx_ssl_variable_han > #define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5" > #define NGX_DEFAULT_ECDH_CURVE "prime256v1" > > +#if (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ > + || defined TLSEXT_TYPE_next_proto_neg) Indentation problem, should be: #if (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ || defined TLSEXT_TYPE_next_proto_neg) Also, please note that we usually put "\" at column 79. > +#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1" > +#endif > + > + > +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation > +static int ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, > + const unsigned char **out, unsigned char *outlen, > + const unsigned char *in, unsigned int inlen, void *arg); > +#endif > > #ifdef TLSEXT_TYPE_next_proto_neg > static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, > @@ -274,10 +285,66 @@ static ngx_http_variable_t ngx_http_ssl > static ngx_str_t ngx_http_ssl_sess_id_ctx = ngx_string("HTTP"); > > > +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation > + > +static int > +ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, > + unsigned char *outlen, const unsigned char *in, unsigned int inlen, > + void *arg) > +{ > + unsigned int srvlen; > + unsigned char *srv; > +#if (NGX_DEBUG) > + unsigned int i; > +#endif > +#if (NGX_HTTP_SPDY) > + ngx_http_connection_t *hc; > +#endif > +#if (NGX_HTTP_SPDY || NGX_DEBUG) > + ngx_connection_t *c; > + > + c = ngx_ssl_get_connection(ssl_conn); > +#endif > + > +#if (NGX_DEBUG) > + for (i = 0; i < inlen; i += in[i] + 1) { > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, > + "SSL ALPN supported by client: %*s", in[i], > &in[i + 1]); Your email client broke the patch here. > + } > +#endif > + > +#if (NGX_HTTP_SPDY) > + hc = c->data; > + > + if (hc->addr_conf->spdy) { > + srv = (unsigned char *) NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; > + srvlen = sizeof(NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; > + > + } else > +#endif > + { > + srv = (unsigned char *) NGX_HTTP_NPN_ADVERTISE; > + srvlen = sizeof(NGX_HTTP_NPN_ADVERTISE) - 1; > + } > + > + if (SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen, > + in, inlen) But the SSL_select_next_proto() function is missing if OpenSSL was built with OPENSSL_NO_NEXTPROTONEG. 
> + != OPENSSL_NPN_NEGOTIATED) > + { > + return SSL_TLSEXT_ERR_NOACK; > + } > + > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, > + "SSL ALPN selected: %*s", *outlen, *out); > + > + return SSL_TLSEXT_ERR_OK; > +} > + > +#endif > + > + > #ifdef TLSEXT_TYPE_next_proto_neg > > -#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1" > - > static int > ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, > const unsigned char **out, unsigned int *outlen, void *arg) > @@ -542,6 +609,10 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * > > #endif > > +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation > + SSL_CTX_set_alpn_select_cb(conf->ssl.ctx, ngx_http_ssl_alpn_select, NULL); > +#endif > + > #ifdef TLSEXT_TYPE_next_proto_neg > SSL_CTX_set_next_protos_advertised_cb(conf->ssl.ctx, > ngx_http_ssl_npn_advertised, NULL); > diff -r dea321e5c021 -r 78d793c51d5a src/http/ngx_http.c > --- a/src/http/ngx_http.c Thu Oct 31 18:23:49 2013 +0400 > +++ b/src/http/ngx_http.c Mon Nov 04 02:19:56 2013 -0800 > @@ -1349,11 +1349,12 @@ ngx_http_add_address(ngx_conf_t *cf, ngx > } > } > > -#if (NGX_HTTP_SPDY && NGX_HTTP_SSL && !defined TLSEXT_TYPE_next_proto_neg) > +#if (NGX_HTTP_SPDY && NGX_HTTP_SSL && !defined TLSEXT_TYPE_next_proto_neg \ > + && !defined TLSEXT_TYPE_application_layer_protocol_negotiation) I would prefer: #if (NGX_HTTP_SPDY && NGX_HTTP_SSL \ && !defined TLSEXT_TYPE_next_proto_neg \ && !defined TLSEXT_TYPE_application_layer_protocol_negotiation) > if (lsopt->spdy && lsopt->ssl) { > ngx_conf_log_error(NGX_LOG_WARN, cf, 0, > - "nginx was built without OpenSSL NPN support, " > - "SPDY is not enabled for %s", lsopt->addr); > + "nginx was built without OpenSSL ALPN and NPN " Maybe I'm wrong since English isn't my native language, but should it be: "nginx was built without OpenSSL ALPN or NPN " (s/and/or/) ? > + "support, SPDY is not enabled for %s", lsopt->addr); > } > #endif > > diff -r dea321e5c021 -r 78d793c51d5a src/http/ngx_http_request.c > --- a/src/http/ngx_http_request.c Thu Oct 31 18:23:49 2013 +0400 > +++ b/src/http/ngx_http_request.c Mon Nov 04 02:19:56 2013 -0800 > @@ -728,18 +728,31 @@ ngx_http_ssl_handshake_handler(ngx_conne > > c->ssl->no_wait_shutdown = 1; > > -#if (NGX_HTTP_SPDY && defined TLSEXT_TYPE_next_proto_neg) > +#if (NGX_HTTP_SPDY \ > + && (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ > + || defined TLSEXT_TYPE_next_proto_neg)) > { > unsigned int len; > const unsigned char *data; > static const ngx_str_t spdy = ngx_string(NGX_SPDY_NPN_NEGOTIATED); > > - SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); > +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation > + SSL_get0_alpn_selected(c->ssl->connection, &data, &len); > > if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) { > ngx_http_spdy_init(c->read); > return; > } > +#endif > + > +#ifdef TLSEXT_TYPE_next_proto_neg > + SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); > + > + if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) { > + ngx_http_spdy_init(c->read); > + return; > + } > +#endif I'm not sure that we need to check NPN if from ALPN we know that some protocol was selected and it's not spdy. wbr, Valentin V. 
Bartenev From piotr at cloudflare.com Thu Nov 14 00:36:06 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 13 Nov 2013 16:36:06 -0800 Subject: [PATCH] SSL: support ALPN (IETF's successor to NPN) In-Reply-To: <201311140017.02611.vbart@nginx.com> References: <201311140017.02611.vbart@nginx.com> Message-ID: Hey Valentin, > I'm very unhappy with lots of #if(def)-s are introduced by the patch. > Is there something can be done with that? The added code depends on the presence of ALPN support in OpenSSL, so I don't see how we could get away without all those #ifdefs... I'm open to suggestions, though :) > But the SSL_select_next_proto() function is missing if OpenSSL was built > with OPENSSL_NO_NEXTPROTONEG. Good catch, I totally forgot about this... I sent a patch [0] for this to the OpenSSL guys months ago and it was supposed to be fixed before ALPN was backported to OpenSSL-1.0.2, but I guess it didn't happen. I'll try to sort this out as soon as possible. > Maybe I'm wrong since English isn't my native language, but should it be: > > "nginx was built without OpenSSL ALPN or NPN " (s/and/or/) > > ? Neither am I, but not really. Double negation makes this tricky, but "or" would mean that it was built with one but not both, whereas "and" means that it was built with neither. > I'm not sure that we need to check NPN if from ALPN we know that some protocol > was selected and it's not spdy. Makes sense. I'll get back to you with an updated patch once the fix for "no-nextprotoneg" lands in OpenSSL-1.0.2. [0] https://rt.openssl.org/Ticket/Display.html?id=3106 (guest:guest) Best regards, Piotr Sikora From florent.lecoz at smartjog.com Thu Nov 14 15:33:55 2013 From: florent.lecoz at smartjog.com (Florent Le Coz) Date: Thu, 14 Nov 2013 16:33:55 +0100 Subject: [PATCH] Upstream: fix the cache duration calculation Message-ID: <5284ED63.60703@smartjog.com> Hello, We have a "chain" of nginx servers, like this: A (nginx) -> B (nginx) -> C (anything) B being A's upstream, and C being B's upstream We ran into an issue where the files were being cached for too long. For example, if we want to cache our files in this chain for 10 seconds, it could happen that the file actually exists for up to 30 seconds, because when B retrieves the file from C, even if it's almost expired on C, B will cache it for 10 seconds. Same thing if A retrieves the file cached by B. To fix this issue, nginx needs to properly take into account the Age header provided by its upstream. This is what the attached patch does. A subsequent patch will be provided soon, where the Age provided by nginx actually indicates the age of the file in nginx's cache, instead of the age that was provided by its upstream. Regards, -- Florent Le Coz Smartjog -------------- next part -------------- A non-text attachment was scrubbed... Name: fix_cache_duration_calculation.patch Type: text/x-patch Size: 9081 bytes Desc: not available URL: From mdounin at mdounin.ru Thu Nov 14 17:14:35 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 Nov 2013 21:14:35 +0400 Subject: [PATCH] Upstream: fix the cache duration calculation In-Reply-To: <5284ED63.60703@smartjog.com> References: <5284ED63.60703@smartjog.com> Message-ID: <20131114171435.GD95765@mdounin.ru> Hello! 
On Thu, Nov 14, 2013 at 04:33:55PM +0100, Florent Le Coz wrote: > Hello, > > We have a "chain" of nginx servers, like this: > A (nginx) -> B (nginx) -> C (anything) > > B being A's upstream, and C being B's upstream > > We ran into an issue where the files were being cached for too > long. For example, if we want to cache our files in this > chain for 10 seconds, it could happen that the file actually exists > for up to 30 seconds, because when B retrieves the file from C, even > if it's almost expired on C, B will cache it for 10 seconds. Same > thing if A retrieves the file cached by B. > > To fix this issue, nginx needs to properly take into account the Age > header provided by its upstream. This is what the attached patch > does. While I agree with the problems the patch tries to address (i.e., "Cache-Control: max-age" should take precedence over Expires, and Age should be taken into account while interpreting "Cache-Control: max-age"), I'm not happy with the patch provided. Given that there are at least two problems to address, I would suggest there should be at least two patches: one to address Expires vs. Cache-Control, and another one to handle the Age header. [...] > Upstream: fix the cache duration calculation > > - Treat Expires and Cache-control headers in a well-defined order, instead > of treating them in the order they are received from upstream > - If Expires and Cache-control headers are both present and indicate a > different cache duration, use the biggest value of the two As per RFC2616, if Expires and Cache-Control are both present, the Cache-Control should be used, http://tools.ietf.org/html/rfc2616#section-14.9.3: If a response includes both an Expires header and a max-age directive, the max-age directive overrides the Expires header, even if the Expires header is more restrictive. [...] > @@ -229,6 +233,11 @@ Just a side note: please add [diff] showfunc=1 to ~/.hgrc. It makes review much easier. Thanks. 
> + if (!(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_CACHE_CONTROL)) > + { > + u_char *p; > + u_char *last; > + > + ph = u->headers_in.cache_control.elts; > + > + for (i = 0; i != u->headers_in.cache_control.nelts; i++) > + { > + h = ph[i]; > + p = h->value.data; > + last = p + h->value.len; > + > + if (ngx_strlcasestrn(p, last, (u_char *) "no-cache", 8 - 1) != NULL > + || ngx_strlcasestrn(p, last, (u_char *) "no-store", 8 - 1) != NULL > + || ngx_strlcasestrn(p, last, (u_char *) "private", 7 - 1) != NULL) > + { > + u->cacheable = 0; > + return; > + } This seems to introduce multiple style problems. It dosn't really matter though, as it would be much better idea to don't move the code, see above. > + > + p = ngx_strlcasestrn(p, last, (u_char *) "max-age=", 8 - 1); > + > + if (p == NULL) { > + break; > + } > + > + n = 0; > + > + for (p += 8; p < last; p++) { > + if (*p == ',' || *p == ';' || *p == ' ') { > + break; > + } > + > + if (*p >= '0' && *p <= '9') { > + n = n * 10 + *p - '0'; > + continue; > + } > + > + u->cacheable = 0; > + return; > + } > + > + if (n == 0) { > + u->cacheable = 0; > + return; > + } > + r->cache->valid_sec = ngx_max(r->cache->valid_sec, ngx_time() + n); The ngx_max() seems to be only meaningful if there are more than one max-age value found in different Cache-Control headers returned by a backend. It is however inconsistent with other code in the function. E.g., Cache-Control: max-age=10, max-age=20 will result in 10 seconds max-age, while Cache-Control: max-age=10 Cache-Control: max-age=20 will result in 20 seconds max-age. Moreover, in contrast to the previous example, Cache-Control: max-age=0 Cache-Control: max-age=20 will result in a response being non-cacheable. > + } > + } > + if (!(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_EXPIRES)) > + { > + time_t expires; > + > + h = u->headers_in.expires; > + if (h) > + { > + expires = ngx_http_parse_time(h->value.data, h->value.len); > + if (expires == NGX_ERROR || expires < ngx_time()) { > + u->cacheable = 0; > + return; > + } > + r->cache->valid_sec = ngx_max(expires, r->cache->valid_sec); This is incorrect and contradicts to RFC2616, see above. > + } > + } > + > + h = u->headers_in.age; > + if (h && r->cache->valid_sec) > + { > + n = ngx_atoi(h->value.data, h->value.len); > + if (n > 0 && n != NGX_ERROR) > + r->cache->valid_sec -= n; > + } This will corrupt absolute times parsed from the Expires header. Additionally, it may result in a valid_sec in the past, which is not checked. [...] -- Maxim Dounin http://nginx.org/en/donation.html From florent.lecoz at smartjog.com Thu Nov 14 17:44:00 2013 From: florent.lecoz at smartjog.com (Florent Le Coz) Date: Thu, 14 Nov 2013 18:44:00 +0100 Subject: [PATCH] Upstream: fix the cache duration calculation In-Reply-To: <20131114171435.GD95765@mdounin.ru> References: <5284ED63.60703@smartjog.com> <20131114171435.GD95765@mdounin.ru> Message-ID: <52850BE0.5010007@smartjog.com> On 11/14/2013 06:14 PM, Maxim Dounin wrote: > Hello! > Thanks for your quick review. I?ll provide a (actually two) revised patch later, meanwhile here are a few comments and questions. > > As used in the patch, it looks like there is no need for a special > process function for the Age header. > Indeed, but this is used in my subsequent patch. I will remove that from the first patches. > Processing Cache-Control header in > ngx_http_upstream_send_response() is at least sub-optimal, and may > cause unneeded work if caching is disabled with, e.g., > "Cache-Control: no-cache". 
> > At most, this should be done in ngx_http_upstream_process_headers(), > but even this will be too late for upcoming cache revalidation > with conditional requests. > > [...] Where would you then process these values, if not after every headers have been processed? At the moment, nginx processes them one by one, in the order there are found in upstream?s response. Do you suggest that in process_cache_control I add some "if 'expires' has already been processed" condition, and something equivalent in 'process_cache_control'? That doesn?t look ideal to me, hence why I chose to set the valid_sec value once every header has been processed. Do you have any suggestion? Thank you -- Florent Le Coz Smartjog From piotr at cloudflare.com Thu Nov 14 21:23:55 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Thu, 14 Nov 2013 13:23:55 -0800 Subject: [PATCH] SSL: support ALPN (IETF's successor to NPN) In-Reply-To: References: <201311140017.02611.vbart@nginx.com> Message-ID: Hey Valentin, updated patch with fixed style and no SPDY check in case ALPN was selected attached. SSL_select_next_proto() is available in OpenSSL-1.0.2 now, even when compiled with "no-nextprotoneg". Best regards, Piotr Sikora # HG changeset patch # User Piotr Sikora # Date 1384462707 28800 # Thu Nov 14 12:58:27 2013 -0800 # Node ID d848f32a9b677157ae2bddd3771509d73eb8e4d6 # Parent dea321e5c0216efccbb23e84bbce7cf3e28f130c SSL: support ALPN (IETF's successor to NPN). Signed-off-by: Piotr Sikora diff -r dea321e5c021 -r d848f32a9b67 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Thu Oct 31 18:23:49 2013 +0400 +++ b/src/http/modules/ngx_http_ssl_module.c Thu Nov 14 12:58:27 2013 -0800 @@ -17,6 +17,17 @@ typedef ngx_int_t (*ngx_ssl_variable_han #define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5" #define NGX_DEFAULT_ECDH_CURVE "prime256v1" +#if (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + || defined TLSEXT_TYPE_next_proto_neg) +#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1" +#endif + + +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation +static int ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, + const unsigned char **out, unsigned char *outlen, + const unsigned char *in, unsigned int inlen, void *arg); +#endif #ifdef TLSEXT_TYPE_next_proto_neg static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, @@ -274,10 +285,66 @@ static ngx_http_variable_t ngx_http_ssl static ngx_str_t ngx_http_ssl_sess_id_ctx = ngx_string("HTTP"); +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + +static int +ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, + unsigned char *outlen, const unsigned char *in, unsigned int inlen, + void *arg) +{ + unsigned int srvlen; + unsigned char *srv; +#if (NGX_DEBUG) + unsigned int i; +#endif +#if (NGX_HTTP_SPDY) + ngx_http_connection_t *hc; +#endif +#if (NGX_HTTP_SPDY || NGX_DEBUG) + ngx_connection_t *c; + + c = ngx_ssl_get_connection(ssl_conn); +#endif + +#if (NGX_DEBUG) + for (i = 0; i < inlen; i += in[i] + 1) { + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "SSL ALPN supported by client: %*s", in[i], &in[i + 1]); + } +#endif + +#if (NGX_HTTP_SPDY) + hc = c->data; + + if (hc->addr_conf->spdy) { + srv = (unsigned char *) NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; + srvlen = sizeof(NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; + + } else +#endif + { + srv = (unsigned char *) NGX_HTTP_NPN_ADVERTISE; + srvlen = sizeof(NGX_HTTP_NPN_ADVERTISE) - 1; + } + + if (SSL_select_next_proto((unsigned char **) 
out, outlen, srv, srvlen, + in, inlen) + != OPENSSL_NPN_NEGOTIATED) + { + return SSL_TLSEXT_ERR_NOACK; + } + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "SSL ALPN selected: %*s", *outlen, *out); + + return SSL_TLSEXT_ERR_OK; +} + +#endif + + #ifdef TLSEXT_TYPE_next_proto_neg -#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1" - static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, unsigned int *outlen, void *arg) @@ -542,6 +609,10 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * #endif +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + SSL_CTX_set_alpn_select_cb(conf->ssl.ctx, ngx_http_ssl_alpn_select, NULL); +#endif + #ifdef TLSEXT_TYPE_next_proto_neg SSL_CTX_set_next_protos_advertised_cb(conf->ssl.ctx, ngx_http_ssl_npn_advertised, NULL); diff -r dea321e5c021 -r d848f32a9b67 src/http/ngx_http.c --- a/src/http/ngx_http.c Thu Oct 31 18:23:49 2013 +0400 +++ b/src/http/ngx_http.c Thu Nov 14 12:58:27 2013 -0800 @@ -1349,11 +1349,13 @@ ngx_http_add_address(ngx_conf_t *cf, ngx } } -#if (NGX_HTTP_SPDY && NGX_HTTP_SSL && !defined TLSEXT_TYPE_next_proto_neg) +#if (NGX_HTTP_SPDY && NGX_HTTP_SSL \ + && !defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + && !defined TLSEXT_TYPE_next_proto_neg) if (lsopt->spdy && lsopt->ssl) { ngx_conf_log_error(NGX_LOG_WARN, cf, 0, - "nginx was built without OpenSSL NPN support, " - "SPDY is not enabled for %s", lsopt->addr); + "nginx was built without OpenSSL ALPN and NPN " + "support, SPDY is not enabled for %s", lsopt->addr); } #endif diff -r dea321e5c021 -r d848f32a9b67 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Thu Oct 31 18:23:49 2013 +0400 +++ b/src/http/ngx_http_request.c Thu Nov 14 12:58:27 2013 -0800 @@ -728,17 +728,33 @@ ngx_http_ssl_handshake_handler(ngx_conne c->ssl->no_wait_shutdown = 1; -#if (NGX_HTTP_SPDY && defined TLSEXT_TYPE_next_proto_neg) +#if (NGX_HTTP_SPDY \ + && (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + || defined TLSEXT_TYPE_next_proto_neg)) { unsigned int len; const unsigned char *data; static const ngx_str_t spdy = ngx_string(NGX_SPDY_NPN_NEGOTIATED); - SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + SSL_get0_alpn_selected(c->ssl->connection, &data, &len); if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) { ngx_http_spdy_init(c->read); return; + + } else if (len == 0) +#endif + + { +#ifdef TLSEXT_TYPE_next_proto_neg + SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); + + if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) { + ngx_http_spdy_init(c->read); + return; + } +#endif } } #endif diff -r dea321e5c021 -r d848f32a9b67 src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Thu Oct 31 18:23:49 2013 +0400 +++ b/src/http/ngx_http_spdy.h Thu Nov 14 12:58:27 2013 -0800 @@ -17,7 +17,8 @@ #define NGX_SPDY_VERSION 2 -#ifdef TLSEXT_TYPE_next_proto_neg +#if (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + || defined TLSEXT_TYPE_next_proto_neg) #define NGX_SPDY_NPN_ADVERTISE "\x06spdy/2" #define NGX_SPDY_NPN_NEGOTIATED "spdy/2" #endif -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx__alpn.patch Type: application/octet-stream Size: 6427 bytes Desc: not available URL: From alex at zeitgeist.se Fri Nov 15 00:28:16 2013 From: alex at zeitgeist.se (Alex) Date: Fri, 15 Nov 2013 01:28:16 +0100 Subject: [PATCH] SSL: support ALPN (IETF's successor to NPN) In-Reply-To: References: <201311140017.02611.vbart@nginx.com> Message-ID: >> Maybe I'm wrong since English isn't my native language, but should it be: >> >> "nginx was built without OpenSSL ALPN or NPN " (s/and/or/) >> >> ? > > Neither am I, but not really. Double negation makes this tricky, but > "or" would mean that it was built with one but not both, whereas "and" > means that it was built with neither. Not a native English speaker either - but I think the conjunction "nor" would do the trick to avoid ambiguity. ;) "nginx was built without OpenSSL ALPN nor NPN" From moto at kawasaki3.org Fri Nov 15 08:35:39 2013 From: moto at kawasaki3.org (moto kawasaki) Date: Fri, 15 Nov 2013 17:35:39 +0900 (JST) Subject: pls. help for adding another parameter to ngx_upstream_server Message-ID: <20131115.173539.417762286168675573.moto@kawasaki3.org> Dear Sirs, Firstly, I'd like to thank you very much for supplying such powerful and smart software. nginx is so great. Now, I am struggling to add "setfib=N" parameter to "server" token in "upstream" clause, and so far failed. It is really appreciated if you'd advice/suggest/comment on it. Thank you very very much in advance. [my environment] - FreeBSD 9.2-RELEASE-p0 - www/nginx (ports), which is nginx-1.4.3. [my intention] - nginx can listen with setfib. http://nginx.org/en/docs/http/ngx_http_core_module.html#listen - but nginx cannot setfib against upstream/server. http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server - I'd like to add "setfib" parameter, something like below; upstream backend { server 192.168.1.1:8080 max_fails=3 setfib=5; # (a) } # ^^^^^^^^ [what I did] - I made a patch (see attached) but it fails with the following emerge message in nginx-error.log. [emerg] 3848#0: invalid parameter "setfib=5" in /usr/local/etc/nginx/nginx.conf:18 The line 18 of nginx.conf contains setfib=5 (see (a) above.) - printf debugging tells me; (1) This emerg log comes from "invalid" clause in function ngx_http_upstream_server() at line 4689 of ngx_http_upstream.c. (line numbers are AFTER applying attached patch.) 4689 invalid: 4691 ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 4692 "invalid parameter \"%V\"", &value[i]); And it is because "uscf->flags" check failed at l.4649. 4649 if (!(uscf->flags & NGX_HTTP_UPSTREAM_SETFIB)) { 4650 goto invalid; 4651 } (2) This "uscf->flags" has been set in the function ngx_http_upstream() at line 4434; 4434 uscf = ngx_http_upstream_add(cf, &u, NGX_HTTP_UPSTREAM_CREATE 4435 |NGX_HTTP_UPSTREAM_WEIGHT 4436 |NGX_HTTP_UPSTREAM_MAX_FAILS 4437 |NGX_HTTP_UPSTREAM_FAIL_TIMEOUT 4438 |NGX_HTTP_UPSTREAM_DOWN 4439 #if (NGX_HAVE_SETFIB) 4440 |NGX_HTTP_UPSTREAM_BACKUP 4441 |NGX_HTTP_UPSTREAM_SETFIB); 4442 #else 4443 |NGX_HTTP_UPSTREAM_BACKUP); 4444 #endif /* NGX_HAVE_SETFIB */ And keep the bit of NGX_HTTP_UPSTREAM_SETFIB as ON, so that the uscf->flags=127, until just before the function ngx_conf_parse() called. 4511 rv = ngx_conf_parse(cf, NULL); Returning from this function, uscf->flags=31, which means the SETFIB bit is OFF, thus check at l.4649 falls into invalid. [my questions] Well, I tried but couldn't find out where that bit being set OFF. So, please tell me the place or how to preserve it. 
Also, please advice me whether my logic above is wrong or not, where I made a mistake, how to achieve setfib option in upstream/server, etc. Thank you very much. Best Regards, -- moto kawasaki -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-upstreamsetfib.patch Type: text/x-patch Size: 4186 bytes Desc: not available URL: From vl at nginx.com Fri Nov 15 09:42:07 2013 From: vl at nginx.com (Vladimir Homutov) Date: Fri, 15 Nov 2013 13:42:07 +0400 Subject: pls. help for adding another parameter to ngx_upstream_server In-Reply-To: <20131115.173539.417762286168675573.moto@kawasaki3.org> References: <20131115.173539.417762286168675573.moto@kawasaki3.org> Message-ID: <20131115094206.GA30401@vlpc.i.nginx.com> > [emerg] 3848#0: invalid parameter "setfib=5" in /usr/local/etc/nginx/nginx.conf:18 > > The line 18 of nginx.conf contains setfib=5 (see (a) above.) can you please show full configuration? > This "uscf->flags" has been set in the function > ngx_http_upstream() at line 4434; you are expected to enable specific flags in each balancing module that support it. For example, ip_hash module doesn't support 'backup' flag and thus does not set 'NGX_HTTP_UPSTREAM_BACKUP' in ngx_http_upstream_ip_hash(). I suggest that you have specified something different from the default balancer and thus got this error, since your patch doesn't allow this parametr in it. From rob.stradling at comodo.com Fri Nov 15 10:24:16 2013 From: rob.stradling at comodo.com (Rob Stradling) Date: Fri, 15 Nov 2013 10:24:16 +0000 Subject: [PATCH] SSL: support ALPN (IETF's successor to NPN) In-Reply-To: References: <201311140017.02611.vbart@nginx.com> Message-ID: <5285F650.9050700@comodo.com> On 15/11/13 00:28, Alex wrote: >>> Maybe I'm wrong since English isn't my native language, but should it be: >>> >>> "nginx was built without OpenSSL ALPN or NPN " (s/and/or/) >>> >>> ? >> >> Neither am I, but not really. Double negation makes this tricky, but >> "or" would mean that it was built with one but not both, whereas "and" >> means that it was built with neither. > > Not a native English speaker either - but I think the conjunction "nor" > would do the trick to avoid ambiguity. ;) > > "nginx was built without OpenSSL ALPN nor NPN" I think "neither...nor" is what you're looking for. (I don't think "without...nor" is grammatically correct). Something like... "nginx was built with support for neither ALPN nor NPN" (P.S. I'm English, but I have to say that it's not uncommon for the non-native English speakers to have a better grasp of English grammar than the natives! ;-) ) -- Rob Stradling Senior Research & Development Scientist COMODO - Creating Trust Online From mdounin at mdounin.ru Fri Nov 15 10:25:04 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Nov 2013 14:25:04 +0400 Subject: pls. help for adding another parameter to ngx_upstream_server In-Reply-To: <20131115.173539.417762286168675573.moto@kawasaki3.org> References: <20131115.173539.417762286168675573.moto@kawasaki3.org> Message-ID: <20131115102504.GF95765@mdounin.ru> Hello! On Fri, Nov 15, 2013 at 05:35:39PM +0900, moto kawasaki wrote: > Now, I am struggling to add "setfib=N" parameter to "server" token in > "upstream" clause, and so far failed. Could you please point out use cases for such a parameter? Shouldn't it be something like proxy_bind instead? (See Vladimir's reply for a possible explanation why you patch doesn't work for you.) 
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Nov 15 12:44:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Nov 2013 16:44:12 +0400 Subject: [PATCH] Upstream: fix the cache duration calculation In-Reply-To: <52850BE0.5010007@smartjog.com> References: <5284ED63.60703@smartjog.com> <20131114171435.GD95765@mdounin.ru> <52850BE0.5010007@smartjog.com> Message-ID: <20131115124412.GN95765@mdounin.ru> Hello! On Thu, Nov 14, 2013 at 06:44:00PM +0100, Florent Le Coz wrote: [...] > Where would you then process these values, if not after every > headers have been processed? At the moment, nginx processes them one > by one, in the order there are found in upstream?s response. Do you > suggest that in process_cache_control I add some "if 'expires' has > already been processed" condition, and something equivalent in > 'process_cache_control'? That doesn?t look ideal to me, hence why I > chose to set the valid_sec value once every header has been > processed. > > Do you have any suggestion? It would probably be enough to just look at the u->headers_in.x_accel_expires, like this (untested): --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3599,7 +3599,7 @@ ngx_http_upstream_process_cache_control( return NGX_OK; } - if (r->cache->valid_sec != 0) { + if (r->cache->valid_sec != 0 && u->headers_in.x_accel_expires != NULL) { return NGX_OK; } This way the r->cache->valid_sec will be only checked while parsing Cache-Control if an X-Accel-Expires header was in the response, and thus Cache-Control will be preferred over Expires. This is not perfect solution - e.g., if X-Accel-Expires header was present but ignored or contained an invalid data, the code will do wrong things. On the other hand, it's probably good enough for practical things and certainly much better then what we currently have. Perfectly correct solution would be to store a bit (likely in u->headers_in) to indicate that valid_sec was set based on X-Accel-Expires and shouldn't be overwritten. I'm not sure it worth the effort though. -- Maxim Dounin http://nginx.org/en/donation.html From florent.lecoz at smartjog.com Fri Nov 15 15:35:09 2013 From: florent.lecoz at smartjog.com (Florent Le Coz) Date: Fri, 15 Nov 2013 16:35:09 +0100 Subject: [PATCH] Upstream: fix the cache duration calculation In-Reply-To: <20131115124412.GN95765@mdounin.ru> References: <5284ED63.60703@smartjog.com> <20131114171435.GD95765@mdounin.ru> <52850BE0.5010007@smartjog.com> <20131115124412.GN95765@mdounin.ru> Message-ID: <52863F2D.1050109@smartjog.com> Hi, On 11/15/2013 01:44 PM, Maxim Dounin wrote: [?] > > Perfectly correct solution would be to store a bit (likely in > u->headers_in) to indicate that valid_sec was set based on > X-Accel-Expires and shouldn't be overwritten. > Since there are, in nginx, three headers that can modify the value of valid_sec (Expires, Cache-Control and Expires), I think it would be cleaner to define a priority for each of these headers and to use that priority to decide if we modify or not the valid_sec value. That?s what I?ve done in the attached patch. When setting the value of valid_sec, each header writes its own priority in valid_sec_prio. When processing an other header, instead of checking if the valid_sec is already set, we check if the headers? priority is higher than the one set, before setting (or not) the value found in the header being processed. I?ve set the priorities as: X-Accel-Expires > Cache-Control > Expires. 
I?m not sure about what the priority of X-Accel-Expires should be (but the last two are well defined in the RFC as you correctly pointed in a previous message). > I'm not sure it worth the effort though. I personally think it?s much better to have a well-defined priority between headers modifying the same value, but well? Regards, -- Florent Le Coz Smartjog -------------- next part -------------- # HG changeset patch # User Florent Le Coz # Date 1384527955 0 # Node ID c26efc188ac84a8b8f2bb4478efb795635a3498d # Parent cbb9a6c7493c3c01323fbc4a61be4a9f0af55ef2 Upstream: implement a priority for headers modifying the cache behaviour Instead of setting the cache duration (or disabling it) based on the order in which the headers (X-Accel-Expires, Expires, Cache-Control) are received, each of these header now has a priority that we use to determine which header should be trusted. diff -r cbb9a6c7493c -r c26efc188ac8 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Mon Nov 11 18:49:35 2013 +0400 +++ b/src/http/ngx_http_upstream.c Fri Nov 15 15:05:55 2013 +0000 @@ -3599,7 +3599,7 @@ ngx_http_upstream_process_cache_control( return NGX_OK; } - if (r->cache->valid_sec != 0) { + if (u->headers_in.valid_sec_prio >= NGX_HTTP_UPSTREAM_CACHE_CONTROL_H_P) { return NGX_OK; } @@ -3642,6 +3642,7 @@ ngx_http_upstream_process_cache_control( } r->cache->valid_sec = ngx_time() + n; + u->headers_in.valid_sec_prio = NGX_HTTP_UPSTREAM_CACHE_CONTROL_H_P; } #endif @@ -3674,6 +3675,10 @@ ngx_http_upstream_process_expires(ngx_ht return NGX_OK; } + if (u->headers_in.valid_sec_prio >= NGX_HTTP_UPSTREAM_EXPIRES_H_P) { + return NGX_OK; + } + expires = ngx_http_parse_time(h->value.data, h->value.len); if (expires == NGX_ERROR || expires < ngx_time()) { @@ -3682,6 +3687,7 @@ ngx_http_upstream_process_expires(ngx_ht } r->cache->valid_sec = expires; + u->headers_in.valid_sec_prio = NGX_HTTP_UPSTREAM_EXPIRES_H_P; } #endif @@ -3728,6 +3734,7 @@ ngx_http_upstream_process_accel_expires( default: r->cache->valid_sec = ngx_time() + n; + u->headers_in.valid_sec_prio = NGX_HTTP_UPSTREAM_XA_EXPIRES_H_P; return NGX_OK; } } @@ -3739,6 +3746,7 @@ ngx_http_upstream_process_accel_expires( if (n != NGX_ERROR) { r->cache->valid_sec = n; + u->headers_in.valid_sec_prio = NGX_HTTP_UPSTREAM_XA_EXPIRES_H_P; } } #endif diff -r cbb9a6c7493c -r c26efc188ac8 src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h Mon Nov 11 18:49:35 2013 +0400 +++ b/src/http/ngx_http_upstream.h Fri Nov 15 15:05:55 2013 +0000 @@ -105,6 +105,10 @@ typedef struct { #define NGX_HTTP_UPSTREAM_DOWN 0x0010 #define NGX_HTTP_UPSTREAM_BACKUP 0x0020 +/* Headers priority to control the cache behaviour */ +#define NGX_HTTP_UPSTREAM_EXPIRES_H_P 1 +#define NGX_HTTP_UPSTREAM_CACHE_CONTROL_H_P 2 +#define NGX_HTTP_UPSTREAM_XA_EXPIRES_H_P 3 struct ngx_http_upstream_srv_conf_s { ngx_http_upstream_peer_t peer; @@ -245,6 +249,13 @@ typedef struct { unsigned connection_close:1; unsigned chunked:1; +#if (NGX_HTTP_CACHE) + /* Defines the priority (see NGX_HTTP_UPSTREAM_*_H_P) of the + ngx_http_cache_t.valid_sec value currently set. It is used to decide + if a new header should overwrite this value due to the priority of this + header being higher than the one currently set. 
*/ + unsigned valid_sec_prio:2; +#endif } ngx_http_upstream_headers_in_t; From florent.lecoz at smartjog.com Fri Nov 15 16:41:46 2013 From: florent.lecoz at smartjog.com (Florent Le Coz) Date: Fri, 15 Nov 2013 17:41:46 +0100 Subject: [PATCH] Use the upstream Age header to adjust our cache duration Message-ID: <52864ECA.5060303@smartjog.com> Hello, Here's the second part of a revised patch, trying to address the issue of the Age headers (in upstream?s responses) being ignored by nginx. It relies on the sec_valid_prio value (see previous patch) to determine if the cache duration as been set by a Cache-Control header, and if that?s the case it subtracts this from valid_sec. If the result would be an age in the past, the response is non-cacheable. If the result is "now", the response is also non-cacheable, as it is already done in the case of a max-age=0 in Cache-Control. If age as an invalid value (negative), it is ignored. Note: I use + { ngx_string("Age"), + ngx_http_upstream_process_header_line, + offsetof(ngx_http_upstream_headers_in_t, age), + ngx_http_upstream_copy_header_line, 0, 0 }, + to copy this header in a special pointer, to access it easily in process_headers(), where the valid_sec modification based on age is done. Is this the correct way to do? I could of course remove this ngx_http_upstream_headers_in_t.age variable and look for a Age header in the headers list when I need it, but I think this would be suboptimal. Please correct me if I?m wrong. Regards, -- Florent Le Coz Smartjog -------------- next part -------------- # HG changeset patch # User Florent Le Coz # Date 1384532950 0 # Node ID 721fec3e72d3b273146df60dcb5c45fdc2dbf97a # Parent c26efc188ac84a8b8f2bb4478efb795635a3498d Use the upstream Age header when calculating our caching duration diff -r c26efc188ac8 -r 721fec3e72d3 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Nov 15 15:05:55 2013 +0000 +++ b/src/http/ngx_http_upstream.c Fri Nov 15 16:29:10 2013 +0000 @@ -229,6 +229,11 @@ ngx_http_upstream_header_t ngx_http_ups ngx_http_upstream_copy_header_line, offsetof(ngx_http_headers_out_t, expires), 1 }, + { ngx_string("Age"), + ngx_http_upstream_process_header_line, + offsetof(ngx_http_upstream_headers_in_t, age), + ngx_http_upstream_copy_header_line, 0, 0 }, + { ngx_string("Accept-Ranges"), ngx_http_upstream_process_header_line, offsetof(ngx_http_upstream_headers_in_t, accept_ranges), @@ -1939,6 +1944,8 @@ ngx_http_upstream_process_headers(ngx_ht { ngx_str_t *uri, args; ngx_uint_t i, flags; + time_t now; + ngx_int_t age; ngx_list_part_t *part; ngx_table_elt_t *h; ngx_http_upstream_header_t *hh; @@ -1946,6 +1953,20 @@ ngx_http_upstream_process_headers(ngx_ht umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module); + if (u->cacheable && u->headers_in.age && + u->headers_in.valid_sec_prio == NGX_HTTP_UPSTREAM_CACHE_CONTROL_H_P) + { + age = ngx_atoi(u->headers_in.age->value.data, + u->headers_in.age->value.len); + if (age >= 0 && age != NGX_ERROR) + { + now = ngx_time(); + if (r->cache->valid_sec - age > now) + r->cache->valid_sec -= age; + else + u->cacheable = 0; + } + } if (u->headers_in.x_accel_redirect && !(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT)) { diff -r c26efc188ac8 -r 721fec3e72d3 src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h Fri Nov 15 15:05:55 2013 +0000 +++ b/src/http/ngx_http_upstream.h Fri Nov 15 16:29:10 2013 +0000 @@ -225,6 +225,7 @@ typedef struct { ngx_table_elt_t *connection; ngx_table_elt_t *expires; + ngx_table_elt_t *age; 
ngx_table_elt_t *etag; ngx_table_elt_t *x_accel_expires; ngx_table_elt_t *x_accel_redirect; From mdounin at mdounin.ru Fri Nov 15 16:44:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 15 Nov 2013 20:44:16 +0400 Subject: [PATCH] Upstream: fix the cache duration calculation In-Reply-To: <52863F2D.1050109@smartjog.com> References: <5284ED63.60703@smartjog.com> <20131114171435.GD95765@mdounin.ru> <52850BE0.5010007@smartjog.com> <20131115124412.GN95765@mdounin.ru> <52863F2D.1050109@smartjog.com> Message-ID: <20131115164415.GS95765@mdounin.ru> Hello! On Fri, Nov 15, 2013 at 04:35:09PM +0100, Florent Le Coz wrote: > Hi, > > On 11/15/2013 01:44 PM, Maxim Dounin wrote: > [?] > > > > Perfectly correct solution would be to store a bit (likely in > > u->headers_in) to indicate that valid_sec was set based on > > X-Accel-Expires and shouldn't be overwritten. > > > > Since there are, in nginx, three headers that can modify the value > of valid_sec (Expires, Cache-Control and Expires), I think it would > be cleaner to define a priority for each of these headers and to use > that priority to decide if we modify or not the valid_sec value. > > That?s what I?ve done in the attached patch. > When setting the value of valid_sec, each header writes its own > priority in valid_sec_prio. > When processing an other header, instead of checking if the > valid_sec is already set, we check if the headers? priority is > higher than the one set, before setting (or not) the value found in > the header being processed. > > I?ve set the priorities as: X-Accel-Expires > Cache-Control > Expires. > I?m not sure about what the priority of X-Accel-Expires should be > (but the last two are well defined in the RFC as you correctly > pointed in a previous message). That's certainly looks like an overkill. At most, we need just one bit to disambiguate the Cache-Control header processing, as it needs to know whether valid_sec was set by X-Accel-Expires (and then it shouldn't do anything) or by Expires (and then it's expected to override the value set). (A side note: we might also want to do something with u->cacheable set to 0 by Expires. The Expires header is expected to be overriden by Cache-Control, but it doesn't happen if an Expires header contained a date in the past and u->cacheable was set to 0 due to it.) [...] > @@ -3674,6 +3675,10 @@ ngx_http_upstream_process_expires(ngx_ht > return NGX_OK; > } > > + if (u->headers_in.valid_sec_prio >= NGX_HTTP_UPSTREAM_EXPIRES_H_P) { > + return NGX_OK; > + } > + Just a side note: this change is a nop as the check isn't reached if u->cache->valid_sec is set. -- Maxim Dounin http://nginx.org/en/donation.html From florent.lecoz at smartjog.com Fri Nov 15 16:53:42 2013 From: florent.lecoz at smartjog.com (Florent Le Coz) Date: Fri, 15 Nov 2013 17:53:42 +0100 Subject: [PATCH] Upstream: fix the cache duration calculation In-Reply-To: <20131115164415.GS95765@mdounin.ru> References: <5284ED63.60703@smartjog.com> <20131114171435.GD95765@mdounin.ru> <52850BE0.5010007@smartjog.com> <20131115124412.GN95765@mdounin.ru> <52863F2D.1050109@smartjog.com> <20131115164415.GS95765@mdounin.ru> Message-ID: <52865196.5080302@smartjog.com> On 11/15/2013 05:44 PM, Maxim Dounin wrote: >> I?ve set the priorities as: X-Accel-Expires > Cache-Control > Expires. >> I?m not sure about what the priority of X-Accel-Expires should be >> (but the last two are well defined in the RFC as you correctly >> pointed in a previous message). > > That's certainly looks like an overkill. 
At most, we need just > one bit to disambiguate the Cache-Control header processing, as it > needs to know whether valid_sec was set by X-Accel-Expires (and > then it shouldn't do anything) or by Expires (and then it's > expected to override the value set). > In my next patch (using the Age header), we need to know if that value was set by Cache-Control (and not by Expires or X-Accel-Expires), because that?s the only case in which the valid_sec value should be modified. That would require a new bit to store this information. > (A side note: we might also want to do something with u->cacheable > set to 0 by Expires. The Expires header is expected to be > overriden by Cache-Control, but it doesn't happen if an Expires > header contained a date in the past and u->cacheable was set to 0 > due to it.) > Right. -- Florent Le Coz Smartjog From steve at stevemorin.com Sat Nov 16 05:35:45 2013 From: steve at stevemorin.com (Steve Morin) Date: Fri, 15 Nov 2013 21:35:45 -0800 Subject: Nginx Logging to Zeromq Module - Sparkngin Message-ID: Does anyone have experience integrating zeromq with Nginx. I am looking for some pointers, to see what concerns I should look out for. I am trying to contribute this code to a open source project. -Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From moto at kawasaki3.org Sat Nov 16 09:31:42 2013 From: moto at kawasaki3.org (moto kawasaki) Date: Sat, 16 Nov 2013 18:31:42 +0900 (JST) Subject: pls. help for adding another parameter to ngx_upstream_server In-Reply-To: <20131115094206.GA30401@vlpc.i.nginx.com> <20131115102504.GF95765@mdounin.ru> References: <20131115.173539.417762286168675573.moto@kawasaki3.org> <20131115094206.GA30401@vlpc.i.nginx.com> Message-ID: <20131116.183142.557495497606639364.moto@kawasaki3.org> Mr. Homutov and Mr. Dounin: Thank you very much for your quick replies. I'd apologize lack of information, and also my laziness not to test simplified configuration -- details follows. vl> > [emerg] 3848#0: invalid parameter "setfib=5" in /usr/local/etc/nginx/nginx.conf:18 vl> > vl> > The line 18 of nginx.conf contains setfib=5 (see (a) above.) vl> vl> can you please show full configuration? This is quite useful suggestion, since after I cut off surplus lines from my nginx.conf, nginx seems to stop aborting with emerge message. I am so embarrassed for me not to try this simplified configuration. Even now, I cannot reach upstream yet, nor see any packets on the interface. Therefore, it doesn't work yet, but please give me some time to check out what happens inside. vl> > This "uscf->flags" has been set in the function vl> > ngx_http_upstream() at line 4434; vl> vl> you are expected to enable specific flags in each balancing module that support vl> it. For example, ip_hash module doesn't support 'backup' flag and thus does vl> not set 'NGX_HTTP_UPSTREAM_BACKUP' in ngx_http_upstream_ip_hash(). vl> vl> I suggest that you have specified something different from the default balancer vl> and thus got this error, since your patch doesn't allow this parametr in it. I guess setting that flag is done at line 4434 of http/ngx_http_upstream.c, with uscf = ngx_upstream_add() http://lxr.evanmiller.org/http/source/http/ngx_http_upstream.c#L4415 If true, I do want set NGX_HTTP_UPSTREAM_SETFIB here, and did it. mdounin> > Now, I am struggling to add "setfib=N" parameter to "server" token in mdounin> > "upstream" clause, and so far failed. mdounin> mdounin> Could you please point out use cases for such a parameter? 
mdounin> Shouldn't it be something like proxy_bind instead? Yes, suppose you are hosting web servers for multiple clients, and those clients requires to be root on their web servers. My nginx server locates between their (hosted) web servers and the Internet as http proxy server. My current architecture is one nginx node for each client node, which is something like this. Internet ---+--- nginx_A ------ web_server_A (for client A) | +--- nginx_B ------ web_server_B | +--- nginx_C ------ web_server_C The reasen why I use three nginx nodes is to forbid layer2 attack among clients' nodes. ex.) ARP spoofing attack from web_server_A to B. Then, as number of clients grows, I have to operate/administer that number of nginx nodes. This is O(N), and now it is reaching the upper limit (of my time mainly). So I would like to use one nginx node for several clients' nodes, like this: Internet ------ nginx_X ---+--- web_server_A | +--- web_server_B | +--- web_server_C Now, in order to avoid ARP spoofing, web_server_[ABC] locates in different tagged VLAN, and nginx_X understand such VLANS as different interfaces (ex. igb0.100, igb0.101,...) But nginx_X node also does ipfw NAPT (for SSH, SMTP, etc.), and thus it do routing (sysctl -w net.inet.ip.forwarding=1). So, I want to separate those VLANs using setfib in upstream/server. I am sure that this can be achieved by using ipfw ACLs too, but in that case I have to take care of ACLs for all existing clients' nodes when adding a new client node. # Uh, I like configuring nginx much more than that of ipfw :-) Now, Thank you two (and others) very much!! I will check the behavior of nginx with simplified configuration, and perhaps shall come back with questions. Best Regards. -- moto kawasaki From mat999 at gmail.com Sat Nov 16 12:34:20 2013 From: mat999 at gmail.com (SplitIce) Date: Sat, 16 Nov 2013 23:04:20 +1030 Subject: IPv6 & IPv4 backend with proxy_bind Message-ID: Looking at the documentation it seems there is no way to specify a proxy bind address for both IPv4 and IPv6. You can specify one or the other, but never both. This is a particular issue when a configuration is setup to allow for a failure in IPv6 transit / routing. Is it possible to get a proxy_bind_v6 directive? Regards, Mathew -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto at unbit.it Sun Nov 17 07:50:14 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Sun, 17 Nov 2013 08:50:14 +0100 Subject: [PATCH] ssl support for uwsgi upstream Message-ID: <569818c25b54ec83c244c706e7210d54.squirrel@manage.unbit.it> Hi all, attached you will find a patch adding two new options to the uwsgi upstream module: uwsgi_ssl (default off) uwsgi_ssl_session_reuse uwsgi over ssl has been officially added today to the uWSGI project (1.9.20) It has been a requirement of a single customer so i do not know how much real uses it has, but being a very tiny (and non invasive) patch i think it will be of interest for others. Regards -- Roberto De Ioris http://unbit.it -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx_uwsgi_ssl.patch Type: application/octet-stream Size: 4321 bytes Desc: not available URL: From mdounin at mdounin.ru Mon Nov 18 11:45:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Nov 2013 15:45:54 +0400 Subject: IPv6 & IPv4 backend with proxy_bind In-Reply-To: References: Message-ID: <20131118114554.GB41579@mdounin.ru> Hello! 
On Sat, Nov 16, 2013 at 11:04:20PM +1030, SplitIce wrote: > Looking at the documentation it seems there is no way to specify a proxy > bind address for both IPv4 and IPv6. > > You can specify one or the other, but never both. This is a particular > issue when a configuration is setup to allow for a failure in IPv6 transit > / routing. > > Is it possible to get a proxy_bind_v6 directive? Could you please clarify the intended use case? The proxy_bind directive is expected to be used to force an IP address used to connect an upstream, originally - to make sure the outgoing address used is one allowed by upstream's security restrictions. Just using distinct proxy_bind directives for different upstreams is usually expected to be enough (if at all needed). -- Maxim Dounin http://nginx.org/en/donation.html From mat999 at gmail.com Mon Nov 18 11:54:43 2013 From: mat999 at gmail.com (SplitIce) Date: Mon, 18 Nov 2013 22:24:43 +1030 Subject: IPv6 & IPv4 backend with proxy_bind In-Reply-To: <20131118114554.GB41579@mdounin.ru> References: <20131118114554.GB41579@mdounin.ru> Message-ID: Hi, We use proxy_bind to ensure traffic always goes out via the same address as the incoming request i.e the bound address where a server has many addresses. This is a hard restriction in our use case. We are looking to add support for IPv6 backends, we would like to allocate a single IPv6 outgoing address per client although this is not a fixed restriction at this stage. IPv6 backends may be used in the same upstream block as IPv4 addresses (and we encourage this, as some network providers are prone to IPv6 related issues). We need to be able to maintain our existing system of binding v4 addresses while allowing for additional support for ipv6 (it is not possible to use IPv6 at all while using a v4 bound address as it will fail with a binding error as expected). For one we expect to see upstreams such as upstream customer_1 { server 2001:...:7334 [...] server 123.1.2.3 backup; } become very common in the near future with the increased adoption of IPv6. We have already had several requests for such functionality in the past year. Regards, Mathew On Mon, Nov 18, 2013 at 10:15 PM, Maxim Dounin wrote: > Hello! > > On Sat, Nov 16, 2013 at 11:04:20PM +1030, SplitIce wrote: > > > Looking at the documentation it seems there is no way to specify a proxy > > bind address for both IPv4 and IPv6. > > > > You can specify one or the other, but never both. This is a particular > > issue when a configuration is setup to allow for a failure in IPv6 > transit > > / routing. > > > > Is it possible to get a proxy_bind_v6 directive? > > Could you please clarify the intended use case? > > The proxy_bind directive is expected to be used to force an IP > address used to connect an upstream, originally - to make sure the > outgoing address used is one allowed by upstream's security > restrictions. Just using distinct proxy_bind directives for > different upstreams is usually expected to be enough (if at all > needed). > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From razavet at stg-interactive.com Mon Nov 18 13:22:47 2013 From: razavet at stg-interactive.com (Philippe RAZAVET) Date: Mon, 18 Nov 2013 14:22:47 +0100 Subject: unsuscribe Message-ID: <528A14A7.6090702@stg-interactive.com> From mdounin at mdounin.ru Mon Nov 18 13:27:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Nov 2013 17:27:40 +0400 Subject: IPv6 & IPv4 backend with proxy_bind In-Reply-To: References: <20131118114554.GB41579@mdounin.ru> Message-ID: <20131118132740.GH41579@mdounin.ru> Hello! On Mon, Nov 18, 2013 at 10:24:43PM +1030, SplitIce wrote: > Hi, > > We use proxy_bind to ensure traffic always goes out via the same address as > the incoming request i.e the bound address where a server has many > addresses. This is a hard restriction in our use case. > > We are looking to add support for IPv6 backends, we would like to allocate > a single IPv6 outgoing address per client although this is not a fixed > restriction at this stage. IPv6 backends may be used in the same upstream > block as IPv4 addresses (and we encourage this, as some network providers > are prone to IPv6 related issues). > > We need to be able to maintain our existing system of binding v4 addresses > while allowing for additional support for ipv6 (it is not possible to use > IPv6 at all while using a v4 bound address as it will fail with a binding > error as expected). > > For one we expect to see upstreams such as > > upstream customer_1 { > server 2001:...:7334 > [...] > server 123.1.2.3 backup; > } > > become very common in the near future with the increased adoption of IPv6. > We have already had several requests for such functionality in the past > year. Ok, I see what you are trying to do. A working solution would be to use distinct upstream blocks for ipv6 and ipv4 addresses and an error_page based fallback (with proxy_bind configured to appropriate addresses in distinct locations). Given the fact that use of proxy_bind is uncommon by itself, and it's use in multi-protocol configuration even more uncommon, I tend to think that exisiting solution is good enough. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Nov 18 13:28:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Nov 2013 17:28:41 +0400 Subject: [PATCH] ssl support for uwsgi upstream In-Reply-To: <569818c25b54ec83c244c706e7210d54.squirrel@manage.unbit.it> References: <569818c25b54ec83c244c706e7210d54.squirrel@manage.unbit.it> Message-ID: <20131118132841.GI41579@mdounin.ru> Hello! On Sun, Nov 17, 2013 at 08:50:14AM +0100, Roberto De Ioris wrote: > Hi all, attached you will find a patch adding two new options to the uwsgi > upstream module: > > uwsgi_ssl (default off) > > uwsgi_ssl_session_reuse > > uwsgi over ssl has been officially added today to the uWSGI project (1.9.20) > > It has been a requirement of a single customer so i do not know how much > real uses it has, but being a very tiny (and non invasive) patch i think > it will be of interest for others. Just a quick note: The proxy_ssl_protocols and proxy_ssl_ciphers directives were intoduced in nginx 1.5.6. So if you want the patch to be committed, it probably needs to be adapted be in line with proxy counterpart. 
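For illustration only, a location using the proposed directives might end up looking something like the sketch below; uwsgi_ssl and uwsgi_ssl_session_reuse are the names from the patch as posted, while the uwsgi_ssl_protocols and uwsgi_ssl_ciphers lines merely assume the patch gets reworked to mirror the proxy_ssl_* directives from 1.5.6. None of this is committed nginx syntax, and the backend address is a placeholder:

    location / {
        include                 uwsgi_params;
        uwsgi_pass              10.0.0.10:3031;

        # directives from the posted patch (proposed, not committed)
        uwsgi_ssl               on;
        uwsgi_ssl_session_reuse on;

        # assumed counterparts of proxy_ssl_protocols / proxy_ssl_ciphers
        uwsgi_ssl_protocols     TLSv1 TLSv1.1 TLSv1.2;
        uwsgi_ssl_ciphers       HIGH:!aNULL:!MD5;
    }
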
-- Maxim Dounin http://nginx.org/en/donation.html From roberto at unbit.it Mon Nov 18 13:31:13 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Mon, 18 Nov 2013 14:31:13 +0100 Subject: [PATCH] ssl support for uwsgi upstream In-Reply-To: <20131118132841.GI41579@mdounin.ru> References: <569818c25b54ec83c244c706e7210d54.squirrel@manage.unbit.it> <20131118132841.GI41579@mdounin.ru> Message-ID: <739ccf46a038a0156cbdaaf35de1a31b.squirrel@manage.unbit.it> > Hello! > > On Sun, Nov 17, 2013 at 08:50:14AM +0100, Roberto De Ioris wrote: > >> Hi all, attached you will find a patch adding two new options to the >> uwsgi >> upstream module: >> >> uwsgi_ssl (default off) >> >> uwsgi_ssl_session_reuse >> >> uwsgi over ssl has been officially added today to the uWSGI project >> (1.9.20) >> >> It has been a requirement of a single customer so i do not know how much >> real uses it has, but being a very tiny (and non invasive) patch i think >> it will be of interest for others. > > Just a quick note: > > The proxy_ssl_protocols and proxy_ssl_ciphers directives were > intoduced in nginx 1.5.6. So if you want the patch to be > committed, it probably needs to be adapted be in line with proxy > counterpart. > > -- > Hi Maxim, you mean the patch can (eventually) only be applied to 1.5 and not to 1.4 ? Or you mean that i should send a 1.5 version too for avoiding inconsistency ? Thanks a lot for your time -- Roberto De Ioris http://unbit.it From mdounin at mdounin.ru Mon Nov 18 14:09:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Nov 2013 18:09:08 +0400 Subject: pls. help for adding another parameter to ngx_upstream_server In-Reply-To: <20131116.183142.557495497606639364.moto@kawasaki3.org> References: <20131115.173539.417762286168675573.moto@kawasaki3.org> <20131115094206.GA30401@vlpc.i.nginx.com> <20131116.183142.557495497606639364.moto@kawasaki3.org> Message-ID: <20131118140907.GL41579@mdounin.ru> Hello! On Sat, Nov 16, 2013 at 06:31:42PM +0900, moto kawasaki wrote: [...] > mdounin> > Now, I am struggling to add "setfib=N" parameter to "server" token in > mdounin> > "upstream" clause, and so far failed. > mdounin> > mdounin> Could you please point out use cases for such a parameter? > mdounin> Shouldn't it be something like proxy_bind instead? > > Yes, suppose you are hosting web servers for multiple clients, and > those clients requires to be root on their web servers. > My nginx server locates between their (hosted) web servers and the > Internet as http proxy server. > > My current architecture is one nginx node for each client node, which > is something like this. > > Internet ---+--- nginx_A ------ web_server_A (for client A) > | > +--- nginx_B ------ web_server_B > | > +--- nginx_C ------ web_server_C > > The reasen why I use three nginx nodes is to forbid layer2 attack > among clients' nodes. ex.) ARP spoofing attack from web_server_A to B. > > Then, as number of clients grows, I have to operate/administer that > number of nginx nodes. This is O(N), and now it is reaching the upper > limit (of my time mainly). > > So I would like to use one nginx node for several clients' nodes, like > this: > > Internet ------ nginx_X ---+--- web_server_A > | > +--- web_server_B > | > +--- web_server_C > > Now, in order to avoid ARP spoofing, web_server_[ABC] locates in > different tagged VLAN, and nginx_X understand such VLANS as different > interfaces (ex. igb0.100, igb0.101,...) 
> > But nginx_X node also does ipfw NAPT (for SSH, SMTP, etc.), and thus > it do routing (sysctl -w net.inet.ip.forwarding=1). > > So, I want to separate those VLANs using setfib in upstream/server. > I am sure that this can be achieved by using ipfw ACLs too, but in > that case I have to take care of ACLs for all existing clients' nodes > when adding a new client node. Well, as far as I can tell there is no reasons to do per-server setfib in this usecase, and proxy_setfib N; should be enough. It should be much easier to implement than what you are trying to do in your patch. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Nov 18 14:21:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Nov 2013 18:21:09 +0400 Subject: [PATCH] ssl support for uwsgi upstream In-Reply-To: <739ccf46a038a0156cbdaaf35de1a31b.squirrel@manage.unbit.it> References: <569818c25b54ec83c244c706e7210d54.squirrel@manage.unbit.it> <20131118132841.GI41579@mdounin.ru> <739ccf46a038a0156cbdaaf35de1a31b.squirrel@manage.unbit.it> Message-ID: <20131118142109.GM41579@mdounin.ru> Hello! On Mon, Nov 18, 2013 at 02:31:13PM +0100, Roberto De Ioris wrote: > > > Hello! > > > > On Sun, Nov 17, 2013 at 08:50:14AM +0100, Roberto De Ioris wrote: > > > >> Hi all, attached you will find a patch adding two new options to the > >> uwsgi > >> upstream module: > >> > >> uwsgi_ssl (default off) > >> > >> uwsgi_ssl_session_reuse > >> > >> uwsgi over ssl has been officially added today to the uWSGI project > >> (1.9.20) > >> > >> It has been a requirement of a single customer so i do not know how much > >> real uses it has, but being a very tiny (and non invasive) patch i think > >> it will be of interest for others. > > > > Just a quick note: > > > > The proxy_ssl_protocols and proxy_ssl_ciphers directives were > > intoduced in nginx 1.5.6. So if you want the patch to be > > committed, it probably needs to be adapted be in line with proxy > > counterpart. > > > > -- > > > > Hi Maxim, you mean the patch can (eventually) only be applied to 1.5 and > not to 1.4 ? Or you mean that i should send a 1.5 version too for avoiding > inconsistency ? Commits are only expected to happen on mainline branch, 1.5.x currently, where all new features appear. The 1.4.x branch is stable and, as per our current merge policy, only expected to get critical bugfixes and security fixes. (Previously, we used to merge some features as well, but this is not expected to happen anymore, as it seems to cause more harm than good.) -- Maxim Dounin http://nginx.org/en/donation.html From albertcasademont at gmail.com Mon Nov 18 15:10:22 2013 From: albertcasademont at gmail.com (Albert Casademont Filella) Date: Mon, 18 Nov 2013 16:10:22 +0100 Subject: Is it possible a config option for the TLS record size? Message-ID: Hi all! Right now we have compiled our own nginx instance to change the TLS record size [1], which is fine, but I am wondering if it would be very complicated to change this for a config option. I am new to the internals of nginx, and before trying to make a patch it would be nice to have some opinions on the topic. Thanks! Albert [1] https://github.com/nginx/nginx/blob/master/src/event/ngx_event_openssl.h#L97 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From roberto at unbit.it Mon Nov 18 15:21:45 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Mon, 18 Nov 2013 16:21:45 +0100 Subject: [PATCH] ssl support for uwsgi upstream In-Reply-To: <20131118142109.GM41579@mdounin.ru> References: <569818c25b54ec83c244c706e7210d54.squirrel@manage.unbit.it> <20131118132841.GI41579@mdounin.ru> <739ccf46a038a0156cbdaaf35de1a31b.squirrel@manage.unbit.it> <20131118142109.GM41579@mdounin.ru> Message-ID: <22bf4154a478c1ae97dbddf161081ab0.squirrel@manage.unbit.it> > Hello! > > On Mon, Nov 18, 2013 at 02:31:13PM +0100, Roberto De Ioris wrote: > >> >> > Hello! >> > >> > On Sun, Nov 17, 2013 at 08:50:14AM +0100, Roberto De Ioris wrote: >> > >> >> Hi all, attached you will find a patch adding two new options to the >> >> uwsgi >> >> upstream module: >> >> >> >> uwsgi_ssl (default off) >> >> >> >> uwsgi_ssl_session_reuse >> >> >> >> uwsgi over ssl has been officially added today to the uWSGI project >> >> (1.9.20) >> >> >> >> It has been a requirement of a single customer so i do not know how >> much >> >> real uses it has, but being a very tiny (and non invasive) patch i >> think >> >> it will be of interest for others. >> > >> > Just a quick note: >> > >> > The proxy_ssl_protocols and proxy_ssl_ciphers directives were >> > intoduced in nginx 1.5.6. So if you want the patch to be >> > committed, it probably needs to be adapted be in line with proxy >> > counterpart. >> > >> > -- >> > >> >> Hi Maxim, you mean the patch can (eventually) only be applied to 1.5 and >> not to 1.4 ? Or you mean that i should send a 1.5 version too for >> avoiding >> inconsistency ? > > Commits are only expected to happen on mainline branch, 1.5.x > currently, where all new features appear. > > The 1.4.x branch is stable and, as per our current merge policy, > only expected to get critical bugfixes and security fixes. > (Previously, we used to merge some features as well, but this is > not expected to happen anymore, as it seems to cause more harm > than good.) > Seems reasonable, i will make a new patch for 1.5 Thanks again -- Roberto De Ioris http://unbit.it From mdounin at mdounin.ru Mon Nov 18 16:52:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Nov 2013 16:52:03 +0000 Subject: [nginx] Upstream: cache revalidation with conditional requests. Message-ID: details: http://hg.nginx.org/nginx/rev/43ccaf8e8728 branches: changeset: 5441:43ccaf8e8728 user: Maxim Dounin date: Mon Nov 18 20:48:22 2013 +0400 description: Upstream: cache revalidation with conditional requests. The following new directives are introduced: proxy_cache_revalidate, fastcgi_cache_revalidate, scgi_cache_revalidate, uwsgi_cache_revalidate. Default is off. When set to on, they enable cache revalidation using conditional requests with If-Modified-Since for expired cache items. As of now, no attempts are made to merge headers given in a 304 response during cache revalidation with headers previously stored in a cache item. Headers in a 304 response are only used to calculate new validity time of a cache item. 
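For reference, a minimal proxy configuration exercising the new directive might look like the sketch below; the cache path, zone name and backend address are placeholders, not part of the commit:

    http {
        proxy_cache_path  /data/nginx/cache  keys_zone=one:10m;

        server {
            listen 8080;

            location / {
                proxy_pass              http://127.0.0.1:8081;
                proxy_cache             one;
                proxy_cache_valid       200 301 302 10m;

                # revalidate expired items with If-Modified-Since
                # instead of fetching the full response again
                proxy_cache_revalidate  on;
            }
        }
    }
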
diffstat: src/http/modules/ngx_http_fastcgi_module.c | 14 +++- src/http/modules/ngx_http_proxy_module.c | 14 +++- src/http/modules/ngx_http_scgi_module.c | 14 +++- src/http/modules/ngx_http_uwsgi_module.c | 14 +++- src/http/ngx_http_cache.h | 6 +- src/http/ngx_http_file_cache.c | 111 +++++++++++++++++++++++++++++ src/http/ngx_http_upstream.c | 85 ++++++++++++++++++++++ src/http/ngx_http_upstream.h | 2 + 8 files changed, 254 insertions(+), 6 deletions(-) diffs (truncated from 459 to 300 lines): diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -405,6 +405,13 @@ static ngx_command_t ngx_http_fastcgi_c offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_lock_timeout), NULL }, + { ngx_string("fastcgi_cache_revalidate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_revalidate), + NULL }, + #endif { ngx_string("fastcgi_temp_path"), @@ -563,7 +570,8 @@ static ngx_str_t ngx_http_fastcgi_hide_ #if (NGX_HTTP_CACHE) static ngx_keyval_t ngx_http_fastcgi_cache_headers[] = { - { ngx_string("HTTP_IF_MODIFIED_SINCE"), ngx_string("") }, + { ngx_string("HTTP_IF_MODIFIED_SINCE"), + ngx_string("$upstream_cache_last_modified") }, { ngx_string("HTTP_IF_UNMODIFIED_SINCE"), ngx_string("") }, { ngx_string("HTTP_IF_NONE_MATCH"), ngx_string("") }, { ngx_string("HTTP_IF_MATCH"), ngx_string("") }, @@ -2336,6 +2344,7 @@ ngx_http_fastcgi_create_loc_conf(ngx_con conf->upstream.cache_valid = NGX_CONF_UNSET_PTR; conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_revalidate = NGX_CONF_UNSET; #endif conf->upstream.hide_headers = NGX_CONF_UNSET_PTR; @@ -2582,6 +2591,9 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf ngx_conf_merge_msec_value(conf->upstream.cache_lock_timeout, prev->upstream.cache_lock_timeout, 5000); + ngx_conf_merge_value(conf->upstream.cache_revalidate, + prev->upstream.cache_revalidate, 0); + #endif ngx_conf_merge_value(conf->upstream.pass_request_headers, diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -465,6 +465,13 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_timeout), NULL }, + { ngx_string("proxy_cache_revalidate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_revalidate), + NULL }, + #endif { ngx_string("proxy_temp_path"), @@ -622,7 +629,8 @@ static ngx_keyval_t ngx_http_proxy_cach { ngx_string("Keep-Alive"), ngx_string("") }, { ngx_string("Expect"), ngx_string("") }, { ngx_string("Upgrade"), ngx_string("") }, - { ngx_string("If-Modified-Since"), ngx_string("") }, + { ngx_string("If-Modified-Since"), + ngx_string("$upstream_cache_last_modified") }, { ngx_string("If-Unmodified-Since"), ngx_string("") }, { ngx_string("If-None-Match"), ngx_string("") }, { ngx_string("If-Match"), ngx_string("") }, @@ -2454,6 +2462,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ conf->upstream.cache_valid = NGX_CONF_UNSET_PTR; conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; + 
conf->upstream.cache_revalidate = NGX_CONF_UNSET; #endif conf->upstream.hide_headers = NGX_CONF_UNSET_PTR; @@ -2710,6 +2719,9 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t ngx_conf_merge_msec_value(conf->upstream.cache_lock_timeout, prev->upstream.cache_lock_timeout, 5000); + ngx_conf_merge_value(conf->upstream.cache_revalidate, + prev->upstream.cache_revalidate, 0); + #endif ngx_conf_merge_str_value(conf->method, prev->method, ""); diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c +++ b/src/http/modules/ngx_http_scgi_module.c @@ -262,6 +262,13 @@ static ngx_command_t ngx_http_scgi_comma offsetof(ngx_http_scgi_loc_conf_t, upstream.cache_lock_timeout), NULL }, + { ngx_string("scgi_cache_revalidate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_scgi_loc_conf_t, upstream.cache_revalidate), + NULL }, + #endif { ngx_string("scgi_temp_path"), @@ -369,7 +376,8 @@ static ngx_str_t ngx_http_scgi_hide_head #if (NGX_HTTP_CACHE) static ngx_keyval_t ngx_http_scgi_cache_headers[] = { - { ngx_string("HTTP_IF_MODIFIED_SINCE"), ngx_string("") }, + { ngx_string("HTTP_IF_MODIFIED_SINCE"), + ngx_string("$upstream_cache_last_modified") }, { ngx_string("HTTP_IF_UNMODIFIED_SINCE"), ngx_string("") }, { ngx_string("HTTP_IF_NONE_MATCH"), ngx_string("") }, { ngx_string("HTTP_IF_MATCH"), ngx_string("") }, @@ -1093,6 +1101,7 @@ ngx_http_scgi_create_loc_conf(ngx_conf_t conf->upstream.cache_valid = NGX_CONF_UNSET_PTR; conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_revalidate = NGX_CONF_UNSET; #endif conf->upstream.hide_headers = NGX_CONF_UNSET_PTR; @@ -1333,6 +1342,9 @@ ngx_http_scgi_merge_loc_conf(ngx_conf_t ngx_conf_merge_msec_value(conf->upstream.cache_lock_timeout, prev->upstream.cache_lock_timeout, 5000); + ngx_conf_merge_value(conf->upstream.cache_revalidate, + prev->upstream.cache_revalidate, 0); + #endif ngx_conf_merge_value(conf->upstream.pass_request_headers, diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -289,6 +289,13 @@ static ngx_command_t ngx_http_uwsgi_comm offsetof(ngx_http_uwsgi_loc_conf_t, upstream.cache_lock_timeout), NULL }, + { ngx_string("uwsgi_cache_revalidate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_uwsgi_loc_conf_t, upstream.cache_revalidate), + NULL }, + #endif { ngx_string("uwsgi_temp_path"), @@ -402,7 +409,8 @@ static ngx_str_t ngx_http_uwsgi_hide_hea #if (NGX_HTTP_CACHE) static ngx_keyval_t ngx_http_uwsgi_cache_headers[] = { - { ngx_string("HTTP_IF_MODIFIED_SINCE"), ngx_string("") }, + { ngx_string("HTTP_IF_MODIFIED_SINCE"), + ngx_string("$upstream_cache_last_modified") }, { ngx_string("HTTP_IF_UNMODIFIED_SINCE"), ngx_string("") }, { ngx_string("HTTP_IF_NONE_MATCH"), ngx_string("") }, { ngx_string("HTTP_IF_MATCH"), ngx_string("") }, @@ -1130,6 +1138,7 @@ ngx_http_uwsgi_create_loc_conf(ngx_conf_ conf->upstream.cache_valid = NGX_CONF_UNSET_PTR; conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_revalidate = NGX_CONF_UNSET; #endif conf->upstream.hide_headers = NGX_CONF_UNSET_PTR; @@ -1370,6 +1379,9 @@ 
ngx_http_uwsgi_merge_loc_conf(ngx_conf_t ngx_conf_merge_msec_value(conf->upstream.cache_lock_timeout, prev->upstream.cache_lock_timeout, 5000); + ngx_conf_merge_value(conf->upstream.cache_revalidate, + prev->upstream.cache_revalidate, 0); + #endif ngx_conf_merge_value(conf->upstream.pass_request_headers, diff --git a/src/http/ngx_http_cache.h b/src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h +++ b/src/http/ngx_http_cache.h @@ -19,8 +19,9 @@ #define NGX_HTTP_CACHE_EXPIRED 3 #define NGX_HTTP_CACHE_STALE 4 #define NGX_HTTP_CACHE_UPDATING 5 -#define NGX_HTTP_CACHE_HIT 6 -#define NGX_HTTP_CACHE_SCARCE 7 +#define NGX_HTTP_CACHE_REVALIDATED 6 +#define NGX_HTTP_CACHE_HIT 7 +#define NGX_HTTP_CACHE_SCARCE 8 #define NGX_HTTP_CACHE_KEY_LEN 16 @@ -143,6 +144,7 @@ void ngx_http_file_cache_create_key(ngx_ ngx_int_t ngx_http_file_cache_open(ngx_http_request_t *r); void ngx_http_file_cache_set_header(ngx_http_request_t *r, u_char *buf); void ngx_http_file_cache_update(ngx_http_request_t *r, ngx_temp_file_t *tf); +void ngx_http_file_cache_update_header(ngx_http_request_t *r); ngx_int_t ngx_http_cache_send(ngx_http_request_t *); void ngx_http_file_cache_free(ngx_http_cache_t *c, ngx_temp_file_t *tf); time_t ngx_http_file_cache_valid(ngx_array_t *cache_valid, ngx_uint_t status); diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -53,6 +53,7 @@ ngx_str_t ngx_http_cache_status[] = { ngx_string("EXPIRED"), ngx_string("STALE"), ngx_string("UPDATING"), + ngx_string("REVALIDATED"), ngx_string("HIT") }; @@ -971,6 +972,116 @@ ngx_http_file_cache_update(ngx_http_requ } +void +ngx_http_file_cache_update_header(ngx_http_request_t *r) +{ + ssize_t n; + ngx_err_t err; + ngx_file_t file; + ngx_file_info_t fi; + ngx_http_cache_t *c; + ngx_http_file_cache_header_t h; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http file cache update header"); + + c = r->cache; + + ngx_memzero(&file, sizeof(ngx_file_t)); + + file.name = c->file.name; + file.log = r->connection->log; + file.fd = ngx_open_file(file.name.data, NGX_FILE_RDWR, NGX_FILE_OPEN, 0); + + if (file.fd == NGX_INVALID_FILE) { + err = ngx_errno; + + /* cache file may have been deleted */ + + if (err == NGX_ENOENT) { + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http file cache \"%s\" not found", + file.name.data); + return; + } + + ngx_log_error(NGX_LOG_CRIT, r->connection->log, err, + ngx_open_file_n " \"%s\" failed", file.name.data); + return; + } + + /* + * make sure cache file wasn't replaced; + * if it was, do nothing + */ + + if (ngx_fd_info(file.fd, &fi) == NGX_FILE_ERROR) { + ngx_log_error(NGX_LOG_CRIT, r->connection->log, ngx_errno, + ngx_fd_info_n " \"%s\" failed", file.name.data); + goto done; + } + + if (c->uniq != ngx_file_uniq(&fi) + || c->length != ngx_file_size(&fi)) + { + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http file cache \"%s\" changed", + file.name.data); + goto done; + } + + n = ngx_read_file(&file, (u_char *) &h, + sizeof(ngx_http_file_cache_header_t), 0); + + if (n == NGX_ERROR) { + goto done; + } + + if ((size_t) n != sizeof(ngx_http_file_cache_header_t)) { + ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, + ngx_read_file_n " read only %z of %z from \"%s\"", + n, sizeof(ngx_http_file_cache_header_t), file.name.data); + goto done; + } + + if (h.last_modified != c->last_modified + || h.crc32 != c->crc32 + || h.header_start != c->header_start + || h.body_start != c->body_start) 
+ { + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http file cache \"%s\" content changed", + file.name.data); + goto done; + } From roberto at unbit.it Tue Nov 19 10:24:50 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Tue, 19 Nov 2013 11:24:50 +0100 Subject: [PATCH v2] uwsgi over ssl Message-ID: <0dc9a0b4c852ede3d20403e82a1d50fe.squirrel@manage.unbit.it> Hi, this is a new patch for uwsgi over ssl support aimed at nginx 1.5.x It now exposes 4 options: uwsgi_ssl uwsgi_ssl_session_reuse uwsgi_ssl_protocols uwsgi_ssl_ciphers Regards -- Roberto De Ioris http://unbit.it -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx_uwsgi_ssl_v2.patch Type: application/octet-stream Size: 5406 bytes Desc: not available URL: From mat999 at gmail.com Tue Nov 19 10:39:34 2013 From: mat999 at gmail.com (SplitIce) Date: Tue, 19 Nov 2013 21:09:34 +1030 Subject: IPv6 & IPv4 backend with proxy_bind In-Reply-To: <20131118132740.GH41579@mdounin.ru> References: <20131118114554.GB41579@mdounin.ru> <20131118132740.GH41579@mdounin.ru> Message-ID: An IPv6 based fallback is not the only solution we want to support, ultimately we would like to be able to load-balance between them as well. An error_page based solution would not assist. I also get the feeling that such a hack would have large implications, while either an additional parameter or another directive would be a simple & clean solution to a real identified deficiency. This kind of request is only going to get more common with the growing adoption of IPv6. Regards, Mathew On Mon, Nov 18, 2013 at 11:57 PM, Maxim Dounin wrote: > Hello! > > On Mon, Nov 18, 2013 at 10:24:43PM +1030, SplitIce wrote: > > > Hi, > > > > We use proxy_bind to ensure traffic always goes out via the same address > as > > the incoming request i.e the bound address where a server has many > > addresses. This is a hard restriction in our use case. > > > > We are looking to add support for IPv6 backends, we would like to > allocate > > a single IPv6 outgoing address per client although this is not a fixed > > restriction at this stage. IPv6 backends may be used in the same upstream > > block as IPv4 addresses (and we encourage this, as some network providers > > are prone to IPv6 related issues). > > > > We need to be able to maintain our existing system of binding v4 > addresses > > while allowing for additional support for ipv6 (it is not possible to use > > IPv6 at all while using a v4 bound address as it will fail with a binding > > error as expected). > > > > For one we expect to see upstreams such as > > > > upstream customer_1 { > > server 2001:...:7334 > > [...] > > server 123.1.2.3 backup; > > } > > > > become very common in the near future with the increased adoption of > IPv6. > > We have already had several requests for such functionality in the past > > year. > > Ok, I see what you are trying to do. A working solution would be > to use distinct upstream blocks for ipv6 and ipv4 addresses and an > error_page based fallback (with proxy_bind configured to > appropriate addresses in distinct locations). > > Given the fact that use of proxy_bind is uncommon by itself, > and it's use in multi-protocol configuration even more uncommon, I > tend to think that exisiting solution is good enough. 
> > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 19 14:57:19 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 14:57:19 +0000 Subject: [nginx] Proper backtracking after space in a request line. Message-ID: details: http://hg.nginx.org/nginx/rev/63f960bbc52f branches: changeset: 5442:63f960bbc52f user: Ruslan Ermilov date: Tue Nov 19 06:57:58 2013 +0400 description: Proper backtracking after space in a request line. diffstat: src/http/ngx_http_parse.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (19 lines): diff --git a/src/http/ngx_http_parse.c b/src/http/ngx_http_parse.c --- a/src/http/ngx_http_parse.c +++ b/src/http/ngx_http_parse.c @@ -617,6 +617,7 @@ ngx_http_parse_request_line(ngx_http_req default: r->space_in_uri = 1; state = sw_check_uri; + p--; break; } break; @@ -670,6 +671,7 @@ ngx_http_parse_request_line(ngx_http_req default: r->space_in_uri = 1; state = sw_uri; + p--; break; } break; From mdounin at mdounin.ru Tue Nov 19 14:57:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 14:57:20 +0000 Subject: [nginx] nginx-1.5.7-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/9ba2542d75bf branches: changeset: 5443:9ba2542d75bf user: Maxim Dounin date: Tue Nov 19 14:03:47 2013 +0400 description: nginx-1.5.7-RELEASE diffstat: docs/xml/nginx/changes.xml | 136 +++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 136 insertions(+), 0 deletions(-) diffs (146 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,142 @@ + + + + +??????, ????????? ?? ???????????????? ???????? ? ?????? ???????, +????????????? ??????????? (CVE-2013-4547); +?????? ????????? ? 0.8.41.
+??????? Ivan Fratric ?? Google Security Team. +
+ +a character following an unescaped space in a request line +was handled incorrectly (CVE-2013-4547); +the bug had appeared in 0.8.41.
+Thanks to Ivan Fratric of the Google Security Team. +
+
+ + + +??????? ???????????? ?????? auth_basic ?? ?????????? ?????? +??????? ? ?????? error ?? info. + + +a logging level of auth_basic errors about no user/password provided +has been lowered from "error" to "info". + + + + + +????????? proxy_cache_revalidate, fastcgi_cache_revalidate, +scgi_cache_revalidate ? uwsgi_cache_revalidate. + + +the "proxy_cache_revalidate", "fastcgi_cache_revalidate", +"scgi_cache_revalidate", and "uwsgi_cache_revalidate" directives. + + + + + +????????? ssl_session_ticket_key.
+??????? Piotr Sikora. +
+ +the "ssl_session_ticket_key" directive.
+Thanks to Piotr Sikora. +
+
+ + + +????????? "add_header Cache-Control ''" +????????? ?????? ????????? ?????? "Cache-Control" ? ?????? ?????????. + + +the directive "add_header Cache-Control ''" +added a "Cache-Control" response header line with an empty value. + + + + + +????????? "satisfy any" ????? ??????? ?????? 403 ?????? 401 +??? ????????????? ???????? auth_request ? auth_basic.
+??????? Jan Marc Hoffmann. +
+ +the "satisfy any" directive might return 403 error instead of 401 +if auth_request and auth_basic directives were used.
+Thanks to Jan Marc Hoffmann. +
+
+ + + +????????? accept_filter ? deferred ????????? listen ?????????????? +??? listen-???????, ??????????? ? ???????? ?????????? ???????????? ?????.
+??????? Piotr Sikora. +
+ +the "accept_filter" and "deferred" parameters of the "listen" directive +were ignored for listen sockets created during binary upgrade.
+Thanks to Piotr Sikora. +
+
+ + + +????? ??????, ?????????? ?? ??????? ??? ?????????????????? ?????????????, +????? ?? ???????????? ??????? ?????, +???? ?????????????? ????????? gzip ??? gunzip.
+??????? Yichun Zhang. +
+ +some data received from a backend with unbufferred proxy +might not be sent to a client immediately +if "gzip" or "gunzip" directives were used.
+Thanks to Yichun Zhang. +
+
+ + + +? ????????? ?????? ? ?????? ngx_http_gunzip_filter_module. + + +in error handling in ngx_http_gunzip_filter_module. + + + + + +?????? ????? ???????? +???? ????????????? ?????? ngx_http_spdy_module +? ????????? auth_request. + + +responses might hang +if the ngx_http_spdy_module was used +with the "auth_request" directive. + + + + + +?????? ?????? ? nginx/Windows. + + +memory leak in nginx/Windows. + + + +
+ + From mdounin at mdounin.ru Tue Nov 19 14:57:22 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 14:57:22 +0000 Subject: [nginx] release-1.5.7 tag Message-ID: details: http://hg.nginx.org/nginx/rev/48d6aaf6bf30 branches: changeset: 5444:48d6aaf6bf30 user: Maxim Dounin date: Tue Nov 19 14:03:47 2013 +0400 description: release-1.5.7 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -362,3 +362,4 @@ 644a079526295aca11c52c46cb81e3754e6ad4ad 376a5e7694004048a9d073e4feb81bb54ee3ba91 release-1.5.4 60e0409b9ec7ee194c6d8102f0656598cc4a6cfe release-1.5.5 70c5cd3a61cb476c2afb3a61826e59c7cda0b7a7 release-1.5.6 +9ba2542d75bf62a3972278c63561fc2ef5ec573a release-1.5.7 From mdounin at mdounin.ru Tue Nov 19 14:57:23 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 14:57:23 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/108791bded2e branches: stable-1.4 changeset: 5445:108791bded2e user: Maxim Dounin date: Tue Nov 19 15:23:03 2013 +0400 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1004003 -#define NGINX_VERSION "1.4.3" +#define nginx_version 1004004 +#define NGINX_VERSION "1.4.4" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From mdounin at mdounin.ru Tue Nov 19 14:57:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 14:57:24 +0000 Subject: [nginx] Proper backtracking after space in a request line. Message-ID: details: http://hg.nginx.org/nginx/rev/988c22615014 branches: stable-1.4 changeset: 5446:988c22615014 user: Ruslan Ermilov date: Tue Nov 19 06:57:58 2013 +0400 description: Proper backtracking after space in a request line. diffstat: src/http/ngx_http_parse.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (19 lines): diff --git a/src/http/ngx_http_parse.c b/src/http/ngx_http_parse.c --- a/src/http/ngx_http_parse.c +++ b/src/http/ngx_http_parse.c @@ -614,6 +614,7 @@ ngx_http_parse_request_line(ngx_http_req default: r->space_in_uri = 1; state = sw_check_uri; + p--; break; } break; @@ -667,6 +668,7 @@ ngx_http_parse_request_line(ngx_http_req default: r->space_in_uri = 1; state = sw_uri; + p--; break; } break; From mdounin at mdounin.ru Tue Nov 19 14:57:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 14:57:26 +0000 Subject: [nginx] nginx-1.4.4-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/7e9543faf5f0 branches: stable-1.4 changeset: 5447:7e9543faf5f0 user: Maxim Dounin date: Tue Nov 19 15:25:24 2013 +0400 description: nginx-1.4.4-RELEASE diffstat: docs/xml/nginx/changes.xml | 20 ++++++++++++++++++++ 1 files changed, 20 insertions(+), 0 deletions(-) diffs (30 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,26 @@ + + + + +??????, ????????? ?? ???????????????? ???????? ? ?????? ???????, +????????????? ??????????? (CVE-2013-4547); +?????? ????????? ? 0.8.41.
+??????? Ivan Fratric ?? Google Security Team. +
+ +a character following an unescaped space in a request line +was handled incorrectly (CVE-2013-4547); +the bug had appeared in 0.8.41.
+Thanks to Ivan Fratric of the Google Security Team. +
+
+ +
+ + From mdounin at mdounin.ru Tue Nov 19 14:57:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 14:57:27 +0000 Subject: [nginx] release-1.4.4 tag Message-ID: details: http://hg.nginx.org/nginx/rev/68e250ceb06e branches: stable-1.4 changeset: 5448:68e250ceb06e user: Maxim Dounin date: Tue Nov 19 15:25:24 2013 +0400 description: release-1.4.4 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -358,3 +358,4 @@ 7809529022b83157067e7d1e2fb65d57db5f4d99 0702de638a4c51123d7b97801d393e8e25eb48de release-1.4.1 50f065641b4c52ced41fae1ce216c73aaf112306 release-1.4.2 69ffaca7795531e19f3827940cc28dca0b50d0b8 release-1.4.3 +7e9543faf5f0a443ba605d9d483cf4721fae30a5 release-1.4.4 From steven.hartland at multiplay.co.uk Tue Nov 19 16:43:54 2013 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Tue, 19 Nov 2013 16:43:54 -0000 Subject: Patch: Prevent crit error being loggged on delete of non-existent file Message-ID: <061289301EE94901ABEE4FA51229782E@multiplay.co.uk> Hi guys the attached patch prevents http file cache from logging a critical error when a file delete fails due to it not existing, which I'm sure was never the intention. N.B. Sorry if this appears as a duplicate, sent initially from the wrong email address. Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster at multiplay.co.uk. -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-file-delete.patch Type: application/octet-stream Size: 2017 bytes Desc: not available URL: From mdounin at mdounin.ru Tue Nov 19 16:52:33 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Nov 2013 20:52:33 +0400 Subject: Patch: Prevent crit error being loggged on delete of non-existent file In-Reply-To: <061289301EE94901ABEE4FA51229782E@multiplay.co.uk> References: <061289301EE94901ABEE4FA51229782E@multiplay.co.uk> Message-ID: <20131119165233.GR41579@mdounin.ru> Hello! On Tue, Nov 19, 2013 at 04:43:54PM -0000, Steven Hartland wrote: > Hi guys the attached patch prevents http file cache from > logging a critical error when a file delete fails due to > it not existing, which I'm sure was never the intention. Why you think it's not an intention? If a file doesn't exists but nginx tries to delete it - it means that nginx has wong notion of what is on disk. It's certainly a problem which deserves logging. -- Maxim Dounin http://nginx.org/en/donation.html From steven.hartland at multiplay.co.uk Tue Nov 19 17:36:51 2013 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Tue, 19 Nov 2013 17:36:51 -0000 Subject: Patch: Prevent crit error being loggged on delete of non-existent file References: <061289301EE94901ABEE4FA51229782E@multiplay.co.uk> <20131119165233.GR41579@mdounin.ru> Message-ID: <37CD7BF024AE4C32B3E917E88ABEB41C@multiplay.co.uk> ----- Original Message ----- From: "Maxim Dounin" To: Sent: Tuesday, November 19, 2013 4:52 PM Subject: Re: Patch: Prevent crit error being loggged on delete of non-existent file > Hello! 
> > On Tue, Nov 19, 2013 at 04:43:54PM -0000, Steven Hartland wrote: > >> Hi guys the attached patch prevents http file cache from >> logging a critical error when a file delete fails due to >> it not existing, which I'm sure was never the intention. > > Why you think it's not an intention? If a file doesn't exists > but nginx tries to delete it - it means that nginx has wong notion > of what is on disk. It's certainly a problem which deserves > logging. I can understand that being an issue when you tried to access the file from the cache, but thats not where it gets logged only on delete and as your deleting it why worry if its not there? In specific use case here is a manual cache purge which has occured deleting all files in the directory, which admittely isnt ideal, but appart from the critical error being logged on delete everything appears to be working fine. Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster at multiplay.co.uk. From info at tvdw.eu Tue Nov 19 18:28:19 2013 From: info at tvdw.eu (Tom van der Woerdt) Date: Tue, 19 Nov 2013 19:28:19 +0100 Subject: Patch: Prevent crit error being loggged on delete of non-existent file In-Reply-To: <37CD7BF024AE4C32B3E917E88ABEB41C@multiplay.co.uk> References: <061289301EE94901ABEE4FA51229782E@multiplay.co.uk> <20131119165233.GR41579@mdounin.ru> <37CD7BF024AE4C32B3E917E88ABEB41C@multiplay.co.uk> Message-ID: <528BADC3.90109@tvdw.eu> Steven Hartland schreef op 19/11/13 18:36: > ----- Original Message ----- From: "Maxim Dounin" > To: > Sent: Tuesday, November 19, 2013 4:52 PM > Subject: Re: Patch: Prevent crit error being loggged on delete of > non-existent file > > >> Hello! >> >> On Tue, Nov 19, 2013 at 04:43:54PM -0000, Steven Hartland wrote: >> >>> Hi guys the attached patch prevents http file cache from >>> logging a critical error when a file delete fails due to >>> it not existing, which I'm sure was never the intention. >> >> Why you think it's not an intention? If a file doesn't exists but >> nginx tries to delete it - it means that nginx has wong notion of >> what is on disk. It's certainly a problem which deserves logging. > > I can understand that being an issue when you tried to access > the file from the cache, but thats not where it gets logged > only on delete and as your deleting it why worry if its not > there? > > In specific use case here is a manual cache purge which has occured > deleting all files in the directory, which admittely isnt ideal, > but appart from the critical error being logged on delete > everything appears to be working fine. > > Regards > Steve If everything works fine but something is logged, it should not be considered critical, but simply a warning. Criticals and errors should be reserved for things nginx can't compensate for, warnings and notices for the rest. Completely muting these things is not the right solution though. Tom -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3729 bytes Desc: S/MIME-cryptografische ondertekening URL: From mdounin at mdounin.ru Tue Nov 19 22:42:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Nov 2013 02:42:52 +0400 Subject: Patch: Prevent crit error being loggged on delete of non-existent file In-Reply-To: <37CD7BF024AE4C32B3E917E88ABEB41C@multiplay.co.uk> References: <061289301EE94901ABEE4FA51229782E@multiplay.co.uk> <20131119165233.GR41579@mdounin.ru> <37CD7BF024AE4C32B3E917E88ABEB41C@multiplay.co.uk> Message-ID: <20131119224252.GT41579@mdounin.ru> Hello! On Tue, Nov 19, 2013 at 05:36:51PM -0000, Steven Hartland wrote: > >On Tue, Nov 19, 2013 at 04:43:54PM -0000, Steven Hartland wrote: > > > >>Hi guys the attached patch prevents http file cache from > >>logging a critical error when a file delete fails due to > >>it not existing, which I'm sure was never the intention. > > > >Why you think it's not an intention? If a file doesn't exists but > >nginx tries to delete it - it means that nginx has wong notion of > >what is on disk. It's certainly a problem which deserves logging. > > I can understand that being an issue when you tried to access > the file from the cache, but thats not where it gets logged > only on delete and as your deleting it why worry if its not > there? Because it's not what nginx expects. This indicate a problem - either a bug in nginx, or some external modification of a cache store. > In specific use case here is a manual cache purge which has occured > deleting all files in the directory, which admittely isnt ideal, > but appart from the critical error being logged on delete > everything appears to be working fine. While manual deleting of cache files is generally considered safe, it implies at least one problem - the cache manager can't maintain max_size correctly if some files were deleted. If you've modified the cache manually, it's expected that you know what you are doing, and agree with implications. But it's up to you to ensure everything is working fine, and it's up to nginx to complain about inconsistencies it detected. -- Maxim Dounin http://nginx.org/en/donation.html From ru at nginx.com Fri Nov 22 13:26:42 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 22 Nov 2013 17:26:42 +0400 Subject: IPv6 & IPv4 backend with proxy_bind In-Reply-To: References: <20131118114554.GB41579@mdounin.ru> <20131118132740.GH41579@mdounin.ru> Message-ID: <20131122132642.GO56821@lo0.su> On Tue, Nov 19, 2013 at 09:09:34PM +1030, SplitIce wrote: > An IPv6 based fallback is not the only solution we want to support, > ultimately we would like to be able to load-balance between them as well. > An error_page based solution would not assist. > > I also get the feeling that such a hack would have large implications, > while either an additional parameter or another directive would be a simple > & clean solution to a real identified deficiency. > > This kind of request is only going to get more common with the growing > adoption of IPv6. You can make the currently selected peer address available as an nginx variable, then use the "map" directive to compute the per-peer bind address, like follows: map $peer_addr $bind_addr { 192.168.1.100 192.168.1.1; 2001:0db8::100 2001:0db8::1; ... } or like this: map $peer_addr $bind_addr { ~: 2001:0db8::1; default 192.168.1.1; } Hint: the "proxy_bind" directive supports variables. 
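Putting those pieces together, a complete configuration would look roughly like the sketch below. Note that $peer_addr is not a stock nginx variable: as described above it would first have to be exposed (for instance by a local patch such as the one posted later in this thread), so the variable name, addresses and ports here are illustrative only.

    http {
        upstream backend {
            server 192.168.1.100:80;
            server [2001:0db8::100]:80;
        }

        # Map the (hypothetical) selected-peer address to a local source address.
        map $peer_addr $bind_addr {
            ~:       2001:0db8::1;   # peer address contains ':' -> IPv6 source
            default  192.168.1.1;    # otherwise -> IPv4 source
        }

        server {
            listen 80;

            location / {
                proxy_bind $bind_addr;
                proxy_pass http://backend;
            }
        }
    }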
From mat999 at gmail.com Fri Nov 22 13:40:06 2013 From: mat999 at gmail.com (SplitIce) Date: Sat, 23 Nov 2013 00:10:06 +1030 Subject: IPv6 & IPv4 backend with proxy_bind In-Reply-To: <20131122132642.GO56821@lo0.su> References: <20131118114554.GB41579@mdounin.ru> <20131118132740.GH41579@mdounin.ru> <20131122132642.GO56821@lo0.su> Message-ID: Ruslan, its funny you should mention this, I am testing a patch to do just that at the moment. Once I am certain that its not leaking memory and I have reviewed it in regards to the nginx code standards Ill post it in this email thread in case it is of use for others. On Fri, Nov 22, 2013 at 11:56 PM, Ruslan Ermilov wrote: > On Tue, Nov 19, 2013 at 09:09:34PM +1030, SplitIce wrote: > > An IPv6 based fallback is not the only solution we want to support, > > ultimately we would like to be able to load-balance between them as well. > > An error_page based solution would not assist. > > > > I also get the feeling that such a hack would have large implications, > > while either an additional parameter or another directive would be a > simple > > & clean solution to a real identified deficiency. > > > > This kind of request is only going to get more common with the growing > > adoption of IPv6. > > You can make the currently selected peer address available as an nginx > variable, then use the "map" directive to compute the per-peer bind > address, like follows: > > map $peer_addr $bind_addr { > 192.168.1.100 192.168.1.1; > 2001:0db8::100 2001:0db8::1; > ... > } > > or like this: > > map $peer_addr $bind_addr { > ~: 2001:0db8::1; > default 192.168.1.1; > } > > Hint: the "proxy_bind" directive supports variables. > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mat999 at gmail.com Sat Nov 23 01:45:28 2013 From: mat999 at gmail.com (SplitIce) Date: Sat, 23 Nov 2013 12:15:28 +1030 Subject: IPv6 & IPv4 backend with proxy_bind In-Reply-To: References: <20131118114554.GB41579@mdounin.ru> <20131118132740.GH41579@mdounin.ru> <20131122132642.GO56821@lo0.su> Message-ID: Attached is the patch, This is the first time I have created a variable or really done anything inside the http request processing flow so feel free to let me know if there is a better way to do something or if I have any edge cases. This patch provides a $upstream_connecting variable which contains the IP address and port of the upstream being connected. If there is no upstream, it will return "-" my understanding is this may happen if the upstream is DNS resolved (untested). There may be a better way of doing this? This should be used in a config like the following - map $upstream_connecting $test { ~^93\.184\.216\.119\: 192.168.2.40; ~^192\.168\.2\.([0-9]+)\: 192.168.2.40; } proxy_bind $test; Regards, Mathew On Sat, Nov 23, 2013 at 12:10 AM, SplitIce wrote: > Ruslan, its funny you should mention this, I am testing a patch to do just > that at the moment. > > Once I am certain that its not leaking memory and I have reviewed it in > regards to the nginx code standards Ill post it in this email thread in > case it is of use for others. 
> > > On Fri, Nov 22, 2013 at 11:56 PM, Ruslan Ermilov wrote: > >> On Tue, Nov 19, 2013 at 09:09:34PM +1030, SplitIce wrote: >> > An IPv6 based fallback is not the only solution we want to support, >> > ultimately we would like to be able to load-balance between them as >> well. >> > An error_page based solution would not assist. >> > >> > I also get the feeling that such a hack would have large implications, >> > while either an additional parameter or another directive would be a >> simple >> > & clean solution to a real identified deficiency. >> > >> > This kind of request is only going to get more common with the growing >> > adoption of IPv6. >> >> You can make the currently selected peer address available as an nginx >> variable, then use the "map" directive to compute the per-peer bind >> address, like follows: >> >> map $peer_addr $bind_addr { >> 192.168.1.100 192.168.1.1; >> 2001:0db8::100 2001:0db8::1; >> ... >> } >> >> or like this: >> >> map $peer_addr $bind_addr { >> ~: 2001:0db8::1; >> default 192.168.1.1; >> } >> >> Hint: the "proxy_bind" directive supports variables. >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: patch-upstream-v2 Type: application/octet-stream Size: 3396 bytes Desc: not available URL: From niq at apache.org Mon Nov 25 16:36:33 2013 From: niq at apache.org (Nick Kew) Date: Mon, 25 Nov 2013 16:36:33 +0000 Subject: API inconsistencies Message-ID: <05803B9A-E70C-480D-BB77-17C0E06CE341@apache.org> Is there a prescribed way for a module to deal with API changes to support both older and newer versions? I can use constructs like: /* Get the remote address */ #if OLDVERSION len = ngx_sock_ntop(conn->sockaddr, ... #else len = ngx_sock_ntop(conn->sockaddr, conn->socklen, ? #endif But that's just passing the buck to users to figure out why my module doesn't compile with their nginx or tengine. I want a test like Apache's MMN that will automatically detect the API version to use. Is there a way to do that? (FWIW, the above API changed in http://hg.nginx.org/nginx/rev/05ba5bce31e0 ). -- Nick Kew From ru at nginx.com Mon Nov 25 16:46:51 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 25 Nov 2013 20:46:51 +0400 Subject: API inconsistencies In-Reply-To: <05803B9A-E70C-480D-BB77-17C0E06CE341@apache.org> References: <05803B9A-E70C-480D-BB77-17C0E06CE341@apache.org> Message-ID: <20131125164651.GG74311@lo0.su> On Mon, Nov 25, 2013 at 04:36:33PM +0000, Nick Kew wrote: > Is there a prescribed way for a module to deal with API changes > to support both older and newer versions? > > I can use constructs like: > > /* Get the remote address */ > #if OLDVERSION > len = ngx_sock_ntop(conn->sockaddr, ... > #else > len = ngx_sock_ntop(conn->sockaddr, conn->socklen, ? > #endif > > But that's just passing the buck to users to figure out why my > module doesn't compile with their nginx or tengine. I want > a test like Apache's MMN that will automatically detect the > API version to use. Is there a way to do that? > > (FWIW, the above API changed in > http://hg.nginx.org/nginx/rev/05ba5bce31e0 ). nginx_version in src/core/nginx.h. 
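To spell out how that test is used in practice, here is a minimal sketch that wraps the ngx_sock_ntop() call from the snippet above in a version guard. The helper name and the trailing "include the port" flag are illustrative, and 1005003 corresponds to 1.5.3, which appears to be the first release carrying changeset 05ba5bce31e0:

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <nginx.h>    /* src/core/nginx.h: defines nginx_version */

    /* Format the remote address, coping with the ngx_sock_ntop()
     * signature change (the added socklen argument). */
    static size_t
    remote_addr_text(ngx_connection_t *conn, u_char *text, size_t size)
    {
    #if (nginx_version >= 1005003)
        return ngx_sock_ntop(conn->sockaddr, conn->socklen, text, size, 1);
    #else
        return ngx_sock_ntop(conn->sockaddr, text, size, 1);
    #endif
    }

A caller would pass something like u_char buf[NGX_SOCKADDR_STRLEN] for text; the final argument asks for the port to be appended and can be set to 0 if only the address is wanted.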
From niq at apache.org Mon Nov 25 17:49:18 2013 From: niq at apache.org (Nick Kew) Date: Mon, 25 Nov 2013 17:49:18 +0000 Subject: API inconsistencies In-Reply-To: <20131125164651.GG74311@lo0.su> References: <05803B9A-E70C-480D-BB77-17C0E06CE341@apache.org> <20131125164651.GG74311@lo0.su> Message-ID: <5321D360-20E5-4F6C-BCD7-DB4494727775@apache.org> On 25 Nov 2013, at 16:46, Ruslan Ermilov wrote: > nginx_version in src/core/nginx.h. Thanks. Seems to be 1.5.3 at the time of that commit, so I guess the test I need is < 1005003 vs >= 1005003? But that presumably only works with release versions, so anyone compiling against arbitrary versions will have to deal with it case-by-case (in this instance, either API could apply to version 1005003)? -- Nick Kew From moto at kawasaki3.org Tue Nov 26 09:37:59 2013 From: moto at kawasaki3.org (moto kawasaki) Date: Tue, 26 Nov 2013 18:37:59 +0900 (JST) Subject: pls. help for adding another parameter to ngx_upstream_server In-Reply-To: <20131118140907.GL41579@mdounin.ru> References: <20131115094206.GA30401@vlpc.i.nginx.com> <20131116.183142.557495497606639364.moto@kawasaki3.org> <20131118140907.GL41579@mdounin.ru> Message-ID: <20131126.183759.56672497386060771.moto@kawasaki3.org> Hello, I am the questioner about setfib feature on the upstream side. And I am back for asking you comments/suggestions/help. Thank you very much in advance. Please refer the following URL for the conversation before. http://forum.nginx.org/read.php?29,244686 mdounin> Well, as far as I can tell there is no reasons to do per-server mdounin> setfib in this usecase, and mdounin> mdounin> proxy_setfib N; mdounin> mdounin> should be enough. It should be much easier to implement than what mdounin> you are trying to do in your patch. Thank you very much, Mr. Dounin, I followed your advice. Now, I am refering src/http/modules/ngx_http_upstream_keepalive_module.c, and create a new module "ngx_http_upstream_proxy_setfib_module.c". With this module, I can write "setfib N" in nginx.conf like; http { (snip) upstream UPSTREAM { proxy_setfib 5; server 10.200.195.70:80 max_fails=3 fail_timeout=300; } } And also I can create server config for this module, and the config "proxy_setfib 5" can be read and set into server config. Now, I want to do setsockopt(2) with SO_SETFIB, so that I can make a connection to upstream (contents) server with a given fib number (5 in above case). But I am lost in the nginx architecture, so please advice me in which function should I call setsockopt() ?? My incomplete understandings are: - I need to override ngx_event_connect_peer(), since this function has socket(), setsockopt(SO_RCVBUF), and bind(). I cannot find any hook/callback point between socket() and bind(). http://lxr.evanmiller.org/http/source/event/ngx_event_connect.c#L15 - ngx_event_connect_peer() is called from ngx_http_upstream_connect() etc. but I don't understand when and by which function it is called. - seeing keepalive module, the initialization chain is connected: ngx_http_upstream_keepalive() -> ngx_http_upstream_init_keepalive() -> ngx_http_upstream_init_keepalive_peer() -> ngx_http_upstream_init_get_keepalive_peer() -> ... http://lxr.evanmiller.org/http/source/http/ngx_http_upstream.c#L1132 But I could not find any place on that chain which might call ngx_http_upstream_connect() so far. I am embarrassed to ask such basic questions but any comments/suggestions are really appreciated. I've attached my module source code, configure option, and nginx.conf. 
I am running them on FreeBSD 9.2-RELEASE (amd64), and nginx version is 1.4.3. Thank you very much. Best Regards, -- moto kawasaki -------------- next part -------------- A non-text attachment was scrubbed... Name: setfib_module.tar.gz Type: application/octet-stream Size: 2056 bytes Desc: not available URL: From ru at nginx.com Tue Nov 26 11:10:41 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 26 Nov 2013 15:10:41 +0400 Subject: API inconsistencies In-Reply-To: <5321D360-20E5-4F6C-BCD7-DB4494727775@apache.org> References: <05803B9A-E70C-480D-BB77-17C0E06CE341@apache.org> <20131125164651.GG74311@lo0.su> <5321D360-20E5-4F6C-BCD7-DB4494727775@apache.org> Message-ID: <20131126111041.GB65068@lo0.su> On Mon, Nov 25, 2013 at 05:49:18PM +0000, Nick Kew wrote: > On 25 Nov 2013, at 16:46, Ruslan Ermilov wrote: > > > nginx_version in src/core/nginx.h. > > Thanks. Seems to be 1.5.3 at the time of that commit, so I guess > the test I need is < 1005003 vs >= 1005003? > > But that presumably only works with release versions, so anyone > compiling against arbitrary versions will have to deal with it > case-by-case (in this instance, either API could apply to > version 1005003)? Consider we do not support unreleased versions. From reeteshr at outlook.com Tue Nov 26 12:30:35 2013 From: reeteshr at outlook.com (Reetesh Ranjan) Date: Tue, 26 Nov 2013 18:00:35 +0530 Subject: Help on designing using multiple location/upstream modules Message-ID: Hi, I am a newbie to nginx. I have done some initial research on nginx architecture, location modules, upstream modules, third party modules available for various purposes etc. After going through a number of pages I have a question which I can't seem to find an easy answer to. I have a very simple use case like this: user enters a set of keywords to search on my web site. In the backend, in my nginx location module, i first go to Redis for cached results against the set of keywords and if not found, to Sphinx search daemon. In the latter case, I set the results obtained from Sphinx back into Redis. I have thought of the following design, in terms of nginx modules I would use: 1 My main location module that picks the keywords entered and communicates to Redis and Sphinx2 For communicating to Redis I thought of using HttpRedis2Module (http://wiki.nginx.org/HttpRedis2Module)3 For communicating with Sphinx, I am trying to write a simple C++ client or adapt the Sphinx C++ client (http://sourceforge.net/projects/cppsphinxclient/) or its parts into an upstream module. What I wanted to know is how to invoke the upstream modules within my main location module. Are there standard APIs provided by Nginx for the same and do they retain the async advantages? Or do I have to resort to make curl calls from my C++ client and use the response? I was hoping that the former (Nginx APIs to call upstream modules) exists in some form and serves as some "shortcut" or "faster" way/alternative to making some curl API calls. Regards,Reetesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 26 13:29:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Nov 2013 17:29:09 +0400 Subject: pls. 
help for adding another parameter to ngx_upstream_server In-Reply-To: <20131126.183759.56672497386060771.moto@kawasaki3.org> References: <20131115094206.GA30401@vlpc.i.nginx.com> <20131116.183142.557495497606639364.moto@kawasaki3.org> <20131118140907.GL41579@mdounin.ru> <20131126.183759.56672497386060771.moto@kawasaki3.org> Message-ID: <20131126132909.GD93176@mdounin.ru> Hello! On Tue, Nov 26, 2013 at 06:37:59PM +0900, moto kawasaki wrote: > > Hello, > > I am the questioner about setfib feature on the upstream side. > And I am back for asking you comments/suggestions/help. > Thank you very much in advance. > > Please refer the following URL for the conversation before. > http://forum.nginx.org/read.php?29,244686 > > mdounin> Well, as far as I can tell there is no reasons to do per-server > mdounin> setfib in this usecase, and > mdounin> > mdounin> proxy_setfib N; > mdounin> > mdounin> should be enough. It should be much easier to implement than what > mdounin> you are trying to do in your patch. > > Thank you very much, Mr. Dounin, I followed your advice. > > Now, I am refering src/http/modules/ngx_http_upstream_keepalive_module.c, > and create a new module "ngx_http_upstream_proxy_setfib_module.c". > > With this module, I can write "setfib N" in nginx.conf like; > > http { > (snip) > upstream UPSTREAM { > proxy_setfib 5; > server 10.200.195.70:80 max_fails=3 fail_timeout=300; > } > } > > And also I can create server config for this module, and the config > "proxy_setfib 5" can be read and set into server config. Why you are trying to do this with a balancer module? You won't be able to do this with a balancer module anyway, as you'll have to call setsockopt() on a socket after it's created but before the connect() is called, which is only possible with modifications to ngx_event_connect_peer(). Try following proxy_bind implementation as already suggested. [...] -- Maxim Dounin http://nginx.org/en/donation.html From moto at kawasaki3.org Wed Nov 27 02:05:53 2013 From: moto at kawasaki3.org (moto kawasaki) Date: Wed, 27 Nov 2013 11:05:53 +0900 (JST) Subject: pls. help for adding another parameter to ngx_upstream_server In-Reply-To: <20131126132909.GD93176@mdounin.ru> References: <20131118140907.GL41579@mdounin.ru> <20131126.183759.56672497386060771.moto@kawasaki3.org> <20131126132909.GD93176@mdounin.ru> Message-ID: <20131127.110553.1954467124078386531.moto@kawasaki3.org> Dear Mr. Dounin, I am sorry I didn't recognise suggestion about proxy_bind. Very sorry. I will try. Thank you very much! moto kawasaki mdounin> Why you are trying to do this with a balancer module? You won't mdounin> be able to do this with a balancer module anyway, as you'll have mdounin> to call setsockopt() on a socket after it's created but before the mdounin> connect() is called, which is only possible with modifications to mdounin> ngx_event_connect_peer(). mdounin> mdounin> Try following proxy_bind implementation as already suggested. mdounin> mdounin> [...] 
mdounin> mdounin> -- mdounin> Maxim Dounin mdounin> http://nginx.org/en/donation.html mdounin> mdounin> _______________________________________________ mdounin> nginx-devel mailing list mdounin> nginx-devel at nginx.org mdounin> http://mailman.nginx.org/mailman/listinfo/nginx-devel From wangxiaochen0 at gmail.com Wed Nov 27 05:54:06 2013 From: wangxiaochen0 at gmail.com (Xiaochen Wang) Date: Wed, 27 Nov 2013 13:54:06 +0800 Subject: spdy: connection leak when client aborted connection Message-ID: Configure: ./configure --with-http_spdy_module --with-http_stub_status_module --prefix=$(pwd)/../nginx --with-cc-opt=-O0 Reproduce: 1. use nginx(spdy/2) to build static file server 2. request large static file(16MB file, 10 streams in one session) $ spdycat --no-tls -2 -m 10 http://ip:spdy_port/16M_static_file 3. input ctrl-c to abort connection before spdycat exit 4. see output of stub status module $ curl http://ip:http_port/stub_status Active connections: 2 <<< always larger than 1, one connection leak server accepts handled requests 4 4 4 Reading: 0 Writing: 2 Waiting: 1 Note: If you cannot reproduce, try one more time. gdb trace: -> ngx_http_spdy_read_handler ... do { recv() do { sc->handler(sc, p, end) } while } while ->sc->handler() -> ... -> ngx_http_spdy_send_output_queue() -> c->send_chain(c, cl, 0); <<< (maybe) not send total chains stream->request->connection->write->handler = ngx_http_request_handler stream->request->write_event_handler = ngx_http_writer >>> client aborted the connection <<< -> ngx_http_spdy_read_handler ... do { rc = recv() if (rc == 0) <<< Client closed the connection. ngx_http_spdy_finalize_connection() } while ... -> ngx_http_spdy_finalize_connection -> ev->handler (stream->request->connection->write->handler) ngx_http_request_handler -> r->write_event_handler ngx_http_writer ... if (wev->delayed || r->aio) { <<< wev->delayed was set. ngx_log_debug0(NGX_LOG_DEBUG_HTTP, wev->log, 0, "http writer delayed"); if (ngx_handle_write_event(wev, clcf->send_lowat) != NGX_OK) { ngx_http_close_request(r, 0); } return; } ... In which case, this stream had not been finalized, and sc->processing was not decreased. After ngx_http_spdy_finalize_connection(), nginx got one connection(spdy session) leak. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reeteshr at outlook.com Wed Nov 27 06:24:52 2013 From: reeteshr at outlook.com (Reetesh Ranjan) Date: Wed, 27 Nov 2013 11:54:52 +0530 Subject: Help on designing using multiple location/upstream modules In-Reply-To: References: Message-ID: I saw several pages on web about ngx_http_subrequest; filters vs location modules using it; parallel vs sequential usage; code/modules using it etc. Would first try out a solution for my use case using this method, and come back in case I am stuck and of course after reading previous threads. Regards,Reetesh From: reeteshr at outlook.com To: nginx-devel at nginx.org Subject: Help on designing using multiple location/upstream modules Date: Tue, 26 Nov 2013 18:00:35 +0530 Hi, I am a newbie to nginx. I have done some initial research on nginx architecture, location modules, upstream modules, third party modules available for various purposes etc. After going through a number of pages I have a question which I can't seem to find an easy answer to. I have a very simple use case like this: user enters a set of keywords to search on my web site. 
In the backend, in my nginx location module, i first go to Redis for cached results against the set of keywords and if not found, to Sphinx search daemon. In the latter case, I set the results obtained from Sphinx back into Redis. I have thought of the following design, in terms of nginx modules I would use: 1 My main location module that picks the keywords entered and communicates to Redis and Sphinx2 For communicating to Redis I thought of using HttpRedis2Module (http://wiki.nginx.org/HttpRedis2Module)3 For communicating with Sphinx, I am trying to write a simple C++ client or adapt the Sphinx C++ client (http://sourceforge.net/projects/cppsphinxclient/) or its parts into an upstream module. What I wanted to know is how to invoke the upstream modules within my main location module. Are there standard APIs provided by Nginx for the same and do they retain the async advantages? Or do I have to resort to make curl calls from my C++ client and use the response? I was hoping that the former (Nginx APIs to call upstream modules) exists in some form and serves as some "shortcut" or "faster" way/alternative to making some curl API calls. Regards,Reetesh _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Wed Nov 27 20:39:15 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 27 Nov 2013 12:39:15 -0800 Subject: Help on designing using multiple location/upstream modules In-Reply-To: References: Message-ID: Hello! On Tue, Nov 26, 2013 at 4:30 AM, Reetesh Ranjan wrote: > I have thought of the following design, in terms of nginx modules I would > use: > > 1 My main location module that picks the keywords entered and communicates > to Redis and Sphinx > 2 For communicating to Redis I thought of using HttpRedis2Module > (http://wiki.nginx.org/HttpRedis2Module) > 3 For communicating with Sphinx, I am trying to write a simple C++ client > or adapt the Sphinx C++ client > (http://sourceforge.net/projects/cppsphinxclient/) or its parts into an > upstream module. > This looks trivial if you use ngx_lua module as the glue. In particular you can check out the ngx.location.capture and ngx.location.capture_multi API functions for captured subrequests: https://github.com/chaoslawful/lua-nginx-module#ngxlocationcapture https://github.com/chaoslawful/lua-nginx-module#ngxlocationcapture_multi And probably also the "light thread" API that can work with the subrequest API above: https://github.com/chaoslawful/lua-nginx-module#ngxthreadspawn When using the Lua API provided by ngx_lua, everything is nonblocking out of the box :) Regards, -agentzh From mat999 at gmail.com Thu Nov 28 09:09:32 2013 From: mat999 at gmail.com (SplitIce) Date: Thu, 28 Nov 2013 19:39:32 +1030 Subject: redis nginx 1.5.x support Message-ID: I am aware atleast 2 others are considering developing patches with one the module maintainer. I wanted to improve so have attempted this upgrade myself. Its only a small patch, from my understanding its the upstream length field that has changed between 1.4.x and 1.5.x. I would like to know if I have done something I shoudnt etc so as to improve my nginx knowledge. 
The commits - https://github.com/splitice/ngx_http_redis/commit/88d423c17a8614b29635994839649c3e0d576641 https://github.com/splitice/ngx_http_redis/commit/c7dbe9fa75001787bd6602c4029e0e314798a37d I always find myself worried that I will do something that will have strange flow on effects e.g at a certain length in a buffer etc. Thanks in advance, Mathew -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Nov 28 09:40:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Nov 2013 13:40:28 +0400 Subject: redis nginx 1.5.x support In-Reply-To: References: Message-ID: <20131128094028.GZ93176@mdounin.ru> Hello! On Thu, Nov 28, 2013 at 07:39:32PM +1030, SplitIce wrote: > I am aware atleast 2 others are considering developing patches with one the > module maintainer. > > I wanted to improve so have attempted this upgrade myself. Its only a small > patch, from my understanding its the upstream length field that has changed > between 1.4.x and 1.5.x. > > I would like to know if I have done something I shoudnt etc so as to > improve my nginx knowledge. > > The commits - > https://github.com/splitice/ngx_http_redis/commit/88d423c17a8614b29635994839649c3e0d576641 > https://github.com/splitice/ngx_http_redis/commit/c7dbe9fa75001787bd6602c4029e0e314798a37d > > I always find myself worried that I will do something that will have > strange flow on effects e.g at a certain length in a buffer etc. You mean adoption for the API change in nginx 1.5.3: *) Change in internal API: now u->length defaults to -1 if working with backends in unbuffered mode. (quote from http://nginx.org/en/CHANGES)? You patches look wrong for me, correct one should be like the change to memcached module in this commit: http://hg.nginx.org/nginx/rev/f538a67c9f77 That is, in u->length should be explicitly set in filter init callback, not just incremented from a default value (which was changed). Something like this should work (not tested): --- ngx_http_redis_module.c.orig 2013-11-28 13:35:28.000000000 +0400 +++ ngx_http_redis_module.c 2013-11-28 13:37:19.000000000 +0400 @@ -578,7 +578,7 @@ ngx_http_redis_filter_init(void *data) u = ctx->request->upstream; - u->length += NGX_HTTP_REDIS_END; + u->headers_in.content_length_n + NGX_HTTP_REDIS_END; return NGX_OK; } Author CC'd. -- Maxim Dounin http://nginx.org/en/donation.html From mat999 at gmail.com Thu Nov 28 09:51:47 2013 From: mat999 at gmail.com (SplitIce) Date: Thu, 28 Nov 2013 20:21:47 +1030 Subject: redis nginx 1.5.x support In-Reply-To: <20131128094028.GZ93176@mdounin.ru> References: <20131128094028.GZ93176@mdounin.ru> Message-ID: Yes that seems much better than taking over the u->headers_in.content_length_n field. Didn't realize just how similar memcache was :) Thanks. On Thu, Nov 28, 2013 at 8:10 PM, Maxim Dounin wrote: > Hello! > > On Thu, Nov 28, 2013 at 07:39:32PM +1030, SplitIce wrote: > > > I am aware atleast 2 others are considering developing patches with one > the > > module maintainer. > > > > I wanted to improve so have attempted this upgrade myself. Its only a > small > > patch, from my understanding its the upstream length field that has > changed > > between 1.4.x and 1.5.x. > > > > I would like to know if I have done something I shoudnt etc so as to > > improve my nginx knowledge. 
> > > > The commits - > > > https://github.com/splitice/ngx_http_redis/commit/88d423c17a8614b29635994839649c3e0d576641 > > > https://github.com/splitice/ngx_http_redis/commit/c7dbe9fa75001787bd6602c4029e0e314798a37d > > > > I always find myself worried that I will do something that will have > > strange flow on effects e.g at a certain length in a buffer etc. > > You mean adoption for the API change in nginx 1.5.3: > > *) Change in internal API: now u->length defaults to -1 if working with > backends in unbuffered mode. > > (quote from http://nginx.org/en/CHANGES)? > > You patches look wrong for me, correct one should be like the > change to memcached module in this commit: > > http://hg.nginx.org/nginx/rev/f538a67c9f77 > > That is, in u->length should be explicitly set in filter init > callback, not just incremented from a default value (which was > changed). > > Something like this should work (not tested): > > --- ngx_http_redis_module.c.orig 2013-11-28 13:35:28.000000000 +0400 > +++ ngx_http_redis_module.c 2013-11-28 13:37:19.000000000 +0400 > @@ -578,7 +578,7 @@ ngx_http_redis_filter_init(void *data) > > u = ctx->request->upstream; > > - u->length += NGX_HTTP_REDIS_END; > + u->headers_in.content_length_n + NGX_HTTP_REDIS_END; > > return NGX_OK; > } > > > Author CC'd. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wmark+nginx at hurrikane.de Thu Nov 28 23:03:19 2013 From: wmark+nginx at hurrikane.de (W-Mark Kubacki) Date: Fri, 29 Nov 2013 00:03:19 +0100 Subject: redis nginx 1.5.x support In-Reply-To: References: <20131128094028.GZ93176@mdounin.ru> Message-ID: Here are the two aforementioned changes: https://github.com/wmark/ossdl-overlay/blob/61d928a0df58e5d38385920ce05e7381f86913c7/www-servers/nginx/files/http_redis-0.3.6-trailer.patch -- Mark 2013/11/28 SplitIce : > Yes that seems much better than taking over the > u->headers_in.content_length_n field. > > Didn't realize just how similar memcache was :) > > Thanks. From moto at kawasaki3.org Fri Nov 29 07:04:58 2013 From: moto at kawasaki3.org (moto kawasaki) Date: Fri, 29 Nov 2013 16:04:58 +0900 (JST) Subject: pls. help for adding another parameter to ngx_upstream_server In-Reply-To: <20131127.110553.1954467124078386531.moto@kawasaki3.org> References: <20131126.183759.56672497386060771.moto@kawasaki3.org> <20131126132909.GD93176@mdounin.ru> <20131127.110553.1954467124078386531.moto@kawasaki3.org> Message-ID: <20131129.160458.1406288008207445494.moto@kawasaki3.org> Dear Mr. Dounin and list members, Thank you very much for your suggestion, I finally made it work !! Now we can write something like; http { proxy_setfib N; } or http { server{ proxy_setfib N; } } and the connections toward upstream server are setsockopt(SETFIB)-ed. I need to test the code, but anyway I have confirmed in the most simple configuration. patch and configuration file attached. Thank you very much. Best Regards, -- moto kawasaki -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx.proxy_setfib.patch Type: text/x-patch Size: 5728 bytes Desc: not available URL: -------------- next part -------------- daemon off; error_log /home/moto/local/logs/nginx-error.log info; events { } http { include mime.types; default_type application/octet-stream; access_log /home/moto/local/logs/nginx-access.log; proxy_connect_timeout 1; # default=60 sec # proxy_setfib 5; upstream UPSTREAM { #proxy_setfib 5; #proxy_bind 127.0.0.1; server 10.200.195.70:80 max_fails=3 fail_timeout=300; } server { listen 101.78.7.14:8080 default_server bind setfib=5; server_name hostn3.oka.lac.jp; proxy_setfib 5; location / { proxy_pass http://UPSTREAM; } } } From mdounin at mdounin.ru Fri Nov 29 13:24:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Nov 2013 13:24:29 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/8f3cf6776648 branches: changeset: 5449:8f3cf6776648 user: Maxim Dounin date: Fri Nov 29 17:11:36 2013 +0400 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1005007 -#define NGINX_VERSION "1.5.7" +#define nginx_version 1005008 +#define NGINX_VERSION "1.5.8" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From mdounin at mdounin.ru Fri Nov 29 13:24:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Nov 2013 13:24:30 +0000 Subject: [nginx] SSL: fixed c->read->ready handling in ngx_ssl_recv(). Message-ID: details: http://hg.nginx.org/nginx/rev/9868c72f6f43 branches: changeset: 5450:9868c72f6f43 user: Maxim Dounin date: Fri Nov 29 17:16:06 2013 +0400 description: SSL: fixed c->read->ready handling in ngx_ssl_recv(). If c->read->ready was reset, but later some data were read from a socket buffer due to a call to ngx_ssl_recv(), the c->read->ready flag should be restored if not all data were read from OpenSSL buffers (as kernel won't notify us about the data anymore). More details are available here: http://mailman.nginx.org/pipermail/nginx/2013-November/041178.html diffstat: src/event/ngx_event_openssl.c | 5 +++++ 1 files changed, 5 insertions(+), 0 deletions(-) diffs (22 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -1025,6 +1025,7 @@ ngx_ssl_recv(ngx_connection_t *c, u_char size -= n; if (size == 0) { + c->read->ready = 1; return bytes; } @@ -1034,6 +1035,10 @@ ngx_ssl_recv(ngx_connection_t *c, u_char } if (bytes) { + if (c->ssl->last != NGX_AGAIN) { + c->read->ready = 1; + } + return bytes; } From mdounin at mdounin.ru Fri Nov 29 13:24:32 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Nov 2013 13:24:32 +0000 Subject: [nginx] Upstream: skip empty cache headers. Message-ID: details: http://hg.nginx.org/nginx/rev/e68af4e3396f branches: changeset: 5451:e68af4e3396f user: Maxim Dounin date: Fri Nov 29 17:23:38 2013 +0400 description: Upstream: skip empty cache headers. Notably this fixes HTTP_IF_MODIFIED_SINCE which was always sent with cache enabled in fastcgi/scgi/uwsgi after 43ccaf8e8728. 
diffstat: src/http/modules/ngx_http_fastcgi_module.c | 2 +- src/http/modules/ngx_http_scgi_module.c | 2 +- src/http/modules/ngx_http_uwsgi_module.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diffs (36 lines): diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -2769,7 +2769,7 @@ ngx_http_fastcgi_merge_params(ngx_conf_t s->key = h->key; s->value = h->value; - s->skip_empty = 0; + s->skip_empty = 1; next: diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c +++ b/src/http/modules/ngx_http_scgi_module.c @@ -1506,7 +1506,7 @@ ngx_http_scgi_merge_params(ngx_conf_t *c s->key = h->key; s->value = h->value; - s->skip_empty = 0; + s->skip_empty = 1; next: diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -1548,7 +1548,7 @@ ngx_http_uwsgi_merge_params(ngx_conf_t * s->key = h->key; s->value = h->value; - s->skip_empty = 0; + s->skip_empty = 1; next: From mdounin at mdounin.ru Fri Nov 29 13:24:33 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 29 Nov 2013 13:24:33 +0000 Subject: [nginx] Win32: fixed init_process without master process (ticket... Message-ID: details: http://hg.nginx.org/nginx/rev/b7bf4671bb7b branches: changeset: 5452:b7bf4671bb7b user: Maxim Dounin date: Fri Nov 29 17:23:47 2013 +0400 description: Win32: fixed init_process without master process (ticket #453). Init process callbacks are called by ngx_worker_thread(), there is no need to call them in ngx_single_process_cycle(). diffstat: src/os/win32/ngx_process_cycle.c | 10 ---------- 1 files changed, 0 insertions(+), 10 deletions(-) diffs (22 lines): diff --git a/src/os/win32/ngx_process_cycle.c b/src/os/win32/ngx_process_cycle.c --- a/src/os/win32/ngx_process_cycle.c +++ b/src/os/win32/ngx_process_cycle.c @@ -1022,18 +1022,8 @@ ngx_cache_loader_thread(void *data) void ngx_single_process_cycle(ngx_cycle_t *cycle) { - ngx_int_t i; ngx_tid_t tid; - for (i = 0; ngx_modules[i]; i++) { - if (ngx_modules[i]->init_process) { - if (ngx_modules[i]->init_process(cycle) == NGX_ERROR) { - /* fatal */ - exit(2); - } - } - } - ngx_process_init(cycle); ngx_console_init(cycle); From reeteshr at outlook.com Sat Nov 30 19:05:10 2013 From: reeteshr at outlook.com (Reetesh Ranjan) Date: Sun, 1 Dec 2013 00:35:10 +0530 Subject: Help on designing using multiple location/upstream modules In-Reply-To: References: , Message-ID: Hi Yichun Zhang, Thanks for the help! Going by the documentation of the lua-nginx-module on its subrequest handling it looks really promising for my use case. I am currently writing the Sphinx2 upstream module. Would get back with questions in case I have any on using the lua-nginx-module for achieving what I need to do. Regards,Reetesh > Date: Wed, 27 Nov 2013 12:39:15 -0800 > Subject: Re: Help on designing using multiple location/upstream modules > From: agentzh at gmail.com > To: nginx-devel at nginx.org > > Hello! 
> > On Tue, Nov 26, 2013 at 4:30 AM, Reetesh Ranjan wrote: > > I have thought of the following design, in terms of nginx modules I would > > use: > > > > 1 My main location module that picks the keywords entered and communicates > > to Redis and Sphinx > > 2 For communicating to Redis I thought of using HttpRedis2Module > > (http://wiki.nginx.org/HttpRedis2Module) > > 3 For communicating with Sphinx, I am trying to write a simple C++ client > > or adapt the Sphinx C++ client > > (http://sourceforge.net/projects/cppsphinxclient/) or its parts into an > > upstream module. > > > > This looks trivial if you use ngx_lua module as the glue. In > particular you can check out the ngx.location.capture and > ngx.location.capture_multi API functions for captured subrequests: > > https://github.com/chaoslawful/lua-nginx-module#ngxlocationcapture > > https://github.com/chaoslawful/lua-nginx-module#ngxlocationcapture_multi > > And probably also the "light thread" API that can work with the > subrequest API above: > > https://github.com/chaoslawful/lua-nginx-module#ngxthreadspawn > > When using the Lua API provided by ngx_lua, everything is nonblocking > out of the box :) > > Regards, > -agentzh > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Sat Nov 30 22:37:04 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sat, 30 Nov 2013 14:37:04 -0800 Subject: Help on designing using multiple location/upstream modules In-Reply-To: References: Message-ID: Hello! On Sat, Nov 30, 2013 at 11:05 AM, Reetesh Ranjan wrote: > Thanks for the help! Going by the documentation of the lua-nginx-module on > its subrequest handling it looks really promising for my use case. Great :) > I am > currently writing the Sphinx2 upstream module. BTW, you could also build a lua-resty-sphinx2 library atop ngx_lua's nonblocking cosocket API instead of writing an upstream C module. You can check out the lua-resty-mysql or lua-resty-redis libraries for examples: https://github.com/agentzh/lua-resty-mysql https://github.com/agentzh/lua-resty-redis The ngx_lua cosocket API is more flexible and much easier to use :) That way you also don't have to use nginx subrequests at all ;) > Would get back with questions > in case I have any on using the lua-nginx-module for achieving what I need > to do. > You're recommended to post ngx_lua related questions to the openresty-en mailing list: https://groups.google.com/group/openresty-en That way we can see your mails sooner rather later :) Best regards, -agentzh