From agentzh at gmail.com Wed May 2 15:04:08 2012
From: agentzh at gmail.com (agentzh)
Date: Wed, 2 May 2012 23:04:08 +0800
Subject: [PATCH] resetting write event handler in ngx_http_named_location
Message-ID: 

Hello!

drdrxp has found a hanging problem when using ngx_eval in a named
location which is internally redirected to at the content phase:

https://github.com/agentzh/nginx-eval-module/pull/1

We've found it is related to an issue in ngx_http_named_location which
could affect a wide range of different nginx modules. This issue still
exists in nginx 1.2.0.

Below is a patch (for nginx 1.0.15) that fixes it by resetting
r->write_event_handler to ngx_http_core_run_phases, just as in
ngx_http_internal_redirect. The patch is needed because when
ngx_http_named_location is called from within the content phase,
r->write_event_handler has been reset to ngx_http_request_empty_handler
in ngx_http_core_content_phase, and other content handlers may override
it with something else as well.

Hopefully this issue can be fixed in the official core.

Thanks!
-agentzh

--- nginx-1.0.15/src/http/ngx_http_core_module.c	2012-03-05 21:03:39.000000000 +0800
+++ nginx-1.0.15-patched/src/http/ngx_http_core_module.c	2012-05-02 21:57:40.624937882 +0800
@@ -2567,6 +2567,8 @@
 
     r->phase_handler = cmcf->phase_engine.location_rewrite_index;
 
+    r->write_event_handler = ngx_http_core_run_phases;
+
     ngx_http_core_run_phases(r);
 
     return NGX_DONE;

-------------- next part --------------
A non-text attachment was scrubbed...
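The hang described above can be reproduced in miniature. The toy below is standalone C, not nginx source — every name in it is invented for illustration. It models a request whose write event handler is parked by the content phase, then shows that a named-location redirect only keeps the request alive if it restores the handler before re-entering the phase engine, which is exactly what the patch does:

```c
#include <assert.h>

/* Hypothetical miniature of nginx's request/event wiring; the real
 * structures live in src/http/ngx_http_request.h. */
typedef struct toy_request_s toy_request_t;
typedef void (*toy_handler_pt)(toy_request_t *r);

struct toy_request_s {
    toy_handler_pt write_event_handler;
    int            phase_handler;   /* index into the phase engine */
    int            phases_run;      /* counts phase-engine invocations */
};

static void toy_run_phases(toy_request_t *r)    { r->phases_run++; }
static void toy_empty_handler(toy_request_t *r) { (void) r; }

/* The content phase parks the write handler, as
 * ngx_http_core_content_phase does with ngx_http_request_empty_handler. */
static void toy_content_phase(toy_request_t *r)
{
    r->write_event_handler = toy_empty_handler;
}

/* Named-location redirect: 'fixed' mirrors the patch, restoring the
 * phase-engine handler before re-entering it. */
static void toy_named_location(toy_request_t *r, int fixed)
{
    r->phase_handler = 0;                 /* rewind to the rewrite phase */

    if (fixed) {
        r->write_event_handler = toy_run_phases;
    }

    toy_run_phases(r);
}

/* A later write event simply invokes whatever handler is installed;
 * returns how far the request advanced (0 means it hangs). */
static int toy_write_event(toy_request_t *r)
{
    int before = r->phases_run;

    r->write_event_handler(r);

    return r->phases_run - before;
}
```

Without the fix, the later write event invokes the parked no-op handler and the request never advances — the hang drdrxp observed; with the fix, the phase engine runs again.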
Name: nginx-1.0.15-reset_wev_handler_in_named_locations.patch Type: application/octet-stream Size: 418 bytes Desc: not available URL: From mdounin at mdounin.ru Fri May 4 11:35:23 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Fri, 4 May 2012 11:35:23 +0000 Subject: [nginx] svn commit: r4615 - trunk/src/http Message-ID: <20120504113524.180463F9DAE@mail.nginx.com> Author: mdounin Date: 2012-05-04 11:35:22 +0000 (Fri, 04 May 2012) New Revision: 4615 URL: http://trac.nginx.org/nginx/changeset/4615/nginx Log: Added write event handler reset in ngx_http_named_location(). On internal redirects this happens via ngx_http_handler() call, which is not called on named location redirect. As a result incorrect write handler remained (if previously set) and this might cause incorrect behaviour (likely request hang). Patch by Yichun Zhang (agentzh). Modified: trunk/src/http/ngx_http_core_module.c Modified: trunk/src/http/ngx_http_core_module.c =================================================================== --- trunk/src/http/ngx_http_core_module.c 2012-04-29 22:02:18 UTC (rev 4614) +++ trunk/src/http/ngx_http_core_module.c 2012-05-04 11:35:22 UTC (rev 4615) @@ -2599,6 +2599,7 @@ r->phase_handler = cmcf->phase_engine.location_rewrite_index; + r->write_event_handler = ngx_http_core_run_phases; ngx_http_core_run_phases(r); return NGX_DONE; From mdounin at mdounin.ru Fri May 4 11:35:39 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 May 2012 15:35:39 +0400 Subject: [PATCH] resetting write event handler in ngx_http_named_location In-Reply-To: References: Message-ID: <20120504113539.GE31671@mdounin.ru> Hello! On Wed, May 02, 2012 at 11:04:08PM +0800, agentzh wrote: > Hello! 
> 
> drdrxp has found a hanging problem when using ngx_eval in a named
> location which is internal redirected to at content phase:
> 
> https://github.com/agentzh/nginx-eval-module/pull/1
> 
> And we've found it is related to an issue in ngx_http_named_location
> which could affect a wide range of different nginx modules. This issue
> still exists in nginx 1.2.0.
> 
> Below is a patch (for nginx 1.0.15) to fix it by resetting
> r->write_event_handler to ngx_http_core_run_phases, just as in
> ngx_http_internal_redirect. The patch is needed because when
> ngx_http_named_location is called from within the content phase,
> r->write_event_handler is reset to ngx_http_request_empty_handler in
> ngx_http_core_content_phase, or other content handlers may override it
> to other things too.
> 
> Hopefully this issue can be fixed in the official core.
> 
> Thanks!
> -agentzh
> 
> --- nginx-1.0.15/src/http/ngx_http_core_module.c	2012-03-05
> 21:03:39.000000000 +0800
> +++ nginx-1.0.15-patched/src/http/ngx_http_core_module.c
> 2012-05-02 21:57:40.624937882 +0800
> @@ -2567,6 +2567,8 @@
> 
>      r->phase_handler = cmcf->phase_engine.location_rewrite_index;
> 
> +    r->write_event_handler = ngx_http_core_run_phases;
> +
>      ngx_http_core_run_phases(r);
> 
>      return NGX_DONE;

Committed, thanks.

Maxim Dounin

From manlio.perillo at gmail.com Fri May 4 17:52:33 2012
From: manlio.perillo at gmail.com (Manlio Perillo)
Date: Fri, 04 May 2012 19:52:33 +0200
Subject: [PATCH] NGX_HTTP_SYNC flag in ngx_http_send_special
Message-ID: <4FA41761.70605@gmail.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Patch is attached, against mercurial mirror changeset 47cb3497fbab.

Use case:

I'm updating my ngx_http_wsgi_module.

In one of my test applications, I return a chunk-encoded response.
The problem is that after the last buffer is sent, I raise an
exception, causing Nginx to abort the request handling.

This, in turn, causes incorrect chunk encoding (Python httplib fails
to read the response).
Calling ngx_http_send_special(r, NGX_HTTP_FLUSH) after the last buffer
does not help; however, I found that with my patch, calling
ngx_http_send_special(r, NGX_HTTP_FLUSH | NGX_HTTP_SYNC) does help:
the response is now correctly chunk encoded.

Please note that I tested my patch using 0.7.59, since
ngx_http_wsgi_module has some issues on recent Nginx versions (that
I'm going to fix).

Also note that I can not use NGX_HTTP_LAST (due to how
ngx_http_wsgi_module is implemented), and I'm not sure if it is
correct to set sync = 1 for a buffer that is not the last one.

Thanks  Manlio Perillo

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk+kF2EACgkQscQJ24LbaURqRQCcDFzOqeHtj4C6Wzc4S4o6I0EP
LSUAoI+fSlZ0MKXEnp0JdhiB6AAhgxbm
=fqo2
-----END PGP SIGNATURE-----
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: ngx_http_sync_flag.diff
URL: 

From simohayha.bobo at gmail.com Sat May 5 12:10:30 2012
From: simohayha.bobo at gmail.com (Simon Liu)
Date: Sat, 5 May 2012 20:10:30 +0800
Subject: [PATCH] optimization of Intel processor cacheline calculation
Message-ID: 

Hello!

The cacheline calculation is hardcoded in ngx_cpuinfo, and this can go
wrong on some Intel processors. For example, the cache line is 64
bytes on Sandy Bridge, whose family code is 0110 and model number is
1010 or 1101 (see this document:
http://www.intel.com/content/www/us/en/processors/processor-identification-cpuid-instruction-note.html).
But the code in ngx_cpuinfo is:

    /* Pentium Pro, II, III */
    case 6:
        ngx_cacheline_size = 32;

        model = ((cpu[0] & 0xf0000) >> 8) | (cpu[0] & 0xf0);

        if (model >= 0xd0) {
            /* Intel Core, Core 2, Atom */
            ngx_cacheline_size = 64;
        }

        break;

If the model number is 1010, ngx_cacheline_size will be 32, which is
wrong.

Below is a patch (for nginx trunk) that fixes this problem; could we
use cpuid(2) to avoid the hardcoding?
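The model computation quoted above can be exercised numerically. Note that the expression folds in the extended-model field (EAX bits 19:16), not just the four model bits (7:4). The snippet below reproduces that expression in isolation and evaluates it for 0x206A7, a commonly reported Sandy Bridge CPUID signature (family 6, extended model 2, model 0xA, stepping 7 — an assumed sample value, not taken from the thread):

```c
#include <assert.h>
#include <stdint.h>

/* The family/model extraction used in nginx's ngx_cpuinfo():
 * model combines the extended model field (EAX bits 19:16, shifted
 * into the high nibble) with the model field (EAX bits 7:4). */
static uint32_t cpu_family(uint32_t eax)
{
    return (eax & 0xf00) >> 8;
}

static uint32_t cpu_model(uint32_t eax)
{
    return ((eax & 0xf0000) >> 8) | (eax & 0xf0);
}
```

For 0x206A7 this yields family 6 and model 0x2a0, which is >= 0xd0, so the existing `case 6` logic would already select a 64-byte cache line for such a part.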
Index: src/core/ngx_cpuinfo.c =================================================================== --- src/core/ngx_cpuinfo.c (revision 4615) +++ src/core/ngx_cpuinfo.c (working copy) @@ -12,9 +12,93 @@ #if (( __i386__ || __amd64__ ) && ( __GNUC__ || __INTEL_COMPILER )) +#define NGX_CACHE_LVL_1_DATA 1 +#define NGX_CACHE_LVL_2 2 +#define NGX_CACHE_LVL_3 3 +#define NGX_CACHE_PREFETCHING 4 + + +typedef struct ngx_cache_table { + u_char descriptor; + u_char type; + ngx_uint_t size; +} ngx_cache_table_t; + + static ngx_inline void ngx_cpuid(uint32_t i, uint32_t *buf); +static ngx_cache_table_t cache_table[] = { + { 0x0a, NGX_CACHE_LVL_1_DATA, 32 }, /* 32 byte line size */ + { 0x0c, NGX_CACHE_LVL_1_DATA, 32 }, /* 32 byte line size */ + { 0x0d, NGX_CACHE_LVL_1_DATA, 64 }, /* 64 byte line size */ + { 0x0e, NGX_CACHE_LVL_1_DATA, 64 }, /* 64 byte line size */ + { 0x21, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x22, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x23, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x25, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x29, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x2c, NGX_CACHE_LVL_1_DATA, 64 }, /* 64 byte line size */ + { 0x39, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x3a, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x3b, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x3c, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x3d, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x3e, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x3f, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x41, NGX_CACHE_LVL_2, 32 }, /* 32 byte line size */ + { 0x42, NGX_CACHE_LVL_2, 32 }, /* 32 byte line size */ + { 0x43, NGX_CACHE_LVL_2, 32 }, /* 32 byte line size */ + { 0x44, NGX_CACHE_LVL_2, 32 }, /* 32 byte line size */ + { 0x45, NGX_CACHE_LVL_2, 32 }, /* 32 byte line size */ + { 0x46, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x47, NGX_CACHE_LVL_3, 64 }, /* 64 byte line 
size */ + { 0x48, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x49, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x4a, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x4b, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x4c, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x4d, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0x4e, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x60, NGX_CACHE_LVL_1_DATA, 64 }, /* 64 byte line size */ + { 0x66, NGX_CACHE_LVL_1_DATA, 64 }, /* 64 byte line size */ + { 0x67, NGX_CACHE_LVL_1_DATA, 64 }, /* 64 byte line size */ + { 0x68, NGX_CACHE_LVL_1_DATA, 64 }, /* 64 byte line size */ + { 0x78, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x79, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x7a, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x7b, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x7c, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x7d, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x7f, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x80, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x82, NGX_CACHE_LVL_2, 32 }, /* 32 byte line size */ + { 0x83, NGX_CACHE_LVL_2, 32 }, /* 32 byte line size */ + { 0x84, NGX_CACHE_LVL_2, 32 }, /* 32 byte line size */ + { 0x85, NGX_CACHE_LVL_2, 32 }, /* 32 byte line size */ + { 0x86, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0x87, NGX_CACHE_LVL_2, 64 }, /* 64 byte line size */ + { 0xd0, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xd1, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xd2, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xd6, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xd7, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xd8, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xdc, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xdd, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xde, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xe2, NGX_CACHE_LVL_3, 64 }, 
/* 64 byte line size */ + { 0xe3, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xe4, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xea, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xeb, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xec, NGX_CACHE_LVL_3, 64 }, /* 64 byte line size */ + { 0xf0, NGX_CACHE_PREFETCHING, 64 }, /* 64-byte prefetching */ + { 0xf1, NGX_CACHE_PREFETCHING, 128 }, /* 128-byte prefetching */ + { 0x00, 0, 0} +}; + + #if ( __i386__ ) static ngx_inline void @@ -67,13 +151,25 @@ #endif +static ngx_inline +uint32_t ngx_cpuid_eax(uint32_t op) +{ + uint32_t cpu[4]; + + ngx_cpuid(op, cpu); + + return cpu[0]; +} + + /* auto detect the L2 cache line size of modern and widespread CPUs */ void ngx_cpuinfo(void) { - u_char *vendor; - uint32_t vbuf[5], cpu[4], model; + u_char *vendor, *dp, des; + uint32_t vbuf[5], cache[4], n; + ngx_uint_t i, j, k, l1, l2, l3, prefetch; vbuf[0] = 0; vbuf[1] = 0; @@ -81,6 +177,13 @@ vbuf[3] = 0; vbuf[4] = 0; + l1 = 0; + l2 = 0; + l3 = 0; + prefetch = 0; + + dp = (u_char *) cache; + ngx_cpuid(0, vbuf); vendor = (u_char *) &vbuf[1]; @@ -89,39 +192,57 @@ return; } - ngx_cpuid(1, cpu); - if (ngx_strcmp(vendor, "GenuineIntel") == 0) { - switch ((cpu[0] & 0xf00) >> 8) { + n = ngx_cpuid_eax(2) & 0xFF; - /* Pentium */ - case 5: - ngx_cacheline_size = 32; - break; + for (i = 0 ; i < n ; i++) { + ngx_cpuid(2, cache); - /* Pentium Pro, II, III */ - case 6: - ngx_cacheline_size = 32; + for (j = 0; j < 3; j++) { + if (cache[j] & (1 << 31)) { + cache[j] = 0; + } + } - model = ((cpu[0] & 0xf0000) >> 8) | (cpu[0] & 0xf0); + for (j = 1; j < 16; j++) { + des = dp[j]; + k = 0; - if (model >= 0xd0) { - /* Intel Core, Core 2, Atom */ - ngx_cacheline_size = 64; - } + while (cache_table[k].descriptor != 0) { + if (cache_table[k].descriptor == des) { - break; + switch (cache_table[k].type) { - /* - * Pentium 4, although its cache line size is 64 bytes, - * it prefetches up to two cache lines during memory read - */ - case 
15: - ngx_cacheline_size = 128; - break; + case NGX_CACHE_LVL_1_DATA: + l1 = cache_table[k].size; + break; + + case NGX_CACHE_LVL_2: + l2 = cache_table[k].size; + break; + + case NGX_CACHE_LVL_3: + l3 = cache_table[k].size; + break; + + case NGX_CACHE_PREFETCHING: + prefetch = cache_table[k].size; + break; + } + + break; + } + + k++; + } + } } + ngx_cacheline_size = ngx_max(l1, l2); + ngx_cacheline_size = ngx_max(l3, ngx_cacheline_size); + ngx_cacheline_size = ngx_max(prefetch, ngx_cacheline_size); + } else if (ngx_strcmp(vendor, "AuthenticAMD") == 0) { ngx_cacheline_size = 64; } -- do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cpuinfo.patch Type: application/octet-stream Size: 8228 bytes Desc: not available URL: From mdounin at mdounin.ru Sat May 5 21:41:04 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 6 May 2012 01:41:04 +0400 Subject: [PATCH] optimization of Intel processor cacheline calculation In-Reply-To: References: Message-ID: <20120505214104.GP31671@mdounin.ru> Hello! On Sat, May 05, 2012 at 08:10:30PM +0800, Simon Liu wrote: > Hello! > > cacheline calculation is hardcode in ngx_cpuinfo, this will make mistake in > some intel processor. example cache line is 64 byte in sandy bridge, > its family code is 0110 and model no is 1010 or 1101(in this document > http://www.intel.com/content/www/us/en/processors/processor-identification-cpuid-instruction-note.html). > but code is this in ngx_cpuinfo: > > /* Pentium Pro, II, III */ > case 6: > ngx_cacheline_size = 32; > > model = ((cpu[0] & 0xf0000) >> 8) | (cpu[0] & 0xf0); > > if (model >= 0xd0) { > /* Intel Core, Core 2, Atom */ > ngx_cacheline_size = 64; > } > > break; > > if model no is 1010 , ngx_cacheline_size will be 32, and so this is wrong. 
Note the model variable in the above code includes extended model field as well, and for sandy bridge it will be 0x2a0 (extended model 0010, model 0101). Thus cache line size is correctly detected as 64. > Below is a patch(for nginx trunk) fix this problem, and use cpuid(2) solve > hardcode? > > Index: src/core/ngx_cpuinfo.c > =================================================================== > --- src/core/ngx_cpuinfo.c (revision 4615) > +++ src/core/ngx_cpuinfo.c (working copy) > @@ -12,9 +12,93 @@ > #if (( __i386__ || __amd64__ ) && ( __GNUC__ || __INTEL_COMPILER )) > > > +#define NGX_CACHE_LVL_1_DATA 1 > +#define NGX_CACHE_LVL_2 2 > +#define NGX_CACHE_LVL_3 3 > +#define NGX_CACHE_PREFETCHING 4 > + > + > +typedef struct ngx_cache_table { > + u_char descriptor; > + u_char type; > + ngx_uint_t size; > +} ngx_cache_table_t; > + > + > static ngx_inline void ngx_cpuid(uint32_t i, uint32_t *buf); > > > +static ngx_cache_table_t cache_table[] = { > + { 0x0a, NGX_CACHE_LVL_1_DATA, 32 }, /* 32 byte line size */ > + { 0x0c, NGX_CACHE_LVL_1_DATA, 32 }, /* 32 byte line size */ > + { 0x0d, NGX_CACHE_LVL_1_DATA, 64 }, /* 64 byte line size */ > + { 0x0e, NGX_CACHE_LVL_1_DATA, 64 }, /* 64 byte line size */ [...] I don't really think we need full intel cache descriptor decoding. It's rather huge and I suspect it might cause more harm than good, especially in virtualized environment. And if we decide we need one, we probably want something simplier (i.e. we don't care about cache levels and so on, and this information is clearly not needed here). Maxim Dounin From manlio.perillo at gmail.com Sun May 6 07:43:15 2012 From: manlio.perillo at gmail.com (Manlio Perillo) Date: Sun, 06 May 2012 09:43:15 +0200 Subject: [BUG] some directives set using -g command line options are ignored Message-ID: <4FA62B93.8090107@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi. I have noticed that some directives set using the -g command line options are ignored. 
As an example, commenting out the daemon and master directives in the
nginx.conf file, the following:

nginx -p . -c conf/nginx.conf -g daemon=off -g master=off

will start an Nginx server with daemon on and master on.

I have also checked the pid and error_log directives and they are
ignored, too.

Nginx behaviour with -g error_log=<...> is strange, since the
error_log path is ignored (the default logs/error.log is used
instead), but the file is empty.

Thanks  Manlio Perillo

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk+mK5MACgkQscQJ24LbaUT9ogCfRu01QYfgrtciznBVMJq/mbks
2OoAnjTP8wvK3vKuUQkG3N+OTHG8TcpP
=HbwH
-----END PGP SIGNATURE-----

From mdounin at mdounin.ru Sun May 6 09:48:40 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 6 May 2012 13:48:40 +0400
Subject: [BUG] some directives set using -g command line options are ignored
In-Reply-To: <4FA62B93.8090107@gmail.com>
References: <4FA62B93.8090107@gmail.com>
Message-ID: <20120506094840.GR31671@mdounin.ru>

Hello!

On Sun, May 06, 2012 at 09:43:15AM +0200, Manlio Perillo wrote:

> Hi.
> 
> I have noticed that some directives set using the -g command line
> options are ignored.
> 
> As an example, commenting the daemon and master directives in the
> nginx.conf file, the following:
> 
> nginx -p . -c conf/nginx.conf -g daemon=off -g master=off
> 
> will start an Nginx server with daemon on and master on.
> 
> I have also checked the pid and error_log directives and they are
> ignored, too.
> 
> Nginx behaviour with -g error_log=<...> is strange, since the error_log
> path is ignored (the default logs/error.log is used insted), but the
> file is empty.

As already discussed on the nginx@ list, this is fixed in 1.2.0. Now
nginx correctly complains about the invalid syntax used. The right one
is

nginx -p .
-c conf/nginx.conf -g "daemon off; master_process off;"

Maxim Dounin

From mdounin at mdounin.ru Sun May 6 18:40:10 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 6 May 2012 22:40:10 +0400
Subject: [PATCH] NGX_HTTP_SYNC flag in ngx_http_send_special
In-Reply-To: <4FA41761.70605@gmail.com>
References: <4FA41761.70605@gmail.com>
Message-ID: <20120506184010.GT31671@mdounin.ru>

Hello!

On Fri, May 04, 2012 at 07:52:33PM +0200, Manlio Perillo wrote:

> Patch is attached, against mercurial mirror changeset 47cb3497fbab.
> 
> 
> Use case:
> 
> I'm updating my ngx_http_wsgi_module.
> 
> In one of my test application, I return a chunk-encoded response.
> The problem is that after last buffer is sent, I raise an exception,
> causing Nginx to abort the request handling.
> 
> This, in turn, cause an incorrect chunk encoding (Python httplib fails
> to read the response).
> 
> Calling
> ngx_http_send_special(r, NGX_HTTP_FLUSH)
> after last buffer does not help, however I found that using my patch,
> calling
> ngx_http_send_special(r, NGX_HTTP_FLUSH | NGX_HTTP_SYNC)
> do help; now the response is correctly chunk encoded.
> 
> Please note that I tested my patch using 0.7.59, since
> ngx_http_wsgi_module has some issues on recent Nginx versions (that I'm
> going to fix).

Just ngx_http_send_special(r, NGX_HTTP_FLUSH) should be enough to
flush internal buffers, but please note that it doesn't mean that the
response is actually sent - there still may be buffering on the socket
level etc. If you abort request handling (not sure how you do it) - it
may result in an incomplete response being sent.

Not sure about 0.7.59, it's really old, but if you still see the
problem with 1.2.0 - please report more details, we probably want to
fix this properly.

> Also note that I can not use NGX_HTTP_LAST (due to how
> ngx_http_wsgi_module is implemented), and I'm not sure if is it correct
> to set sync = 1 for a buffer that is not the last one.
Setting sync = 1 on intermediate buffers is ok, but it shouldn't be
needed to explicitly send such buffers.

Maxim Dounin

From manlio.perillo at gmail.com Sun May 6 19:22:59 2012
From: manlio.perillo at gmail.com (Manlio Perillo)
Date: Sun, 06 May 2012 21:22:59 +0200
Subject: [PATCH] NGX_HTTP_SYNC flag in ngx_http_send_special
In-Reply-To: <20120506184010.GT31671@mdounin.ru>
References: <4FA41761.70605@gmail.com> <20120506184010.GT31671@mdounin.ru>
Message-ID: <4FA6CF93.7080507@gmail.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 06/05/2012 20:40, Maxim Dounin wrote:
> Hello!
> 
> On Fri, May 04, 2012 at 07:52:33PM +0200, Manlio Perillo wrote:
> 
>> Patch is attached, against mercurial mirror changeset 47cb3497fbab.
>> 
>> 
>> Use case:
>> 
>> I'm updating my ngx_http_wsgi_module.
>> 
>> In one of my test application, I return a chunk-encoded response.
>> The problem is that after last buffer is sent, I raise an exception,
>> causing Nginx to abort the request handling.
>> 
> [...]
> Just ngx_http_send_special(r, NGX_HTTP_FLUSH) should be enough to
> flush internal buffers, but please not that it doesn't mean that
> response is actually sent - there still may be buffering on socket
> level etc. If you abort request handling (not sure how you do it)
> - it may result in incomplete response sent.
> 

Well, I do not really abort the request; I simply call
ngx_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR).

> Not sure about 0.7.59, it's really old, but if you'll still see
> the problem with 1.2.0 - please report more details, we probably
> want to fix this properly.
> 

I finally fixed an issue that prevented ngx_http_wsgi_module from
being compiled with "recent" Nginx versions (the new
request->main->count introduced in Nginx 0.8.11).

Using 1.2.0 the issue is solved. Now Nginx returns a correct
chunk-encoded response, with the special HTML text for the 500 status
code at the end of the response body. It is not required to call
ngx_http_send_special(r, NGX_HTTP_FLUSH).
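The flush semantics under discussion can be sketched in isolation. The toy output filter below is a hypothetical illustration (not nginx's actual write filter): it holds payload bytes back until some buffer in the chain carries the flush or last_buf flag — and an empty buffer with the flush flag set is precisely what ngx_http_send_special(r, NGX_HTTP_FLUSH) queues:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of ngx_buf_t's flags: payload may sit in the
 * filter until a buffer demands a flush or ends the response. */
typedef struct {
    size_t   size;          /* payload bytes (0 for special buffers) */
    unsigned flush:1;
    unsigned last_buf:1;
    unsigned sync:1;
} toy_buf_t;

/* Returns the bytes actually written out; anything after the final
 * flush/last_buf marker stays buffered, mirroring why a trailing
 * special flush buffer pushes out a pending chunked tail. */
static size_t toy_output_filter(const toy_buf_t *chain, int n)
{
    size_t pending = 0, written = 0;
    int    i;

    for (i = 0; i < n; i++) {
        pending += chain[i].size;

        if (chain[i].flush || chain[i].last_buf) {
            written += pending;
            pending = 0;
        }
    }

    return written;
}
```

With no flush marker the payload stays pending; appending a zero-size flush buffer releases it, which matches the behaviour Manlio reports on 1.2.0. (As Maxim notes, this says nothing about socket-level buffering further down.)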
>> Also note that I can not use NGX_HTTP_LAST (due to how
>> ngx_http_wsgi_module is implemented), and I'm not sure if is it correct
>> to set sync = 1 for a buffer that is not the last one.
> 
> Setting sync = 1 on intermediate buffers is ok, but it shouldn't
> be needed to explicitly sent such buffers.
> 

With Nginx 0.7.59, setting sync = 1 causes the response body to be
truncated. Buffers sent after sync are not received by the client.

Well, it is no longer a problem now that I can use a recent version of
Nginx.

Thanks  Manlio

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEUEARECAAYFAk+mz5MACgkQscQJ24LbaUQE+wCfY3vl+vC1Yid9PfGJ1b7pbNOP
hWUAmN2xMEGuGBQVRqUbE629wGP2Etk=
=Wkau
-----END PGP SIGNATURE-----

From goelvivek2011 at gmail.com Mon May 7 06:29:06 2012
From: goelvivek2011 at gmail.com (vivek goel)
Date: Mon, 7 May 2012 11:59:06 +0530
Subject: Nginx showing directive is duplicate
Message-ID: 

nginx version: nginx/1.1.18

What is wrong with the following declaration?

{ ngx_string("v_max_result"),
  NGX_HTTP_MAIN_CONF | NGX_HTTP_LOC_CONF|NGX_HTTP_SRV_CONF | NGX_CONF_TAKE1,
  ngx_conf_set_num_slot,
  NGX_HTTP_LOC_CONF_OFFSET,
  offsetof(ngx_http_v_conf_t, max_result),
  NULL},

I am getting the error

nginx: [emerg] "v_max_result" directive is duplicate

But if I declare it as a string it works fine:

{ ngx_string("v_max_result"),
  NGX_HTTP_MAIN_CONF | NGX_HTTP_LOC_CONF|NGX_HTTP_SRV_CONF | NGX_CONF_TAKE1,
  ngx_conf_set_str_slot,
  NGX_HTTP_LOC_CONF_OFFSET,
  offsetof(ngx_http_v_conf_t, max_result),
  NULL},

regards
Vivek Goel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From goelvivek2011 at gmail.com Mon May 7 08:07:00 2012
From: goelvivek2011 at gmail.com (vivek goel)
Date: Mon, 7 May 2012 13:37:00 +0530
Subject: Nginx showing directive is duplicate
In-Reply-To: 
References: 
Message-ID: 

Found the bug in my code: I was not initializing the field to
NGX_CONF_UNSET_UINT in the create_conf function.
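The "is duplicate" message vivek hit comes from the slot handler's sentinel check. A hedged sketch with toy names (the real declarations are the slot handlers and the NGX_CONF_UNSET sentinels in nginx's core, whose exact definitions are not reproduced here): the handler treats anything other than the "unset" sentinel as an already-configured value, so a field left zeroed by pcalloc'd create_conf memory looks like a duplicate on the directive's very first use.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef long toy_int_t;                   /* stand-in for nginx's ngx_int_t */
#define TOY_CONF_UNSET  ((toy_int_t) -1)

/* Sketch of the duplicate check inside a numeric configuration slot
 * handler: a directive may be set only once, and "not set yet" is
 * encoded as the UNSET sentinel, not as zero. */
static const char *toy_set_num_slot(toy_int_t *np, toy_int_t value)
{
    if (*np != TOY_CONF_UNSET) {
        return "is duplicate";            /* field already holds a value */
    }

    *np = value;
    return NULL;                          /* success, like NGX_CONF_OK */
}
```

This is why a create_conf callback must explicitly assign the UNSET sentinel to every numeric field before the configuration is parsed.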
regards
Vivek Goel

On Mon, May 7, 2012 at 11:59 AM, vivek goel wrote:

> nginx version: nginx/1.1.18
> 
> What is wrong in following declaration
> 
> { ngx_string("v_max_result"),
>   NGX_HTTP_MAIN_CONF | NGX_HTTP_LOC_CONF|NGX_HTTP_SRV_CONF |
>   NGX_CONF_TAKE1,
>   ngx_conf_set_num_slot,
>   NGX_HTTP_LOC_CONF_OFFSET,
>   offsetof(ngx_http_v_conf_t, max_result),
>   NULL},
> 
> I am getting error
> nginx: [emerg] "v_max_result" directive is duplicate
> 
> But if I declare it as string it is working fine
> 
> { ngx_string("v_max_result"),
>   NGX_HTTP_MAIN_CONF | NGX_HTTP_LOC_CONF|NGX_HTTP_SRV_CONF |
>   NGX_CONF_TAKE1,
>   ngx_conf_set_str_slot,
>   NGX_HTTP_LOC_CONF_OFFSET,
>   offsetof(ngx_http_v_conf_t, max_result),
>   NULL},
> 
> regards
> Vivek Goel

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From goelvivek2011 at gmail.com Mon May 7 14:49:48 2012
From: goelvivek2011 at gmail.com (vivek goel)
Date: Mon, 7 May 2012 20:19:48 +0530
Subject: Event based implementation in http module
Message-ID: 

I am working on an http module using nginx. I have one question.

1. Will the function specified in ngx_command_t be a blocking call?

If not, my module is as follows: it reads a file, which is a blocking
call. I think that means the worker process can't serve other clients
at the same time?

The solution I am thinking of is to do the blocking operation in a
separate thread and call a callback to send the response when it is
ready. Is there a way I can tell the worker process to keep accepting
connections and serve the response for the old request once it is
ready for that client?

Can you please suggest a better way to serve multiple clients despite
a blocking call in an nginx http module?

regards
Vivek Goel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Mon May 7 17:13:41 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 May 2012 21:13:41 +0400 Subject: Event based implementation in http module In-Reply-To: References: Message-ID: <20120507171341.GX31671@mdounin.ru> Hello! On Mon, May 07, 2012 at 08:19:48PM +0530, vivek goel wrote: > I am working on http module using nginx. > I have one question. > > 1. Is function specified in ngx_command_t will be blocking call ? > > If not > My module description is as follow: > It does read of file which is blocking call. That I think at same > time worker process can't server the same client ? Functions specified in ngx_command_t structures are configuration parsing handlers, they are executed during configuration parsing within master process and are allowed to block (as this doesn't affect worker processes and hence request handling). Maxim Dounin From goelvivek2011 at gmail.com Mon May 7 17:27:04 2012 From: goelvivek2011 at gmail.com (vivek goel) Date: Mon, 7 May 2012 22:57:04 +0530 Subject: Event based implementation in http module In-Reply-To: References: Message-ID: @Maxim and what about handler function specified by clcf->handler ? Is it also blocking ? and what about my others questions. Can I server multiple client using worker process ? regards Vivek Goel On Mon, May 7, 2012 at 8:19 PM, vivek goel wrote: > I am working on http module using nginx. > I have one question. > > 1. Is function specified in ngx_command_t will be blocking call ? > > If not > My module description is as follow: > It does read of file which is blocking call. That I think at same > time worker process can't server the same client ? > > The solution I am thinking is that I can do a blocking operation in one > thread and call a callback to send response when response is ready. Is > there a way I can tell worker process to start accepting the connection and > server the response for old request when response is ready for that client? 
> > Can you please suggest some better idea to server multiple client on > blocking call with nginx http module ? > > > > regards > Vivek Goel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Tue May 8 05:12:34 2012 From: agentzh at gmail.com (agentzh) Date: Tue, 8 May 2012 13:12:34 +0800 Subject: Event based implementation in http module In-Reply-To: References: Message-ID: May 8, 2012 at 1:27 AM, vivek goel wrote: > and what about handler function specified by?? clcf->handler ? > Is it also blocking ? The clcf->handler, i.e., the content handler, runs at request-time. It should never be blocking on network communications, or your nginx worker process will block on a single request. > and what about my others questions. Can I server multiple client using > worker process ? > The single-threaded nginx worker is designed to be able to serve a *lot* of concurrent requests at the same time. Regards, -agentzh From goelvivek2011 at gmail.com Tue May 8 05:46:27 2012 From: goelvivek2011 at gmail.com (vivek goel) Date: Tue, 8 May 2012 11:16:27 +0530 Subject: Event based implementation in http module In-Reply-To: References: Message-ID: Sorry just clearing my doubt. Again I have one doubt. Work I am doing in clcf->handle is a blocking io call. Now if I am running nginx with 2 worker process and function I am calling in clcf->handle takes 200 ms to generate response. So it means that I will not able to server other clients from same worker process withing 200 ms time ? If yes , How can I make it non-blocking so that I can server multiple clients ? Thanks in advance for your reply. regards Vivek Goel On Mon, May 7, 2012 at 10:57 PM, vivek goel wrote: > @Maxim > and what about handler function specified by clcf->handler ? > Is it also blocking ? > and what about my others questions. Can I server multiple client using > worker process ? 
> > regards
> > Vivek Goel
> 
> On Mon, May 7, 2012 at 8:19 PM, vivek goel wrote:
> 
>> I am working on http module using nginx.
>> I have one question.
>> 
>> 1. Is function specified in ngx_command_t will be blocking call ?
>> 
>> If not
>> My module description is as follow:
>> It does read of file which is blocking call. That I think at same
>> time worker process can't server the same client ?
>> 
>> The solution I am thinking is that I can do a blocking operation in one
>> thread and call a callback to send response when response is ready. Is
>> there a way I can tell worker process to start accepting the connection and
>> server the response for old request when response is ready for that client?
>> 
>> Can you please suggest some better idea to server multiple client on
>> blocking call with nginx http module ?
>> 
>> regards
>> Vivek Goel

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From simohayha.bobo at gmail.com Tue May 8 12:10:46 2012
From: simohayha.bobo at gmail.com (Simon Liu)
Date: Tue, 8 May 2012 20:10:46 +0800
Subject: [PATCH]add new http status code
Message-ID: 

Hello!

RFC 6585 (http://tools.ietf.org/html/rfc6585) defines a few new http
status codes, so I have added them to Nginx.

The new status codes 429 (Too Many Requests) and 431 (Request Header
Fields Too Large) from RFC 6585 are useful to Nginx, because Nginx
currently returns 400 when the request header is too large and 503
when there are too many requests (in limit_req).

Below is a patch (for Nginx trunk) that uses 431 in place of 400 when
the request header is too large, and 429 in place of 503 when too many
requests are detected (limit_req module). In addition it adds the new
http status code 511.

Thanks!
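For reference, RFC 6585 registers four status codes in total. The small lookup table below is an illustration of those codes and their reason phrases only; it is not the representation used in the patch that follows:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The four status codes registered by RFC 6585. */
typedef struct {
    int         code;
    const char *reason;
} toy_status_t;

static const toy_status_t toy_rfc6585[] = {
    { 428, "Precondition Required" },
    { 429, "Too Many Requests" },
    { 431, "Request Header Fields Too Large" },
    { 511, "Network Authentication Required" },
};

/* Returns the RFC 6585 reason phrase, or NULL for unknown codes. */
static const char *toy_reason(int code)
{
    size_t i;

    for (i = 0; i < sizeof(toy_rfc6585) / sizeof(toy_rfc6585[0]); i++) {
        if (toy_rfc6585[i].code == code) {
            return toy_rfc6585[i].reason;
        }
    }

    return NULL;
}
```

Note that 428 (Precondition Required) is also part of the RFC but is not touched by the patch, which maps 431 to the header-too-large case and 429 to the limit_req case.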
Index: src/http/ngx_http_special_response.c
===================================================================
--- src/http/ngx_http_special_response.c (revision 4615)
+++ src/http/ngx_http_special_response.c (working copy)
@@ -210,13 +210,21 @@
 ;
 
 
-static char ngx_http_error_494_page[] =
+static char ngx_http_error_429_page[] =
 "<html>" CRLF
-"<head><title>400 Request Header Or Cookie Too Large</title></head>"
+"<head><title>429 Too Many Requests</title></head>" CRLF
+"<body bgcolor=\"white\">" CRLF
+"<center><h1>429 Too Many Requests</h1></center>" CRLF
+;
+
+
+static char ngx_http_error_431_page[] =
+"<html>" CRLF
+"<head><title>431 Request Header Fields Too Large</title></head>" CRLF
 "<body bgcolor=\"white\">" CRLF
-"<center><h1>400 Bad Request</h1></center>" CRLF
-"<center>Request Header Or Cookie Too Large</center>" CRLF
+"<center><h1>431 Request Header Fields Too Large</h1></center>" CRLF
+"<center>request header fields too large.</center>"
 ;
@@ -298,6 +306,16 @@
 ;
 
 
+static char ngx_http_error_511_page[] =
+"<html>" CRLF
+"<head><title>511 Network Authentication Required</title></head>" CRLF
+"<body bgcolor=\"white\">" CRLF
+"<center><h1>511 Network Authentication Required</h1></center>" CRLF
+"<center>You need to "
+"authenticate with the local network in order to gain access.</center>"
+;
+
+
 static ngx_str_t ngx_http_error_pages[] = {
 
     ngx_null_string,                     /* 201, 204 */
@@ -334,11 +352,25 @@
     ngx_string(ngx_http_error_414_page),
     ngx_string(ngx_http_error_415_page),
     ngx_string(ngx_http_error_416_page),
+    ngx_null_string,                     /* 417 */
+    ngx_null_string,                     /* 418 */
+    ngx_null_string,                     /* 419 */
+    ngx_null_string,                     /* 420 */
+    ngx_null_string,                     /* 421 */
+    ngx_null_string,                     /* 422 */
+    ngx_null_string,                     /* 423 */
+    ngx_null_string,                     /* 424 */
+    ngx_null_string,                     /* 425 */
+    ngx_null_string,                     /* 426 */
+    ngx_null_string,                     /* 427 */
+    ngx_null_string,                     /* 428 */
+    ngx_string(ngx_http_error_429_page),
+    ngx_null_string,                     /* 430 */
+    ngx_string(ngx_http_error_431_page),
 
-#define NGX_HTTP_LAST_4XX  417
+#define NGX_HTTP_LAST_4XX  432
 #define NGX_HTTP_OFF_5XX   (NGX_HTTP_LAST_4XX - 400 + NGX_HTTP_OFF_4XX)
 
-    ngx_string(ngx_http_error_494_page), /* 494, request header too large */
     ngx_string(ngx_http_error_495_page), /* 495, https certificate error */
     ngx_string(ngx_http_error_496_page), /* 496, https no certificate */
     ngx_string(ngx_http_error_497_page), /* 497, http to https */
@@ -352,9 +384,13 @@
     ngx_string(ngx_http_error_504_page),
     ngx_null_string,                     /* 505 */
     ngx_null_string,                     /* 506 */
-    ngx_string(ngx_http_error_507_page)
+    ngx_string(ngx_http_error_507_page),
+    ngx_null_string,                     /* 508 */
+    ngx_null_string,                     /* 509 */
+    ngx_null_string,                     /* 510 */
+    ngx_string(ngx_http_error_511_page),
 
-#define NGX_HTTP_LAST_5XX  508
+#define NGX_HTTP_LAST_5XX  512
 };
@@ -460,7 +496,6 @@
     case NGX_HTTP_TO_HTTPS:
     case NGX_HTTPS_CERT_ERROR:
     case NGX_HTTPS_NO_CERT:
-    case NGX_HTTP_REQUEST_HEADER_TOO_LARGE:
         r->err_status = NGX_HTTP_BAD_REQUEST;
         break;
     }
Index: src/http/modules/ngx_http_limit_req_module.c
===================================================================
--- src/http/modules/ngx_http_limit_req_module.c (revision 4615)
+++ src/http/modules/ngx_http_limit_req_module.c (working copy)
@@ -245,7 +245,7 @@
             ctx->node = NULL;
         }
 
-        return NGX_HTTP_SERVICE_UNAVAILABLE;
+        return NGX_HTTP_TOO_MANY_REQUESTS;
     }
 
     /* rc == NGX_AGAIN || rc == NGX_OK */
Index: src/http/ngx_http_request.h
===================================================================
--- src/http/ngx_http_request.h (revision 4615)
+++ src/http/ngx_http_request.h (working copy)
@@ -91,16 +91,18 @@
 #define NGX_HTTP_UNSUPPORTED_MEDIA_TYPE    415
 #define NGX_HTTP_RANGE_NOT_SATISFIABLE     416
 
+/* RFC 6585 */
+#define NGX_HTTP_TOO_MANY_REQUESTS         429
+#define NGX_HTTP_REQUEST_HEADER_TOO_LARGE  431
+
 
 /* Our own HTTP codes */
 
 /* The special code to close connection without any response */
 #define NGX_HTTP_CLOSE                     444
 
-#define NGX_HTTP_NGINX_CODES               494
+#define NGX_HTTP_NGINX_CODES               495
 
-#define NGX_HTTP_REQUEST_HEADER_TOO_LARGE  494
-
 #define NGX_HTTPS_CERT_ERROR               495
 #define NGX_HTTPS_NO_CERT                  496
 
@@ -128,7 +130,10 @@
 #define NGX_HTTP_GATEWAY_TIME_OUT          504
 #define NGX_HTTP_INSUFFICIENT_STORAGE      507
 
+/* RFC 6585 */
+#define NGX_HTTP_NETWORK_AUTHENTICATION_REQUIRED  511
+
 #define NGX_HTTP_LOWLEVEL_BUFFERED         0xf0
 #define NGX_HTTP_WRITE_BUFFERED            0x10
 #define NGX_HTTP_GZIP_BUFFERED             0x20
Index: src/http/ngx_http_header_filter_module.c
===================================================================
--- src/http/ngx_http_header_filter_module.c (revision 4615)
+++ src/http/ngx_http_header_filter_module.c (working copy)
@@ -99,16 +99,24 @@
     ngx_string("415 Unsupported Media Type"),
     ngx_string("416 Requested Range Not Satisfiable"),
 
-    /* ngx_null_string, */  /* "417 Expectation Failed" */
-    /* ngx_null_string, */  /* "418 unused" */
-    /* ngx_null_string, */  /* "419 unused" */
-    /* ngx_null_string, */  /* "420 unused" */
-    /* ngx_null_string, */  /* "421 unused" */
-    /* ngx_null_string, */  /* "422 Unprocessable Entity" */
-    /* ngx_null_string, */  /* "423 Locked" */
-    /* ngx_null_string, */  /* "424 Failed Dependency" */
+    ngx_null_string,  /* "417 Expectation Failed" */
+    ngx_null_string,  /* "418 unused" */
+    ngx_null_string,  /* "419 unused" */
+    ngx_null_string,  /* "420 unused" */
+    ngx_null_string,  /* "421 unused" */
+    ngx_null_string,  /* "422 Unprocessable Entity" */
+    ngx_null_string,  /* "423 Locked" */
+    ngx_null_string,  /* "424 Failed Dependency" */
+    ngx_null_string,  /* "425 unused" */
+    ngx_null_string,  /* "426 unused" */
+    ngx_null_string,  /* "427 unused" */
+    ngx_null_string,  /* "428 Precondition Required" */
 
-#define NGX_HTTP_LAST_4XX  417
+    ngx_string("429 Too Many Requests"),
+    ngx_null_string,  /* "430 unused" */
+    ngx_string("431 Request Header Fields Too Large"),
+
+#define NGX_HTTP_LAST_4XX  432
 #define NGX_HTTP_OFF_5XX   (NGX_HTTP_LAST_4XX - 400 + NGX_HTTP_OFF_4XX)
 
     ngx_string("500 Internal Server Error"),
@@ -120,12 +128,14 @@
     ngx_null_string,  /* "505 HTTP Version Not Supported" */
     ngx_null_string,  /* "506 Variant Also Negotiates" */
     ngx_string("507 Insufficient Storage"),
-    /* ngx_null_string, */  /* "508 unused" */
-    /* ngx_null_string, */  /* "509 unused" */
-    /* ngx_null_string, */  /* "510 Not Extended" */
+    ngx_null_string,  /* "508 unused" */
+    ngx_null_string,  /* "509 unused" */
+    ngx_null_string,  /* "510 Not Extended" */
 
-#define NGX_HTTP_LAST_5XX  508
+    ngx_string("511 Network Authentication Required"),
 
+#define NGX_HTTP_LAST_5XX  512
+
 };
-- do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx_trunk_rfc6585.patch Type: application/octet-stream Size: 7465 bytes Desc: not available URL: From agentzh at gmail.com Tue May 8 12:15:33 2012 From: agentzh at gmail.com (agentzh) Date: Tue, 8 May 2012 20:15:33 +0800 Subject: Event based implementation in http module In-Reply-To: References: Message-ID: On Tue, May 8, 2012 at 1:46 PM, vivek goel wrote: > Sorry, just clearing my doubt. > Again I have one doubt. > The work I am doing in clcf->handler is a blocking IO call.
> Now if I am running nginx with 2 worker processes and the function I am calling in > clcf->handler takes 200 ms to generate a response, > does it mean that I will not be able to serve other clients from the same worker > process within that 200 ms? > Sure. The worker process is single-threaded, and using blocking calls in the content handler essentially turns nginx into an Apache server with the prefork MPM. Some inexperienced nginx developers have been making this serious mistake in their nginx modules. > If yes, > how can I make it non-blocking so that I can serve multiple clients? > Just use non-blocking calls and leverage the nginx event model. You can check out all the nginx upstream modules out there: ngx_memcached, ngx_redis2, ngx_memc, ngx_drizzle, ngx_postgres, ngx_proxy, ngx_lua, ngx_beanstalkd, and so on. Regards, -agentzh From mdounin at mdounin.ru Tue May 8 19:39:20 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 May 2012 23:39:20 +0400 Subject: [PATCH]add new http status code In-Reply-To: References: Message-ID: <20120508193919.GZ31671@mdounin.ru> Hello! On Tue, May 08, 2012 at 08:10:46PM +0800, Simon Liu wrote: > Hello! > > RFC 6585 (http://tools.ietf.org/html/rfc6585) adds a few new HTTP status codes, > and so I have added them to nginx. > > The new status codes 429 (Too Many Requests) and 431 (Request Header > Fields Too Large) in RFC 6585 are useful to nginx, > because nginx currently returns 400 when the request header is too large > and 503 on too many requests (in limit_req). > > Below is a patch (for nginx trunk) that returns 431 instead of 400 when the request header > is too large, and 429 instead of 503 on too many requests (limit_req > module); in addition it adds the new status code 511. There are no plans to replace the returned status codes in the near future, at least not before we have a clear understanding of the effects of the new codes on the behaviour of various clients.
And in the case of limit_req/limit_conn this might not happen at all, for compatibility reasons: there are lots of configs out there which expect 503 to be returned. Maxim Dounin From manlio.perillo at gmail.com Tue May 8 21:24:33 2012 From: manlio.perillo at gmail.com (Manlio Perillo) Date: Tue, 08 May 2012 23:24:33 +0200 Subject: [PATCH]add new http status code In-Reply-To: <20120508193919.GZ31671@mdounin.ru> References: <20120508193919.GZ31671@mdounin.ru> Message-ID: <4FA98F11.4020106@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 08/05/2012 21:39, Maxim Dounin wrote: > [...] >> Below is a patch (for nginx trunk) that returns 431 instead of 400 when the request header >> is too large, and 429 instead of 503 on too many requests (limit_req >> module); in addition it adds the new status code 511. > > There are no plans to replace the returned status codes in the near > future, at least not before we have a clear understanding of the effects > of the new codes on the behaviour of various clients. > > And in the case of limit_req/limit_conn this might not happen at all, for > compatibility reasons: there are lots of configs out there > which expect 503 to be returned. > For limit_req/limit_conn this should not be a problem, since the error code to return can be made configurable. Regards Manlio -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk+pjxEACgkQscQJ24LbaURa/ACcDXAYXTZqh2jAmgwCytH6Y5/8 LZgAniWxKuBoMTbZG4wo+AYM7BF6U59j =Lsnd -----END PGP SIGNATURE----- From mdounin at mdounin.ru Wed May 9 11:07:36 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 May 2012 15:07:36 +0400 Subject: [PATCH]add new http status code In-Reply-To: <4FA98F11.4020106@gmail.com> References: <20120508193919.GZ31671@mdounin.ru> <4FA98F11.4020106@gmail.com> Message-ID: <20120509110736.GA31671@mdounin.ru> Hello!
On Tue, May 08, 2012 at 11:24:33PM +0200, Manlio Perillo wrote: > Il 08/05/2012 21:39, Maxim Dounin ha scritto: > > [...] > >> Blew is patch(for Nginx trunk) to use 429 replace 400 when request header > >> is too large, > >> and use 431 replace 503 when find too many request(limit_req module). in > >> addition to add new http status code 511. > > > > There are no plans to replace returned status codes in near > > future, at least not before we have clear understanding of effects > > of new codes on behaviour of various clients. > > > > And in case of limit_req/limit_conn this might happen at all due > > to compatibility reasons: there are lots of configs out there > > which expect 503 to be returned. > > > > For limit_req/limit_conn this should not be a problem, since the error > code to return can be made configurable. It can be, but I'm not sure there are benefits (and the default should be kept the same anyway). If one wants to return any specific code, error_page might be used to change the code. Moreover, use of 429 in case of non-client-specific limits (that is, even if you use $remote_addr for limiting as NATs are common, not even talking about resource-specific limits, e.g. based on $server_name) looks flawed for me from standards point of view, as the status should be really 5xx, not 4xx. Maxim Dounin From simohayha.bobo at gmail.com Wed May 9 11:58:49 2012 From: simohayha.bobo at gmail.com (Simon Liu) Date: Wed, 9 May 2012 19:58:49 +0800 Subject: [PATCH]add new http status code In-Reply-To: <20120509110736.GA31671@mdounin.ru> References: <20120508193919.GZ31671@mdounin.ru> <4FA98F11.4020106@gmail.com> <20120509110736.GA31671@mdounin.ru> Message-ID: Hello! On Wed, May 9, 2012 at 7:07 PM, Maxim Dounin wrote: > Hello! > > On Tue, May 08, 2012 at 11:24:33PM +0200, Manlio Perillo wrote: > > > Il 08/05/2012 21:39, Maxim Dounin ha scritto: > > > [...] 
> > >> Below is a patch (for nginx trunk) that returns 431 instead of 400 when the request > > >> header is too large, > > >> and 429 instead of 503 on too many requests (limit_req module); > > >> in addition it adds the new status code 511. > > > > > > There are no plans to replace the returned status codes in the near > > > future, at least not before we have a clear understanding of the effects > > > of the new codes on the behaviour of various clients. > > > > > > And in the case of limit_req/limit_conn this might not happen at all, for > > > compatibility reasons: there are lots of configs out there > > > which expect 503 to be returned. > > > > > > > For limit_req/limit_conn this should not be a problem, since the error > > code to return can be made configurable. > > It can be, but I'm not sure there are benefits (and the default > should be kept the same anyway). If one wants to return any > specific code, error_page might be used to change the code. > > Moreover, use of 429 in case of non-client-specific limits (that > is, even if you use $remote_addr for limiting, as NATs are common, > not even talking about resource-specific limits, e.g. based on > $server_name) looks flawed to me from a standards point of view, as > the status should really be 5xx, not 4xx. > > Maxim Dounin > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > I got it, thanks for your reply. -- do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kindy61 at gmail.com Fri May 11 01:51:41 2012 From: kindy61 at gmail.com (kindy) Date: Fri, 11 May 2012 09:51:41 +0800 Subject: the $upstream_addr variable is empty when enable keepalive In-Reply-To: <20120426123539.GV31671@mdounin.ru> References: <20120426123539.GV31671@mdounin.ru> Message-ID: Hi, got it. And when will this be fixed?
On Thu, Apr 26, 2012 at 8:35 PM, Maxim Dounin wrote: > Hello! > > On Thu, Apr 26, 2012 at 10:11:48AM +0800, kindy wrote: > > > hi, > > > > from nginx 1.1.13 to 1.2.0. > > the conf: > > > > upstream a { > > server 127.0.0.1:8083; > > keepalive 10 single; > > Don't use "single", it's intentionally left undocumented in > official nginx docs (http://nginx.org/r/keepalive) as it > might cause various problems. It will likely be deprecated and > removed. > > Maxim Dounin > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- Kindy Lin -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Fri May 11 12:59:48 2012 From: agentzh at gmail.com (agentzh) Date: Fri, 11 May 2012 20:59:48 +0800 Subject: [PATCH] nginx does not close the connection for 412 responses under extreme conditions Message-ID: Hello! I've just noticed that the "412 Precondition Failed" page for the If-Unmodified-Since request header can lead to a connection hang. That is, when the 412 page cannot be sent out in a single run (EAGAIN is seen, for example), ngx_http_finalize_request will never close the downstream connection, due to the r->filter_finalize flag set by ngx_http_filter_finalize_request. This issue can be reproduced with the standard ngx_http_static_module serving the sample index.html page. Attached is a patch for nginx 1.0.15 to fix this (it should also be applied to nginx 1.2.0, I think). Comments welcome! Thanks! -agentzh --- nginx-1.0.15/src/http/ngx_http_request.c 2012-03-05 20:49:32.000000000 +0800 +++ nginx-1.0.15-patched/src/http/ngx_http_request.c 2012-05-11 20:50:01.478111234 +0800 @@ -1900,6 +1900,7 @@ if (rc == NGX_OK && r->filter_finalize) { c->error = 1; + ngx_http_finalize_connection(r); return; } -------------- next part -------------- A non-text attachment was scrubbed...
Name: nginx-1.0.15-filter_finalize_hang.patch Type: application/octet-stream Size: 331 bytes Desc: not available URL: From mdounin at mdounin.ru Fri May 11 13:09:24 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Fri, 11 May 2012 13:09:24 +0000 Subject: [nginx] svn commit: r4616 - trunk/src/http/modules Message-ID: <20120511130924.B38483FA054@mail.nginx.com> Author: mdounin Date: 2012-05-11 13:09:24 +0000 (Fri, 11 May 2012) New Revision: 4616 URL: http://trac.nginx.org/nginx/changeset/4616/nginx Log: Added r->state reset on fastcgi/scgi/uwsgi request start. Failing to do so results in problems if 400 or 414 requests are redirected to fastcgi/scgi/uwsgi upstream, as well as after invalid headers got from upstream. This was already fixed for proxy in r3478, but fastcgi (the only affected protocol at that time) was missed. Reported by Matthieu Tourne. Modified: trunk/src/http/modules/ngx_http_fastcgi_module.c trunk/src/http/modules/ngx_http_scgi_module.c trunk/src/http/modules/ngx_http_uwsgi_module.c Modified: trunk/src/http/modules/ngx_http_fastcgi_module.c =================================================================== --- trunk/src/http/modules/ngx_http_fastcgi_module.c 2012-05-04 11:35:22 UTC (rev 4615) +++ trunk/src/http/modules/ngx_http_fastcgi_module.c 2012-05-11 13:09:24 UTC (rev 4616) @@ -619,6 +619,7 @@ u->process_header = ngx_http_fastcgi_process_header; u->abort_request = ngx_http_fastcgi_abort_request; u->finalize_request = ngx_http_fastcgi_finalize_request; + r->state = 0; u->buffering = 1; @@ -1194,6 +1195,8 @@ f->fastcgi_stdout = 0; f->large_stderr = 0; + r->state = 0; + return NGX_OK; } Modified: trunk/src/http/modules/ngx_http_scgi_module.c =================================================================== --- trunk/src/http/modules/ngx_http_scgi_module.c 2012-05-04 11:35:22 UTC (rev 4615) +++ trunk/src/http/modules/ngx_http_scgi_module.c 2012-05-11 13:09:24 UTC (rev 4616) @@ -434,6 +434,7 @@ u->process_header = 
ngx_http_scgi_process_status_line; u->abort_request = ngx_http_scgi_abort_request; u->finalize_request = ngx_http_scgi_finalize_request; + r->state = 0; u->buffering = scf->upstream.buffering; @@ -843,6 +844,7 @@ status->end = NULL; r->upstream->process_header = ngx_http_scgi_process_status_line; + r->state = 0; return NGX_OK; } Modified: trunk/src/http/modules/ngx_http_uwsgi_module.c =================================================================== --- trunk/src/http/modules/ngx_http_uwsgi_module.c 2012-05-04 11:35:22 UTC (rev 4615) +++ trunk/src/http/modules/ngx_http_uwsgi_module.c 2012-05-11 13:09:24 UTC (rev 4616) @@ -467,6 +467,7 @@ u->process_header = ngx_http_uwsgi_process_status_line; u->abort_request = ngx_http_uwsgi_abort_request; u->finalize_request = ngx_http_uwsgi_finalize_request; + r->state = 0; u->buffering = uwcf->upstream.buffering; @@ -883,6 +884,7 @@ status->end = NULL; r->upstream->process_header = ngx_http_uwsgi_process_status_line; + r->state = 0; return NGX_OK; } From mdounin at mdounin.ru Fri May 11 13:14:59 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Fri, 11 May 2012 13:14:59 +0000 Subject: [nginx] svn commit: r4617 - trunk/src/http/modules Message-ID: <20120511131459.224AC3F9E9E@mail.nginx.com> Author: mdounin Date: 2012-05-11 13:14:58 +0000 (Fri, 11 May 2012) New Revision: 4617 URL: http://trac.nginx.org/nginx/changeset/4617/nginx Log: Fastcgi: fixed padding handling on fixed-size records. Padding was incorrectly ignored on end request, empty stdout and stderr fastcgi records. This resulted in protocol desynchronization if fastcgi application used these records with padding for some reason. Reported by Ilia Vinokurov. 
Modified: trunk/src/http/modules/ngx_http_fastcgi_module.c Modified: trunk/src/http/modules/ngx_http_fastcgi_module.c =================================================================== --- trunk/src/http/modules/ngx_http_fastcgi_module.c 2012-05-11 13:09:24 UTC (rev 4616) +++ trunk/src/http/modules/ngx_http_fastcgi_module.c 2012-05-11 13:14:58 UTC (rev 4617) @@ -1356,7 +1356,11 @@ } } else { - f->state = ngx_http_fastcgi_st_version; + if (f->padding) { + f->state = ngx_http_fastcgi_st_padding; + } else { + f->state = ngx_http_fastcgi_st_version; + } } continue; @@ -1689,8 +1693,13 @@ } if (f->type == NGX_HTTP_FASTCGI_STDOUT && f->length == 0) { - f->state = ngx_http_fastcgi_st_version; + if (f->padding) { + f->state = ngx_http_fastcgi_st_padding; + } else { + f->state = ngx_http_fastcgi_st_version; + } + if (!flcf->keep_conn) { p->upstream_done = 1; } @@ -1702,7 +1711,13 @@ } if (f->type == NGX_HTTP_FASTCGI_END_REQUEST) { - f->state = ngx_http_fastcgi_st_version; + + if (f->padding) { + f->state = ngx_http_fastcgi_st_padding; + } else { + f->state = ngx_http_fastcgi_st_version; + } + p->upstream_done = 1; if (flcf->keep_conn) { @@ -1775,7 +1790,11 @@ } } else { - f->state = ngx_http_fastcgi_st_version; + if (f->padding) { + f->state = ngx_http_fastcgi_st_padding; + } else { + f->state = ngx_http_fastcgi_st_version; + } } continue; From mdounin at mdounin.ru Fri May 11 13:19:22 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Fri, 11 May 2012 13:19:22 +0000 Subject: [nginx] svn commit: r4618 - trunk/src/http Message-ID: <20120511131922.943413F9E9E@mail.nginx.com> Author: mdounin Date: 2012-05-11 13:19:22 +0000 (Fri, 11 May 2012) New Revision: 4618 URL: http://trac.nginx.org/nginx/changeset/4618/nginx Log: Rewrite: fixed escaping and possible segfault (ticket #162). 
The following code resulted in incorrect escaping of uri and possible segfault: location / { rewrite ^(.*) $1?c=$1; return 200 "$uri"; } If there were arguments in a rewrite's replacement string, and length was actually calculated (due to duplicate captures as in the example above, or variables present), the is_args flag was set and incorrectly copied after length calculation. This resulted in escaping applied to the uri part of the replacement, resulting in incorrect escaping. Additionally, buffer was allocated without escaping expected, thus this also resulted in buffer overrun and possible segfault. Modified: trunk/src/http/ngx_http_script.c Modified: trunk/src/http/ngx_http_script.c =================================================================== --- trunk/src/http/ngx_http_script.c 2012-05-11 13:14:58 UTC (rev 4617) +++ trunk/src/http/ngx_http_script.c 2012-05-11 13:19:22 UTC (rev 4618) @@ -1043,7 +1043,6 @@ } e->buf.len = len; - e->is_args = le.is_args; } if (code->add_args && r->args.len) { From mdounin at mdounin.ru Fri May 11 13:33:07 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Fri, 11 May 2012 13:33:07 +0000 Subject: [nginx] svn commit: r4619 - in trunk/src: event os/unix os/win32 Message-ID: <20120511133307.786423FA054@mail.nginx.com> Author: mdounin Date: 2012-05-11 13:33:06 +0000 (Fri, 11 May 2012) New Revision: 4619 URL: http://trac.nginx.org/nginx/changeset/4619/nginx Log: Accept moderation in case of EMFILE/ENFILE. In case of EMFILE/ENFILE returned from accept() we disable accept events, and (in case of no accept mutex used) arm timer to re-enable them later. With accept mutex we just drop it, and rely on normal accept mutex handling to re-enable accept events once it's acquired again. As we now handle errors in question, logging level was changed to "crit" (instead of "alert" used for unknown errors). Note: the code might call ngx_enable_accept_events() multiple times if there are many listen sockets. 
The ngx_enable_accept_events() function was modified to check if connection is already active (via c->read->active) and skip it then, thus making multiple calls safe. Modified: trunk/src/event/ngx_event_accept.c trunk/src/os/unix/ngx_errno.h trunk/src/os/win32/ngx_errno.h Modified: trunk/src/event/ngx_event_accept.c =================================================================== --- trunk/src/event/ngx_event_accept.c 2012-05-11 13:19:22 UTC (rev 4618) +++ trunk/src/event/ngx_event_accept.c 2012-05-11 13:33:06 UTC (rev 4619) @@ -21,6 +21,7 @@ socklen_t socklen; ngx_err_t err; ngx_log_t *log; + ngx_uint_t level; ngx_socket_t s; ngx_event_t *rev, *wev; ngx_listening_t *ls; @@ -31,6 +32,14 @@ static ngx_uint_t use_accept4 = 1; #endif + if (ev->timedout) { + if (ngx_enable_accept_events((ngx_cycle_t *) ngx_cycle) != NGX_OK) { + return; + } + + ev->timedout = 0; + } + ecf = ngx_event_get_conf(ngx_cycle->conf_ctx, ngx_event_core_module); if (ngx_event_flags & NGX_USE_RTSIG_EVENT) { @@ -70,10 +79,17 @@ return; } + level = NGX_LOG_ALERT; + + if (err == NGX_ECONNABORTED) { + level = NGX_LOG_ERR; + + } else if (err == NGX_EMFILE || err == NGX_ENFILE) { + level = NGX_LOG_CRIT; + } + #if (NGX_HAVE_ACCEPT4) - ngx_log_error((ngx_uint_t) ((err == NGX_ECONNABORTED) ? - NGX_LOG_ERR : NGX_LOG_ALERT), - ev->log, err, + ngx_log_error(level, ev->log, err, use_accept4 ? "accept4() failed" : "accept() failed"); if (use_accept4 && err == NGX_ENOSYS) { @@ -82,9 +98,7 @@ continue; } #else - ngx_log_error((ngx_uint_t) ((err == NGX_ECONNABORTED) ? 
- NGX_LOG_ERR : NGX_LOG_ALERT), - ev->log, err, "accept() failed"); + ngx_log_error(level, ev->log, err, "accept() failed"); #endif if (err == NGX_ECONNABORTED) { @@ -97,6 +111,26 @@ } } + if (err == NGX_EMFILE || err == NGX_ENFILE) { + if (ngx_disable_accept_events((ngx_cycle_t *) ngx_cycle) + != NGX_OK) + { + return; + } + + if (ngx_use_accept_mutex) { + if (ngx_accept_mutex_held) { + ngx_shmtx_unlock(&ngx_accept_mutex); + ngx_accept_mutex_held = 0; + } + + ngx_accept_disabled = 1; + + } else { + ngx_add_timer(ev, ecf->accept_mutex_delay); + } + } + return; } @@ -383,6 +417,10 @@ c = ls[i].connection; + if (c->read->active) { + continue; + } + if (ngx_event_flags & NGX_USE_RTSIG_EVENT) { if (ngx_add_conn(c) == NGX_ERROR) { Modified: trunk/src/os/unix/ngx_errno.h =================================================================== --- trunk/src/os/unix/ngx_errno.h 2012-05-11 13:19:22 UTC (rev 4618) +++ trunk/src/os/unix/ngx_errno.h 2012-05-11 13:33:06 UTC (rev 4619) @@ -29,6 +29,8 @@ #define NGX_ENOTDIR ENOTDIR #define NGX_EISDIR EISDIR #define NGX_EINVAL EINVAL +#define NGX_ENFILE ENFILE +#define NGX_EMFILE EMFILE #define NGX_ENOSPC ENOSPC #define NGX_EPIPE EPIPE #define NGX_EINPROGRESS EINPROGRESS Modified: trunk/src/os/win32/ngx_errno.h =================================================================== --- trunk/src/os/win32/ngx_errno.h 2012-05-11 13:19:22 UTC (rev 4618) +++ trunk/src/os/win32/ngx_errno.h 2012-05-11 13:33:06 UTC (rev 4619) @@ -54,6 +54,8 @@ #define NGX_EALREADY WSAEALREADY #define NGX_EINVAL WSAEINVAL +#define NGX_EMFILE WSAEMFILE +#define NGX_ENFILE WSAEMFILE u_char *ngx_strerror(ngx_err_t err, u_char *errstr, size_t size); From goelvivek2011 at gmail.com Sat May 12 07:56:50 2012 From: goelvivek2011 at gmail.com (vivek goel) Date: Sat, 12 May 2012 13:26:50 +0530 Subject: Event based implementation in http module In-Reply-To: References: Message-ID: Hi agentzh, Thanks for detailed information. 
The IO operation we are doing doesn't support non-blocking calls. I am thinking about two approaches: 1. pre-forking multiple nginx workers (50), or 2. moving the IO operation I am doing to a thread in my module, so I can use the nginx event-based API and keep only 2 nginx worker processes. What do you suggest? Which one is the better approach? Will having 50 or more nginx workers add extra time for clients to connect? regards Vivek Goel On Tue, May 8, 2012 at 11:16 AM, vivek goel wrote: > Sorry, just clearing my doubt. > Again I have one doubt. > The work I am doing in clcf->handler is a blocking IO call. > Now if I am running nginx with 2 worker processes and the function I am calling > in clcf->handler takes 200 ms to generate a response, > does it mean that I will not be able to serve other clients from the same worker > process within that 200 ms? > > If yes, > how can I make it non-blocking so that I can serve multiple clients? > > > Thanks in advance for your reply. > > > regards > Vivek Goel > > > > On Mon, May 7, 2012 at 10:57 PM, vivek goel wrote: > >> @Maxim >> and what about the handler function specified by clcf->handler? >> Is it also blocking? >> and what about my other questions. Can I serve multiple clients using >> a single worker process? >> >> regards >> Vivek Goel >> >> >> >> On Mon, May 7, 2012 at 8:19 PM, vivek goel wrote: >> >>> I am working on an http module using nginx. >>> I have one question. >>> >>> 1. Will the function specified in ngx_command_t be a blocking call? >>> >>> If not: >>> My module description is as follows: >>> it does a read of a file, which is a blocking call. So I think that, at the same >>> time, the worker process can't serve another client? >>> >>> The solution I am thinking of is to do the blocking operation in a >>> thread and call a callback to send the response when it is ready. Is >>> there a way I can tell the worker process to keep accepting connections and >>> serve the response for the old request once it is ready for that client?
>>> >>> Can you please suggest some better idea to server multiple client on >>> blocking call with nginx http module ? >>> >>> >>> >>> regards >>> Vivek Goel >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat May 12 08:50:54 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 12 May 2012 12:50:54 +0400 Subject: [PATCH] nginx does not close the connection for 412 responses under extreme conditions In-Reply-To: References: Message-ID: <20120512085054.GR31671@mdounin.ru> Hello! On Fri, May 11, 2012 at 08:59:48PM +0800, agentzh wrote: > Hello! > > I've just noticed that the "412 Precondition Failed page" for the > If-Unmodified-Since request header could lead to connection hang. That > is, when the 412 page cannot be sent out in a single run (seen EAGAIN > for example), then ngx_http_finalize_request will never close the > downstream connection due to the r->filter_finalize set by > ngx_http_filter_finalize_request. > > This issue can be reproduced with the standard ngx_http_static_module > serving the sample index.html page. Could you please clarify steps to reproduce you've used? The only way I see is to redirect 412 error to an uncached static file with aio used, thus causing r->blocked to be set during request finalization. > Here attaches the patch for both nginx 1.0.15 to fix this (it should > also be applied to nginx 1.2.0, I think). > > Comments welcome! > > Thanks! > -agentzh > > --- nginx-1.0.15/src/http/ngx_http_request.c 2012-03-05 20:49:32.000000000 +0800 > +++ nginx-1.0.15-patched/src/http/ngx_http_request.c 2012-05-11 > 20:50:01.478111234 +0800 > @@ -1900,6 +1900,7 @@ > > if (rc == NGX_OK && r->filter_finalize) { > c->error = 1; > + ngx_http_finalize_connection(r); > return; > } While the patch looks ok for me, the whole filter_finalize thing is at best fragile (and known to cause at least one segfault as of now). It really needs some attention (or even reimplementation). 
Maxim Dounin From agentzh at gmail.com Sat May 12 12:17:16 2012 From: agentzh at gmail.com (agentzh) Date: Sat, 12 May 2012 20:17:16 +0800 Subject: [PATCH] nginx does not close the connection for 412 responses under extreme conditions In-Reply-To: <20120512085054.GR31671@mdounin.ru> References: <20120512085054.GR31671@mdounin.ru> Message-ID: On Sat, May 12, 2012 at 4:50 PM, Maxim Dounin wrote: >> >> This issue can be reproduced with the standard ngx_http_static_module >> serving the sample index.html page. > > Could you please clarify steps to reproduce you've used? The only > way I see is to redirect 412 error to an uncached static file with > aio used, thus causing r->blocked to be set during request > finalization. > I didn't enable aio.
904 2012/05/12 20:14:01 [debug] 2770#0: *1 http writer output filter: 0, "/index.html?" 905 2012/05/12 20:14:01 [debug] 2770#0: *1 http writer done: "/index.html?" 906 2012/05/12 20:14:01 [debug] 2770#0: *1 http finalize request: 0, "/index.html?" a:1, c:1 907 2012/05/12 20:14:04 [debug] 2770#0: *1 http finalize request: 0, "/index.html?" a:1, c:1 908 2012/05/12 20:14:04 [debug] 2770#0: *1 http finalize request: 0, "/index.html?" a:1, c:1 909 2012/05/12 20:14:04 [debug] 2770#0: *1 http finalize request: 0, "/index.html?" a:1, c:1 910 2012/05/12 20:14:04 [debug] 2770#0: *1 http finalize request: 0, "/index.html?" a:1, c:1 911 2012/05/12 20:14:04 [debug] 2770#0: *1 http finalize request: 0, "/index.html?" a:1, c:1 912 2012/05/12 20:14:04 [debug] 2770#0: *1 http finalize request: 0, "/index.html?" a:1, c:1 [...] That is, after all the response has been sent out via the http writer handler, the ngx_http_finalize_request function is repeatedly being called without terminating the connection. Regards, -agentzh From mdounin at mdounin.ru Sat May 12 12:47:58 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 12 May 2012 16:47:58 +0400 Subject: [PATCH] nginx does not close the connection for 412 responses under extreme conditions In-Reply-To: References: <20120512085054.GR31671@mdounin.ru> Message-ID: <20120512124757.GW31671@mdounin.ru> Hello! On Sat, May 12, 2012 at 08:17:16PM +0800, agentzh wrote: > On Sat, May 12, 2012 at 4:50 PM, Maxim Dounin wrote: > >> > >> This issue can be reproduced with the standard ngx_http_static_module > >> serving the sample index.html page. > > > > Could you please clarify steps to reproduce you've used? ?The only > > way I see is to redirect 412 error to an uncached static file with > > aio used, thus causing r->blocked to be set during request > > finalization. > > > > I didn't enable aio. 
I used the mockeagain tool to emulate networks > that are extremely slow to write: > > https://github.com/agentzh/mockeagain > > Then the debug logs look like below: > > 1 2012/05/12 20:14:01 [debug] 2770#0: *1 http finalize request: > -2, "/index.html?" a:1, c:2 I assume you use error_page to redirect 412, right? You may want to show config (and more complete debug log) to simplify reading. > 2 2012/05/12 20:14:01 [debug] 2770#0: *1 http finalize request: > -4, "/index.html?" a:1, c:2 How this happens to be -4 (NGX_DONE)? The NGX_ERROR is returned from ngx_http_filter_finalize_request(), and this is what should be used on ngx_http_finalize_request() call (unless some filter does wrong thing), resulting in ngx_terminate_request() call unless r->main->blocked is set. [...] Maxim Dounin From agentzh at gmail.com Sat May 12 15:10:09 2012 From: agentzh at gmail.com (agentzh) Date: Sat, 12 May 2012 23:10:09 +0800 Subject: [PATCH] nginx does not close the connection for 412 responses under extreme conditions In-Reply-To: <20120512124757.GW31671@mdounin.ru> References: <20120512085054.GR31671@mdounin.ru> <20120512124757.GW31671@mdounin.ru> Message-ID: Hello! On Sat, May 12, 2012 at 8:47 PM, Maxim Dounin wrote: > > I assume you use error_page to redirect 412, right? ?You may want > to show config (and more complete debug log) to simplify reading. > Nope. There is no error_page directive in my nginx.conf. Here is the configure file I'm using: https://gist.github.com/2667007 The request I'm using is this: GET / HTTP/1.1 Host: localhost Connection: Close If-Unmodified-Since: Thu, 10 May 2012 07:50:47 GMT The debug log is here (only the first part is given, because the whole is too huge): https://gist.github.com/2666809 >> ? ? ? 2 2012/05/12 20:14:01 [debug] 2770#0: *1 http finalize request: >> -4, "/index.html?" a:1, c:2 > > How this happens to be -4 (NGX_DONE)? 
In short, the ngx_http_finalize_request(r, NGX_DONE) call was caused by recursive calling of ngx_http_core_content_phase due to the standard ngx_index module. To be more specific, the backtrace for the (first) ngx_http_finalize_request(r, NGX_AGAIN) call looks like this #0 ngx_http_finalize_request (r=0x77ccb0, rc=-2) at src/http/ngx_http_request.c:1890 #1 0x0000000000455758 in ngx_http_core_content_phase (r=0x77ccb0, ph=0x78d5c8) at src/http/ngx_http_core_module.c:1377 #2 0x00000000004542a4 in ngx_http_core_run_phases (r=0x77ccb0) at src/http/ngx_http_core_module.c:862 #3 0x000000000045421b in ngx_http_handler (r=0x77ccb0) at src/http/ngx_http_core_module.c:845 #4 0x0000000000457d53 in ngx_http_internal_redirect (r=0x77ccb0, uri=0x7fffe0aadc40, args=0x77cfd0) at src/http/ngx_http_core_module.c:2516 #5 0x0000000000482b8b in ngx_http_index_handler (r=0x77ccb0) at src/http/modules/ngx_http_index_module.c:265 #6 0x000000000045573a in ngx_http_core_content_phase (r=0x77ccb0, ph=0x78d598) at src/http/ngx_http_core_module.c:1374 #7 0x00000000004542a4 in ngx_http_core_run_phases (r=0x77ccb0) at src/http/ngx_http_core_module.c:862 #8 0x000000000045421b in ngx_http_handler (r=0x77ccb0) at src/http/ngx_http_core_module.c:845 [...] We can see that when ngx_http_finalize_request (r=0x77ccb0, rc=-2) returns, the ngx_http_internal_redirect call at frame #4 will then return NGX_DONE to ngx_http_index_handler at frame #5, and in turn, ngx_http_index_handler will also return NGX_DONE to ngx_http_core_content_phase at frame #6, and finally triggering the call ngx_http_finalize_request(r, NGX_DONE). If you have further questions, please let me know :) And thank you very much for looking into this! 
Best regards, -agentzh From mdounin at mdounin.ru Sat May 12 17:12:46 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 12 May 2012 21:12:46 +0400 Subject: [PATCH] nginx does not close the connection for 412 responses under extreme conditions In-Reply-To: References: <20120512085054.GR31671@mdounin.ru> <20120512124757.GW31671@mdounin.ru> Message-ID: <20120512171246.GB31671@mdounin.ru> Hello! On Sat, May 12, 2012 at 11:10:09PM +0800, agentzh wrote: > Hello! > > On Sat, May 12, 2012 at 8:47 PM, Maxim Dounin wrote: > > > > I assume you use error_page to redirect 412, right? ?You may want > > to show config (and more complete debug log) to simplify reading. > > > > Nope. There is no error_page directive in my nginx.conf. Here is the > configure file I'm using: > > https://gist.github.com/2667007 > > The request I'm using is this: > > GET / HTTP/1.1 > Host: localhost > Connection: Close > If-Unmodified-Since: Thu, 10 May 2012 07:50:47 GMT > > The debug log is here (only the first part is given, because the whole > is too huge): > > https://gist.github.com/2666809 > > >> ? ? ? 2 2012/05/12 20:14:01 [debug] 2770#0: *1 http finalize request: > >> -4, "/index.html?" a:1, c:2 > > > > How this happens to be -4 (NGX_DONE)? > > In short, the ngx_http_finalize_request(r, NGX_DONE) call was caused > by recursive calling of ngx_http_core_content_phase due to the > standard ngx_index module. Ok, I see this now from full debug log. Thanks for details. I'll take a closer look at the patch later to make sure it doesn't break various image filter use cases (likely no, but as I already said filter finalization is at best fragile) and to see if it's also possible to avoid response truncation in case of error_page used after filter finalization. 
Maxim Dounin From mdounin at mdounin.ru Mon May 14 09:13:45 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Mon, 14 May 2012 09:13:45 +0000 Subject: [nginx] svn commit: r4620 - trunk/src/core Message-ID: <20120514091345.EBB2E3FA02E@mail.nginx.com> Author: mdounin Date: 2012-05-14 09:13:45 +0000 (Mon, 14 May 2012) New Revision: 4620 URL: http://trac.nginx.org/nginx/changeset/4620/nginx Log: Resolver: protection from duplicate responses. If we already had CNAME in resolver node (i.e. rn->cnlen and rn->u.cname set), and got additional response with A record, it resulted in rn->cnlen set and rn->u.cname overwritten by rn->u.addr (or rn->u.addrs), causing segmentation fault later in ngx_resolver_free_node() on an attempt to free overwritten rn->u.cname. The opposite (i.e. CNAME got after A) might cause similar problems as well. Modified: trunk/src/core/ngx_resolver.c Modified: trunk/src/core/ngx_resolver.c =================================================================== --- trunk/src/core/ngx_resolver.c 2012-05-11 13:33:06 UTC (rev 4619) +++ trunk/src/core/ngx_resolver.c 2012-05-14 09:13:45 UTC (rev 4620) @@ -513,8 +513,10 @@ /* lock alloc mutex */ - ngx_resolver_free_locked(r, rn->query); - rn->query = NULL; + if (rn->query) { + ngx_resolver_free_locked(r, rn->query); + rn->query = NULL; + } if (rn->cnlen) { ngx_resolver_free_locked(r, rn->u.cname); @@ -1409,6 +1411,9 @@ ngx_resolver_free(r, addrs); } + ngx_resolver_free(r, rn->query); + rn->query = NULL; + return; } else if (cname) { @@ -1441,6 +1446,9 @@ (void) ngx_resolve_name_locked(r, ctx); } + ngx_resolver_free(r, rn->query); + rn->query = NULL; + return; } From mdounin at mdounin.ru Mon May 14 09:48:06 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Mon, 14 May 2012 09:48:06 +0000 Subject: [nginx] svn commit: r4621 - trunk/src/http Message-ID: <20120514094806.558553F9F94@mail.nginx.com> Author: mdounin Date: 2012-05-14 09:48:05 +0000 (Mon, 14 May 2012) New Revision: 4621 URL: 
http://trac.nginx.org/nginx/changeset/4621/nginx Log: Fixed possible request hang with filter finalization. With r->filter_finalize set the ngx_http_finalize_connection() wasn't called from ngx_http_finalize_request() called with NGX_OK, resulting in r->main->count not being decremented, thus causing request hang in some rare situations. See here for more details: http://mailman.nginx.org/pipermail/nginx-devel/2012-May/002190.html Patch by Yichun Zhang (agentzh). Modified: trunk/src/http/ngx_http_request.c Modified: trunk/src/http/ngx_http_request.c =================================================================== --- trunk/src/http/ngx_http_request.c 2012-05-14 09:13:45 UTC (rev 4620) +++ trunk/src/http/ngx_http_request.c 2012-05-14 09:48:05 UTC (rev 4621) @@ -1933,6 +1933,7 @@ if (rc == NGX_OK && r->filter_finalize) { c->error = 1; + ngx_http_finalize_connection(r); return; } From mdounin at mdounin.ru Mon May 14 09:57:21 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Mon, 14 May 2012 09:57:21 +0000 Subject: [nginx] svn commit: r4622 - trunk/src/http Message-ID: <20120514095721.430403F9FC4@mail.nginx.com> Author: mdounin Date: 2012-05-14 09:57:20 +0000 (Mon, 14 May 2012) New Revision: 4622 URL: http://trac.nginx.org/nginx/changeset/4622/nginx Log: Upstream: smooth weighted round-robin balancing. For edge case weights like { 5, 1, 1 } we now produce { a, a, b, a, c, a, a } sequence instead of { c, b, a, a, a, a, a } produced previously. Algorithm is as follows: on each peer selection we increase current_weight of each eligible peer by its weight, select peer with greatest current_weight and reduce its current_weight by total number of weight points distributed among peers. 
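(That selection step can be sketched in a few lines of Python. This is a simulation of the weighting only — the "tried" bitmap, max_fails accounting and effective_weight decay are left out.)

```python
def smooth_wrr(peers):
    """One selection step of smooth weighted round-robin.

    peers: list of dicts with 'name' and 'weight' (effective_weight is
    assumed equal to weight here, i.e. no failures have occurred).
    """
    total = 0
    best = None
    for p in peers:
        # increase current_weight of each eligible peer by its weight
        p["current"] = p.get("current", 0) + p["weight"]
        total += p["weight"]
        # pick the peer with the greatest current_weight (first wins ties)
        if best is None or p["current"] > best["current"]:
            best = p
    # reduce the winner by the total number of weight points distributed
    best["current"] -= total
    return best["name"]

peers = [{"name": "a", "weight": 5},
         {"name": "b", "weight": 1},
         {"name": "c", "weight": 1}]
print([smooth_wrr(peers) for _ in range(7)])  # -> ['a', 'a', 'b', 'a', 'c', 'a', 'a']
```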
In case of { 5, 1, 1 } weights this gives the following sequence of current_weight's: a b c 0 0 0 (initial state) 5 1 1 (a selected) -2 1 1 3 2 2 (a selected) -4 2 2 1 3 3 (b selected) 1 -4 3 6 -3 4 (a selected) -1 -3 4 4 -2 5 (c selected) 4 -2 -2 9 -1 -1 (a selected) 2 -1 -1 7 0 0 (a selected) 0 0 0 To preserve weight reduction in case of failures the effective_weight variable was introduced, which usually matches peer's weight, but is reduced temporarily on peer failures. This change also fixes loop with backup servers and proxy_next_upstream http_404 (ticket #47), and skipping alive upstreams in some cases if there are multiple dead ones (ticket #64). Modified: trunk/src/http/ngx_http_upstream_round_robin.c trunk/src/http/ngx_http_upstream_round_robin.h Modified: trunk/src/http/ngx_http_upstream_round_robin.c =================================================================== --- trunk/src/http/ngx_http_upstream_round_robin.c 2012-05-14 09:48:05 UTC (rev 4621) +++ trunk/src/http/ngx_http_upstream_round_robin.c 2012-05-14 09:57:20 UTC (rev 4622) @@ -12,8 +12,8 @@ static ngx_int_t ngx_http_upstream_cmp_servers(const void *one, const void *two); -static ngx_uint_t -ngx_http_upstream_get_peer(ngx_http_upstream_rr_peers_t *peers); +static ngx_http_upstream_rr_peer_t *ngx_http_upstream_get_peer( + ngx_http_upstream_rr_peer_data_t *rrp); #if (NGX_HTTP_SSL) @@ -81,7 +81,8 @@ peers->peer[n].fail_timeout = server[i].fail_timeout; peers->peer[n].down = server[i].down; peers->peer[n].weight = server[i].down ? 
0 : server[i].weight; - peers->peer[n].current_weight = peers->peer[n].weight; + peers->peer[n].effective_weight = peers->peer[n].weight; + peers->peer[n].current_weight = 0; n++; } } @@ -131,7 +132,8 @@ backup->peer[n].socklen = server[i].addrs[j].socklen; backup->peer[n].name = server[i].addrs[j].name; backup->peer[n].weight = server[i].weight; - backup->peer[n].current_weight = server[i].weight; + backup->peer[n].effective_weight = server[i].weight; + backup->peer[n].current_weight = 0; backup->peer[n].max_fails = server[i].max_fails; backup->peer[n].fail_timeout = server[i].fail_timeout; backup->peer[n].down = server[i].down; @@ -190,7 +192,8 @@ peers->peer[i].socklen = u.addrs[i].socklen; peers->peer[i].name = u.addrs[i].name; peers->peer[i].weight = 1; - peers->peer[i].current_weight = 1; + peers->peer[i].effective_weight = 1; + peers->peer[i].current_weight = 0; peers->peer[i].max_fails = 1; peers->peer[i].fail_timeout = 10; } @@ -306,7 +309,8 @@ peers->peer[0].socklen = ur->socklen; peers->peer[0].name = ur->host; peers->peer[0].weight = 1; - peers->peer[0].current_weight = 1; + peers->peer[0].effective_weight = 1; + peers->peer[0].current_weight = 0; peers->peer[0].max_fails = 1; peers->peer[0].fail_timeout = 10; @@ -338,7 +342,8 @@ peers->peer[i].name.len = len; peers->peer[i].name.data = p; peers->peer[i].weight = 1; - peers->peer[i].current_weight = 1; + peers->peer[i].effective_weight = 1; + peers->peer[i].current_weight = 0; peers->peer[i].max_fails = 1; peers->peer[i].fail_timeout = 10; } @@ -378,8 +383,6 @@ { ngx_http_upstream_rr_peer_data_t *rrp = data; - time_t now; - uintptr_t m; ngx_int_t rc; ngx_uint_t i, n; ngx_connection_t *c; @@ -389,8 +392,6 @@ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0, "get rr peer, try: %ui", pc->tries); - now = ngx_time(); - /* ngx_lock_mutex(rrp->peers->mutex); */ if (rrp->peers->last_cached) { @@ -423,118 +424,15 @@ /* there are several peers */ - if (pc->tries == rrp->peers->number) { + peer = 
ngx_http_upstream_get_peer(rrp); - /* it's a first try - get a current peer */ - - i = pc->tries; - - for ( ;; ) { - rrp->current = ngx_http_upstream_get_peer(rrp->peers); - - ngx_log_debug2(NGX_LOG_DEBUG_HTTP, pc->log, 0, - "get rr peer, current: %ui %i", - rrp->current, - rrp->peers->peer[rrp->current].current_weight); - - n = rrp->current / (8 * sizeof(uintptr_t)); - m = (uintptr_t) 1 << rrp->current % (8 * sizeof(uintptr_t)); - - if (!(rrp->tried[n] & m)) { - peer = &rrp->peers->peer[rrp->current]; - - if (!peer->down) { - - if (peer->max_fails == 0 - || peer->fails < peer->max_fails) - { - break; - } - - if (now - peer->checked > peer->fail_timeout) { - peer->checked = now; - break; - } - - peer->current_weight = 0; - - } else { - rrp->tried[n] |= m; - } - - pc->tries--; - } - - if (pc->tries == 0) { - goto failed; - } - - if (--i == 0) { - ngx_log_error(NGX_LOG_ALERT, pc->log, 0, - "round robin upstream stuck on %ui tries", - pc->tries); - goto failed; - } - } - - peer->current_weight--; - - } else { - - i = pc->tries; - - for ( ;; ) { - n = rrp->current / (8 * sizeof(uintptr_t)); - m = (uintptr_t) 1 << rrp->current % (8 * sizeof(uintptr_t)); - - if (!(rrp->tried[n] & m)) { - - peer = &rrp->peers->peer[rrp->current]; - - if (!peer->down) { - - if (peer->max_fails == 0 - || peer->fails < peer->max_fails) - { - break; - } - - if (now - peer->checked > peer->fail_timeout) { - peer->checked = now; - break; - } - - peer->current_weight = 0; - - } else { - rrp->tried[n] |= m; - } - - pc->tries--; - } - - rrp->current++; - - if (rrp->current >= rrp->peers->number) { - rrp->current = 0; - } - - if (pc->tries == 0) { - goto failed; - } - - if (--i == 0) { - ngx_log_error(NGX_LOG_ALERT, pc->log, 0, - "round robin upstream stuck on %ui tries", - pc->tries); - goto failed; - } - } - - peer->current_weight--; + if (peer == NULL) { + goto failed; } - rrp->tried[n] |= m; + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, pc->log, 0, + "get rr peer, current: %ui %i", + rrp->current, 
peer->current_weight); } pc->sockaddr = peer->sockaddr; @@ -545,11 +443,6 @@ if (pc->tries == 1 && rrp->peers->next) { pc->tries += rrp->peers->next->number; - - n = rrp->peers->next->number / (8 * sizeof(uintptr_t)) + 1; - for (i = 0; i < n; i++) { - rrp->tried[i] = 0; - } } return NGX_OK; @@ -595,56 +488,71 @@ } -static ngx_uint_t -ngx_http_upstream_get_peer(ngx_http_upstream_rr_peers_t *peers) +static ngx_http_upstream_rr_peer_t * +ngx_http_upstream_get_peer(ngx_http_upstream_rr_peer_data_t *rrp) { - ngx_uint_t i, n, reset = 0; - ngx_http_upstream_rr_peer_t *peer; + time_t now; + uintptr_t m; + ngx_int_t total; + ngx_uint_t i, n; + ngx_http_upstream_rr_peer_t *peer, *best; - peer = &peers->peer[0]; + now = ngx_time(); - for ( ;; ) { + best = NULL; + total = 0; - for (i = 0; i < peers->number; i++) { + for (i = 0; i < rrp->peers->number; i++) { - if (peer[i].current_weight <= 0) { - continue; - } + n = i / (8 * sizeof(uintptr_t)); + m = (uintptr_t) 1 << i % (8 * sizeof(uintptr_t)); - n = i; + if (rrp->tried[n] & m) { + continue; + } - while (i < peers->number - 1) { + peer = &rrp->peers->peer[i]; - i++; + if (peer->down) { + continue; + } - if (peer[i].current_weight <= 0) { - continue; - } + if (peer->max_fails + && peer->fails >= peer->max_fails + && now - peer->checked <= peer->fail_timeout) + { + continue; + } - if (peer[n].current_weight * 1000 / peer[i].current_weight - > peer[n].weight * 1000 / peer[i].weight) - { - return n; - } + peer->current_weight += peer->effective_weight; + total += peer->effective_weight; - n = i; - } - - if (peer[i].current_weight > 0) { - n = i; - } - - return n; + if (peer->effective_weight < peer->weight) { + peer->effective_weight++; } - if (reset++) { - return 0; + if (best == NULL || peer->current_weight > best->current_weight) { + best = peer; } + } - for (i = 0; i < peers->number; i++) { - peer[i].current_weight = peer[i].weight; - } + if (best == NULL) { + return NULL; } + + i = best - &rrp->peers->peer[0]; + + 
rrp->current = i; + + n = i / (8 * sizeof(uintptr_t)); + m = (uintptr_t) 1 << i % (8 * sizeof(uintptr_t)); + + rrp->tried[n] |= m; + + best->current_weight -= total; + best->checked = now; + + return best; } @@ -683,15 +591,15 @@ peer->checked = now; if (peer->max_fails) { - peer->current_weight -= peer->weight / peer->max_fails; + peer->effective_weight -= peer->weight / peer->max_fails; } ngx_log_debug2(NGX_LOG_DEBUG_HTTP, pc->log, 0, "free rr peer failed: %ui %i", - rrp->current, peer->current_weight); + rrp->current, peer->effective_weight); - if (peer->current_weight < 0) { - peer->current_weight = 0; + if (peer->effective_weight < 0) { + peer->effective_weight = 0; } /* ngx_unlock_mutex(rrp->peers->mutex); */ @@ -705,12 +613,6 @@ } } - rrp->current++; - - if (rrp->current >= rrp->peers->number) { - rrp->current = 0; - } - if (pc->tries) { pc->tries--; } Modified: trunk/src/http/ngx_http_upstream_round_robin.h =================================================================== --- trunk/src/http/ngx_http_upstream_round_robin.h 2012-05-14 09:48:05 UTC (rev 4621) +++ trunk/src/http/ngx_http_upstream_round_robin.h 2012-05-14 09:57:20 UTC (rev 4622) @@ -20,6 +20,7 @@ ngx_str_t name; ngx_int_t current_weight; + ngx_int_t effective_weight; ngx_int_t weight; ngx_uint_t fails; From mdounin at mdounin.ru Mon May 14 09:58:07 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Mon, 14 May 2012 09:58:07 +0000 Subject: [nginx] svn commit: r4623 - trunk/src/http Message-ID: <20120514095807.C14A13FA173@mail.nginx.com> Author: mdounin Date: 2012-05-14 09:58:07 +0000 (Mon, 14 May 2012) New Revision: 4623 URL: http://trac.nginx.org/nginx/changeset/4623/nginx Log: Upstream: fixed ip_hash rebalancing with the "down" flag. Due to weight being set to 0 for down peers, order of peers after sorting wasn't the same as without the "down" flag (with down peers at the end), resulting in client rebalancing for clients on other servers. 
The only rebalancing which should happen after adding "down" to a server is one for clients on the server. The problem was introduced in r1377 (which fixed endless loop by setting weight to 0 for down servers). The loop is no longer possible with new smooth algorithm, so preserving original weight is safe. Modified: trunk/src/http/ngx_http_upstream_round_robin.c Modified: trunk/src/http/ngx_http_upstream_round_robin.c =================================================================== --- trunk/src/http/ngx_http_upstream_round_robin.c 2012-05-14 09:57:20 UTC (rev 4622) +++ trunk/src/http/ngx_http_upstream_round_robin.c 2012-05-14 09:58:07 UTC (rev 4623) @@ -80,8 +80,8 @@ peers->peer[n].max_fails = server[i].max_fails; peers->peer[n].fail_timeout = server[i].fail_timeout; peers->peer[n].down = server[i].down; - peers->peer[n].weight = server[i].down ? 0 : server[i].weight; - peers->peer[n].effective_weight = peers->peer[n].weight; + peers->peer[n].weight = server[i].weight; + peers->peer[n].effective_weight = server[i].weight; peers->peer[n].current_weight = 0; n++; } From goelvivek2011 at gmail.com Mon May 14 11:21:10 2012 From: goelvivek2011 at gmail.com (vivek goel) Date: Mon, 14 May 2012 16:51:10 +0530 Subject: Nginx module development practice with blocking call Message-ID: I am writing an nginx module that uses SQLite to do some read operations. As far as I know, SQLite reads don't support non-blocking calls. What would be the best way to integrate this with nginx? I am accepting about 200 requests at the same time, and one request takes about 200 ms to process. Which of the following would be the best implementation with nginx: 1. Increasing the nginx worker process count to 50-200, so that there are enough workers to accept client requests. My concern is that this will increase the connection time of a user's next request, since there is less chance that the same user will be served by the same nginx worker process. 2.
Doing the read in a thread and using the nginx event-based API to send the response, so that I can handle multiple clients with only one or two worker processes. 3. Moving my code to FastCGI and using the nginx fastcgi module to communicate with . Which would be the best solution for this implementation? regards Vivek Goel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Mon May 14 12:27:41 2012 From: ru at nginx.com (ru at nginx.com) Date: Mon, 14 May 2012 12:27:41 +0000 Subject: [nginx] svn commit: r4624 - trunk/src/http Message-ID: <20120514122741.BC1D93FA126@mail.nginx.com> Author: ru Date: 2012-05-14 12:27:41 +0000 (Mon, 14 May 2012) New Revision: 4624 URL: http://trac.nginx.org/nginx/changeset/4624/nginx Log: New function ngx_http_get_forwarded_addr() to look up real client address. On input it takes an original address, string in the X-Forwarded-For format and its length, list of trusted proxies, and a flag indicating to perform the recursive search. On output it returns NGX_OK and the "deepest" valid address in a chain, or NGX_DECLINED. It supports AF_INET and AF_INET6. Additionally, original address and/or proxy may be specified as AF_UNIX.
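(The chain walk described in the log above can be approximated in a few lines of Python to make the semantics concrete. This is an illustration only, not the nginx code: returning the original address unchanged stands in for NGX_DECLINED, and the AF_UNIX special case is omitted.)

```python
import ipaddress

def forwarded_addr(remote_addr, xff, trusted, recursive=True):
    """Walk X-Forwarded-For from the right while each hop is trusted.

    remote_addr: address the connection actually came from
    xff:         value of the X-Forwarded-For header
    trusted:     trusted proxy networks as CIDR strings
    Returns the "deepest" address reachable through trusted proxies,
    or remote_addr unchanged when the peer itself is not trusted.
    """
    nets = [ipaddress.ip_network(n) for n in trusted]
    addr = ipaddress.ip_address(remote_addr)
    hops = [h.strip() for h in xff.split(",") if h.strip()]
    while hops and any(addr in n for n in nets):
        # the rightmost entry was appended by the nearest (trusted) proxy
        addr = ipaddress.ip_address(hops.pop())
        if not recursive:
            break
    return str(addr)
```

For example, with trusted proxies 127.0.0.0/8 and 10.0.0.0/8 and the header "1.2.3.4, 10.0.0.1" received from 127.0.0.1, the recursive walk ends at 1.2.3.4, while the non-recursive variant stops at 10.0.0.1.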
Modified: trunk/src/http/ngx_http_core_module.c trunk/src/http/ngx_http_core_module.h Modified: trunk/src/http/ngx_http_core_module.c =================================================================== --- trunk/src/http/ngx_http_core_module.c 2012-05-14 09:58:07 UTC (rev 4623) +++ trunk/src/http/ngx_http_core_module.c 2012-05-14 12:27:41 UTC (rev 4624) @@ -2699,6 +2699,102 @@ } +ngx_int_t +ngx_http_get_forwarded_addr(ngx_http_request_t *r, ngx_addr_t *addr, + u_char *xff, size_t xfflen, ngx_array_t *proxies, int recursive) +{ + u_char *p; + in_addr_t *inaddr; + ngx_addr_t paddr; + ngx_cidr_t *cidr; + ngx_uint_t family, i; +#if (NGX_HAVE_INET6) + ngx_uint_t n; + struct in6_addr *inaddr6; +#endif + + family = addr->sockaddr->sa_family; + + if (family == AF_INET) { + inaddr = &((struct sockaddr_in *) addr->sockaddr)->sin_addr.s_addr; + } + +#if (NGX_HAVE_INET6) + else if (family == AF_INET6) { + inaddr6 = &((struct sockaddr_in6 *) addr->sockaddr)->sin6_addr; + + if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { + family = AF_INET; + inaddr = (in_addr_t *) &inaddr6->s6_addr[12]; + } + } +#endif + + for (cidr = proxies->elts, i = 0; i < proxies->nelts; i++) { + if (cidr[i].family != family) { + goto next; + } + + switch (family) { + +#if (NGX_HAVE_INET6) + case AF_INET6: + for (n = 0; n < 16; n++) { + if ((inaddr6->s6_addr[n] & cidr[i].u.in6.mask.s6_addr[n]) + != cidr[i].u.in6.addr.s6_addr[n]) + { + goto next; + } + } + break; +#endif + +#if (NGX_HAVE_UNIX_DOMAIN) + case AF_UNIX: + break; +#endif + + default: /* AF_INET */ + if ((*inaddr & cidr[i].u.in.mask) != cidr[i].u.in.addr) { + goto next; + } + break; + } + + for (p = xff + xfflen - 1; p > xff; p--, xfflen--) { + if (*p != ' ' && *p != ',') { + break; + } + } + + for ( /* void */ ; p > xff; p--) { + if (*p == ' ' || *p == ',') { + p++; + break; + } + } + + if (ngx_parse_addr(r->pool, &paddr, p, xfflen - (p - xff)) != NGX_OK) { + return NGX_DECLINED; + } + + *addr = paddr; + + if (recursive && p > xff) { + (void) 
ngx_http_get_forwarded_addr(r, addr, xff, p - 1 - xff, + proxies, 1); + } + + return NGX_OK; + + next: + continue; + } + + return NGX_DECLINED; +} + + static char * ngx_http_core_server(ngx_conf_t *cf, ngx_command_t *cmd, void *dummy) { Modified: trunk/src/http/ngx_http_core_module.h =================================================================== --- trunk/src/http/ngx_http_core_module.h 2012-05-14 09:58:07 UTC (rev 4623) +++ trunk/src/http/ngx_http_core_module.h 2012-05-14 12:27:41 UTC (rev 4624) @@ -513,7 +513,10 @@ ngx_int_t ngx_http_set_disable_symlinks(ngx_http_request_t *r, ngx_http_core_loc_conf_t *clcf, ngx_str_t *path, ngx_open_file_info_t *of); +ngx_int_t ngx_http_get_forwarded_addr(ngx_http_request_t *r, ngx_addr_t *addr, + u_char *xff, size_t xfflen, ngx_array_t *proxies, int recursive); + extern ngx_module_t ngx_http_core_module; extern ngx_uint_t ngx_http_max_module; From ru at nginx.com Mon May 14 12:41:03 2012 From: ru at nginx.com (ru at nginx.com) Date: Mon, 14 May 2012 12:41:03 +0000 Subject: [nginx] svn commit: r4625 - trunk/src/http/modules Message-ID: <20120514124103.700B83F9E89@mail.nginx.com> Author: ru Date: 2012-05-14 12:41:03 +0000 (Mon, 14 May 2012) New Revision: 4625 URL: http://trac.nginx.org/nginx/changeset/4625/nginx Log: realip: chains of trusted proxies and IPv6 support. The module now supports recursive search of client address through the chain of trusted proxies, controlled by the "real_ip_recursive" directive (closes #2). It also gets full IPv6 support (closes #44) and canonical value of the $client_addr variable on address change. 
Example: real_ip_header X-Forwarded-For; set_real_ip_from 127.0.0.0/8; set_real_ip_from ::1; set_real_ip_from unix:; real_ip_recursive on; Modified: trunk/src/http/modules/ngx_http_realip_module.c Modified: trunk/src/http/modules/ngx_http_realip_module.c =================================================================== --- trunk/src/http/modules/ngx_http_realip_module.c 2012-05-14 12:27:41 UTC (rev 4624) +++ trunk/src/http/modules/ngx_http_realip_module.c 2012-05-14 12:41:03 UTC (rev 4625) @@ -16,13 +16,11 @@ typedef struct { - ngx_array_t *from; /* array of ngx_in_cidr_t */ + ngx_array_t *from; /* array of ngx_cidr_t */ ngx_uint_t type; ngx_uint_t hash; ngx_str_t header; -#if (NGX_HAVE_UNIX_DOMAIN) - ngx_uint_t unixsock; /* unsigned unixsock:2; */ -#endif + ngx_flag_t recursive; } ngx_http_realip_loc_conf_t; @@ -35,8 +33,8 @@ static ngx_int_t ngx_http_realip_handler(ngx_http_request_t *r); -static ngx_int_t ngx_http_realip_set_addr(ngx_http_request_t *r, u_char *ip, - size_t len); +static ngx_int_t ngx_http_realip_set_addr(ngx_http_request_t *r, + ngx_addr_t *addr); static void ngx_http_realip_cleanup(void *data); static char *ngx_http_realip_from(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); @@ -63,6 +61,13 @@ 0, NULL }, + { ngx_string("real_ip_recursive"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_realip_loc_conf_t, recursive), + NULL }, + ngx_null_command }; @@ -105,10 +110,9 @@ u_char *ip, *p; size_t len; ngx_uint_t i, hash; + ngx_addr_t addr; ngx_list_part_t *part; ngx_table_elt_t *header; - struct sockaddr_in *sin; - ngx_in_cidr_t *from; ngx_connection_t *c; ngx_http_realip_ctx_t *ctx; ngx_http_realip_loc_conf_t *rlcf; @@ -121,12 +125,7 @@ rlcf = ngx_http_get_module_loc_conf(r, ngx_http_realip_module); - if (rlcf->from == NULL -#if (NGX_HAVE_UNIX_DOMAIN) - && !rlcf->unixsock -#endif - ) - { + if (rlcf->from == NULL) { return NGX_DECLINED; } @@ -152,15 
+151,6 @@ len = r->headers_in.x_forwarded_for->value.len; ip = r->headers_in.x_forwarded_for->value.data; - for (p = ip + len - 1; p > ip; p--) { - if (*p == ' ' || *p == ',') { - p++; - len -= p - ip; - ip = p; - break; - } - } - break; default: /* NGX_HTTP_REALIP_HEADER */ @@ -204,42 +194,27 @@ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "realip: \"%s\"", ip); - /* AF_INET only */ + addr.sockaddr = c->sockaddr; + addr.socklen = c->socklen; + /* addr.name = c->addr_text; */ - if (c->sockaddr->sa_family == AF_INET) { - sin = (struct sockaddr_in *) c->sockaddr; - - from = rlcf->from->elts; - for (i = 0; i < rlcf->from->nelts; i++) { - - ngx_log_debug3(NGX_LOG_DEBUG_HTTP, c->log, 0, - "realip: %08XD %08XD %08XD", - sin->sin_addr.s_addr, from[i].mask, from[i].addr); - - if ((sin->sin_addr.s_addr & from[i].mask) == from[i].addr) { - return ngx_http_realip_set_addr(r, ip, len); - } - } + if (ngx_http_get_forwarded_addr(r, &addr, ip, len, rlcf->from, + rlcf->recursive) + == NGX_OK) + { + return ngx_http_realip_set_addr(r, &addr); } -#if (NGX_HAVE_UNIX_DOMAIN) - - if (c->sockaddr->sa_family == AF_UNIX && rlcf->unixsock) { - return ngx_http_realip_set_addr(r, ip, len); - } - -#endif - return NGX_DECLINED; } static ngx_int_t -ngx_http_realip_set_addr(ngx_http_request_t *r, u_char *ip, size_t len) +ngx_http_realip_set_addr(ngx_http_request_t *r, ngx_addr_t *addr) { + size_t len; u_char *p; - ngx_int_t rc; - ngx_addr_t addr; + u_char text[NGX_SOCKADDR_STRLEN]; ngx_connection_t *c; ngx_pool_cleanup_t *cln; ngx_http_realip_ctx_t *ctx; @@ -254,15 +229,9 @@ c = r->connection; - rc = ngx_parse_addr(c->pool, &addr, ip, len); - - switch (rc) { - case NGX_DECLINED: - return NGX_DECLINED; - case NGX_ERROR: + len = ngx_sock_ntop(addr->sockaddr, text, NGX_SOCKADDR_STRLEN, 0); + if (len == 0) { return NGX_HTTP_INTERNAL_SERVER_ERROR; - default: /* NGX_OK */ - break; } p = ngx_pnalloc(c->pool, len); @@ -270,7 +239,7 @@ return NGX_HTTP_INTERNAL_SERVER_ERROR; } - ngx_memcpy(p, ip, len); + 
ngx_memcpy(p, text, len); cln->handler = ngx_http_realip_cleanup; @@ -279,8 +248,8 @@ ctx->socklen = c->socklen; ctx->addr_text = c->addr_text; - c->sockaddr = addr.sockaddr; - c->socklen = addr.socklen; + c->sockaddr = addr->sockaddr; + c->socklen = addr->socklen; c->addr_text.len = len; c->addr_text.data = p; @@ -310,55 +279,45 @@ ngx_int_t rc; ngx_str_t *value; - ngx_cidr_t cidr; - ngx_in_cidr_t *from; + ngx_cidr_t *cidr; value = cf->args->elts; -#if (NGX_HAVE_UNIX_DOMAIN) - - if (ngx_strcmp(value[1].data, "unix:") == 0) { - rlcf->unixsock = 1; - return NGX_CONF_OK; - } - -#endif - if (rlcf->from == NULL) { rlcf->from = ngx_array_create(cf->pool, 2, - sizeof(ngx_in_cidr_t)); + sizeof(ngx_cidr_t)); if (rlcf->from == NULL) { return NGX_CONF_ERROR; } } - from = ngx_array_push(rlcf->from); - if (from == NULL) { + cidr = ngx_array_push(rlcf->from); + if (cidr == NULL) { return NGX_CONF_ERROR; } - rc = ngx_ptocidr(&value[1], &cidr); +#if (NGX_HAVE_UNIX_DOMAIN) + if (ngx_strcmp(value[1].data, "unix:") == 0) { + cidr->family = AF_UNIX; + return NGX_CONF_OK; + } + +#endif + + rc = ngx_ptocidr(&value[1], cidr); + if (rc == NGX_ERROR) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid parameter \"%V\"", &value[1]); return NGX_CONF_ERROR; } - if (cidr.family != AF_INET) { - ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, - "\"set_real_ip_from\" supports IPv4 only"); - return NGX_CONF_ERROR; - } - if (rc == NGX_DONE) { ngx_conf_log_error(NGX_LOG_WARN, cf, 0, "low address bits of %V are meaningless", &value[1]); } - from->mask = cidr.u.in.mask; - from->addr = cidr.u.in.addr; - return NGX_CONF_OK; } @@ -409,9 +368,7 @@ */ conf->type = NGX_CONF_UNSET_UINT; -#if (NGX_HAVE_UNIX_DOMAIN) - conf->unixsock = 2; -#endif + conf->recursive = NGX_CONF_UNSET; return conf; } @@ -427,13 +384,8 @@ conf->from = prev->from; } -#if (NGX_HAVE_UNIX_DOMAIN) - if (conf->unixsock == 2) { - conf->unixsock = (prev->unixsock == 2) ? 
0 : prev->unixsock; - } -#endif - ngx_conf_merge_uint_value(conf->type, prev->type, NGX_HTTP_REALIP_XREALIP); + ngx_conf_merge_value(conf->recursive, prev->recursive, 0); if (conf->header.len == 0) { conf->hash = prev->hash; From vshebordaev at mail.ru Mon May 14 13:12:41 2012 From: vshebordaev at mail.ru (Vladimir Shebordaev) Date: Mon, 14 May 2012 17:12:41 +0400 Subject: Nginx module development practice with blocking call In-Reply-To: References: Message-ID: Hi! Increasing the number of nginx worker processes is not a scalable solution and is rather wasteful: it can easily hurt overall server performance. Basically, the current nginx design does not provide much for dynamic content generation, but it has well-developed facilities for caching and proxying. So it seems the best option for you is to move to the fastcgi interface and handle the peculiarities of db access in the application code. In the hope it helps. Regards, Vladimir 2012/5/14 vivek goel : > I am writing nginx module which uses sqlite to do some read operation. As > sqlite read doesn't support non-blocking call(according to my knowledge). > What will be best solution to integrate it with nginx. > I am accepting near about 200 at the same time. One request takes near about > 200 ms to process. What method will be the best implementation using nginx: > > Increasing nginx worker process count to 50-200. So I will having enough > worker to accept client request. > My concern it that will it increasing connecting time of a user on next > request as there will be less chances that same user will be served by same > nginx worker process. > Doing te read in a thread and using nginx event based api to send response. > So I can handle multiple client using only two or one worker process. > Moving my code to fastcgi and using nginx fastcgi module to communicate with > . > > What will be the best solution for this implementation ?
> > regards > Vivek Goel > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From ru at nginx.com Mon May 14 13:15:22 2012 From: ru at nginx.com (ru at nginx.com) Date: Mon, 14 May 2012 13:15:22 +0000 Subject: [nginx] svn commit: r4626 - trunk/src/http Message-ID: <20120514131522.5087B3FA02E@mail.nginx.com> Author: ru Date: 2012-05-14 13:15:22 +0000 (Mon, 14 May 2012) New Revision: 4626 URL: http://trac.nginx.org/nginx/changeset/4626/nginx Log: Fixed compilation warning introduced in r4624. Modified: trunk/src/http/ngx_http_core_module.c Modified: trunk/src/http/ngx_http_core_module.c =================================================================== --- trunk/src/http/ngx_http_core_module.c 2012-05-14 12:41:03 UTC (rev 4625) +++ trunk/src/http/ngx_http_core_module.c 2012-05-14 13:15:22 UTC (rev 4626) @@ -2715,21 +2715,29 @@ family = addr->sockaddr->sa_family; - if (family == AF_INET) { - inaddr = &((struct sockaddr_in *) addr->sockaddr)->sin_addr.s_addr; - } + switch (family) { #if (NGX_HAVE_INET6) - else if (family == AF_INET6) { + case AF_INET6: inaddr6 = &((struct sockaddr_in6 *) addr->sockaddr)->sin6_addr; if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { family = AF_INET; inaddr = (in_addr_t *) &inaddr6->s6_addr[12]; } - } + + break; #endif +#if (NGX_HAVE_UNIX_DOMAIN) + case AF_UNIX: + break; +#endif + + default: /* AF_INET */ + inaddr = &((struct sockaddr_in *) addr->sockaddr)->sin_addr.s_addr; + } + for (cidr = proxies->elts, i = 0; i < proxies->nelts; i++) { if (cidr[i].family != family) { goto next; From ru at nginx.com Mon May 14 13:53:22 2012 From: ru at nginx.com (ru at nginx.com) Date: Mon, 14 May 2012 13:53:22 +0000 Subject: [nginx] svn commit: r4627 - trunk/src/http/modules Message-ID: <20120514135322.E823E3F9EF1@mail.nginx.com> Author: ru Date: 2012-05-14 13:53:22 +0000 (Mon, 14 May 2012) New Revision: 4627 URL: 
http://trac.nginx.org/nginx/changeset/4627/nginx Log: geo: chains of trusted proxies and partial IPv6 support. The module now supports recursive search of the client address through the chain of trusted proxies, controlled by the "proxy_recursive" directive in the "geo" block. It also gets partial IPv6 support: now proxies may be specified with IPv6 addresses. Example: geo $test { ... proxy 127.0.0.1; proxy ::1; proxy_recursive; } There's also a slight change in behavior. When the original client address (as specified by the "geo" directive) is one of the trusted proxies, and the value of the X-Forwarded-For request header cannot be parsed as a valid address, the original client address will be used for lookup. Previously, 255.255.255.255 was used in this case. Modified: trunk/src/http/modules/ngx_http_geo_module.c Modified: trunk/src/http/modules/ngx_http_geo_module.c =================================================================== --- trunk/src/http/modules/ngx_http_geo_module.c 2012-05-14 13:15:22 UTC (rev 4626) +++ trunk/src/http/modules/ngx_http_geo_module.c 2012-05-14 13:53:22 UTC (rev 4627) @@ -51,6 +51,7 @@ unsigned outside_entries:1; unsigned allow_binary_include:1; unsigned binary_include:1; + unsigned proxy_recursive:1; } ngx_http_geo_conf_ctx_t; @@ -61,6 +62,7 @@ } u; ngx_array_t *proxies; + unsigned proxy_recursive:1; ngx_int_t index; } ngx_http_geo_ctx_t; @@ -68,8 +70,8 @@ static in_addr_t ngx_http_geo_addr(ngx_http_request_t *r, ngx_http_geo_ctx_t *ctx); -static in_addr_t ngx_http_geo_real_addr(ngx_http_request_t *r, - ngx_http_geo_ctx_t *ctx); +static ngx_int_t ngx_http_geo_real_addr(ngx_http_request_t *r, + ngx_http_geo_ctx_t *ctx, ngx_addr_t *addr); static char *ngx_http_geo_block(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static char *ngx_http_geo(ngx_conf_t *cf, ngx_command_t *dummy, void *conf); static char *ngx_http_geo_range(ngx_conf_t *cf, ngx_http_geo_conf_ctx_t *ctx, @@ -212,87 +214,60 @@ static in_addr_t 
ngx_http_geo_addr(ngx_http_request_t *r, ngx_http_geo_ctx_t *ctx) { - u_char *p, *ip; - size_t len; - in_addr_t addr; - ngx_uint_t i, n; - ngx_in_cidr_t *proxies; - ngx_table_elt_t *xfwd; + ngx_addr_t addr; + ngx_table_elt_t *xfwd; + struct sockaddr_in *sin; - addr = ngx_http_geo_real_addr(r, ctx); + if (ngx_http_geo_real_addr(r, ctx, &addr) != NGX_OK) { + return INADDR_NONE; + } xfwd = r->headers_in.x_forwarded_for; - if (xfwd == NULL || ctx->proxies == NULL) { - return addr; + if (xfwd != NULL && ctx->proxies != NULL) { + (void) ngx_http_get_forwarded_addr(r, &addr, xfwd->value.data, + xfwd->value.len, ctx->proxies, + ctx->proxy_recursive); } - proxies = ctx->proxies->elts; - n = ctx->proxies->nelts; +#if (NGX_HAVE_INET6) - for (i = 0; i < n; i++) { - if ((addr & proxies[i].mask) == proxies[i].addr) { + if (addr.sockaddr->sa_family == AF_INET6) { + struct in6_addr *inaddr6; - len = xfwd->value.len; - ip = xfwd->value.data; + inaddr6 = &((struct sockaddr_in6 *) addr.sockaddr)->sin6_addr; - for (p = ip + len - 1; p > ip; p--) { - if (*p == ' ' || *p == ',') { - p++; - len -= p - ip; - ip = p; - break; - } - } - - return ntohl(ngx_inet_addr(ip, len)); + if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { + return ntohl(*(in_addr_t *) &inaddr6->s6_addr[12]); } } - return addr; +#endif + + if (addr.sockaddr->sa_family != AF_INET) { + return INADDR_NONE; + } + + sin = (struct sockaddr_in *) addr.sockaddr; + return ntohl(sin->sin_addr.s_addr); } -static in_addr_t -ngx_http_geo_real_addr(ngx_http_request_t *r, ngx_http_geo_ctx_t *ctx) +static ngx_int_t +ngx_http_geo_real_addr(ngx_http_request_t *r, ngx_http_geo_ctx_t *ctx, + ngx_addr_t *addr) { - struct sockaddr_in *sin; ngx_http_variable_value_t *v; -#if (NGX_HAVE_INET6) - u_char *p; - in_addr_t addr; - struct sockaddr_in6 *sin6; -#endif if (ctx->index == -1) { ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http geo started: %V", &r->connection->addr_text); - switch (r->connection->sockaddr->sa_family) { + addr->sockaddr = 
r->connection->sockaddr; + addr->socklen = r->connection->socklen; + /* addr->name = r->connection->addr_text; */ - case AF_INET: - sin = (struct sockaddr_in *) r->connection->sockaddr; - return ntohl(sin->sin_addr.s_addr); - -#if (NGX_HAVE_INET6) - - case AF_INET6: - sin6 = (struct sockaddr_in6 *) r->connection->sockaddr; - - if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) { - p = sin6->sin6_addr.s6_addr; - addr = p[12] << 24; - addr += p[13] << 16; - addr += p[14] << 8; - addr += p[15]; - - return addr; - } - -#endif - } - - return INADDR_NONE; + return NGX_OK; } v = ngx_http_get_flushed_variable(r, ctx->index); @@ -301,13 +276,17 @@ ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http geo not found"); - return 0; + return NGX_ERROR; } ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http geo started: %v", v); - return ntohl(ngx_inet_addr(v->data, v->len)); + if (ngx_parse_addr(r->pool, addr, v->data, v->len) == NGX_OK) { + return NGX_OK; + } + + return NGX_ERROR; } @@ -388,6 +367,7 @@ *cf = save; geo->proxies = ctx.proxies; + geo->proxy_recursive = ctx.proxy_recursive; if (ctx.high.low) { @@ -493,6 +473,12 @@ goto done; } + + else if (ngx_strcmp(value[0].data, "proxy_recursive") == 0) { + ctx->proxy_recursive = 1; + rv = NGX_CONF_OK; + goto done; + } } if (cf->args->nelts != 2) { @@ -926,6 +912,7 @@ } if (ngx_strcmp(value[0].data, "default") == 0) { + /* cidr.family = AF_INET; */ cidr.u.in.addr = 0; cidr.u.in.mask = 0; net = &value[0]; @@ -944,6 +931,15 @@ return NGX_CONF_ERROR; } + if (cidr.family != AF_INET) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"geo\" supports IPv4 only"); + return NGX_CONF_ERROR; + } + + cidr.u.in.addr = ntohl(cidr.u.in.addr); + cidr.u.in.mask = ntohl(cidr.u.in.mask); + if (del) { if (ngx_radix32tree_delete(ctx->tree, cidr.u.in.addr, cidr.u.in.mask) @@ -1052,10 +1048,10 @@ ngx_http_geo_add_proxy(ngx_conf_t *cf, ngx_http_geo_conf_ctx_t *ctx, ngx_cidr_t *cidr) { - ngx_in_cidr_t *c; + ngx_cidr_t *c; if (ctx->proxies 
== NULL) { - ctx->proxies = ngx_array_create(ctx->pool, 4, sizeof(ngx_in_cidr_t)); + ctx->proxies = ngx_array_create(ctx->pool, 4, sizeof(ngx_cidr_t)); if (ctx->proxies == NULL) { return NGX_CONF_ERROR; } @@ -1066,8 +1062,7 @@ return NGX_CONF_ERROR; } - c->addr = cidr->u.in.addr; - c->mask = cidr->u.in.mask; + *c = *cidr; return NGX_CONF_OK; } @@ -1079,6 +1074,7 @@ ngx_int_t rc; if (ngx_strcmp(net->data, "255.255.255.255") == 0) { + cidr->family = AF_INET; cidr->u.in.addr = 0xffffffff; cidr->u.in.mask = 0xffffffff; @@ -1092,19 +1088,11 @@ return NGX_ERROR; } - if (cidr->family != AF_INET) { - ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "\"geo\" supports IPv4 only"); - return NGX_ERROR; - } - if (rc == NGX_DONE) { ngx_conf_log_error(NGX_LOG_WARN, cf, 0, "low address bits of %V are meaningless", net); } - cidr->u.in.addr = ntohl(cidr->u.in.addr); - cidr->u.in.mask = ntohl(cidr->u.in.mask); - return NGX_OK; } From ru at nginx.com Mon May 14 14:00:18 2012 From: ru at nginx.com (ru at nginx.com) Date: Mon, 14 May 2012 14:00:18 +0000 Subject: [nginx] svn commit: r4628 - trunk/src/http/modules Message-ID: <20120514140018.4CD553F9F86@mail.nginx.com> Author: ru Date: 2012-05-14 14:00:17 +0000 (Mon, 14 May 2012) New Revision: 4628 URL: http://trac.nginx.org/nginx/changeset/4628/nginx Log: geoip: trusted proxies support and partial IPv6 support. The module now supports recursive search of client address through the chain of trusted proxies (closes #100), in the same scope as the geo module. Proxies are listed by the "geoip_proxy" directive, recursive search is enabled by the "geoip_proxy_recursive" directive. IPv6 is partially supported: proxies may be specified with IPv6 addresses. 
Example: geoip_country .../GeoIP.dat; geoip_proxy 127.0.0.1; geoip_proxy ::1; geoip_proxy 10.0.0.0/8; geoip_proxy_recursive on; Modified: trunk/src/http/modules/ngx_http_geoip_module.c Modified: trunk/src/http/modules/ngx_http_geoip_module.c =================================================================== --- trunk/src/http/modules/ngx_http_geoip_module.c 2012-05-14 13:53:22 UTC (rev 4627) +++ trunk/src/http/modules/ngx_http_geoip_module.c 2012-05-14 14:00:17 UTC (rev 4628) @@ -14,20 +14,24 @@ typedef struct { - GeoIP *country; - GeoIP *org; - GeoIP *city; + GeoIP *country; + GeoIP *org; + GeoIP *city; + ngx_array_t *proxies; /* array of ngx_cidr_t */ + ngx_flag_t proxy_recursive; } ngx_http_geoip_conf_t; typedef struct { - ngx_str_t *name; - uintptr_t data; + ngx_str_t *name; + uintptr_t data; } ngx_http_geoip_var_t; typedef const char *(*ngx_http_geoip_variable_handler_pt)(GeoIP *, u_long addr); +static u_long ngx_http_geoip_addr(ngx_http_request_t *r, + ngx_http_geoip_conf_t *gcf); static ngx_int_t ngx_http_geoip_country_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_geoip_org_variable(ngx_http_request_t *r, @@ -44,12 +48,17 @@ static ngx_int_t ngx_http_geoip_add_variables(ngx_conf_t *cf); static void *ngx_http_geoip_create_conf(ngx_conf_t *cf); +static char *ngx_http_geoip_init_conf(ngx_conf_t *cf, void *conf); static char *ngx_http_geoip_country(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static char *ngx_http_geoip_org(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static char *ngx_http_geoip_city(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static char *ngx_http_geoip_proxy(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); +static ngx_int_t ngx_http_geoip_cidr_value(ngx_conf_t *cf, ngx_str_t *net, + ngx_cidr_t *cidr); static void ngx_http_geoip_cleanup(void *data); @@ -76,6 +85,20 @@ 0, NULL }, + { ngx_string("geoip_proxy"), + NGX_HTTP_MAIN_CONF|NGX_CONF_TAKE1, + ngx_http_geoip_proxy, 
+ NGX_HTTP_MAIN_CONF_OFFSET, + 0, + NULL }, + + { ngx_string("geoip_proxy_recursive"), + NGX_HTTP_MAIN_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_MAIN_CONF_OFFSET, + offsetof(ngx_http_geoip_conf_t, proxy_recursive), + NULL }, + ngx_null_command }; @@ -85,7 +108,7 @@ NULL, /* postconfiguration */ ngx_http_geoip_create_conf, /* create main configuration */ - NULL, /* init main configuration */ + ngx_http_geoip_init_conf, /* init main configuration */ NULL, /* create server configuration */ NULL, /* merge server configuration */ @@ -182,40 +205,44 @@ static u_long -ngx_http_geoip_addr(ngx_http_request_t *r) +ngx_http_geoip_addr(ngx_http_request_t *r, ngx_http_geoip_conf_t *gcf) { - struct sockaddr_in *sin; -#if (NGX_HAVE_INET6) - u_char *p; - u_long addr; - struct sockaddr_in6 *sin6; -#endif + ngx_addr_t addr; + ngx_table_elt_t *xfwd; + struct sockaddr_in *sin; - switch (r->connection->sockaddr->sa_family) { + addr.sockaddr = r->connection->sockaddr; + addr.socklen = r->connection->socklen; + /* addr.name = r->connection->addr_text; */ - case AF_INET: - sin = (struct sockaddr_in *) r->connection->sockaddr; - return ntohl(sin->sin_addr.s_addr); + xfwd = r->headers_in.x_forwarded_for; + if (xfwd != NULL && gcf->proxies != NULL) { + (void) ngx_http_get_forwarded_addr(r, &addr, xfwd->value.data, + xfwd->value.len, gcf->proxies, + gcf->proxy_recursive); + } + #if (NGX_HAVE_INET6) - case AF_INET6: - sin6 = (struct sockaddr_in6 *) r->connection->sockaddr; + if (addr.sockaddr->sa_family == AF_INET6) { + struct in6_addr *inaddr6; - if (IN6_IS_ADDR_V4MAPPED(&sin6->sin6_addr)) { - p = sin6->sin6_addr.s6_addr; - addr = p[12] << 24; - addr += p[13] << 16; - addr += p[14] << 8; - addr += p[15]; + inaddr6 = &((struct sockaddr_in6 *) addr.sockaddr)->sin6_addr; - return addr; + if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { + return ntohl(*(in_addr_t *) &inaddr6->s6_addr[12]); } + } #endif + + if (addr.sockaddr->sa_family != AF_INET) { + return INADDR_NONE; } - return INADDR_NONE; 
+ sin = (struct sockaddr_in *) addr.sockaddr; + return ntohl(sin->sin_addr.s_addr); } @@ -235,7 +262,7 @@ goto not_found; } - val = handler(gcf->country, ngx_http_geoip_addr(r)); + val = handler(gcf->country, ngx_http_geoip_addr(r, gcf)); if (val == NULL) { goto not_found; @@ -273,7 +300,7 @@ goto not_found; } - val = handler(gcf->org, ngx_http_geoip_addr(r)); + val = handler(gcf->org, ngx_http_geoip_addr(r, gcf)); if (val == NULL) { goto not_found; @@ -453,7 +480,7 @@ gcf = ngx_http_get_module_main_conf(r, ngx_http_geoip_module); if (gcf->city) { - return GeoIP_record_by_ipnum(gcf->city, ngx_http_geoip_addr(r)); + return GeoIP_record_by_ipnum(gcf->city, ngx_http_geoip_addr(r, gcf)); } return NULL; @@ -490,6 +517,8 @@ return NULL; } + conf->proxy_recursive = NGX_CONF_UNSET; + cln = ngx_pool_cleanup_add(cf->pool, 0); if (cln == NULL) { return NULL; @@ -503,6 +532,17 @@ static char * +ngx_http_geoip_init_conf(ngx_conf_t *cf, void *conf) +{ + ngx_http_geoip_conf_t *gcf = conf; + + ngx_conf_init_value(gcf->proxy_recursive, 0); + + return NGX_CONF_OK; +} + + +static char * ngx_http_geoip_country(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) { ngx_http_geoip_conf_t *gcf = conf; @@ -652,6 +692,66 @@ } +static char * +ngx_http_geoip_proxy(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_http_geoip_conf_t *gcf = conf; + + ngx_str_t *value; + ngx_cidr_t cidr, *c; + + value = cf->args->elts; + + if (ngx_http_geoip_cidr_value(cf, &value[1], &cidr) != NGX_OK) { + return NGX_CONF_ERROR; + } + + if (gcf->proxies == NULL) { + gcf->proxies = ngx_array_create(cf->pool, 4, sizeof(ngx_cidr_t)); + if (gcf->proxies == NULL) { + return NGX_CONF_ERROR; + } + } + + c = ngx_array_push(gcf->proxies); + if (c == NULL) { + return NGX_CONF_ERROR; + } + + *c = cidr; + + return NGX_CONF_OK; +} + +static ngx_int_t +ngx_http_geoip_cidr_value(ngx_conf_t *cf, ngx_str_t *net, ngx_cidr_t *cidr) +{ + ngx_int_t rc; + + if (ngx_strcmp(net->data, "255.255.255.255") == 0) { + cidr->family = 
AF_INET; + cidr->u.in.addr = 0xffffffff; + cidr->u.in.mask = 0xffffffff; + + return NGX_OK; + } + + rc = ngx_ptocidr(net, cidr); + + if (rc == NGX_ERROR) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid network \"%V\"", net); + return NGX_ERROR; + } + + if (rc == NGX_DONE) { + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, + "low address bits of %V are meaningless", net); + } + + return NGX_OK; +} + + static void ngx_http_geoip_cleanup(void *data) { From ru at nginx.com Mon May 14 15:52:37 2012 From: ru at nginx.com (ru at nginx.com) Date: Mon, 14 May 2012 15:52:37 +0000 Subject: [nginx] svn commit: r4629 - trunk/src/http Message-ID: <20120514155237.6D28D3FA1DF@mail.nginx.com> Author: ru Date: 2012-05-14 15:52:37 +0000 (Mon, 14 May 2012) New Revision: 4629 URL: http://trac.nginx.org/nginx/changeset/4629/nginx Log: Reverted previous attempt to fix compilation warning introduced in r4624 and actually fixed it. Modified: trunk/src/http/ngx_http_core_module.c Modified: trunk/src/http/ngx_http_core_module.c =================================================================== --- trunk/src/http/ngx_http_core_module.c 2012-05-14 14:00:17 UTC (rev 4628) +++ trunk/src/http/ngx_http_core_module.c 2012-05-14 15:52:37 UTC (rev 4629) @@ -2713,31 +2713,30 @@ struct in6_addr *inaddr6; #endif +#if (NGX_SUPPRESS_WARN) + inaddr = NULL; +#if (NGX_HAVE_INET6) + inaddr6 = NULL; +#endif +#endif + family = addr->sockaddr->sa_family; - switch (family) { + if (family == AF_INET) { + inaddr = &((struct sockaddr_in *) addr->sockaddr)->sin_addr.s_addr; + } #if (NGX_HAVE_INET6) - case AF_INET6: + else if (family == AF_INET6) { inaddr6 = &((struct sockaddr_in6 *) addr->sockaddr)->sin6_addr; if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { family = AF_INET; inaddr = (in_addr_t *) &inaddr6->s6_addr[12]; } - - break; + } #endif -#if (NGX_HAVE_UNIX_DOMAIN) - case AF_UNIX: - break; -#endif - - default: /* AF_INET */ - inaddr = &((struct sockaddr_in *) addr->sockaddr)->sin_addr.s_addr; - } - for (cidr = 
proxies->elts, i = 0; i < proxies->nelts; i++) { if (cidr[i].family != family) { goto next; From vbart at nginx.com Mon May 14 16:30:34 2012 From: vbart at nginx.com (vbart at nginx.com) Date: Mon, 14 May 2012 16:30:34 +0000 Subject: [nginx] svn commit: r4630 - trunk/src/event Message-ID: <20120514163034.2854E3F9E5D@mail.nginx.com> Author: vbart Date: 2012-05-14 16:30:33 +0000 (Mon, 14 May 2012) New Revision: 4630 URL: http://trac.nginx.org/nginx/changeset/4630/nginx Log: Update c->sent in ngx_ssl_send_chain() even if SSL buffer is not used. Modified: trunk/src/event/ngx_event_openssl.c Modified: trunk/src/event/ngx_event_openssl.c =================================================================== --- trunk/src/event/ngx_event_openssl.c 2012-05-14 15:52:37 UTC (rev 4629) +++ trunk/src/event/ngx_event_openssl.c 2012-05-14 16:30:33 UTC (rev 4630) @@ -995,6 +995,7 @@ } in->buf->pos += n; + c->sent += n; if (in->buf->pos == in->buf->last) { in = in->next; From appa at perusio.net Mon May 14 19:47:54 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 14 May 2012 21:47:54 +0200 Subject: Nginx module development practice with blocking call In-Reply-To: References: Message-ID: <873972r1cl.wl%appa@perusio.net> On 14 Mai 2012 13h21 CEST, goelvivek2011 at gmail.com wrote: > I am writing nginx module which uses sqlite to do some read > operation. As sqlite read doesn't support non-blocking > call(according to my knowledge). What will be best solution to > integrate it with nginx. I am accepting near about 200 at the same > time. One request takes near about 200 ms to process. What method > will be the best implementation using nginx: > > > 1. Increasing nginx worker process count to 50-200. So I will having > enough worker to accept client request. My concern it that will it > increasing connecting time of a user on next request as there will > be less chances that same user will be served by same nginx worker > process. > 2. 
Doing te read in a thread and using nginx event based api to send > response. So I can handle multiple client using only two or one worker > process. > 3. Moving my code to fastcgi and using nginx fastcgi module to > communicate with . > > What will be the best solution for this implementation ? I suggest you use the Lua module and speak to the SQLite DB through Lua's sqlite driver. Just an idea, --- appa From goelvivek2011 at gmail.com Tue May 15 04:56:07 2012 From: goelvivek2011 at gmail.com (vivek goel) Date: Tue, 15 May 2012 10:26:07 +0530 Subject: Nginx module development practice with blocking call In-Reply-To: References: Message-ID: *I suggest you use the Lua module and speak to the SQLite DB through Lua's sqlite driver.* @António P. P. Almeida: I don't want to rewrite my C++ code for Lua. I am looking for the easiest way to integrate my C++ code with nginx. Rewriting the code that communicates with sqlite would cost too much time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhuzhaoyuan at gmail.com Tue May 15 04:56:17 2012 From: zhuzhaoyuan at gmail.com (Joshua Zhu) Date: Tue, 15 May 2012 12:56:17 +0800 Subject: [nginx] svn commit: r4328 - trunk/src/http/modules In-Reply-To: <20111206210710.CD7FE3F9C1D@mail.nginx.com> References: <20111206210710.CD7FE3F9C1D@mail.nginx.com> Message-ID: Hi Ruslan, On Wed, Dec 7, 2011 at 5:07 AM, wrote: > Author: ru > Date: 2011-12-06 21:07:10 +0000 (Tue, 06 Dec 2011) > New Revision: 4328 > > Log: > - Improved error message when parsing of the "buffer" parameter of the > "access_log" directive fails. > > - Added a warning if "log_format" is used in contexts other than "http". 
> > > Modified: > > trunk/src/http/modules/ngx_http_log_module.c > > > > Modified: trunk/src/http/modules/ngx_http_log_module.c > > =================================================================== > > --- trunk/src/http/modules/ngx_http_log_module.c 2011-12-06 > > 15:49:40 UTC (rev 4327) > > +++ trunk/src/http/modules/ngx_http_log_module.c 2011-12-06 > > 21:07:10 UTC (rev 4328) > > @@ -971,7 +971,7 @@ > > > > if (buf == NGX_ERROR) { > > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > - "invalid parameter \"%V\"", &value[3]); > > + "invalid buffer value \"%V\"", &name); > > return NGX_CONF_ERROR; > > } > > > > @@ -1004,6 +1004,12 @@ > > ngx_uint_t i; > > ngx_http_log_fmt_t *fmt; > > > > + if (cf->cmd_type != NGX_HTTP_MAIN_CONF) { > > + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, > > + "the \"log_format\" directive may be used " > > + "only on \"http\" level"); > > + } > > + > > value = cf->args->elts; > > > > fmt = lmcf->formats.elts; > Could you shed light on why the 'log_format' change was introduced? It's a little confusing to me that the 'log_format' directive is allowed in http/server/location contexts, but on the other hand a warning is emitted if it appears in a server/location block. I ask this question because in my situation I have lots of separate server{}-specific configuration files which are included in the main configuration file, and each server may have its own log_format. Specifying 'log_format' in the server{} is quite handy because it keeps the changes only in the included server-specific file. Regards, -- Joshua Zhu Senior Software Engineer Server Platforms Team at Taobao -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ru at nginx.com Tue May 15 08:11:00 2012 From: ru at nginx.com (ru at nginx.com) Date: Tue, 15 May 2012 08:11:00 +0000 Subject: [nginx] svn commit: r4631 - trunk/src/http Message-ID: <20120515081100.EAE663F9C1B@mail.nginx.com> Author: ru Date: 2012-05-15 08:10:59 +0000 (Tue, 15 May 2012) New Revision: 4631 URL: http://trac.nginx.org/nginx/changeset/4631/nginx Log: Fixed win32 build after changes in r4624. Modified: trunk/src/http/ngx_http_core_module.c Modified: trunk/src/http/ngx_http_core_module.c =================================================================== --- trunk/src/http/ngx_http_core_module.c 2012-05-14 16:30:33 UTC (rev 4630) +++ trunk/src/http/ngx_http_core_module.c 2012-05-15 08:10:59 UTC (rev 4631) @@ -2704,7 +2704,7 @@ u_char *xff, size_t xfflen, ngx_array_t *proxies, int recursive) { u_char *p; - in_addr_t *inaddr; + in_addr_t inaddr; ngx_addr_t paddr; ngx_cidr_t *cidr; ngx_uint_t family, i; @@ -2714,7 +2714,7 @@ #endif #if (NGX_SUPPRESS_WARN) - inaddr = NULL; + inaddr = 0; #if (NGX_HAVE_INET6) inaddr6 = NULL; #endif @@ -2723,7 +2723,7 @@ family = addr->sockaddr->sa_family; if (family == AF_INET) { - inaddr = &((struct sockaddr_in *) addr->sockaddr)->sin_addr.s_addr; + inaddr = ((struct sockaddr_in *) addr->sockaddr)->sin_addr.s_addr; } #if (NGX_HAVE_INET6) @@ -2732,7 +2732,7 @@ if (IN6_IS_ADDR_V4MAPPED(inaddr6)) { family = AF_INET; - inaddr = (in_addr_t *) &inaddr6->s6_addr[12]; + inaddr = *(in_addr_t *) &inaddr6->s6_addr[12]; } } #endif @@ -2762,7 +2762,7 @@ #endif default: /* AF_INET */ - if ((*inaddr & cidr[i].u.in.mask) != cidr[i].u.in.addr) { + if ((inaddr & cidr[i].u.in.mask) != cidr[i].u.in.addr) { goto next; } break; From mdounin at mdounin.ru Tue May 15 14:20:06 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 15 May 2012 14:20:06 +0000 Subject: [nginx] svn commit: r4632 - trunk/misc Message-ID: <20120515142006.E5A4C3FA049@mail.nginx.com> Author: mdounin Date: 2012-05-15 14:20:06 +0000 (Tue, 15 May 2012) 
New Revision: 4632 URL: http://trac.nginx.org/nginx/changeset/4632/nginx Log: Updated OpenSSL used for win32 builds. Modified: trunk/misc/GNUmakefile Modified: trunk/misc/GNUmakefile =================================================================== --- trunk/misc/GNUmakefile 2012-05-15 08:10:59 UTC (rev 4631) +++ trunk/misc/GNUmakefile 2012-05-15 14:20:06 UTC (rev 4632) @@ -6,7 +6,7 @@ REPO = $(shell svn info | sed -n 's/^Repository Root: //p') OBJS = objs.msvc8 -OPENSSL = openssl-1.0.0i +OPENSSL = openssl-1.0.1c ZLIB = zlib-1.2.5 PCRE = pcre-8.30 From mdounin at mdounin.ru Tue May 15 14:23:49 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 15 May 2012 14:23:49 +0000 Subject: [nginx] svn commit: r4633 - trunk/docs/xml/nginx Message-ID: <20120515142349.B0D433FA0EE@mail.nginx.com> Author: mdounin Date: 2012-05-15 14:23:49 +0000 (Tue, 15 May 2012) New Revision: 4633 URL: http://trac.nginx.org/nginx/changeset/4633/nginx Log: nginx-1.3.0-RELEASE Modified: trunk/docs/xml/nginx/changes.xml Modified: trunk/docs/xml/nginx/changes.xml =================================================================== --- trunk/docs/xml/nginx/changes.xml 2012-05-15 14:20:06 UTC (rev 4632) +++ trunk/docs/xml/nginx/changes.xml 2012-05-15 14:23:49 UTC (rev 4633) @@ -9,6 +9,145 @@ nginx changelog + + + + +директива debug_connection теперь поддерживает IPv6-адреса +и параметр "unix:". + +the "debug_connection" directive now supports IPv6 addresses +and the "unix:" parameter. + + + + + +директива set_real_ip_from и параметр proxy +директивы geo теперь поддерживают IPv6-адреса. + +the "set_real_ip_from" directive and the "proxy" parameter +of the "geo" directive now support IPv6 addresses. + + + + + +директивы real_ip_recursive, geoip_proxy и geoip_proxy_recursive. + +the "real_ip_recursive", "geoip_proxy", and "geoip_proxy_recursive" directives. + + + + + +параметр proxy_recursive директивы geo. + +the "proxy_recursive" parameter of the "geo" directive. + + + + + +в 
рабочем процессе мог произойти segmentation fault, +если использовалась директива resolver. + +a segmentation fault might occur in a worker process +if the "resolver" directive was used. + + + + + +в рабочем процессе мог произойти segmentation fault, +если использовались директивы fastcgi_pass, scgi_pass или uwsgi_pass +и бэкенд возвращал неправильный ответ. + +a segmentation fault might occur in a worker process +if the "fastcgi_pass", "scgi_pass", or "uwsgi_pass" directives were used +and backend returned incorrect response. + + + + + +в рабочем процессе мог произойти segmentation fault, +если использовалась директива rewrite и в новых аргументах запроса в строке +замены использовались переменные. + +a segmentation fault might occur in a worker process +if the "rewrite" directive was used and new request arguments +in a replacement used variables. + + + + + +nginx мог нагружать процессор, +если было достигнуто ограничение на количество открытых файлов. + +nginx might hog CPU +if the open file resource limit was reached. + + + + + +при использовании директивы proxy_next_upstream с параметром http_404 +nginx мог бесконечно перебирать бэкенды, если в блоке upstream был +хотя бы один сервер с флагом backup. + +nginx might loop infinitely over backends +if the "proxy_next_upstream" directive with the "http_404" parameter was used +and there were backup servers specified in an upstream block. + + + + + +при использовании директивы ip_hash +появление параметра down директивы server +могло приводить к ненужному перераспределению клиентов между бэкендами. + +adding the "down" parameter of the "server" directive +might cause unneeded client redistribution among backend servers +if the "ip_hash" directive was used. + + + + + +утечка сокетов.
+Спасибо Yichun Zhang.
+ +socket leak.
+Thanks to Yichun Zhang. +
+
+ + + +в модуле ngx_http_fastcgi_module. + +in the ngx_http_fastcgi_module. + + +
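Several commits in this digest (r4624, r4627, r4628) implement the same recursive search of the client address through a chain of trusted proxies, which surfaces in the changelog above as the "real_ip_recursive", "proxy_recursive", and "geoip_proxy_recursive" entries. The sketch below models that logic in Python purely for illustration; forwarded_addr and its parameter names are invented for the example and are not nginx API.

```python
import ipaddress

def forwarded_addr(peer, xff, proxies, recursive):
    # Start from the directly connected peer address.
    addr = ipaddress.ip_address(peer)
    # The rightmost X-Forwarded-For entry was appended by the nearest proxy.
    entries = [e.strip() for e in xff.split(",") if e.strip()]

    # Step right-to-left while the current address is a trusted proxy.
    while entries and any(addr in net for net in proxies):
        try:
            candidate = ipaddress.ip_address(entries.pop())
        except ValueError:
            # Unparsable entry: keep the last good address (the r4627 log
            # notes that nginx previously fell back to 255.255.255.255 here).
            break
        addr = candidate
        if not recursive:
            break  # non-recursive mode unwinds only one hop

    return str(addr)

trusted = [ipaddress.ip_network("127.0.0.1/32"),
           ipaddress.ip_network("10.0.0.0/8")]

# Non-recursive: only the nearest trusted hop is unwound.
print(forwarded_addr("127.0.0.1", "1.2.3.4, 10.0.0.2", trusted, False))  # 10.0.0.2
# Recursive: the whole trusted chain is unwound down to the real client.
print(forwarded_addr("127.0.0.1", "1.2.3.4, 10.0.0.2", trusted, True))   # 1.2.3.4
```

In nginx itself this logic lives in ngx_http_get_forwarded_addr() in ngx_http_core_module.c and is shared by the realip, geo, and geoip modules.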
+ + From mdounin at mdounin.ru Tue May 15 14:24:09 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 15 May 2012 14:24:09 +0000 Subject: [nginx] svn commit: r4634 - tags Message-ID: <20120515142409.4CD793F9F32@mail.nginx.com> Author: mdounin Date: 2012-05-15 14:24:09 +0000 (Tue, 15 May 2012) New Revision: 4634 URL: http://trac.nginx.org/nginx/changeset/4634/nginx Log: release-1.3.0 tag Added: tags/release-1.3.0/ From ru at nginx.com Tue May 15 16:06:04 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 15 May 2012 20:06:04 +0400 Subject: [nginx] svn commit: r4328 - trunk/src/http/modules In-Reply-To: References: <20111206210710.CD7FE3F9C1D@mail.nginx.com> Message-ID: <20120515160604.GA13923@lo0.su> On Tue, May 15, 2012 at 12:56:17PM +0800, Joshua Zhu wrote: > Hi Ruslan, > > On Wed, Dec 7, 2011 at 5:07 AM, wrote: > > > Author: ru > > Date: 2011-12-06 21:07:10 +0000 (Tue, 06 Dec 2011) > > New Revision: 4328 > > > > Log: > > - Improved error message when parsing of the "buffer" parameter of the > > "access_log" directive fails. > > > > - Added a warning if "log_format" is used in contexts other than "http". 
> > > > > > Modified: > > trunk/src/http/modules/ngx_http_log_module.c > > > > Modified: trunk/src/http/modules/ngx_http_log_module.c > > =================================================================== > > --- trunk/src/http/modules/ngx_http_log_module.c 2011-12-06 > > 15:49:40 UTC (rev 4327) > > +++ trunk/src/http/modules/ngx_http_log_module.c 2011-12-06 > > 21:07:10 UTC (rev 4328) > > @@ -971,7 +971,7 @@ > > > > if (buf == NGX_ERROR) { > > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > - "invalid parameter \"%V\"", &value[3]); > > + "invalid buffer value \"%V\"", &name); > > return NGX_CONF_ERROR; > > } > > > > @@ -1004,6 +1004,12 @@ > > ngx_uint_t i; > > ngx_http_log_fmt_t *fmt; > > > > + if (cf->cmd_type != NGX_HTTP_MAIN_CONF) { > > + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, > > + "the \"log_format\" directive may be used " > > + "only on \"http\" level"); > > + } > > + > > value = cf->args->elts; > > > > fmt = lmcf->formats.elts; > > > > Could you shed light on why the 'log_format' change was introduced? Since > it's a little bit confusing to me that the 'log_format' directive is > allowed in http/server/location, but on the other hand, it would be warned > if it's in a server/location block. > > I ask this question because in my situation, I have lots of separate > server{} specific configuration files which are included in the main > configuration file, and each server may have its own log_format. Specifying > 'log_format' in the server{} is quite handy because it keeps the changes > only in the included server specific file. 1. "log_format" was never documented to be supported on levels other than "http". 2. Surprisingly, "log_format" specified in one "location" makes it magically available anywhere below in the configuration, including other locations and other servers. 3. 
The commit you reference didn't change or prohibit anything, but made nginx emit a warning if you try to use "log_format" on levels other than "http", due to above mentioned side effects. After a commit, Maxim Dounin suggested to officially support "log_format" on levels other than "http" but it was never implemented. From zhuzhaoyuan at gmail.com Tue May 15 16:14:52 2012 From: zhuzhaoyuan at gmail.com (Joshua Zhu) Date: Wed, 16 May 2012 00:14:52 +0800 Subject: [nginx] svn commit: r4328 - trunk/src/http/modules In-Reply-To: <20120515160604.GA13923@lo0.su> References: <20111206210710.CD7FE3F9C1D@mail.nginx.com> <20120515160604.GA13923@lo0.su> Message-ID: Hi, On Wed, May 16, 2012 at 12:06 AM, Ruslan Ermilov wrote: > On Tue, May 15, 2012 at 12:56:17PM +0800, Joshua Zhu wrote: > > Hi Ruslan, > > > > On Wed, Dec 7, 2011 at 5:07 AM, wrote: > > > > > Author: ru > > > Date: 2011-12-06 21:07:10 +0000 (Tue, 06 Dec 2011) > > > New Revision: 4328 > > > > > > Log: > > > - Improved error message when parsing of the "buffer" parameter of the > > > "access_log" directive fails. > > > > > > - Added a warning if "log_format" is used in contexts other than > "http". 
> > > > > > > > > Modified: > > > trunk/src/http/modules/ngx_http_log_module.c > > > > > > Modified: trunk/src/http/modules/ngx_http_log_module.c > > > =================================================================== > > > --- trunk/src/http/modules/ngx_http_log_module.c 2011-12-06 > > > 15:49:40 UTC (rev 4327) > > > +++ trunk/src/http/modules/ngx_http_log_module.c 2011-12-06 > > > 21:07:10 UTC (rev 4328) > > > @@ -971,7 +971,7 @@ > > > > > > if (buf == NGX_ERROR) { > > > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > > - "invalid parameter \"%V\"", &value[3]); > > > + "invalid buffer value \"%V\"", &name); > > > return NGX_CONF_ERROR; > > > } > > > > > > @@ -1004,6 +1004,12 @@ > > > ngx_uint_t i; > > > ngx_http_log_fmt_t *fmt; > > > > > > + if (cf->cmd_type != NGX_HTTP_MAIN_CONF) { > > > + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, > > > + "the \"log_format\" directive may be used " > > > + "only on \"http\" level"); > > > + } > > > + > > > value = cf->args->elts; > > > > > > fmt = lmcf->formats.elts; > > > > > > > Could you shed light on why the 'log_format' change was introduced? Since > > it's a little bit confusing to me that the 'log_format' directive is > > allowed in http/server/location, but on the other hand, it would be > warned > > if it's in a server/location block. > > > > I ask this question because in my situation, I have lots of separate > > server{} specific configuration files which are included in the main > > configuration file, and each server may have its own log_format. > Specifying > > 'log_format' in the server{} is quite handy because it keeps the changes > > only in the included server specific file. > > 1. "log_format" was never documented to be supported on levels > other than "http". > > 2. Surprisingly, "log_format" specified in one "location" makes it > magically available anywhere below in the configuration, including > other locations and other servers. > > 3. 
The commit you reference didn't change or prohibit anything, but > made nginx emit a warning if you try to use "log_format" on levels > other than "http", due to above mentioned side effects. > > After a commit, Maxim Dounin suggested to officially support "log_format" > on levels other than "http" but it was never implemented. Now I see. Thank you for your kind explanation. Regards, -- Joshua Zhu Senior Software Engineer Server Platforms Team at Taobao From usirsiwal at verivue.com Wed May 16 02:16:14 2012 From: usirsiwal at verivue.com (Sirsiwal, Umesh) Date: Tue, 15 May 2012 22:16:14 -0400 Subject: Lua Variable access bug? Message-ID: Hello, We are seeing the following problem with nginx variable access from Lua. I have the following configuration: set $vv $http_host; set_by_lua $i ' return ngx.log(ngx.ERR, ngx.var.http_host)'; If I comment out the first line and send SIGHUP to the master process the logged variable becomes empty. #set $vv $http_host; set_by_lua $i ' return ngx.log(ngx.ERR, ngx.var.http_host)'; Note that if we start nginx with set $vv $http_host; commented there is no issue. The issue only exists if we start off accessing the variable from the configuration file and then remove the variable access. I think the issue exists because of the way ngx_http_variables_init_vars is written. It changes the flags in the static ngx_http_core_variables variable. During the first configuration read cycle where $http_host is indexed, the flag changes to indexed. During the second configuration read cycle where the $http_host variable should not be indexed, the flags still remain INDEXED. That confuses ngx_http_get_variable later. Thanks for any help.
-Umesh From ru at nginx.com Wed May 16 13:09:40 2012 From: ru at nginx.com (ru at nginx.com) Date: Wed, 16 May 2012 13:09:40 +0000 Subject: [nginx] svn commit: r4635 - in trunk/src: core http/modules/perl Message-ID: <20120516130940.974D73F9F86@mail.nginx.com> Author: ru Date: 2012-05-16 13:09:39 +0000 (Wed, 16 May 2012) New Revision: 4635 URL: http://trac.nginx.org/nginx/changeset/4635/nginx Log: Version bump. Modified: trunk/src/core/nginx.h trunk/src/http/modules/perl/nginx.pm Modified: trunk/src/core/nginx.h =================================================================== --- trunk/src/core/nginx.h 2012-05-15 14:24:09 UTC (rev 4634) +++ trunk/src/core/nginx.h 2012-05-16 13:09:39 UTC (rev 4635) @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1003000 -#define NGINX_VERSION "1.3.0" +#define nginx_version 1003001 +#define NGINX_VERSION "1.3.1" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" Modified: trunk/src/http/modules/perl/nginx.pm =================================================================== --- trunk/src/http/modules/perl/nginx.pm 2012-05-15 14:24:09 UTC (rev 4634) +++ trunk/src/http/modules/perl/nginx.pm 2012-05-16 13:09:39 UTC (rev 4635) @@ -50,7 +50,7 @@ HTTP_INSUFFICIENT_STORAGE ); -our $VERSION = '1.3.0'; +our $VERSION = '1.3.1'; require XSLoader; XSLoader::load('nginx', $VERSION); From ru at nginx.com Wed May 16 13:14:53 2012 From: ru at nginx.com (ru at nginx.com) Date: Wed, 16 May 2012 13:14:53 +0000 Subject: [nginx] svn commit: r4636 - trunk/src/http/modules Message-ID: <20120516131453.CBEFF3FA142@mail.nginx.com> Author: ru Date: 2012-05-16 13:14:53 +0000 (Wed, 16 May 2012) New Revision: 4636 URL: http://trac.nginx.org/nginx/changeset/4636/nginx Log: Added syntax checking of the second parameter of the "split_clients" directive. 
Modified: trunk/src/http/modules/ngx_http_split_clients_module.c Modified: trunk/src/http/modules/ngx_http_split_clients_module.c =================================================================== --- trunk/src/http/modules/ngx_http_split_clients_module.c 2012-05-16 13:09:39 UTC (rev 4635) +++ trunk/src/http/modules/ngx_http_split_clients_module.c 2012-05-16 13:14:53 UTC (rev 4636) @@ -138,6 +138,13 @@ } name = value[2]; + + if (name.len < 2 || name.data[0] != '$') { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid variable name \"%V\"", &name); + return NGX_CONF_ERROR; + } + name.len--; name.data++; From ru at nginx.com Wed May 16 13:22:03 2012 From: ru at nginx.com (ru at nginx.com) Date: Wed, 16 May 2012 13:22:03 +0000 Subject: [nginx] svn commit: r4637 - in trunk/src/http: . modules Message-ID: <20120516132203.C767A3F9F32@mail.nginx.com> Author: ru Date: 2012-05-16 13:22:03 +0000 (Wed, 16 May 2012) New Revision: 4637 URL: http://trac.nginx.org/nginx/changeset/4637/nginx Log: Capped the status code that may be returned with "return" and "try_files". 
Modified: trunk/src/http/modules/ngx_http_rewrite_module.c trunk/src/http/ngx_http_core_module.c Modified: trunk/src/http/modules/ngx_http_rewrite_module.c =================================================================== --- trunk/src/http/modules/ngx_http_rewrite_module.c 2012-05-16 13:14:53 UTC (rev 4636) +++ trunk/src/http/modules/ngx_http_rewrite_module.c 2012-05-16 13:22:03 UTC (rev 4637) @@ -485,6 +485,12 @@ } else { + if (ret->status > 999) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid return code \"%V\"", &value[1]); + return NGX_CONF_ERROR; + } + if (cf->args->nelts == 2) { return NGX_CONF_OK; } Modified: trunk/src/http/ngx_http_core_module.c =================================================================== --- trunk/src/http/ngx_http_core_module.c 2012-05-16 13:14:53 UTC (rev 4636) +++ trunk/src/http/ngx_http_core_module.c 2012-05-16 13:22:03 UTC (rev 4637) @@ -4662,7 +4662,7 @@ code = ngx_atoi(tf[i - 1].name.data + 1, tf[i - 1].name.len - 2); - if (code == NGX_ERROR) { + if (code == NGX_ERROR || code > 999) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid code \"%*s\"", tf[i - 1].name.len - 1, tf[i - 1].name.data); From ru at nginx.com Wed May 16 13:27:05 2012 From: ru at nginx.com (ru at nginx.com) Date: Wed, 16 May 2012 13:27:05 +0000 Subject: [nginx] svn commit: r4638 - in trunk/src/http: . modules Message-ID: <20120516132705.9D03A3FA0F9@mail.nginx.com> Author: ru Date: 2012-05-16 13:27:04 +0000 (Wed, 16 May 2012) New Revision: 4638 URL: http://trac.nginx.org/nginx/changeset/4638/nginx Log: Zero padded the returned and logged HTTP status code, and fixed possible buffer overrun in $status handling. 
Modified: trunk/src/http/modules/ngx_http_log_module.c trunk/src/http/ngx_http_header_filter_module.c Modified: trunk/src/http/modules/ngx_http_log_module.c =================================================================== --- trunk/src/http/modules/ngx_http_log_module.c 2012-05-16 13:22:03 UTC (rev 4637) +++ trunk/src/http/modules/ngx_http_log_module.c 2012-05-16 13:27:04 UTC (rev 4638) @@ -205,7 +205,7 @@ { ngx_string("msec"), NGX_TIME_T_LEN + 4, ngx_http_log_msec }, { ngx_string("request_time"), NGX_TIME_T_LEN + 4, ngx_http_log_request_time }, - { ngx_string("status"), 3, ngx_http_log_status }, + { ngx_string("status"), NGX_INT_T_LEN, ngx_http_log_status }, { ngx_string("bytes_sent"), NGX_OFF_T_LEN, ngx_http_log_bytes_sent }, { ngx_string("body_bytes_sent"), NGX_OFF_T_LEN, ngx_http_log_body_bytes_sent }, @@ -593,7 +593,7 @@ status = 0; } - return ngx_sprintf(buf, "%ui", status); + return ngx_sprintf(buf, "%03ui", status); } Modified: trunk/src/http/ngx_http_header_filter_module.c =================================================================== --- trunk/src/http/ngx_http_header_filter_module.c 2012-05-16 13:22:03 UTC (rev 4637) +++ trunk/src/http/ngx_http_header_filter_module.c 2012-05-16 13:27:04 UTC (rev 4638) @@ -445,7 +445,7 @@ b->last = ngx_copy(b->last, status_line->data, status_line->len); } else { - b->last = ngx_sprintf(b->last, "%ui", status); + b->last = ngx_sprintf(b->last, "%03ui", status); } *b->last++ = CR; *b->last++ = LF; From jamie at tomoyolinux.co.uk Wed May 16 17:01:47 2012 From: jamie at tomoyolinux.co.uk (Jamie Nguyen) Date: Wed, 16 May 2012 18:01:47 +0100 Subject: systemd and nginx custom script Message-ID: Hi, I am co-maintainer for nginx Fedora package. We would like to upstream our systemd service file. Could you consider including it in the nginx tarball? In other projects (e.g. rsyslog, mpd) the service file is only installed if "--with-systemdsystemunitdir=/usr/lib/systemd/system" was passed to the configure script. 
The service file we currently use should work universally across systems using systemd: $ cat /lib/systemd/system/nginx.service [Unit] Description=A high performance web server and reverse proxy server After=syslog.target network.target remote-fs.target nss-lookup.target [Service] Type=forking PIDFile=/run/nginx.pid ExecStartPre=/usr/sbin/nginx -t ExecStart=/usr/sbin/nginx ExecReload=/usr/sbin/nginx -s reload ExecStop=/usr/sbin/nginx -s quit PrivateTmp=true [Install] WantedBy=multi-user.target Most distributions implement a "zero-downtime upgrade" option in their initscript. This cannot currently be implemented in systemd, so I implemented it in a separate script. I paste the script below for possible inclusion in http://wiki.nginx.org/CommandLine (Perhaps you might even consider including this script (or something similar) in the nginx tarball as it would be helpful for systems using systemd.) #!/bin/sh [ ! -f /run/nginx.pid ] && exit 1 echo "Start new nginx master..." /bin/systemctl kill --signal=USR2 nginx.service sleep 5 [ ! -f /run/nginx.pid.oldbin ] && sleep 5 if [ ! -f /run/nginx.pid.oldbin ]; then echo "Failed to start new nginx master." exit 1 fi echo "Stop old nginx master gracefully..." oldpid=`cat /run/nginx.pid.oldbin 2>/dev/null` /bin/kill -s QUIT $oldpid 2>/dev/null Kind regards, Jamie From mdounin at mdounin.ru Wed May 16 17:28:32 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 May 2012 21:28:32 +0400 Subject: systemd and nginx custom script In-Reply-To: References: Message-ID: <20120516172832.GF31671@mdounin.ru> Hello! On Wed, May 16, 2012 at 06:01:47PM +0100, Jamie Nguyen wrote: > Hi, I am co-maintainer for nginx Fedora package. We would like to > upstream our systemd service file. Could you consider including it in > the nginx tarball? We don't usually include any init scripts in distribution and usually rely on package maintainers to provide one appropriate for a system in question.
If you think the file will be usable for ones who compile nginx from source, you may place it here on wiki: http://wiki.nginx.org/InitScripts [...] > ExecReload=/usr/sbin/nginx -s reload > ExecStop=/usr/sbin/nginx -s quit Just a side note: using "-s ..." on unix might not be a good idea, as it needs nginx binary compatible to one already running to be able to parse config for a pid file name and send the appropriate signal. It was introduced mainly for Windows. On unix-like systems it's more fail-safe to explicitly send appropriate signals. Just a side note 2: your service file obviously relies on non-default configure arguments, you may want to explicitly state this when placing it on wiki (or change the file to match defaults). Maxim Dounin From jamie at tomoyolinux.co.uk Wed May 16 20:05:32 2012 From: jamie at tomoyolinux.co.uk (Jamie Nguyen) Date: Wed, 16 May 2012 21:05:32 +0100 Subject: systemd and nginx custom script In-Reply-To: <20120516172832.GF31671@mdounin.ru> References: <20120516172832.GF31671@mdounin.ru> Message-ID: On 16 May 2012 18:28, Maxim Dounin wrote: > Hello! > > On Wed, May 16, 2012 at 06:01:47PM +0100, Jamie Nguyen wrote: > >> Hi, I am co-maintainer for nginx Fedora package. We would like to >> upstream our systemd service file. Could you consider including it in >> the nginx tarball? > > We don't usually include any init scripts in distribution and > usually rely on package maintainers to provide one appropriate for > a system in question. No problem. > If you think the file will be usable for ones who compile > nginx from source, you may place it here on wiki: > > http://wiki.nginx.org/InitScripts Good idea. Done!: http://wiki.nginx.org/FedoraSystemdServiceFile >> ExecReload=/usr/sbin/nginx -s reload >> ExecStop=/usr/sbin/nginx -s quit > > Just a side note: using "-s ..." on unix might not be a good idea, > as it needs nginx binary compatible to one already running to be > able to parse config for a pid file name and send the appropriate > signal. It was introduced mainly for Windows. On unix-like > systems it's more fail-safe to explicitly send appropriate > signals. No problem. I've changed it to: ExecReload=/bin/kill -s HUP $MAINPID ExecStop=/bin/kill -s QUIT $MAINPID > Just a side note 2: your service file obviously relies on > non-default configure arguments, you may want to explicitly state > this when placing it on wiki (or change the file to match > defaults). I've added a comment in the wiki page. Thanks very much for your advice :-) Kind regards, Jamie From agentzh at gmail.com Thu May 17 08:35:24 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 17 May 2012 16:35:24 +0800 Subject: Lua Variable access bug? In-Reply-To: References: Message-ID: On Wed, May 16, 2012 at 10:16 AM, Sirsiwal, Umesh wrote: > > I think the issue exists because of the way ngx_http_variables_init_vars is > written. It changes the flags in the static ngx_http_core_variables variable. > During the first configuration read cycle where $http_host is indexed, the > flag changes to indexed. During the second configuration read cycle where > the $http_host variable should not be indexed, the flags still remain > INDEXED. That confuses ngx_http_get_variable later. > This is indeed an issue in the nginx core and it should affect other modules other than ngx_lua. I think the ngx_http_variables_add_core_vars function should pass a copy of the ngx_http_variable_t entries in the static variable ngx_http_core_variables. Could you please try the patch below for nginx 1.0.15? I've confirmed that it can also be applied cleanly to nginx 1.2.0. Thanks!
-agentzh --- nginx-1.0.15/src/http/ngx_http_variables.c 2012-03-05 20:36:51.000000000 +0800 +++ nginx-1.0.15-patched/src/http/ngx_http_variables.c 2012-05-17 16:21:42.120968722 +0800 @@ -1898,6 +1898,7 @@ { ngx_int_t rc; ngx_http_variable_t *v; + ngx_http_variable_t *value; ngx_http_core_main_conf_t *cmcf; cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module); @@ -1918,7 +1919,14 @@ } for (v = ngx_http_core_variables; v->name.len; v++) { - rc = ngx_hash_add_key(cmcf->variables_keys, &v->name, v, + value = ngx_palloc(cf->pool, sizeof(ngx_http_variable_t)); + if (value == NULL) { + return NGX_ERROR; + } + + *value = *v; + + rc = ngx_hash_add_key(cmcf->variables_keys, &value->name, value, NGX_HASH_READONLY_KEY); if (rc == NGX_OK) { -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-1.0.15-add_core_vars_polluting_globals.patch Type: application/octet-stream Size: 909 bytes Desc: not available URL: From mdounin at mdounin.ru Thu May 17 09:02:14 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 May 2012 13:02:14 +0400 Subject: Lua Variable access bug? In-Reply-To: References: Message-ID: <20120517090214.GL31671@mdounin.ru> Hello! On Thu, May 17, 2012 at 04:35:24PM +0800, agentzh wrote: > On Wed, May 16, 2012 at 10:16 AM, Sirsiwal, Umesh wrote: > > > > I think issue exists because of the way ngx_http_variables_init_vars is > > written. It changes the flags in the static ngx_http_core_variables variable. > > During the first configuration read cycle where $http_host is indexed, the > > flag changes to indexed. During the second configuration read cycle where > > the the $http_host variable should not be indexed, the flags still remain > > INDEXED. That confuses ngx_http_get_variable later. > > > > This is indeed an issue in the nginx core and it should affect other > modules other than ngx_lua. 
> > I think the ngx_http_variables_add_core_vars function should pass a > copy of the ngx_http_variable_t entries in the static variable > ngx_http_core_variables. > > Could you please try the patch below for nginx 1.0.15? I've confirmed > that it can also be applied cleanly to nginx 1.2.0. > > Thanks! > -agentzh > > --- nginx-1.0.15/src/http/ngx_http_variables.c 2012-03-05 > 20:36:51.000000000 +0800 > +++ nginx-1.0.15-patched/src/http/ngx_http_variables.c 2012-05-17 > 16:21:42.120968722 +0800 > @@ -1898,6 +1898,7 @@ > { > ngx_int_t rc; > ngx_http_variable_t *v; > + ngx_http_variable_t *value; > ngx_http_core_main_conf_t *cmcf; > > cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module); > @@ -1918,7 +1919,14 @@ > } > > for (v = ngx_http_core_variables; v->name.len; v++) { > - rc = ngx_hash_add_key(cmcf->variables_keys, &v->name, v, > + value = ngx_palloc(cf->pool, sizeof(ngx_http_variable_t)); > + if (value == NULL) { > + return NGX_ERROR; > + } > + > + *value = *v; > + > + rc = ngx_hash_add_key(cmcf->variables_keys, &value->name, value, > NGX_HASH_READONLY_KEY); > > if (rc == NGX_OK) { Shouldn't it be --- a/src/http/ngx_http_variables.c +++ b/src/http/ngx_http_variables.c @@ -2072,6 +2072,11 @@ ngx_http_variables_init_vars(ngx_conf_t v = cmcf->variables.elts; key = cmcf->variables_keys->keys.elts; + for (n = 0; n < cmcf->variables_keys->keys.nelts; n++) { + av = key[n].value; + av->flags &= ~NGX_HTTP_VAR_INDEXED; + } + for (i = 0; i < cmcf->variables.nelts; i++) { for (n = 0; n < cmcf->variables_keys->keys.nelts; n++) { instead? (completely untested) Maxim Dounin From alek.storm at gmail.com Thu May 17 09:43:29 2012 From: alek.storm at gmail.com (Alek Storm) Date: Thu, 17 May 2012 04:43:29 -0500 Subject: Upcoming SPDY support details Message-ID: Hi all, I'm developing SPDY (draft 2) support for Tornado, a Python web framework that often uses nginx as a reverse proxy. 
AFAICT, nginx will roll out its own SPDY support in 1.3, to be released at the end of May or in early June. I'd like Tornado to complement nginx's design, but I can't find any details of its SPDY implementation online or in svn, so I'll ask a few questions here. 1. To avoid the significant overhead of an SSL connection, nginx communicates with the server behind the reverse proxy on unencrypted HTTP. However, whether to use HTTP or SPDY framing is normally decided via NPN negotiation during the SSL handshake. Since the backend server can't distinguish between a forwarded HTTP or SPDY connection without the SSL layer, will there be a mechanism for delegating HTTP connections to one address/port, and SPDY to another? Alternately, will nginx serve as an HTTP<=>SPDY gateway, so that all requests appear to be SPDY to the backend server? 2. Will there be any way to take advantage of nginx's caching when pushing static resources that would normally be served by it? For example, the backend server could send a SYN_STREAM frame with the UNIDIRECTIONAL flag set and only the "url" header included - if the URL points to a location that nginx would serve, nginx takes over, filling in the rest of the headers and serving the file's contents. I'd appreciate any guidance you may have. Thanks, Alek Storm From mdounin at mdounin.ru Thu May 17 10:24:54 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 May 2012 14:24:54 +0400 Subject: [PATCH] nginx does not close the connection for 412 responses under extreme conditions In-Reply-To: <20120512171246.GB31671@mdounin.ru> References: <20120512085054.GR31671@mdounin.ru> <20120512124757.GW31671@mdounin.ru> <20120512171246.GB31671@mdounin.ru> Message-ID: <20120517102453.GO31671@mdounin.ru> Hello! On Sat, May 12, 2012 at 09:12:46PM +0400, Maxim Dounin wrote: [...]
> I'll take a closer look at the patch later to make sure it doesn't > break various image filter use cases (likely no, but as I already > said filter finalization is at best fragile) and to see if it's > also possible to avoid response truncation in case of error_page > used after filter finalization. Production use of 1.3.0 revealed that it indeed breaks filter finalization in config like this location /image/ { error_page 415 = /zero; image_filter crop $arg_w $arg_h; proxy_pass http://127.0.0.1:8080/1m?; proxy_store /tmp/store/$uri; } location /zero { return 204; } In case of 415 from image_filter request is finalized in rewrite module, and 1) due to filter_finalize check r->write_event_handler remains set to ngx_http_core_run_phases(); 2) due to proxy_store NGX_ERROR is ignored by upstream, and reading from upstream continues. Then on write event reported for a client ngx_http_core_run_phases() is triggered again, ngx_http_finalize_request() is called (incorrectly) causing another ngx_http_finalize_connection() due to filter_finalize set. This eventually results in segfault once request is finalized from upstream module. Michael Monashov, who reported the issue, is currently testing the following patch (against 1.3.0): --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -1933,8 +1933,6 @@ ngx_http_finalize_request(ngx_http_reque if (rc == NGX_OK && r->filter_finalize) { c->error = 1; - ngx_http_finalize_connection(r); - return; } if (rc == NGX_DECLINED) { It should resolve the original problem with socket leak but let things handle correctly the above scenario as well. Could you please test if it works for you?
Maxim Dounin From agentzh at gmail.com Thu May 17 10:48:24 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 17 May 2012 18:48:24 +0800 Subject: [PATCH] nginx does not close the connection for 412 responses under extreme conditions In-Reply-To: <20120517102453.GO31671@mdounin.ru> References: <20120512085054.GR31671@mdounin.ru> <20120512124757.GW31671@mdounin.ru> <20120512171246.GB31671@mdounin.ru> <20120517102453.GO31671@mdounin.ru> Message-ID: On Thu, May 17, 2012 at 6:24 PM, Maxim Dounin wrote: > > Production use of 1.3.0 revealed that it indeed breaks filter > finalization in config like this Oh, sorry about that :P > Michael Monashov, who reported the issue, is currently testing the > following patch (against 1.3.0): > [...] > > It should resolve the original problem with socket leak but let > things handle correctly the above scenario as well. Could you > please test if it works for you? > I've just tested this patch and it works for me :) Thanks! -agentzh From ne at vbart.ru Thu May 17 11:23:55 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 17 May 2012 15:23:55 +0400 Subject: Upcoming SPDY support details In-Reply-To: References: Message-ID: <201205171523.55975.ne@vbart.ru> On Thursday 17 May 2012 13:43:29 Alek Storm wrote: > Hi all, > > I'm developing SPDY (draft 2) support for Tornado, a Python web framework > that often uses nginx as a reverse proxy. AFAICT, nginx will roll out its > own SPDY support in 1.3, to be released at the end of May or in early June. > I'd like Tornado to complement nginx's design, but I can't find any details > of its SPDY implementation online or in svn, so I'll ask a few questions > here. > > 1. To avoid the significant overhead of an SSL connection, nginx > communicates with the server behind the reverse proxy on unencrypted HTTP. > However, whether to use HTTP or SPDY framing is normally decided via NPN > negotiation during the SSL handshake.
Since the backend server can't > distinguish between a forwarded HTTP or SPDY connection without the SSL > layer, will there be a mechanism for delegating HTTP connections to one > address/port, and SPDY to another? Alternately, will nginx serve as an > HTTP<=>SPDY gateway, so that all requests appear to be SPDY to the backend > server? First implementation of SPDY support will work only on frontend side. So, nginx will be able to talk with backends by HTTP, FastCGI, uwsgi, or SCGI protocols. Also, we will add a variable in nginx configuration, that will indicate client connection by spdy protocol in case if someone wants to write it in the log or notify the application by sending special header. > 2. Will there be any way to take advantage of nginx's caching when pushing > static resources that would normally be served by it? For example, the > backend server could send a SYN_STREAM frame with the UNIDIRECTIONAL flag > set and only the "url" header included - if the URL points to a location > that nginx would serve, nginx takes over, filling in the rest of the > headers and serving the file's contents. > First implementation will have no support of SPDY server push. I believe that the next step is to add pushing by special response header from backend (like "X-Accel-Redirect" but with the list of URIs to push). wbr, Valentin V. Bartenev From agentzh at gmail.com Thu May 17 11:42:45 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 17 May 2012 19:42:45 +0800 Subject: Lua Variable access bug? In-Reply-To: <20120517090214.GL31671@mdounin.ru> References: <20120517090214.GL31671@mdounin.ru> Message-ID: On Thu, May 17, 2012 at 5:02 PM, Maxim Dounin wrote: > Shouldn't it be > > --- a/src/http/ngx_http_variables.c > +++ b/src/http/ngx_http_variables.c > @@ -2072,6 +2072,11 @@ ngx_http_variables_init_vars(ngx_conf_t > v = cmcf->variables.elts; > key = cmcf->variables_keys->keys.elts; > > + for (n = 0; n < cmcf->variables_keys->keys.nelts; n++) { > + av = key[n].value; > + av->flags &= ~NGX_HTTP_VAR_INDEXED; > + } > + > for (i = 0; i < cmcf->variables.nelts; i++) { > > for (n = 0; n < cmcf->variables_keys->keys.nelts; n++) { > > > instead? > Yes, this patch is better and is more efficient :) > (completely untested) > I've tested it on my side and it indeed solves this problem. Thanks! -agentzh From alek.storm at gmail.com Thu May 17 12:06:21 2012 From: alek.storm at gmail.com (Alek Storm) Date: Thu, 17 May 2012 07:06:21 -0500 Subject: Upcoming SPDY support details In-Reply-To: <201205171523.55975.ne@vbart.ru> References: <201205171523.55975.ne@vbart.ru> Message-ID: On Thu, May 17, 2012 at 6:23 AM, Valentin V. Bartenev wrote: > First implementation of SPDY support will work only on frontend side. So, > nginx > will be able to talk with backends by HTTP, FastCGI, uwsgi, or SCGI > protocols. > Got it. Looking forward to backend SPDY support, so crucial advantages like multiplexing are preserved. Will nginx at least add headers like "X-Priority" to the forwarded request? > First implementation will have no support of SPDY server push. I believe > that > the next step is to add pushing by special response header from backend > (like > "X-Accel-Redirect" but with the list of URIs to push). > That'll work, but am I correct in assuming that the response containing "X-Accel-Redirect" is different from the response to the original request? If they are the same, then nginx will not be notified of the URIs to push until the server is ready to send the headers for the response to the original request. This would be more restrictive than SPDY's semantics, which allow pushing resources before the SYN_REPLY frame for the original request is even sent. Alek From mdounin at mdounin.ru Thu May 17 13:06:49 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 May 2012 17:06:49 +0400 Subject: Lua Variable access bug?
In-Reply-To: References: <20120517090214.GL31671@mdounin.ru> Message-ID: <20120517130649.GS31671@mdounin.ru> Hello! On Thu, May 17, 2012 at 07:42:45PM +0800, agentzh wrote: > On Thu, May 17, 2012 at 5:02 PM, Maxim Dounin wrote: > > Shouldn't it be > > > > --- a/src/http/ngx_http_variables.c > > +++ b/src/http/ngx_http_variables.c > > @@ -2072,6 +2072,11 @@ ngx_http_variables_init_vars(ngx_conf_t > > v = cmcf->variables.elts; > > key = cmcf->variables_keys->keys.elts; > > > > + for (n = 0; n < cmcf->variables_keys->keys.nelts; n++) { > > + av = key[n].value; > > + av->flags &= ~NGX_HTTP_VAR_INDEXED; > > + } > > + > > for (i = 0; i < cmcf->variables.nelts; i++) { > > > > for (n = 0; n < cmcf->variables_keys->keys.nelts; n++) { > > > > > > instead? > > > > Yes, this patch is better and is more efficient :) No, disregard this patch. It will break things if new configuration will be rejected for some reason, as global in master will be left in an inconsistent state. As a result incorrect worker processes will be spawned e.g. after abnormal termination of a worker process (or after SIGWINCH + SIGHUP sequence). Allocating a temporary copy of a variable looks like the only correct way. Maxim Dounin From agentzh at gmail.com Thu May 17 13:13:34 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 17 May 2012 21:13:34 +0800 Subject: Lua Variable access bug? In-Reply-To: <20120517130649.GS31671@mdounin.ru> References: <20120517090214.GL31671@mdounin.ru> <20120517130649.GS31671@mdounin.ru> Message-ID: On Thu, May 17, 2012 at 9:06 PM, Maxim Dounin wrote: > > No, disregard this patch. It will break things if new > configuration will be rejected for some reason, as global in > master will be left in an inconsistent state. As a result > incorrect worker processes will be spawned e.g. after abnormal > termination of a worker process (or after SIGWINCH + SIGHUP > sequence).
> > Allocating a temporary copy of a variable looks like the only > correct way. > Indeed. Modifying static/global variables is almost always dangerous for configuration reloading :) Thanks! -agentzh From ne at vbart.ru Thu May 17 13:17:56 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 17 May 2012 17:17:56 +0400 Subject: Upcoming SPDY support details In-Reply-To: References: <201205171523.55975.ne@vbart.ru> Message-ID: <201205171717.56692.ne@vbart.ru> On Thursday 17 May 2012 16:06:21 Alek Storm wrote: > On Thu, May 17, 2012 at 6:23 AM, Valentin V. Bartenev wrote: > > First implementation of SPDY support will work only on frontend side. So, > > nginx > > will be able to talk with backends by HTTP, FastCGI, uwsgi, or SCGI > > protocols. > > Got it. Looking forward to backend SPDY support, so crucial advantages like > multiplexing are preserved. Will nginx at least add headers like > "X-Priority" to the forwarded request? Surely, I added the $spdy_request_priority configuration variable and it is suitable for such tasks. > > First implementation will have no support of SPDY server push. I believe > > that > > the next step is to add pushing by special response header from backend > > (like > > "X-Accel-Redirect" but with the list of URIs to push). > > That'll work, but am I correct in assuming that the response containing > "X-Accel-Redirect" is different from the response to the original request? In the simplest implementation they will be the same. > If they are the same, then nginx will not be notified of the URIs to push > until the server is ready to send the headers for the response to the > original request. This would be more restrictive than SPDY's semantics, > which allow pushing resources before the SYN_REPLY frame for the original > request is even sent. Yes, you're right. And this question is still undecided and requires further investigation. I think that we also need some sort of configuration directives for server pushing. wbr, Valentin V.
Bartenev From usirsiwal at verivue.com Thu May 17 13:36:48 2012 From: usirsiwal at verivue.com (Sirsiwal, Umesh) Date: Thu, 17 May 2012 09:36:48 -0400 Subject: Lua Variable access bug? In-Reply-To: References: <20120517090214.GL31671@mdounin.ru> <20120517130649.GS31671@mdounin.ru> Message-ID: <4FB4FEF0.3030705@verivue.com> On my end the agentzh's patch is working correctly. -Umesh On 05/17/2012 09:13 AM, agentzh wrote: > On Thu, May 17, 2012 at 9:06 PM, Maxim Dounin wrote: >> No, disregard this patch. It will break things if new >> configuration will be rejected for some reason, as global in >> master will be left in an inconsistent state. As a result >> incorrect worker processes will be spawn e.g. after abnormal >> termination of a worker process (or after SIGWINCH + SIGHUP >> sequence). >> >> Allocating temporary copy of a variable looks like the only >> correct way. >> > Indeed. Modifying static/global variables is almost always dangerous > for configuration reloading :) > > Thanks! > -agentzh > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From vbart at nginx.com Thu May 17 13:47:04 2012 From: vbart at nginx.com (vbart at nginx.com) Date: Thu, 17 May 2012 13:47:04 +0000 Subject: [nginx] svn commit: r4639 - trunk/src/core Message-ID: <20120517134704.CED4E3F9F90@mail.nginx.com> Author: vbart Date: 2012-05-17 13:47:04 +0000 (Thu, 17 May 2012) New Revision: 4639 URL: http://trac.nginx.org/nginx/changeset/4639/nginx Log: Fixed the ngx_regex.h header file compatibility with C++. 
Modified: trunk/src/core/ngx_regex.c trunk/src/core/ngx_regex.h Modified: trunk/src/core/ngx_regex.c =================================================================== --- trunk/src/core/ngx_regex.c 2012-05-16 13:27:04 UTC (rev 4638) +++ trunk/src/core/ngx_regex.c 2012-05-17 13:47:04 UTC (rev 4639) @@ -152,7 +152,7 @@ return NGX_ERROR; } - rc->regex->pcre = re; + rc->regex->code = re; /* do not study at runtime */ @@ -367,7 +367,7 @@ i = 0; } - elts[i].regex->extra = pcre_study(elts[i].regex->pcre, opt, &errstr); + elts[i].regex->extra = pcre_study(elts[i].regex->code, opt, &errstr); if (errstr != NULL) { ngx_log_error(NGX_LOG_ALERT, cycle->log, 0, @@ -380,7 +380,7 @@ int jit, n; jit = 0; - n = pcre_fullinfo(elts[i].regex->pcre, elts[i].regex->extra, + n = pcre_fullinfo(elts[i].regex->code, elts[i].regex->extra, PCRE_INFO_JIT, &jit); if (n != 0 || jit != 1) { Modified: trunk/src/core/ngx_regex.h =================================================================== --- trunk/src/core/ngx_regex.h 2012-05-16 13:27:04 UTC (rev 4638) +++ trunk/src/core/ngx_regex.h 2012-05-17 13:47:04 UTC (rev 4639) @@ -21,7 +21,7 @@ typedef struct { - pcre *pcre; + pcre *code; pcre_extra *extra; } ngx_regex_t; @@ -50,7 +50,7 @@ ngx_int_t ngx_regex_compile(ngx_regex_compile_t *rc); #define ngx_regex_exec(re, s, captures, size) \ - pcre_exec(re->pcre, re->extra, (const char *) (s)->data, (s)->len, 0, 0, \ + pcre_exec(re->code, re->extra, (const char *) (s)->data, (s)->len, 0, 0, \ captures, size) #define ngx_regex_exec_n "pcre_exec()" From alek.storm at gmail.com Thu May 17 13:52:30 2012 From: alek.storm at gmail.com (Alek Storm) Date: Thu, 17 May 2012 08:52:30 -0500 Subject: Upcoming SPDY support details In-Reply-To: <201205171717.56692.ne@vbart.ru> References: <201205171523.55975.ne@vbart.ru> <201205171717.56692.ne@vbart.ru> Message-ID: That's perfectly reasonable. Thanks for the quick and clear answers. Is there somewhere online the in-progress SPDY implementation has been posted? 
If not, what's the current estimated date of release? Alek On Thu, May 17, 2012 at 8:17 AM, Valentin V. Bartenev wrote: > On Thursday 17 May 2012 16:06:21 Alek Storm wrote: > > On Thu, May 17, 2012 at 6:23 AM, Valentin V. Bartenev > wrote: > > > First implementation of SPDY support will work only on frontend side. > So, > > > nginx > > > will be able to talk with backends by HTTP, FastCGI, uwsgi, or SCGI > > > protocols. > > > > Got it. Looking forward to backend SPDY support, so crucial advantages > like > > multiplexing are preserved. Will nginx at least add headers like > > "X-Priority" to the forwarded request? > > Surely, I added the $spdy_request_priority configuration variable and it is > suitable for such tasks. > > > > that > > > the next step is to add pushing by special response header from backend > > > (like > > > "X-Accel-Redirect" but with the list of URIs to push). > > > > That'll work, but am I correct in assuming that the response containing > > "X-Accel-Redirect" is different from the response to the original > request? > > In the most simplest implementation they will be the same. > > > If they are the same, then nginx will not be notified of the URIs to push > > until the server is ready to send the headers for the response to the > > original request. This would be more restrictive than SPDY's semantics, > > which allow pushing resources before the SYN_REPLY frame for the original > > request is even sent. > > Yes, you're right. And this question is still undecided and requires > further > investigation. I think that we also need some sort of configuration > directives > for server pushing. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vbart at nginx.com Thu May 17 15:12:45 2012 From: vbart at nginx.com (vbart at nginx.com) Date: Thu, 17 May 2012 15:12:45 +0000 Subject: [nginx] svn commit: r4640 - trunk/auto Message-ID: <20120517151245.E9F793F9F66@mail.nginx.com> Author: vbart Date: 2012-05-17 15:12:45 +0000 (Thu, 17 May 2012) New Revision: 4640 URL: http://trac.nginx.org/nginx/changeset/4640/nginx Log: Fixed building --with-cpp_test_module on some systems. Modified: trunk/auto/modules Modified: trunk/auto/modules =================================================================== --- trunk/auto/modules 2012-05-17 13:47:04 UTC (rev 4639) +++ trunk/auto/modules 2012-05-17 15:12:45 UTC (rev 4640) @@ -458,6 +458,7 @@ if [ $NGX_CPP_TEST = YES ]; then NGX_MISC_SRCS="$NGX_MISC_SRCS $NGX_CPP_TEST_SRCS" + CORE_LIBS="$CORE_LIBS -lstdc++" fi From mdounin at mdounin.ru Thu May 17 17:41:40 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Thu, 17 May 2012 17:41:40 +0000 Subject: [nginx] svn commit: r4641 - trunk/src/http Message-ID: <20120517174140.CCC5D3F9FB5@mail.nginx.com> Author: mdounin Date: 2012-05-17 17:41:40 +0000 (Thu, 17 May 2012) New Revision: 4641 URL: http://trac.nginx.org/nginx/changeset/4641/nginx Log: Fixed segfault with filter_finalize introduced in r4621 (1.3.0). Example configuration to reproduce: location /image/ { error_page 415 = /zero; image_filter crop 100 100; proxy_pass http://127.0.0.1:8080; proxy_store on; } location /zero { return 204; } The problem appeared if upstream returned (big enough) non-image file, causing 415 to be generated by image filter. 
Modified: trunk/src/http/ngx_http_request.c Modified: trunk/src/http/ngx_http_request.c =================================================================== --- trunk/src/http/ngx_http_request.c 2012-05-17 15:12:45 UTC (rev 4640) +++ trunk/src/http/ngx_http_request.c 2012-05-17 17:41:40 UTC (rev 4641) @@ -1933,8 +1933,6 @@ if (rc == NGX_OK && r->filter_finalize) { c->error = 1; - ngx_http_finalize_connection(r); - return; } if (rc == NGX_DECLINED) { From mdounin at mdounin.ru Thu May 17 18:10:34 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Thu, 17 May 2012 18:10:34 +0000 Subject: [nginx] svn commit: r4642 - trunk/src/http Message-ID: <20120517181035.308153F9FCE@mail.nginx.com> Author: mdounin Date: 2012-05-17 18:10:34 +0000 (Thu, 17 May 2012) New Revision: 4642 URL: http://trac.nginx.org/nginx/changeset/4642/nginx Log: Fixed core variables dynamic access after reconfiguration. If variable was indexed in previous configuration but not in current one, the NGX_HTTP_VAR_INDEXED flag was left set and confused ngx_http_get_variable(). Patch by Yichun Zhang (agentzh), slightly modified. 
Modified: trunk/src/http/ngx_http_variables.c Modified: trunk/src/http/ngx_http_variables.c =================================================================== --- trunk/src/http/ngx_http_variables.c 2012-05-17 17:41:40 UTC (rev 4641) +++ trunk/src/http/ngx_http_variables.c 2012-05-17 18:10:34 UTC (rev 4642) @@ -2016,7 +2016,7 @@ ngx_http_variables_add_core_vars(ngx_conf_t *cf) { ngx_int_t rc; - ngx_http_variable_t *v; + ngx_http_variable_t *cv, *v; ngx_http_core_main_conf_t *cmcf; cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module); @@ -2036,7 +2036,14 @@ return NGX_ERROR; } - for (v = ngx_http_core_variables; v->name.len; v++) { + for (cv = ngx_http_core_variables; cv->name.len; cv++) { + v = ngx_palloc(cf->pool, sizeof(ngx_http_variable_t)); + if (v == NULL) { + return NGX_ERROR; + } + + *v = *cv; + rc = ngx_hash_add_key(cmcf->variables_keys, &v->name, v, NGX_HASH_READONLY_KEY); From mdounin at mdounin.ru Thu May 17 18:11:41 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 May 2012 22:11:41 +0400 Subject: Lua Variable access bug? In-Reply-To: References: Message-ID: <20120517181141.GV31671@mdounin.ru> Hello! On Thu, May 17, 2012 at 04:35:24PM +0800, agentzh wrote: > On Wed, May 16, 2012 at 10:16 AM, Sirsiwal, Umesh wrote: > > > > I think issue exists because of the way ngx_http_variables_init_vars is > > written. It changes the flags in the static ngx_http_core_variables variable. > > During the first configuration read cycle where $http_host is indexed, the > > flag changes to indexed. During the second configuration read cycle where > > the the $http_host variable should not be indexed, the flags still remain > > INDEXED. That confuses ngx_http_get_variable later. > > > > This is indeed an issue in the nginx core and it should affect other > modules other than ngx_lua. > > I think the ngx_http_variables_add_core_vars function should pass a > copy of the ngx_http_variable_t entries in the static variable > ngx_http_core_variables. 
> > Could you please try the patch below for nginx 1.0.15? I've confirmed > that it can also be applied cleanly to nginx 1.2.0. > > Thanks! > -agentzh > > --- nginx-1.0.15/src/http/ngx_http_variables.c 2012-03-05 > 20:36:51.000000000 +0800 > +++ nginx-1.0.15-patched/src/http/ngx_http_variables.c 2012-05-17 > 16:21:42.120968722 +0800 > @@ -1898,6 +1898,7 @@ > { > ngx_int_t rc; > ngx_http_variable_t *v; > + ngx_http_variable_t *value; > ngx_http_core_main_conf_t *cmcf; > > cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module); > @@ -1918,7 +1919,14 @@ > } > > for (v = ngx_http_core_variables; v->name.len; v++) { > - rc = ngx_hash_add_key(cmcf->variables_keys, &v->name, v, > + value = ngx_palloc(cf->pool, sizeof(ngx_http_variable_t)); > + if (value == NULL) { > + return NGX_ERROR; > + } > + > + *value = *v; > + > + rc = ngx_hash_add_key(cmcf->variables_keys, &value->name, value, > NGX_HASH_READONLY_KEY); > > if (rc == NGX_OK) { Slightly modified version of this patch committed, thanks. Maxim Dounin From ne at vbart.ru Thu May 17 18:29:25 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 17 May 2012 22:29:25 +0400 Subject: Upcoming SPDY support details In-Reply-To: References: <201205171717.56692.ne@vbart.ru> Message-ID: <201205172229.25178.ne@vbart.ru> On Thursday 17 May 2012 17:52:30 Alek Storm wrote: > That's perfectly reasonable. Thanks for the quick and clear answers. Is > there somewhere online the in-progress SPDY implementation has been posted? > If not, what's the current estimated date of release? > We usually post the code in public only after the completion of the development cycle of some full-fledged functionality, internal testing and review (sometimes double or even triple code review). SPDY is a very big one, so it takes time. As you've already seen on the roadmap ( http://trac.nginx.org/nginx/roadmap ), the end of May - early June is our goal. wbr, Valentin V.
Bartenev From zls.sogou at gmail.com Fri May 18 02:56:47 2012 From: zls.sogou at gmail.com (lanshun zhou) Date: Fri, 18 May 2012 10:56:47 +0800 Subject: Lua Variable access bug? In-Reply-To: <20120517181141.GV31671@mdounin.ru> References: <20120517181141.GV31671@mdounin.ru> Message-ID: Yeah, and it's not safe to save variable index in a global variable, like ngx_http_userid_reset_index in userid filter module. 2012/5/18 Maxim Dounin : > Hello! > > On Thu, May 17, 2012 at 04:35:24PM +0800, agentzh wrote: > >> On Wed, May 16, 2012 at 10:16 AM, Sirsiwal, Umesh wrote: >> > >> > I think issue exists because of the way ngx_http_variables_init_vars is >> > written. It changes the flags in the static ngx_http_core_variables variable. >> > During the first configuration read cycle where $http_host is indexed, the >> > flag changes to indexed. During the second configuration read cycle where >> > the $http_host variable should not be indexed, the flags still remain >> > INDEXED. That confuses ngx_http_get_variable later. >> > >> >> This is indeed an issue in the nginx core and it should affect other >> modules other than ngx_lua. >> >> I think the ngx_http_variables_add_core_vars function should pass a >> copy of the ngx_http_variable_t entries in the static variable >> ngx_http_core_variables. >> >> Could you please try the patch below for nginx 1.0.15? I've confirmed >> that it can also be applied cleanly to nginx 1.2.0. >> >> Thanks! >> -agentzh >> >> --- nginx-1.0.15/src/http/ngx_http_variables.c 2012-03-05 >> 20:36:51.000000000 +0800 >> +++ nginx-1.0.15-patched/src/http/ngx_http_variables.c 2012-05-17 >> 16:21:42.120968722 +0800 >> @@ -1898,6 +1898,7 @@ >> { >> ngx_int_t rc; >> ngx_http_variable_t *v; >> + ngx_http_variable_t *value; >> ngx_http_core_main_conf_t *cmcf; >> >> cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module); >> @@ -1918,7 +1919,14 @@ >> } >> >> for (v = ngx_http_core_variables; v->name.len; v++) { >> - rc = ngx_hash_add_key(cmcf->variables_keys, &v->name, v, >> + value = ngx_palloc(cf->pool, sizeof(ngx_http_variable_t)); >> + if (value == NULL) { >> + return NGX_ERROR; >> + } >> + >> + *value = *v; >> + >> + rc = ngx_hash_add_key(cmcf->variables_keys, &value->name, value, >> NGX_HASH_READONLY_KEY); >> >> if (rc == NGX_OK) { > > Slightly modified version of this patch committed, thanks. > > Maxim Dounin > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Fri May 18 11:08:30 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 May 2012 15:08:30 +0400 Subject: Lua Variable access bug? In-Reply-To: References: <20120517181141.GV31671@mdounin.ru> Message-ID: <20120518110830.GC31671@mdounin.ru> Hello! On Fri, May 18, 2012 at 10:56:47AM +0800, lanshun zhou wrote: > Yeah, and it's not safe to save variable index in a global variable, > like ngx_http_userid_reset_index in userid filter module. Sure. Care to provide patch? Maxim Dounin From mikegagnon at gmail.com Fri May 18 15:36:19 2012 From: mikegagnon at gmail.com (Mike Gagnon) Date: Fri, 18 May 2012 08:36:19 -0700 Subject: Modifying HttpAccessKeyModule Message-ID: Hello, I would like to add a new feature to the HttpAccessKeyModule ( http://wiki.nginx.org/HttpAccessKeyModule). As currently implemented when the visitor omits the key then the module yields a 403 page. I would like to change this behavior, such that when a visitor omits a key, then the module yields a page that contains javascript to automatically generate a key. This page however, must be dynamically generated for the specific URI requested (SSI seems appropriate).
If you're curious, I am building this feature to implement a "client puzzle" protocol to rate-limit DoS attacks targeting upstream servers. http://en.wikipedia.org/wiki/Client_Puzzle_Protocol It is not clear to me how to implement this feature. The NGX_HTTP_ACCESS_PHASE seems to have a limited ability to affect the processing of requests. I.e. it can only return a status code, and it cannot redirect the request to a specific content handler. Is this correct? Do I need to invent a new internal status code? Would that require modifying the nginx core? Thanks! Mike Gagnon -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangsamp at gmail.com Fri May 18 22:59:00 2012 From: wangsamp at gmail.com (Oleksandr V. Typlyns'kyi) Date: Sat, 19 May 2012 01:59:00 +0300 (EEST) Subject: Modifying HttpAccessKeyModule In-Reply-To: References: Message-ID: Yesterday May 18, 2012 at 08:36 Mike Gagnon wrote: > Hello, > > I would like to add a new feature to the HttpAccessKeyModule ( > http://wiki.nginx.org/HttpAccessKeyModule). > > As currently implemented when the visitor omits the key then the module > yields a 403 page. I would like to change this behavior, such that when a > visitor omits a key, then the module yields a page that contains javascript > to automatically generate a key. This page however, must be dynamically > generated for the specific URI requested (SSI seems appropriate). Did you look at secure link module? http://wiki.nginx.org/HttpSecureLinkModule -- WNGS-RIPE From mikegagnon at gmail.com Sat May 19 00:46:28 2012 From: mikegagnon at gmail.com (Mike Gagnon) Date: Fri, 18 May 2012 17:46:28 -0700 Subject: Modifying HttpAccessKeyModule In-Reply-To: References: Message-ID: Thanks for the link. That sounds like a great tip. By following the example of HttpSecureLinkModule, it seems I should have one part of the module parse the URI and determine if the link is valid (by checking the hash). 
Then I can set an nginx variable valid_link to "1" or "0" (depending on whether or not the link has a valid key in the URI). In another part of my module, I'll hook into the phase that decides where to route requests. If the link is valid, I will route it to the normal content handler. If the link doesn't exist (or is invalid) I will route the request to the special SSI page which contains JavaScript that will generate a key for the user. Does this approach sound like a good way to go? One question I have is what phase should I hook into in order to route requests to my special SSI page? Would the NGX_HTTP_REWRITE_PHASE be the right place to read valid_link and determine how to route the request? This way I could read the value of valid_link and re-write the URI as needed. Thanks again! Mike Gagnon On Fri, May 18, 2012 at 3:59 PM, Oleksandr V. Typlyns'kyi < wangsamp at gmail.com> wrote: > Yesterday May 18, 2012 at 08:36 Mike Gagnon wrote: > > > Hello, > > > > I would like to add a new feature to the HttpAccessKeyModule ( > > http://wiki.nginx.org/HttpAccessKeyModule). > > > > As currently implemented when the visitor omits the key then the module > > yields a 403 page. I would like to change this behavior, such that when a > > visitor omits a key, then the module yields a page that contains > javascript > > to automatically generate a key. This page however, must be dynamically > > generated for the specific URI requested (SSI seems appropriate). > > Did you look at secure link module? > http://wiki.nginx.org/HttpSecureLinkModule > > -- > WNGS-RIPE > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ru at nginx.com Mon May 21 10:55:11 2012 From: ru at nginx.com (ru at nginx.com) Date: Mon, 21 May 2012 10:55:11 +0000 Subject: [nginx] svn commit: r4643 - trunk/src/core Message-ID: <20120521105511.9BDCE3F9C1E@mail.nginx.com> Author: ru Date: 2012-05-21 10:55:10 +0000 (Mon, 21 May 2012) New Revision: 4643 URL: http://trac.nginx.org/nginx/changeset/4643/nginx Log: Removed historical and now redundant syntax pre-checks in ngx_parse_url(). Modified: trunk/src/core/ngx_inet.c Modified: trunk/src/core/ngx_inet.c =================================================================== --- trunk/src/core/ngx_inet.c 2012-05-17 18:10:34 UTC (rev 4642) +++ trunk/src/core/ngx_inet.c 2012-05-21 10:55:10 UTC (rev 4643) @@ -522,11 +522,6 @@ return ngx_parse_unix_domain_url(pool, u); } - if ((p[0] == ':' || p[0] == '/') && !u->listen) { - u->err = "invalid host"; - return NGX_ERROR; - } - if (p[0] == '[') { return ngx_parse_inet6_url(pool, u); } From zls.sogou at gmail.com Tue May 22 10:33:01 2012 From: zls.sogou at gmail.com (lanshun zhou) Date: Tue, 22 May 2012 18:33:01 +0800 Subject: Lua Variable access bug? In-Reply-To: <20120518110830.GC31671@mdounin.ru> References: <20120517181141.GV31671@mdounin.ru> <20120518110830.GC31671@mdounin.ru> Message-ID: Not sure if it's a good idea to save the index in main conf. 2012/5/18 Maxim Dounin : > Hello! > > On Fri, May 18, 2012 at 10:56:47AM +0800, lanshun zhou wrote: > >> Yeah, and it's not safe to save variable index in a global variable, >> like ngx_http_userid_reset_index in userid filter module. > > Sure. Care to provide patch? > > Maxim Dounin > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- A non-text attachment was scrubbed...
Name: userid_index2.patch Type: application/octet-stream Size: 9831 bytes Desc: not available URL: From pass86 at gmail.com Tue May 22 11:38:44 2012 From: pass86 at gmail.com (l.jay Yuan) Date: Tue, 22 May 2012 19:38:44 +0800 Subject: when ngx_array_t allocate a new array, why not ngx_pfree pre a->elts? Message-ID: when ngx_array_t allocate a new array, why not ngx_pfree pre a->elts? btw: ngx_pfree only free large memory, so I can not use ngx_pool_t everywhere... From ru at nginx.com Tue May 22 13:12:16 2012 From: ru at nginx.com (ru at nginx.com) Date: Tue, 22 May 2012 13:12:16 +0000 Subject: [nginx] svn commit: r4644 - trunk/src/core Message-ID: <20120522131216.511553F9DD0@mail.nginx.com> Author: ru Date: 2012-05-22 13:12:14 +0000 (Tue, 22 May 2012) New Revision: 4644 URL: http://trac.nginx.org/nginx/changeset/4644/nginx Log: Fixed potential null pointer dereference in ngx_resolver_create(). While here, improved error message. Modified: trunk/src/core/ngx_resolver.c Modified: trunk/src/core/ngx_resolver.c =================================================================== --- trunk/src/core/ngx_resolver.c 2012-05-21 10:55:10 UTC (rev 4643) +++ trunk/src/core/ngx_resolver.c 2012-05-22 13:12:14 UTC (rev 4644) @@ -175,7 +175,12 @@ u.port = 53; if (ngx_inet_resolve_host(cf->pool, &u) != NGX_OK) { - ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "%V: %s", &u.host, u.err); + if (u.err) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "%s in resolver \"%V\"", + u.err, &u.host); + } + return NULL; } From vshebordaev at mail.ru Tue May 22 15:51:11 2012 From: vshebordaev at mail.ru (Vladimir Shebordaev) Date: Tue, 22 May 2012 19:51:11 +0400 Subject: when ngx_array_t allocate a new array, why not ngx_pfree pre a->elts? In-Reply-To: References: Message-ID: Hi! 2012/5/22 l.jay Yuan : > when ngx_array_t allocate a new array, why not ngx_pfree pre a->elts? > Well, it seems to be a generic design decision. 
If you allocate a dynamic object from the pool, you do it for speed, so it is most likely relevant to the current request processing and supposed to be destroyed right after that. It is done all at once when you explicitly destroy the pool, or the objects are allocated from one of the server pools that only persist during the current request processing. Since there is no need for partial memory reclamation, it is faster to just allocate the memory anew. > btw: ngx_pfree only frees large memory, so I can not use ngx_pool_t everywhere... > If you do need objects that would persist through different request processing and server reloading, you might want to use ngx_slab_pool_t on some shared memory region. As far as I can recall at the moment, the nginx slab allocator provides the necessary reclamation facilities. -- Regards, Vladimir From pass86 at gmail.com Tue May 22 16:56:20 2012 From: pass86 at gmail.com (l.jay Yuan) Date: Wed, 23 May 2012 00:56:20 +0800 Subject: when ngx_array_t allocate a new array, why not ngx_pfree pre a->elts? In-Reply-To: References: Message-ID: Thank you very much. So the design is specially for the web request. I was going to learn the source and use it for an MMORPG game server. Game clients' connections are transient. I want to improve the code. 2012/5/22 Vladimir Shebordaev : > Hi! > > 2012/5/22 l.jay Yuan : >> when ngx_array_t allocate a new array, why not ngx_pfree pre a->elts? >> > > Well, it seems to be a generic design decision. If you allocate a > dynamic object from the pool, you do it for speed, so it is most > likely relevant to the current request processing and supposed to be > destroyed right after that. It is done all at once when you > explicitly destroy the pool, or the objects are allocated from > one of the server pools that only persist during the current request > processing. Since there is no need for partial memory reclamation, it > is faster to just allocate the memory anew.
> >> btw: ngx_pfree only frees large memory, so I can not use ngx_pool_t everywhere... >> > > If you do need objects that would persist through different > request processing and server reloading, you might want to use > ngx_slab_pool_t on some shared memory region. As far as I can recall > at the moment, the nginx slab allocator provides the necessary reclamation > facilities. > > -- > Regards, > Vladimir > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From pass86 at gmail.com Tue May 22 16:57:01 2012 From: pass86 at gmail.com (l.jay Yuan) Date: Wed, 23 May 2012 00:57:01 +0800 Subject: when ngx_array_t allocate a new array, why not ngx_pfree pre a->elts? In-Reply-To: References: Message-ID: Game clients' connections are not transient. 2012/5/23 l.jay Yuan : > Thank you very much. So the design is specially for the web request. I > was going to learn the source and use it for an MMORPG game server. Game > clients' connections are transient. I want to improve the code. > > 2012/5/22 Vladimir Shebordaev : >> Hi! >> >> 2012/5/22 l.jay Yuan : >>> when ngx_array_t allocate a new array, why not ngx_pfree pre a->elts? >>> >> >> Well, it seems to be a generic design decision. If you allocate a >> dynamic object from the pool, you do it for speed, so it is most >> likely relevant to the current request processing and supposed to be >> destroyed right after that. It is done all at once when you >> explicitly destroy the pool, or the objects are allocated from >> one of the server pools that only persist during the current request >> processing. Since there is no need for partial memory reclamation, it >> is faster to just allocate the memory anew.
>>> >> If you do need objects that would persist through different >> request processing and server reloading, you might want to use >> ngx_slab_pool_t on some shared memory region. As far as I can recall >> at the moment, the nginx slab allocator provides the necessary reclamation >> facilities. >> >> -- >> Regards, >> Vladimir >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel From thaednevol at gmail.com Wed May 23 00:57:07 2012 From: thaednevol at gmail.com (Edwin Hernán Hurtado Cruz) Date: Tue, 22 May 2012 19:57:07 -0500 Subject: Independent working directories Message-ID: Hello everyone! I'd like to know if it's possible to start nginx with a configuration file with all working directories set in it. I mean, I've already successfully compiled it, but I'd like to set directories like "client_body_temp" or "logs" in my configuration file. I appreciate your help. And great job, by the way. Thanks and bye. Man, just like software, should be free. GNU is freedom. Linux user #461462 Hear my word, a new commandment I give you: "Tear down the nations, with tolerance and blows of love" -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed May 23 10:36:12 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 May 2012 10:36:12 +0000 Subject: [nginx] svn commit: r4645 - trunk/src/http/modules/perl Message-ID: <20120523103613.5A8253F9C1B@mail.nginx.com> Author: mdounin Date: 2012-05-23 10:36:12 +0000 (Wed, 23 May 2012) New Revision: 4645 URL: http://trac.nginx.org/nginx/changeset/4645/nginx Log: Fixed warning during nginx.xs compilation.
Modified: trunk/src/http/modules/perl/nginx.xs Modified: trunk/src/http/modules/perl/nginx.xs =================================================================== --- trunk/src/http/modules/perl/nginx.xs 2012-05-22 13:12:14 UTC (rev 4644) +++ trunk/src/http/modules/perl/nginx.xs 2012-05-23 10:36:12 UTC (rev 4645) @@ -476,7 +476,7 @@ } if (header->key.len == sizeof("Content-Encoding") - 1 - && ngx_strncasecmp(header->key.data, "Content-Encoding", + && ngx_strncasecmp(header->key.data, (u_char *) "Content-Encoding", sizeof("Content-Encoding") - 1) == 0) { r->headers_out.content_encoding = header; From mdounin at mdounin.ru Wed May 23 15:07:02 2012 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Wed, 23 May 2012 15:07:02 +0000 Subject: [nginx] svn commit: r4646 - trunk/src/os/unix Message-ID: <20120523150703.422EE3F9EA7@mail.nginx.com> Author: mdounin Date: 2012-05-23 15:07:01 +0000 (Wed, 23 May 2012) New Revision: 4646 URL: http://trac.nginx.org/nginx/changeset/4646/nginx Log: Fixed compilation with -DNGX_DEBUG_MALLOC on FreeBSD 10. After jemalloc 3.0.0 import there is no _malloc_options symbol, it has been replaced with the malloc_conf one with a different syntax. 
Modified:
   trunk/src/os/unix/ngx_freebsd_init.c

Modified: trunk/src/os/unix/ngx_freebsd_init.c
===================================================================
--- trunk/src/os/unix/ngx_freebsd_init.c	2012-05-23 10:36:12 UTC (rev 4645)
+++ trunk/src/os/unix/ngx_freebsd_init.c	2012-05-23 15:07:01 UTC (rev 4646)
@@ -76,9 +76,9 @@
 {
 #if (NGX_DEBUG_MALLOC)
 
-#if __FreeBSD_version >= 500014
+#if __FreeBSD_version >= 500014 && __FreeBSD_version < 1000011
     _malloc_options = "J";
-#else
+#elif __FreeBSD_version < 500014
     malloc_options = "J";
 #endif

From ek at kuramoto.org  Thu May 24 03:16:08 2012
From: ek at kuramoto.org (Kuramoto Eiji)
Date: Thu, 24 May 2012 12:16:08 +0900
Subject: bug at searching parsed DTD/XSLT data
Message-ID: 

Hi,

I found a tiny bug: ngx_http_xslt_filter_module could not find any
parsed DTD/XSLT data.

- Kuramoto Eiji

--- ngx_http_xslt_filter_module.c.orig	2012-03-28 10:56:49.000000000 +0900
+++ ngx_http_xslt_filter_module.c	2012-05-24 11:28:50.000000000 +0900
@@ -810,7 +810,7 @@
     file = xmcf->dtd_files.elts;
     for (i = 0; i < xmcf->dtd_files.nelts; i++) {
-        if (ngx_strcmp(file[i].name, &value[1].data) == 0) {
+        if (ngx_strcmp(file[i].name, value[1].data) == 0) {
             xlcf->dtd = file[i].data;
             return NGX_CONF_OK;
         }

@@ -884,7 +884,7 @@
     file = xmcf->sheet_files.elts;
     for (i = 0; i < xmcf->sheet_files.nelts; i++) {
-        if (ngx_strcmp(file[i].name, &value[1].data) == 0) {
+        if (ngx_strcmp(file[i].name, value[1].data) == 0) {
             sheet->stylesheet = file[i].data;
             goto found;
         }

From ru at nginx.com  Thu May 24 07:35:13 2012
From: ru at nginx.com (ru at nginx.com)
Date: Thu, 24 May 2012 07:35:13 +0000
Subject: [nginx] svn commit: r4647 - trunk/src/http/modules
Message-ID: <20120524073513.1570F3F9E70@mail.nginx.com>

Author: ru
Date: 2012-05-24 07:35:12 +0000 (Thu, 24 May 2012)
New Revision: 4647

URL: http://trac.nginx.org/nginx/changeset/4647/nginx

Log:
Fixed the reuse of parsed DTDs and XSLTs.

Patch by Kuramoto Eiji.
Modified:
   trunk/src/http/modules/ngx_http_xslt_filter_module.c

Modified: trunk/src/http/modules/ngx_http_xslt_filter_module.c
===================================================================
--- trunk/src/http/modules/ngx_http_xslt_filter_module.c	2012-05-23 15:07:01 UTC (rev 4646)
+++ trunk/src/http/modules/ngx_http_xslt_filter_module.c	2012-05-24 07:35:12 UTC (rev 4647)
@@ -810,7 +810,7 @@
     file = xmcf->dtd_files.elts;
     for (i = 0; i < xmcf->dtd_files.nelts; i++) {
-        if (ngx_strcmp(file[i].name, &value[1].data) == 0) {
+        if (ngx_strcmp(file[i].name, value[1].data) == 0) {
             xlcf->dtd = file[i].data;
             return NGX_CONF_OK;
         }

@@ -884,7 +884,7 @@
     file = xmcf->sheet_files.elts;
    for (i = 0; i < xmcf->sheet_files.nelts; i++) {
-        if (ngx_strcmp(file[i].name, &value[1].data) == 0) {
+        if (ngx_strcmp(file[i].name, value[1].data) == 0) {
             sheet->stylesheet = file[i].data;
             goto found;
         }

From ru at nginx.com  Thu May 24 07:36:44 2012
From: ru at nginx.com (Ruslan Ermilov)
Date: Thu, 24 May 2012 11:36:44 +0400
Subject: bug at searching parsed DTD/XSLT data
In-Reply-To: 
References: 
Message-ID: <20120524073644.GA38157@lo0.su>

On Thu, May 24, 2012 at 12:16:08PM +0900, Kuramoto Eiji wrote:
> I found a tiny bug: ngx_http_xslt_filter_module could not find any
> parsed DTD/XSLT data.

I've just committed your patch, thanks!
http://trac.nginx.org/nginx/changeset/4647/nginx

From goelvivek2011 at gmail.com  Mon May 28 11:27:54 2012
From: goelvivek2011 at gmail.com (vivek goel)
Date: Mon, 28 May 2012 16:57:54 +0530
Subject: How to use a custom-compiled PCRE library with nginx configure
Message-ID: 

I am building a custom-compiled PCRE library. When I run nginx's
configure, it does not detect that library, and setting environment
flags doesn't help:

env CPPFLAGS="-I/opt/local/include" LDFLAGS="-L/opt/local/lib" ./configure

How can I force configure to detect that library?

regards
Vivek Goel
-------------- next part --------------
An HTML attachment was scrubbed...
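[Editor's note] The answer that follows boils down to passing the paths on the configure command line rather than through the environment. A sketch using the poster's /opt/local prefix (the source-tree path in the second variant is a placeholder):

```shell
# nginx's configure ignores CPPFLAGS/LDFLAGS; hand it the flags directly:
./configure \
    --with-cc-opt="-I/opt/local/include" \
    --with-ld-opt="-L/opt/local/lib"

# or point configure at an unpacked PCRE source tree and let it
# build the library itself:
./configure --with-pcre=/path/to/pcre-source
```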
From ne at vbart.ru  Mon May 28 11:47:49 2012
From: ne at vbart.ru (Valentin V. Bartenev)
Date: Mon, 28 May 2012 15:47:49 +0400
Subject: How to use a custom-compiled PCRE library with nginx configure
In-Reply-To: 
References: 
Message-ID: <201205281547.49838.ne@vbart.ru>

On Monday 28 May 2012 15:27:54 vivek goel wrote:
> I am building a custom-compiled PCRE library. When I run nginx's
> configure, it does not detect that library, and setting environment
> flags doesn't help:
> env CPPFLAGS="-I/opt/local/include" LDFLAGS="-L/opt/local/lib" ./configure
> How can I force configure to detect that library?

http://nginx.org/en/docs/install.html

--with-ld-opt
--with-cc-opt

wbr, Valentin V. Bartenev

From ru at nginx.com  Mon May 28 13:17:49 2012
From: ru at nginx.com (ru at nginx.com)
Date: Mon, 28 May 2012 13:17:49 +0000
Subject: [nginx] svn commit: r4648 - trunk/src/http/modules
Message-ID: <20120528131749.EAB9C3F9EA2@mail.nginx.com>

Author: ru
Date: 2012-05-28 13:17:48 +0000 (Mon, 28 May 2012)
New Revision: 4648

URL: http://trac.nginx.org/nginx/changeset/4648/nginx

Log:
Fixed memory leak if $geoip_org variable was used.

Patch by Denis F. Latypoff (slightly modified).
Modified:
   trunk/src/http/modules/ngx_http_geoip_module.c

Modified: trunk/src/http/modules/ngx_http_geoip_module.c
===================================================================
--- trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-24 07:35:12 UTC (rev 4647)
+++ trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 13:17:48 UTC (rev 4648)
@@ -291,6 +291,7 @@
     ngx_http_geoip_variable_handler_pt  handler =
         (ngx_http_geoip_variable_handler_pt) data;
 
+    size_t                  len;
     const char             *val;
     ngx_http_geoip_conf_t  *gcf;
 
@@ -306,12 +307,22 @@
         goto not_found;
     }
 
-    v->len = ngx_strlen(val);
+    len = ngx_strlen(val);
+    v->data = ngx_pnalloc(r->pool, len);
+    if (v->data == NULL) {
+        ngx_free(val);
+        return NGX_ERROR;
+    }
+
+    ngx_memcpy(v->data, val, len);
+
+    v->len = len;
     v->valid = 1;
     v->no_cacheable = 0;
     v->not_found = 0;
-    v->data = (u_char *) val;
 
+    ngx_free(val);
+
     return NGX_OK;
 
 not_found:

From ru at nginx.com  Mon May 28 14:20:04 2012
From: ru at nginx.com (ru at nginx.com)
Date: Mon, 28 May 2012 14:20:04 +0000
Subject: [nginx] svn commit: r4649 - trunk/src/http/modules
Message-ID: <20120528142004.5A25F3FA07E@mail.nginx.com>

Author: ru
Date: 2012-05-28 14:20:04 +0000 (Mon, 28 May 2012)
New Revision: 4649

URL: http://trac.nginx.org/nginx/changeset/4649/nginx

Log:
Fixed broken build.
Modified:
   trunk/src/http/modules/ngx_http_geoip_module.c

Modified: trunk/src/http/modules/ngx_http_geoip_module.c
===================================================================
--- trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 13:17:48 UTC (rev 4648)
+++ trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 14:20:04 UTC (rev 4649)
@@ -310,7 +310,7 @@
     len = ngx_strlen(val);
     v->data = ngx_pnalloc(r->pool, len);
     if (v->data == NULL) {
-        ngx_free(val);
+        ngx_free((void *) val);
         return NGX_ERROR;
     }
 
@@ -321,7 +321,7 @@
     v->no_cacheable = 0;
     v->not_found = 0;
 
-    ngx_free(val);
+    ngx_free((void *) val);

From mdounin at mdounin.ru  Mon May 28 14:26:47 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 28 May 2012 18:26:47 +0400
Subject: [nginx] svn commit: r4649 - trunk/src/http/modules
In-Reply-To: <20120528142004.5A25F3FA07E@mail.nginx.com>
References: <20120528142004.5A25F3FA07E@mail.nginx.com>
Message-ID: <20120528142646.GM31671@mdounin.ru>

Hello!

On Mon, May 28, 2012 at 02:20:04PM +0000, ru at nginx.com wrote:

> Author: ru
> Date: 2012-05-28 14:20:04 +0000 (Mon, 28 May 2012)
> New Revision: 4649
> URL: http://trac.nginx.org/nginx/changeset/4649/nginx
>
> Log:
> Fixed broken build.
>
> Modified:
>    trunk/src/http/modules/ngx_http_geoip_module.c
>
> Modified: trunk/src/http/modules/ngx_http_geoip_module.c
> ===================================================================
> --- trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 13:17:48 UTC (rev 4648)
> +++ trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 14:20:04 UTC (rev 4649)
> @@ -310,7 +310,7 @@
>      len = ngx_strlen(val);
>      v->data = ngx_pnalloc(r->pool, len);
>      if (v->data == NULL) {
> -        ngx_free(val);
> +        ngx_free((void *) val);
>          return NGX_ERROR;
>      }
>
> @@ -321,7 +321,7 @@
>      v->no_cacheable = 0;
>      v->not_found = 0;
>
> -    ngx_free(val);
> +    ngx_free((void *) val);

Correct fix would be to remove "const" from the val.  Please try
again.
Maxim Dounin

From latypoff at yandex.ru  Mon May 28 14:44:39 2012
From: latypoff at yandex.ru (Denis F. Latypoff)
Date: Mon, 28 May 2012 21:44:39 +0700
Subject: [nginx] svn commit: r4649 - trunk/src/http/modules
In-Reply-To: <20120528142646.GM31671@mdounin.ru>
References: <20120528142004.5A25F3FA07E@mail.nginx.com>
 <20120528142646.GM31671@mdounin.ru>
Message-ID: <41391338216279@web23d.yandex.ru>

28.05.2012, 21:26, "Maxim Dounin":
> Hello!
>
> On Mon, May 28, 2012 at 02:20:04PM +0000, ru at nginx.com wrote:
>
>> Author: ru
>> Date: 2012-05-28 14:20:04 +0000 (Mon, 28 May 2012)
>> New Revision: 4649
>> URL: http://trac.nginx.org/nginx/changeset/4649/nginx
>>
>> Log:
>> Fixed broken build.
>>
>> Modified:
>>    trunk/src/http/modules/ngx_http_geoip_module.c
>>
>> Modified: trunk/src/http/modules/ngx_http_geoip_module.c
>> ===================================================================
>> --- trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 13:17:48 UTC (rev 4648)
>> +++ trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 14:20:04 UTC (rev 4649)
>> @@ -310,7 +310,7 @@
>>      len = ngx_strlen(val);
>>      v->data = ngx_pnalloc(r->pool, len);
>>      if (v->data == NULL) {
>> -        ngx_free(val);
>> +        ngx_free((void *) val);
>>          return NGX_ERROR;
>>      }
>>
>> @@ -321,7 +321,7 @@
>>      v->no_cacheable = 0;
>>      v->not_found = 0;
>>
>> -    ngx_free(val);
>> +    ngx_free((void *) val);
>
> Correct fix would be to remove "const" from the val.  Please try
> again.

Yes, that was my first version of the patch, but there is another
warning:

src/http/modules/ngx_http_geoip_module.c:304: warning:
assignment discards qualifiers from pointer target type

    val = handler(gcf->org, ngx_http_geoip_addr(r, gcf));

-- 
br, Denis F. Latypoff.
From mdounin at mdounin.ru  Tue May 29 08:38:19 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 29 May 2012 12:38:19 +0400
Subject: [nginx] svn commit: r4649 - trunk/src/http/modules
In-Reply-To: <41391338216279@web23d.yandex.ru>
References: <20120528142004.5A25F3FA07E@mail.nginx.com>
 <20120528142646.GM31671@mdounin.ru>
 <41391338216279@web23d.yandex.ru>
Message-ID: <20120529083819.GT31671@mdounin.ru>

Hello!

On Mon, May 28, 2012 at 09:44:39PM +0700, Denis F. Latypoff wrote:

> 28.05.2012, 21:26, "Maxim Dounin":
> > Hello!
> >
> > On Mon, May 28, 2012 at 02:20:04PM +0000, ru at nginx.com wrote:
> >
> >> Author: ru
> >> Date: 2012-05-28 14:20:04 +0000 (Mon, 28 May 2012)
> >> New Revision: 4649
> >> URL: http://trac.nginx.org/nginx/changeset/4649/nginx
> >>
> >> Log:
> >> Fixed broken build.
> >>
> >> Modified:
> >>    trunk/src/http/modules/ngx_http_geoip_module.c
> >>
> >> Modified: trunk/src/http/modules/ngx_http_geoip_module.c
> >> ===================================================================
> >> --- trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 13:17:48 UTC (rev 4648)
> >> +++ trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 14:20:04 UTC (rev 4649)
> >> @@ -310,7 +310,7 @@
> >>      len = ngx_strlen(val);
> >>      v->data = ngx_pnalloc(r->pool, len);
> >>      if (v->data == NULL) {
> >> -        ngx_free(val);
> >> +        ngx_free((void *) val);
> >>          return NGX_ERROR;
> >>      }
> >>
> >> @@ -321,7 +321,7 @@
> >>      v->no_cacheable = 0;
> >>      v->not_found = 0;
> >>
> >> -    ngx_free(val);
> >> +    ngx_free((void *) val);
> >
> > Correct fix would be to remove "const" from the val.  Please try
> > again.
>
> Yes, that was my first version of the patch, but there is another
> warning:
>
> src/http/modules/ngx_http_geoip_module.c:304: warning:
> assignment discards qualifiers from pointer target type
>
>     val = handler(gcf->org, ngx_http_geoip_addr(r, gcf));

Yes, the handler type needs to be fixed, too.

Maxim Dounin

From ru at nginx.com  Tue May 29 09:19:51 2012
From: ru at nginx.com (ru at nginx.com)
Date: Tue, 29 May 2012 09:19:51 +0000
Subject: [nginx] svn commit: r4650 - trunk/src/http/modules
Message-ID: <20120529091952.069823F9E47@mail.nginx.com>

Author: ru
Date: 2012-05-29 09:19:51 +0000 (Tue, 29 May 2012)
New Revision: 4650

URL: http://trac.nginx.org/nginx/changeset/4650/nginx

Log:
geoip: got rid of ugly casts when calling ngx_free().

This is done by removing the "const" qualifier from the common return
type of handler functions returning either "const char *" or "char *".

Modified:
   trunk/src/http/modules/ngx_http_geoip_module.c

Modified: trunk/src/http/modules/ngx_http_geoip_module.c
===================================================================
--- trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-28 14:20:04 UTC (rev 4649)
+++ trunk/src/http/modules/ngx_http_geoip_module.c	2012-05-29 09:19:51 UTC (rev 4650)
@@ -28,7 +28,7 @@
 } ngx_http_geoip_var_t;
 
-typedef const char *(*ngx_http_geoip_variable_handler_pt)(GeoIP *, u_long addr);
+typedef char *(*ngx_http_geoip_variable_handler_pt)(GeoIP *, u_long addr);
 
 static u_long ngx_http_geoip_addr(ngx_http_request_t *r,
     ngx_http_geoip_conf_t *gcf);
 
@@ -292,7 +292,7 @@
         (ngx_http_geoip_variable_handler_pt) data;
 
     size_t                  len;
-    const char             *val;
+    char                   *val;
     ngx_http_geoip_conf_t  *gcf;
 
@@ -310,7 +310,7 @@
     len = ngx_strlen(val);
     v->data = ngx_pnalloc(r->pool, len);
     if (v->data == NULL) {
-        ngx_free((void *) val);
+        ngx_free(val);
         return NGX_ERROR;
     }
 
@@ -321,7 +321,7 @@
     v->no_cacheable = 0;
     v->not_found = 0;
 
-    ngx_free((void *) val);
+    ngx_free(val);
 
     return NGX_OK;

From vbart at nginx.com  Wed May 30 12:30:05 2012
From: vbart at nginx.com (vbart at nginx.com)
Date: Wed, 30 May 2012 12:30:05 +0000
Subject: [nginx] svn commit: r4651 - trunk/src/http
Message-ID: <20120530123006.0600D3F9F9C@mail.nginx.com>

Author: vbart
Date: 2012-05-30 12:30:03 +0000 (Wed, 30 May 2012)
New Revision: 4651

URL: http://trac.nginx.org/nginx/changeset/4651/nginx

Log:
Fixed return value handling from the cookie rewrite handler.

If the "proxy_cookie_domain" or "proxy_cookie_path" directive is used
and there are no matches in the Set-Cookie header, then
ngx_http_proxy_rewrite_cookie() returns NGX_DECLINED to indicate that
the header was not rewritten.  Returning this value further from the
upstream headers copy handler resulted in a 500 error response.

See here for the report:
http://mailman.nginx.org/pipermail/nginx/2012-May/033858.html

Modified:
   trunk/src/http/ngx_http_upstream.c

Modified: trunk/src/http/ngx_http_upstream.c
===================================================================
--- trunk/src/http/ngx_http_upstream.c	2012-05-29 09:19:51 UTC (rev 4650)
+++ trunk/src/http/ngx_http_upstream.c	2012-05-30 12:30:03 UTC (rev 4651)
@@ -3677,6 +3677,7 @@
 ngx_http_upstream_rewrite_set_cookie(ngx_http_request_t *r, ngx_table_elt_t *h,
     ngx_uint_t offset)
 {
+    ngx_int_t         rc;
     ngx_table_elt_t  *ho;
 
     ho = ngx_list_push(&r->headers_out.headers);
@@ -3687,7 +3688,20 @@
     *ho = *h;
 
     if (r->upstream->rewrite_cookie) {
-        return r->upstream->rewrite_cookie(r, ho);
+        rc = r->upstream->rewrite_cookie(r, ho);
+
+        if (rc == NGX_DECLINED) {
+            return NGX_OK;
+        }
+
+#if (NGX_DEBUG)
+        if (rc == NGX_OK) {
+            ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+                           "rewritten cookie: \"%V\"", &ho->value);
+        }
+#endif
+
+        return rc;
     }
 
     return NGX_OK;

From vbart at nginx.com  Wed May 30 12:43:27 2012
From: vbart at nginx.com (vbart at nginx.com)
Date: Wed, 30 May 2012 12:43:27 +0000
Subject: [nginx] svn commit: r4652 - trunk/src/event
Message-ID:
<20120530124327.9D2AD3F9F46@mail.nginx.com>

Author: vbart
Date: 2012-05-30 12:43:27 +0000 (Wed, 30 May 2012)
New Revision: 4652

URL: http://trac.nginx.org/nginx/changeset/4652/nginx

Log:
Removed mistaken setting of the NGX_SSL_BUFFERED flag in
ngx_ssl_send_chain() if the SSL buffer is not used.

Modified:
   trunk/src/event/ngx_event_openssl.c

Modified: trunk/src/event/ngx_event_openssl.c
===================================================================
--- trunk/src/event/ngx_event_openssl.c	2012-05-30 12:30:03 UTC (rev 4651)
+++ trunk/src/event/ngx_event_openssl.c	2012-05-30 12:43:27 UTC (rev 4652)
@@ -990,7 +990,6 @@
         }
 
         if (n == NGX_AGAIN) {
-            c->buffered |= NGX_SSL_BUFFERED;
             return in;
         }

From dartonow at 163.com  Thu May 31 05:23:04 2012
From: dartonow at 163.com (yt)
Date: Thu, 31 May 2012 13:23:04 +0800
Subject: about request_time and upstream_response_time
In-Reply-To: <4FC6FF30.7030203@163.com>
References: <4FC6FF30.7030203@163.com>
Message-ID: <4FC70038.30300@163.com>

Hi all,

My site's architecture looks like this: nginx (upstream) -> apache
(image processing).  Apache processes a picture in about 0.030 seconds,
that is to say, upstream_response_time is about 0.030 seconds.  However,
nginx's request_time is always more than 2 seconds.

I know that request_time includes the time spent receiving the request,
processing it, and sending the response, but I don't know why my
nginx's request_time is so much higher than upstream_response_time.

Best regards,
yt

From crk_world at yahoo.com.cn  Thu May 31 11:54:02 2012
From: crk_world at yahoo.com.cn (chen cw)
Date: Thu, 31 May 2012 19:54:02 +0800
Subject: nginx.org documents in simplified Chinese
In-Reply-To: 
References: <20120530125746.GX31671@mdounin.ru>
Message-ID: 

Hello everybody,

I come here with the patch for the nginx.org documents in simplified
Chinese, and I would be glad if someone could review it.  This work was
done by the whole Server Platforms Team at Taobao.com.

Thank you.
Regards

-- 
Charles Chen
Software Engineer
Server Platforms Team at Taobao.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx_org.diff
Type: application/octet-stream
Size: 63709 bytes
Desc: not available
URL: 

From maxim at nginx.com  Thu May 31 13:32:51 2012
From: maxim at nginx.com (Maxim Konovalov)
Date: Thu, 31 May 2012 17:32:51 +0400
Subject: nginx.org documents in simplified Chinese
In-Reply-To: 
References: <20120530125746.GX31671@mdounin.ru>
Message-ID: <4FC77303.5060602@nginx.com>

Hi,

On 5/31/12 3:54 PM, chen cw wrote:
> Hello everybody,
> I come here with the patch for the nginx.org documents in simplified
> Chinese, and I would be glad if someone could review it.  This work
> was done by the whole Server Platforms Team at Taobao.com.

Thanks, great work!

We don't have any Chinese-speaking person on board and definitely need
help with the review.

Is it a translation of the official English doc at
http://nginx.org/en/docs/?

-- 
Maxim Konovalov
+7 (910) 4293178
http://nginx.com/