From tomlove at gmail.com  Tue Aug  2 15:08:11 2011
From: tomlove at gmail.com (Thomas Love)
Date: Tue, 2 Aug 2011 17:08:11 +0200
Subject: upstream keepalive: call for testing
In-Reply-To: <20110729153640.GE1137@mdounin.ru>
References: <20110726115726.GE1137@mdounin.ru>
	<20110727143824.GT1137@mdounin.ru>
	<20110729153640.GE1137@mdounin.ru>
Message-ID:

On 29 July 2011 17:36, Maxim Dounin wrote:

> With "keepalive 32;" you keep busy all of your 32 php processes
> even if there are no active requests, and there are no processes
> left to process the listen queue.
>
> On the other hand, nginx will still try to establish a new
> connection if there is no idle one sitting in the connections
> cache during request processing. Therefore some requests will
> enter php's listen queue and have no chance to leave it.
> Eventually the listen queue will overflow.
>
> Please also note that "keepalive 10;" means each nginx worker will
> keep up to 10 connections. If you are running multiple workers
> you may need to use a lower number.

Thanks for this. It's been running fine with keepalive 10 on four
workers -- is that because there is connection overlap? Am I risking
exhausting the 32 processes eventually?

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru  Tue Aug 2 15:26:01 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 2 Aug 2011 19:26:01 +0400
Subject: upstream keepalive: call for testing
In-Reply-To:
References: <20110726115726.GE1137@mdounin.ru>
	<20110727143824.GT1137@mdounin.ru>
	<20110729153640.GE1137@mdounin.ru>
Message-ID: <20110802152600.GM1137@mdounin.ru>

Hello!

On Tue, Aug 02, 2011 at 05:08:11PM +0200, Thomas Love wrote:

> On 29 July 2011 17:36, Maxim Dounin wrote:
>
> > With "keepalive 32;" you keep busy all of your 32 php processes
> > even if there are no active requests, and there are no processes
> > left to process the listen queue.
> >
> > On the other hand, nginx will still try to establish a new
> > connection if there is no idle one sitting in the connections
> > cache during request processing. Therefore some requests will
> > enter php's listen queue and have no chance to leave it.
> > Eventually the listen queue will overflow.
> >
> > Please also note that "keepalive 10;" means each nginx worker will
> > keep up to 10 connections. If you are running multiple workers
> > you may need to use a lower number.
>
> Thanks for this. It's been running fine with keepalive 10 on four
> workers -- is that because there is connection overlap? Am I risking
> exhausting the 32 processes eventually?

With each worker keeping 10 connections, 4 workers will be able to
keep 40, which is more than the number of processes you have. I
suspect it works fine just because nginx is underloaded and most
requests are processed by one or two workers.

I would recommend setting something like "keepalive 5;" in your
case.
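For illustration, a sketch of what that would look like (upstream name
and backend addresses are placeholders, not taken from your config):

    worker_processes  4;

    upstream backend {
        server 10.0.0.1:9000;
        server 10.0.0.2:9000;

        # each of the 4 workers keeps up to 5 idle connections, so at
        # most 20 of the 32 php processes are held by idle connections,
        # leaving the rest free to serve the listen queue
        keepalive 5;
    }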
Maxim Dounin

From chateau.xiao at gmail.com  Thu Aug 4 01:47:46 2011
From: chateau.xiao at gmail.com (chateau Xiao)
Date: Thu, 4 Aug 2011 09:47:46 +0800
Subject: IP Upstream Hash
In-Reply-To: <20110721094315.GE23656@sysoev.ru>
References: <20110721094315.GE23656@sysoev.ru>
Message-ID:

Do you mean that in the next version the murmur2 hash function will be
used in nginx?

And can you show a benchmark comparing the original algorithm with the
new one?

On Thu, Jul 21, 2011 at 5:43 PM, Igor Sysoev wrote:

> On Wed, Jul 20, 2011 at 02:18:52PM -0700, Matthieu Tourne wrote:
> > Hi all,
> >
> > I was looking at the code in ngx_http_upstream_ip_hash_module.c
> > And I'm not sure where the hashing algorithm for IPs is coming from,
> > especially these lines:
> >
> > iphp->hash = 89;
> >
> > hash = (hash * 113 + iphp->addr[i]) % 6271;
> >
> > Just wondering if those constants are arbitrarily chosen, or if there is
> > something there to guarantee a good distribution?
> >
> > If you have some links explaining this algorithm, it would be greatly
> > appreciated!
> > Also, how would you get a good distribution on IPv6? Maybe it would make
> > sense to use murmur?
>
> This algorithm came from FastMail.fm. Murmur2 may be better; I'm going
> to use it in the upcoming upstream hash module, which allows hashing any
> expression.
>
>
> --
> Igor Sysoev
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oscaretu at gmail.com  Thu Aug 4 07:08:15 2011
From: oscaretu at gmail.com (Oscar Fernandez)
Date: Thu, 4 Aug 2011 09:08:15 +0200
Subject: IP Upstream Hash
In-Reply-To:
References: <20110721094315.GE23656@sysoev.ru>
Message-ID:

According to http://code.google.com/p/smhasher/ murmur3 is better than
murmur2.

Oscar

On Thu, Aug 4, 2011 at 3:47 AM, chateau Xiao wrote:

> Do you mean that in the next version the murmur2 hash function will be
> used in nginx?
>
> And can you show a benchmark comparing the original algorithm with the
> new one?
>
> On Thu, Jul 21, 2011 at 5:43 PM, Igor Sysoev wrote:
>
>> On Wed, Jul 20, 2011 at 02:18:52PM -0700, Matthieu Tourne wrote:
>> > Hi all,
>> >
>> > I was looking at the code in ngx_http_upstream_ip_hash_module.c
>> > And I'm not sure where the hashing algorithm for IPs is coming from,
>> > especially these lines:
>> >
>> > iphp->hash = 89;
>> >
>> > hash = (hash * 113 + iphp->addr[i]) % 6271;
>> >
>> > Just wondering if those constants are arbitrarily chosen, or if there is
>> > something there to guarantee a good distribution?
>> >
>> > If you have some links explaining this algorithm, it would be greatly
>> > appreciated!
>> > Also, how would you get a good distribution on IPv6? Maybe it would make
>> > sense to use murmur?
>>
>> This algorithm came from FastMail.fm. Murmur2 may be better; I'm going
>> to use it in the upcoming upstream hash module, which allows hashing any
>> expression.
>>
>>
>> --
>> Igor Sysoev
>>
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
--
Oscar Fernandez Sierra
oscaretu at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From peter at smitmail.eu  Thu Aug 4 10:32:14 2011
From: peter at smitmail.eu (Peter Smit)
Date: Thu, 4 Aug 2011 13:32:14 +0300
Subject: Option to disable buffering in uwsgi module
Message-ID:

(This is my first message to this list, please let me know if I'm
doing something wrong!)

In developing a comet-style application in a uwsgi/nginx setup I
noticed that nginx always buffers the response when uwsgi_pass is
used.
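For reference, a minimal sketch of the kind of location I mean (the
backend address is just a placeholder):

    location / {
        include     uwsgi_params;
        uwsgi_pass  127.0.0.1:3031;   # the response is always buffered here
    }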
I'm not sure whether there is any particular reason why this is done,
except for the fact that the uwsgi code is originally based on the
fastcgi module, where indeed buffering is unavoidable. I think however
that it makes sense to give the option of disabling buffering with
uwsgi.

I actually already went ahead and wrote a patch that does exactly
this. It introduces a uwsgi_buffering flag and now adheres to the
"X-Accel-Buffering" header. I have only limited capabilities to test
this patch, but for me it does exactly that, disabling the buffer.

Could some of you review this patch and, if it is ok, could it be
introduced in nginx?

I made the patch on the 1.1.0 source. I attached it and included it
inline below this message. Let me know if I should give it in a
different format.

Regards,
Peter Smit

--- nginx-1.1.0/src/http/modules/ngx_http_uwsgi_module.c	2011-07-29 18:33:03.000000000 +0300
+++ nginx-1.1.0-uwsgi_buffering/src/http/modules/ngx_http_uwsgi_module.c	2011-08-04 13:16:54.381528459 +0300
@@ -123,6 +123,13 @@
       offsetof(ngx_http_uwsgi_loc_conf_t, upstream.store_access),
       NULL },

+    { ngx_string("uwsgi_buffering"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
+      ngx_conf_set_flag_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_uwsgi_loc_conf_t, upstream.buffering),
+      NULL },
+
     { ngx_string("uwsgi_ignore_client_abort"),
       NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
       ngx_conf_set_flag_slot,
@@ -445,7 +452,7 @@
     u->abort_request = ngx_http_uwsgi_abort_request;
     u->finalize_request = ngx_http_uwsgi_finalize_request;

-    u->buffering = 1;
+    u->buffering = uwcf->upstream.buffering;

     u->pipe = ngx_pcalloc(r->pool, sizeof(ngx_event_pipe_t));
     if (u->pipe == NULL) {
@@ -1083,6 +1090,8 @@
     /* "uwsgi_cyclic_temp_file" is disabled */
     conf->upstream.cyclic_temp_file = 0;

+    conf->upstream.change_buffering = 1;
+
     ngx_str_set(&conf->upstream.module, "uwsgi");

     return conf;
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx-1.1.0_uwsgi_buffering_patch
Type: application/octet-stream
Size: 1252 bytes
Desc: not available
URL:

From igor at sysoev.ru  Thu Aug 4 10:44:54 2011
From: igor at sysoev.ru (Igor Sysoev)
Date: Thu, 4 Aug 2011 14:44:54 +0400
Subject: Option to disable buffering in uwsgi module
In-Reply-To:
References:
Message-ID: <20110804104454.GE56097@sysoev.ru>

On Thu, Aug 04, 2011 at 01:32:14PM +0300, Peter Smit wrote:
> (This is my first message to this list, please let me know if I'm
> doing something wrong!)
>
> In developing a comet-style application in a uwsgi/nginx setup I
> noticed that nginx always buffers the response when uwsgi_pass is
> used.
>
> I'm not sure whether there is any particular reason why this is done,
> except for the fact that the uwsgi code is originally based on the
> fastcgi module, where indeed buffering is unavoidable. I think however
> that it makes sense to give the option of disabling buffering with
> uwsgi.
>
> I actually already went ahead and wrote a patch that does exactly
> this. It introduces a uwsgi_buffering flag and now adheres to the
> "X-Accel-Buffering" header. I have only limited capabilities to test
> this patch, but for me it does exactly that, disabling the buffer.
>
> Could some of you review this patch and, if it is ok, could it be
> introduced in nginx?
>
> I made the patch on the 1.1.0 source. I attached it and included it
> inline below this message. Let me know if I should give it in a
> different format.
Thank you for the idea. It seems unbuffered proxying should work for
uwsgi as well as for scgi, since they both use a simple protocol, that
is, no protocol at all :) for the body.

--
Igor Sysoev

From igor at sysoev.ru  Thu Aug 4 11:11:11 2011
From: igor at sysoev.ru (Igor Sysoev)
Date: Thu, 4 Aug 2011 15:11:11 +0400
Subject: IP Upstream Hash
In-Reply-To:
References: <20110721094315.GE23656@sysoev.ru>
Message-ID: <20110804111110.GG56097@sysoev.ru>

On Thu, Aug 04, 2011 at 09:47:46AM +0800, chateau Xiao wrote:
> Do you mean that in the next version the murmur2 hash function will be
> used in nginx?

Yes.

> And can you show a benchmark comparing the original algorithm with the
> new one?

On tens of bytes the difference will be negligible, and I believe you
cannot even measure it in the real world, because the overhead of the
benchmark itself is larger than that of the code being tested. Note
that a million iterations in a loop are not a real-world test.

> On Thu, Jul 21, 2011 at 5:43 PM, Igor Sysoev wrote:
>
> > On Wed, Jul 20, 2011 at 02:18:52PM -0700, Matthieu Tourne wrote:
> > > Hi all,
> > >
> > > I was looking at the code in ngx_http_upstream_ip_hash_module.c
> > > And I'm not sure where the hashing algorithm for IPs is coming from,
> > > especially these lines:
> > >
> > > iphp->hash = 89;
> > >
> > > hash = (hash * 113 + iphp->addr[i]) % 6271;
> > >
> > > Just wondering if those constants are arbitrarily chosen, or if there is
> > > something there to guarantee a good distribution?
> > >
> > > If you have some links explaining this algorithm, it would be greatly
> > > appreciated!
> > > Also, how would you get a good distribution on IPv6? Maybe it would make
> > > sense to use murmur?
> >
> > This algorithm came from FastMail.fm. Murmur2 may be better; I'm going
> > to use it in the upcoming upstream hash module, which allows hashing any
> > expression.

--
Igor Sysoev

From andrew at mylanguage.me  Sat Aug 6 22:27:21 2011
From: andrew at mylanguage.me (Andrew Lauder)
Date: Sat, 6 Aug 2011 15:27:21 -0700
Subject: Regular Expression Parsing of TCP Session
Message-ID:

Hi,

** This is a very important project, with a very short timeline. If any
developers are interested in building this, and potentially releasing
it as open source, please contact me asap! My contact details are
included below. **

I'm attempting to configure a reverse TCP proxy which is able to
provide seamless authentication for a partner company's API. The API
has no per-user granular access control capability, so I'm hoping to
add this control by inspecting the first non-handshake packet (after
SYN, SYN/ACK, ACK). I'm looking for the value between <uid> and </uid>,
which is always sent as the first non-handshake packet. Possible regex:
<uid>(.*)</uid>

So far, I've successfully compiled nginx w/ TCP Proxy module,
configured it to allow me to access the partner API, and it works
great. Now, I'm attempting to read the first non-handshake packet,
looking for <uid>(.*)</uid>

Once I have this uid value, I will use the drizzle module to connect
directly to the MySQL cluster to see if the uid has access to the API.
If it has access, nginx should simply forward the request. If not,
nginx should block the request with an error message.

I've tried looking at the form-input module, because it is able to
parse POST variables. I've also looked at the HTTP header parsing code
in the nginx core, but I haven't figured out how to get a pointer to
the TCP payload. I believe once I have a pointer, it will be possible
to find the value I'm looking for.

Another note: if the packet is not #4 in the stream, I don't want to
process it. Otherwise it will become very CPU intensive.
I'm a complete newbie to nginx, however I am already quite impressed,
and I would like to support future development of the product (both
open source and paid).

Cheers!

--
Andrew Lauder
CEO, Founder
myLanguage, Inc.
http://www.myLanguage.me
t: +1 408 982 6515 | f: +1 408 856 2534
e: andrew at mylanguage.me

From witekfl at gazeta.pl  Mon Aug 8 07:48:43 2011
From: witekfl at gazeta.pl (Witold Filipczyk)
Date: Mon, 8 Aug 2011 09:48:43 +0200
Subject: 0D in requests causes the 502 error
Message-ID: <20110808074843.GA4503@pldmachine>

Hi,
Requests like http://blabla.com/blabla/%0D/blabla.php cause 502 Bad
Gateway errors. I'm using nginx-1.0.5 and apache. These requests go to
apache, but in nginx's logs I see that %0D is replaced by ^M.
Is it a problem in my setup, or is it a known bug?

From adelino at ainou.net  Mon Aug 8 15:38:58 2011
From: adelino at ainou.net (Adelino Monteiro)
Date: Mon, 8 Aug 2011 16:38:58 +0100
Subject: Output filter is not finished
Message-ID:

Hello,

I've built an output filter based on the guide from Evan Miller and
some other modules.

Everything has been working fine, but lately I found that my
next_body_filter call is returning NGX_AGAIN. This happens when I only
have a very small chain that won't get processed (I put it in the ctx
to process the next time), so the call to ngx_http_next_body_filter(r,
in) basically doesn't have any output to process. Strangely, I see that
nginx puts my request into a timer, but what happens is that after that
time the request is broken and the rest of the file doesn't get
processed.

Have a look at the output please:

2011/08/08 15:45:41 [debug] 11461#0: *1 http postpone filter "/u2.mp4?" 0000000001A02790
2011/08/08 15:45:41 [debug] 11461#0: *1 write new buf t:0 f:0 0000000000000000, pos 0000000001A0A56C, size: 5 file: 0, size: 0
2011/08/08 15:45:41 [debug] 11461#0: *1 write new buf t:1 f:0 0000000001A6ACB0, pos 0000000001A6ACB0, size: -5 file: 0, size: 0
2011/08/08 15:45:41 [debug] 11461#0: *1 http write filter: l:0 f:1 s:0
2011/08/08 15:45:41 [debug] 11461#0: *1 Save left_bytes [13] in context
2011/08/08 15:45:41 [debug] 11461#0: *1 http copy filter: -2 "/u2.mp4?"
2011/08/08 15:45:41 [debug] 11461#0: *1 http finalize request: -2, "/u2.mp4?" a:1, c:1
2011/08/08 15:45:41 [debug] 11461#0: *1 event timer add: 3: 60000:1312814801328
2011/08/08 15:46:41 [debug] 11461#0: *1 event timer del: 3: 1312814801328
2011/08/08 15:46:41 [debug] 11461#0: *1 http run request: "/u2.mp4?"
2011/08/08 15:46:41 [debug] 11461#0: *1 http writer handler: "/u2.mp4?"
2011/08/08 15:46:41 [info] 11461#0: *1 client timed out (110: Connection timed out) while sending mp4 to client, client: 192.168.56.1, server: localhost, request: "GET /u2.mp4 HTTP/1.1", host: "192.168.56.101:8080"
2011/08/08 15:46:41 [debug] 11461#0: *1 http finalize request: 408, "/u2.mp4?" a:1, c:1
2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate request count:1
2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate cleanup count:1 blk:0
2011/08/08 15:46:41 [debug] 11461#0: *1 http posted request: "/u2.mp4?"
2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate handler count:1
2011/08/08 15:46:41 [debug] 11461#0: *1 http request count:1 blk:0
2011/08/08 15:46:41 [debug] 11461#0: *1 http close request
2011/08/08 15:46:41 [debug] 11461#0: *1 http log handler
2011/08/08 15:46:41 [debug] 11461#0: *1 run cleanup: 0000000001A0A290
2011/08/08 15:46:41 [debug] 11461#0: *1 file cleanup: fd:12
2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000000000000
2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A6ACB0
2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A546C0
2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A098C0, unused: 8
2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A02680, unused: 3527
2011/08/08 15:46:41 [debug] 11461#0: *1 close http connection: 3
2011/08/08 15:46:41 [debug] 11461#0: *1 reusable connection: 0
2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A094B0
2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A08EB0
2011/08/08 15:46:41 [debug] 11461#0: *1 free: 00000000019FDBE0, unused: 8
2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A093A0, unused: 128

My question is what magic do I need to do to tell nginx that I don't
have data to process this time and to send me other content.

I googled around and found some information from 2007 that told me to
use a timer, but I can't find an example with that implementation and
don't know if that is still supported.

Any help is appreciated.

AM
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From adelino at ainou.net  Tue Aug 9 18:00:39 2011
From: adelino at ainou.net (Adelino Monteiro)
Date: Tue, 9 Aug 2011 19:00:39 +0100
Subject: Output filter is not finished
In-Reply-To:
References:
Message-ID:

Well, since I got no response I suppose that I might not have expressed
myself properly.

The problem is that I don't know how to tell nginx that there is no
information and to go on with the next request in the output buffer
chain. I looked at other modules and googled around but couldn't find
any information that I could use.

Any help would be appreciated.

AM

On 8 August 2011 16:38, Adelino Monteiro wrote:

> Hello,
>
> I've built an output filter based on the guide from Evan Miller and
> some other modules.
>
> Everything has been working fine, but lately I found that my
> next_body_filter call is returning NGX_AGAIN. This happens when I only
> have a very small chain that won't get processed (I put it in the ctx
> to process the next time), so the call to ngx_http_next_body_filter(r,
> in) basically doesn't have any output to process. Strangely, I see that
> nginx puts my request into a timer, but what happens is that after that
> time the request is broken and the rest of the file doesn't get
> processed.
>
> Have a look at the output please:
>
> 2011/08/08 15:45:41 [debug] 11461#0: *1 http postpone filter "/u2.mp4?" 0000000001A02790
> 2011/08/08 15:45:41 [debug] 11461#0: *1 write new buf t:0 f:0 0000000000000000, pos 0000000001A0A56C, size: 5 file: 0, size: 0
> 2011/08/08 15:45:41 [debug] 11461#0: *1 write new buf t:1 f:0 0000000001A6ACB0, pos 0000000001A6ACB0, size: -5 file: 0, size: 0
> 2011/08/08 15:45:41 [debug] 11461#0: *1 http write filter: l:0 f:1 s:0
> 2011/08/08 15:45:41 [debug] 11461#0: *1 Save left_bytes [13] in context
> 2011/08/08 15:45:41 [debug] 11461#0: *1 http copy filter: -2 "/u2.mp4?"
> 2011/08/08 15:45:41 [debug] 11461#0: *1 http finalize request: -2,
> "/u2.mp4?" a:1, c:1
> 2011/08/08 15:45:41 [debug] 11461#0: *1 event timer add: 3: 60000:1312814801328
> 2011/08/08 15:46:41 [debug] 11461#0: *1 event timer del: 3: 1312814801328
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http run request: "/u2.mp4?"
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http writer handler: "/u2.mp4?"
> 2011/08/08 15:46:41 [info] 11461#0: *1 client timed out (110: Connection timed out) while sending mp4 to client, client: 192.168.56.1, server: localhost, request: "GET /u2.mp4 HTTP/1.1", host: "192.168.56.101:8080"
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http finalize request: 408, "/u2.mp4?" a:1, c:1
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate request count:1
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate cleanup count:1 blk:0
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http posted request: "/u2.mp4?"
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate handler count:1
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http request count:1 blk:0
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http close request
> 2011/08/08 15:46:41 [debug] 11461#0: *1 http log handler
> 2011/08/08 15:46:41 [debug] 11461#0: *1 run cleanup: 0000000001A0A290
> 2011/08/08 15:46:41 [debug] 11461#0: *1 file cleanup: fd:12
> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000000000000
> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A6ACB0
> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A546C0
> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A098C0, unused: 8
> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A02680, unused: 3527
> 2011/08/08 15:46:41 [debug] 11461#0: *1 close http connection: 3
> 2011/08/08 15:46:41 [debug] 11461#0: *1 reusable connection: 0
> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A094B0
> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A08EB0
> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 00000000019FDBE0, unused: 8
> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A093A0, unused: 128
>
> My question is what magic do I need to do to tell nginx that I don't
> have data to process this time and to send me other content.
>
> I googled around and found some information from 2007 that told me to
> use a timer, but I can't find an example with that implementation and
> don't know if that is still supported.
>
> Any help is appreciated.
>
> AM
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dave at daveb.net  Tue Aug 9 18:17:11 2011
From: dave at daveb.net (Dave Bailey)
Date: Tue, 9 Aug 2011 11:17:11 -0700
Subject: Output filter is not finished
In-Reply-To:
References:
Message-ID:

On Tue, Aug 9, 2011 at 11:00 AM, Adelino Monteiro wrote:

> Well, since I got no response I suppose that I might not have expressed
> myself properly.
>
> The problem is that I don't know how to tell nginx that there is no
> information and to go on with the next request in the output buffer
> chain. I looked at other modules and googled around but couldn't find
> any information that I could use.

I have found the gzip filter
(src/http/modules/ngx_http_gzip_filter_module.c) to be a useful
template for this.
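For example, the pattern it uses when it has consumed input but has no
output to pass downstream yet is roughly this (a paraphrased sketch,
not the literal source):

    /* while the filter holds buffered data, mark the connection so
       nginx knows the response is not finished yet */
    r->connection->buffered |= NGX_HTTP_GZIP_BUFFERED;

    /* ...and once everything buffered has been flushed: */
    r->connection->buffered &= ~NGX_HTTP_GZIP_BUFFERED;

A third-party filter would presumably need its own buffered bit rather
than reusing the gzip one.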
-dave

> Any help would be appreciated.
>
> AM
>
> On 8 August 2011 16:38, Adelino Monteiro wrote:
>
>> Hello,
>>
>> I've built an output filter based on the guide from Evan Miller and
>> some other modules.
>>
>> Everything has been working fine, but lately I found that my
>> next_body_filter call is returning NGX_AGAIN.
>> This happens when I only have a very small chain that won't get
>> processed (I put it in the ctx to process the next time), so the call to
>> ngx_http_next_body_filter(r, in) basically doesn't have any output to
>> process.
>> Strangely, I see that nginx puts my request into a timer, but what
>> happens is that after that time the request is broken and the rest of
>> the file doesn't get processed.
>>
>> Have a look at the output please:
>>
>> 2011/08/08 15:45:41 [debug] 11461#0: *1 http postpone filter "/u2.mp4?" 0000000001A02790
>> 2011/08/08 15:45:41 [debug] 11461#0: *1 write new buf t:0 f:0 0000000000000000, pos 0000000001A0A56C, size: 5 file: 0, size: 0
>> 2011/08/08 15:45:41 [debug] 11461#0: *1 write new buf t:1 f:0 0000000001A6ACB0, pos 0000000001A6ACB0, size: -5 file: 0, size: 0
>> 2011/08/08 15:45:41 [debug] 11461#0: *1 http write filter: l:0 f:1 s:0
>> 2011/08/08 15:45:41 [debug] 11461#0: *1 Save left_bytes [13] in context
>> 2011/08/08 15:45:41 [debug] 11461#0: *1 http copy filter: -2 "/u2.mp4?"
>> 2011/08/08 15:45:41 [debug] 11461#0: *1 http finalize request: -2, "/u2.mp4?" a:1, c:1
>> 2011/08/08 15:45:41 [debug] 11461#0: *1 event timer add: 3: 60000:1312814801328
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 event timer del: 3: 1312814801328
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http run request: "/u2.mp4?"
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http writer handler: "/u2.mp4?"
>> 2011/08/08 15:46:41 [info] 11461#0: *1 client timed out (110: Connection timed out) while sending mp4 to client, client: 192.168.56.1, server: localhost, request: "GET /u2.mp4 HTTP/1.1", host: "192.168.56.101:8080"
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http finalize request: 408, "/u2.mp4?" a:1, c:1
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate request count:1
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate cleanup count:1 blk:0
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http posted request: "/u2.mp4?"
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate handler count:1
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http request count:1 blk:0
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http close request
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http log handler
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 run cleanup: 0000000001A0A290
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 file cleanup: fd:12
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000000000000
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A6ACB0
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A546C0
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A098C0, unused: 8
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A02680, unused: 3527
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 close http connection: 3
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 reusable connection: 0
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A094B0
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A08EB0
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 00000000019FDBE0, unused: 8
>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A093A0, unused: 128
>>
>> My question is what magic do I need to do to tell nginx that I don't
>> have data to process this time and to send me other content.
>>
>> I googled around and found some information from 2007 that told me to
>> use a timer, but I can't find an example with that implementation and
>> don't know if that is still supported.
>>
>> Any help is appreciated.
>>
>> AM
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From adelino at ainou.net  Thu Aug 11 14:54:26 2011
From: adelino at ainou.net (Adelino Monteiro)
Date: Thu, 11 Aug 2011 15:54:26 +0100
Subject: Output filter is not finished
In-Reply-To:
References:
Message-ID:

I was just doing that, but I found the module to be somewhat complex
and I found no answers for it (again....)

See for instance this line, which you can find throughout the file:

r->connection->buffered &= ~NGX_HTTP_GZIP_BUFFERED;

The r->connection->buffered field is not used in the nginx core, only
in one other module, ngx_http_image_filter_module.c. Does anyone know
what this line influences in the rest of the nginx environment?

Regards,

AM

On 9 August 2011 19:17, Dave Bailey wrote:

> On Tue, Aug 9, 2011 at 11:00 AM, Adelino Monteiro wrote:
>
>> Well, since I got no response I suppose that I might not have expressed
>> myself properly.
>>
>> The problem is that I don't know how to tell nginx that there is no
>> information and to go on with the next request in the output buffer
>> chain. I looked at other modules and googled around but couldn't find
>> any information that I could use.
>
> I have found the gzip filter
> (src/http/modules/ngx_http_gzip_filter_module.c) to be a useful
> template for this.
>
> -dave
>
>> Any help would be appreciated.
>>
>> AM
>>
>> On 8 August 2011 16:38, Adelino Monteiro wrote:
>>
>>> Hello,
>>>
>>> I've built an output filter based on the guide from Evan Miller and
>>> some other modules.
>>>
>>> Everything has been working fine, but lately I found that my
>>> next_body_filter call is returning NGX_AGAIN.
>>> This happens when I only have a very small chain that won't get
>>> processed (I put it in the ctx to process the next time), so the call to
>>> ngx_http_next_body_filter(r, in) basically doesn't have any output to
>>> process.
>>> Strangely, I see that nginx puts my request into a timer, but what
>>> happens is that after that time the request is broken and the rest of
>>> the file doesn't get processed.
>>>
>>> Have a look at the output please:
>>>
>>> 2011/08/08 15:45:41 [debug] 11461#0: *1 http postpone filter "/u2.mp4?" 0000000001A02790
>>> 2011/08/08 15:45:41 [debug] 11461#0: *1 write new buf t:0 f:0 0000000000000000, pos 0000000001A0A56C, size: 5 file: 0, size: 0
>>> 2011/08/08 15:45:41 [debug] 11461#0: *1 write new buf t:1 f:0 0000000001A6ACB0, pos 0000000001A6ACB0, size: -5 file: 0, size: 0
>>> 2011/08/08 15:45:41 [debug] 11461#0: *1 http write filter: l:0 f:1 s:0
>>> 2011/08/08 15:45:41 [debug] 11461#0: *1 Save left_bytes [13] in context
>>> 2011/08/08 15:45:41 [debug] 11461#0: *1 http copy filter: -2 "/u2.mp4?"
>>> 2011/08/08 15:45:41 [debug] 11461#0: *1 http finalize request: -2, "/u2.mp4?" a:1, c:1
>>> 2011/08/08 15:45:41 [debug] 11461#0: *1 event timer add: 3: 60000:1312814801328
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 event timer del: 3: 1312814801328
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http run request: "/u2.mp4?"
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http writer handler: "/u2.mp4?"
>>> 2011/08/08 15:46:41 [info] 11461#0: *1 client timed out (110: Connection timed out) while sending mp4 to client, client: 192.168.56.1, server: localhost, request: "GET /u2.mp4 HTTP/1.1", host: "192.168.56.101:8080"
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http finalize request: 408, "/u2.mp4?" a:1, c:1
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate request count:1
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate cleanup count:1 blk:0
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http posted request: "/u2.mp4?"
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http terminate handler count:1
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http request count:1 blk:0
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http close request
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 http log handler
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 run cleanup: 0000000001A0A290
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 file cleanup: fd:12
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000000000000
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A6ACB0
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A546C0
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A098C0, unused: 8
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A02680, unused: 3527
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 close http connection: 3
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 reusable connection: 0
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A094B0
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A08EB0
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 00000000019FDBE0, unused: 8
>>> 2011/08/08 15:46:41 [debug] 11461#0: *1 free: 0000000001A093A0, unused: 128
>>>
>>> My question is what magic do I need to do to tell nginx that I don't
>>> have data to process this time and to send me other content.
>>>
>>> I googled around and found some information from 2007 that told me to
>>> use a timer, but I can't find an example with that implementation and
>>> don't know if that is still supported.
>>>
>>> Any help is appreciated.
>>>
>>> AM
>>
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oded at geek.co.il  Mon Aug 15 14:59:36 2011
From: oded at geek.co.il (Oded Arbel)
Date: Mon, 15 Aug 2011 14:59:36 -0000 (UTC)
Subject: [PATCH 21 of 31] Fix cpu hog with all upstream servers marked "down"
In-Reply-To: <0d819b2c-1e63-481a-b18b-fdebff31e9e9@vhost1.heptagon.co.il>
Message-ID: <853a7b84-627c-44ec-9156-95974f88612c@vhost1.heptagon.co.il>

Regarding the above-mentioned patch (also quoted below), I wanted to
provide feedback on this:

On my system, we have several reverse proxy servers running Nginx and
forwarding requests to upstream. Our configuration looks like this:

upstream trc {
    server prod2-f1:10213 max_fails=500 fail_timeout=30s;
    server prod2-f2:10213 max_fails=500 fail_timeout=30s;
    ...
    server 127.0.0.1:10213 backup;
    ip_hash;
}

We've noticed that every once in a while (about 5-10 times a week) one
of the servers gets into a state where an Nginx worker starts eating
100% CPU and timing out on requests.
I've applied the aforementioned patch to our Nginx installation
(release 1.0.0 with the Nginx_Upstream_Hash patch) and deployed to our
production servers. After a few hours, we started having the Nginx
workers on all the servers eat 100% CPU.

Connecting with gdb to one of the problematic workers I got this
backtrace:
#0  0x000000000044a650 in ngx_http_upstream_get_round_robin_peer ()
#1  0x00000000004253dc in ngx_event_connect_peer ()
#2  0x0000000000448618 in ngx_http_upstream_connect ()
#3  0x0000000000448e10 in ngx_http_upstream_process_header ()
#4  0x00000000004471fb in ngx_http_upstream_handler ()
#5  0x00000000004247fa in ngx_event_expire_timers ()
#6  0x00000000004246ed in ngx_process_events_and_timers ()
#7  0x000000000042a048 in ngx_worker_process_cycle ()
#8  0x00000000004287e0 in ngx_spawn_process ()
#9  0x000000000042963c in ngx_start_worker_processes ()
#10 0x000000000042a5d5 in ngx_master_process_cycle ()
#11 0x0000000000410adf in main ()

I then tried tracing through the running worker using the GDB command
"next", which said:
Single stepping until exit from function
ngx_http_upstream_get_round_robin_peer

And it never returned until I got fed up and broke it.

I finally reverted the patch and restarted the service, and continued
to get this behavior. So my conclusion is that for my specific problem,
this patch does not solve it.

--
Oded

diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c
--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -583,7 +583,7 @@ failed:
 static ngx_uint_t
 ngx_http_upstream_get_peer(ngx_http_upstream_rr_peers_t *peers)
 {
-    ngx_uint_t                    i, n;
+    ngx_uint_t                    i, n, reset = 0;
     ngx_http_upstream_rr_peer_t  *peer;
 
     peer = &peers->peer[0];
@@ -622,6 +622,10 @@ ngx_http_upstream_get_peer(ngx_http_upst
         return n;
     }
 
+    if (reset++) {
+        return 0;
+    }
+
     for (i = 0; i < peers->number; i++) {
         peer[i].current_weight = peer[i].weight;
     }

From oded at geek.co.il  Mon Aug 15 15:09:38 2011
From: oded at geek.co.il (Oded Arbel)
Date: Mon, 15 Aug 2011 15:09:38 -0000 (UTC)
Subject: [PATCH 21 of 31] Fix cpu hog with all upstream servers marked "down"
In-Reply-To: <853a7b84-627c-44ec-9156-95974f88612c@vhost1.heptagon.co.il>
Message-ID:

----- Original Message -----
> Regarding the above-mentioned patch (also quoted below), I wanted to
> provide feedback on this:
>
> On my system, we have several reverse proxy servers running Nginx and
> forwarding requests to upstream. Our configuration looks like this:
> upstream trc {
>     server prod2-f1:10213 max_fails=500 fail_timeout=30s;
>     server prod2-f2:10213 max_fails=500 fail_timeout=30s;
>     ...
>     server 127.0.0.1:10213 backup;
>     ip_hash;
> }
>
> We've noticed that every once in a while (about 5-10 times a week)
> one of the servers gets into a state where an Nginx worker starts
> eating 100% CPU and timing out on requests. I've applied the
> aforementioned patch to our Nginx installation (release 1.0.0 with
> the Nginx_Upstream_Hash patch) and deployed to our production
> servers. After a few hours, we started having the Nginx workers on
> all the servers eat 100% CPU.
>
> Connecting with gdb to one of the problematic workers I got this
> backtrace:
> #0  0x000000000044a650 in ngx_http_upstream_get_round_robin_peer ()
> #1  0x00000000004253dc in ngx_event_connect_peer ()
> #2  0x0000000000448618 in ngx_http_upstream_connect ()
> #3  0x0000000000448e10 in ngx_http_upstream_process_header ()
> #4  0x00000000004471fb in ngx_http_upstream_handler ()
> #5  0x00000000004247fa in ngx_event_expire_timers ()
> #6  0x00000000004246ed in ngx_process_events_and_timers ()
> #7  0x000000000042a048 in ngx_worker_process_cycle ()
> #8  0x00000000004287e0 in ngx_spawn_process ()
> #9  0x000000000042963c in ngx_start_worker_processes ()
> #10 0x000000000042a5d5 in ngx_master_process_cycle ()
> #11 0x0000000000410adf in main ()
>
> I then tried tracing through the running worker using the GDB command
> "next", which said:
> Single stepping until exit from function
> ngx_http_upstream_get_round_robin_peer
>
> And it never returned until I got fed up and broke it.
>
> I finally reverted the patch and restarted the service, and continued
> to get this behavior. So my conclusion is that for my specific
> problem, this patch does not solve it.

Additionally:

1) I believe that my problem is related to the fact that I have 25% of
the upstream servers configured in the "down" state (due to some
unrelated work on those servers). I've just removed the "down" servers
and restarted, and I will see if that prevents the problem from
happening.

2) The trigger for the problem is continuous load on the servers over a
length of time: with minimal load or with occasional spikes, the
servers perform fine. The reason is likely that under more than
moderate load, the upstream application servers have a relatively high
request failure rate (something like 2-3%), which causes upstream
application servers to always go in and out of the "down" state
automatically, so the list of "up" servers is always in flux.

--
Oded

From mdounin at mdounin.ru  Mon Aug 15 15:59:39 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 15 Aug 2011 19:59:39 +0400
Subject: [PATCH 21 of 31] Fix cpu hog with all upstream servers marked "down"
In-Reply-To: <853a7b84-627c-44ec-9156-95974f88612c@vhost1.heptagon.co.il>
References: <0d819b2c-1e63-481a-b18b-fdebff31e9e9@vhost1.heptagon.co.il>
	<853a7b84-627c-44ec-9156-95974f88612c@vhost1.heptagon.co.il>
Message-ID: <20110815155939.GM1137@mdounin.ru>

Hello!

On Mon, Aug 15, 2011 at 02:59:36PM -0000, Oded Arbel wrote:

> Regarding the above-mentioned patch (also quoted below), I
> wanted to provide feedback on this:
>
> On my system, we have several reverse proxy servers running
> Nginx and forwarding requests to upstream. Our configuration
> looks like this:
> upstream trc {
>     server prod2-f1:10213 max_fails=500 fail_timeout=30s;
>     server prod2-f2:10213 max_fails=500 fail_timeout=30s;
>     ...
>     server 127.0.0.1:10213 backup;
>     ip_hash;

Ip hash balancer doesn't support "backup" servers (and it will
complain loudly if you place "ip_hash" before servers). Could you
please check if you still see the problem after removing the backup
server?

> }
>
> We've noticed that every once in a while (about 5-10 times a
> week) one of the servers gets into a state where an Nginx worker
> starts eating 100% CPU and timing out on requests. I've applied
> the aforementioned patch to our Nginx installation (release
> 1.0.0 with the Nginx_Upstream_Hash patch) and deployed to our

You mean the one from Evan Miller's upstream hash module, as
available at http://wiki.nginx.org/HttpUpstreamRequestHashModule?
> production servers. After a few hours, we started having the
> Nginx workers on all the servers eat 100% CPU.
>
> Connecting with gdb to one of the problematic workers I got this
> backtrace:
> #0  0x000000000044a650 in ngx_http_upstream_get_round_robin_peer ()
> #1  0x00000000004253dc in ngx_event_connect_peer ()
> #2  0x0000000000448618 in ngx_http_upstream_connect ()
> #3  0x0000000000448e10 in ngx_http_upstream_process_header ()
> #4  0x00000000004471fb in ngx_http_upstream_handler ()
> #5  0x00000000004247fa in ngx_event_expire_timers ()
> #6  0x00000000004246ed in ngx_process_events_and_timers ()
> #7  0x000000000042a048 in ngx_worker_process_cycle ()
> #8  0x00000000004287e0 in ngx_spawn_process ()
> #9  0x000000000042963c in ngx_start_worker_processes ()
> #10 0x000000000042a5d5 in ngx_master_process_cycle ()
> #11 0x0000000000410adf in main ()
>
> I then tried tracing through the running worker using the GDB
> command "next", which said:
> Single stepping until exit from function
> ngx_http_upstream_get_round_robin_peer
>
> And it never returned until I got fed up and broke it.
>
> I finally reverted the patch and restarted the service, and
> continued to get this behavior. So my conclusion is that for my
> specific problem, this patch does not solve it.

Your problem is different from the one the patch is intended to solve.
The patch solves one (and only one) problem where all servers are
marked "down" in config, clearly not the case you have.

Maxim Dounin

From zls.sogou at gmail.com  Mon Aug 15 17:51:14 2011
From: zls.sogou at gmail.com (lanshun zhou)
Date: Tue, 16 Aug 2011 01:51:14 +0800
Subject: [PATCH 21 of 31] Fix cpu hog with all upstream servers marked "down"
In-Reply-To: <20110815155939.GM1137@mdounin.ru>
References: <0d819b2c-1e63-481a-b18b-fdebff31e9e9@vhost1.heptagon.co.il>
	<853a7b84-627c-44ec-9156-95974f88612c@vhost1.heptagon.co.il>
	<20110815155939.GM1137@mdounin.ru>
Message-ID:

Do you use the upstream hash module in any of your active upstreams?
Can you provide the full upstream configuration?

2011/8/15 Maxim Dounin

> Hello!
>
> On Mon, Aug 15, 2011 at 02:59:36PM -0000, Oded Arbel wrote:
>
> > Regarding the above-mentioned patch (also quoted below), I
> > wanted to provide feedback on this:
> >
> > On my system, we have several reverse proxy servers running
> > Nginx and forwarding requests to upstream. Our configuration
> > looks like this:
> > upstream trc {
> >     server prod2-f1:10213 max_fails=500 fail_timeout=30s;
> >     server prod2-f2:10213 max_fails=500 fail_timeout=30s;
> >     ...
> >     server 127.0.0.1:10213 backup;
> >     ip_hash;
>
> Ip hash balancer doesn't support "backup" servers (and it will
> complain loudly if you place "ip_hash" before servers). Could you
> please check if you still see the problem after removing the backup
> server?
>
> > }
> >
> > We've noticed that every once in a while (about 5-10 times a
> > week) one of the servers gets into a state where an Nginx worker
> > starts eating 100% CPU and timing out on requests. I've applied
> > the aforementioned patch to our Nginx installation (release
> > 1.0.0 with the Nginx_Upstream_Hash patch) and deployed to our
>
> You mean the one from Evan Miller's upstream hash module, as
> available at http://wiki.nginx.org/HttpUpstreamRequestHashModule?
>
> > production servers. After a few hours, we started having the
> > Nginx workers on all the servers eat 100% CPU.
> >
> > Connecting with gdb to one of the problematic workers I got this
> > backtrace:
> > #0  0x000000000044a650 in ngx_http_upstream_get_round_robin_peer ()
> > #1  0x00000000004253dc in ngx_event_connect_peer ()
> > #2  0x0000000000448618 in ngx_http_upstream_connect ()
> > #3  0x0000000000448e10 in ngx_http_upstream_process_header ()
> > #4  0x00000000004471fb in ngx_http_upstream_handler ()
> > #5  0x00000000004247fa in ngx_event_expire_timers ()
> > #6  0x00000000004246ed in ngx_process_events_and_timers ()
> > #7  0x000000000042a048 in ngx_worker_process_cycle ()
> > #8  0x00000000004287e0 in ngx_spawn_process ()
> > #9  0x000000000042963c in ngx_start_worker_processes ()
> > #10 0x000000000042a5d5 in ngx_master_process_cycle ()
> > #11 0x0000000000410adf in main ()
> >
> > I then tried tracing through the running worker using the GDB
> > command "next", which said:
> > Single stepping until exit from function
> > ngx_http_upstream_get_round_robin_peer
> >
> > And it never returned until I got fed up and broke it.
> >
> > I finally reverted the patch and restarted the service, and
> > continued to get this behavior. So my conclusion is that for my
> > specific problem, this patch does not solve it.
>
> Your problem is different from the one the patch is intended to solve.
> The patch solves one (and only one) problem where all servers are
> marked "down" in config, clearly not the case you have.
>
> Maxim Dounin
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oded at geek.co.il  Mon Aug 15 18:46:24 2011
From: oded at geek.co.il (Oded Arbel)
Date: Mon, 15 Aug 2011 18:46:24 -0000 (UTC)
Subject: [PATCH 21 of 31] Fix cpu hog with all upstream servers marked "down"
In-Reply-To: <20110815155939.GM1137@mdounin.ru>
Message-ID: <34ca8b89-3033-4464-9aba-295d6c21554e@vhost1.heptagon.co.il>

----- Original Message -----
> Ip hash balancer doesn't support "backup" servers (and it will
> complain loudly if you place "ip_hash" before servers). Could you
> please check if you still see the problem after removing the backup
> server?

After removing the backup server, I don't get the 100% cpu behavior
again. Thanks for noting that.

> You mean the one from Evan Miller's upstream hash module, as
> available at http://wiki.nginx.org/HttpUpstreamRequestHashModule?

Indeed. It is not enabled in our default configuration and is only used
in the testing environment. I don't expect it to be related to the
problem, but mentioned it for completeness.

--
Oded

From toli at webforge.bg  Tue Aug 16 09:46:11 2011
From: toli at webforge.bg (Anatoli Marinov)
Date: Tue, 16 Aug 2011 12:46:11 +0300
Subject: realip_module
Message-ID: <4E4A3C63.3070101@webforge.bg>

Hello mates,
I tried realip_module and I found it does not work as I expect.
For example the header may look like this:
X-Forwarded-For: client1, proxy1, proxy2

Where client1 should be the real ip address of the client, proxy1
should be the first proxy after the client and proxy2 should be the
last proxy after the client and the first one before nginx. Nginx has
the connection with proxy2.
I think in this case realip_module should return the client1 ip
address. It returns the last address in the field, proxy2.
What do you think? Is the behaviour wrong or do I not understand the
meaning of this header?

p.s. http://en.wikipedia.org/wiki/X-Forwarded-For

Thanks in advance.
A. Marinov
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From toli at webforge.bg  Tue Aug 16 11:40:29 2011
From: toli at webforge.bg (Anatoli Marinov)
Date: Tue, 16 Aug 2011 14:40:29 +0300
Subject: realip_module
In-Reply-To: <4E4A3C63.3070101@webforge.bg>
References: <4E4A3C63.3070101@webforge.bg>
Message-ID: <4E4A572D.3080303@webforge.bg>

My patch for this issue was:

@@ -157,16 +157,13 @@
         len = r->headers_in.x_forwarded_for->value.len;
         ip = r->headers_in.x_forwarded_for->value.data;
 
-        for (p = ip + len - 1; p > ip; p--) {
-            if (*p == ' ' || *p == ',') {
-                p++;
-                len -= p - ip;
-                ip = p;
-                break;
-            }
-        }
+        p = ip;
 
-        break;
+        while (*p != ',' && *p != ' ' && p < ip + len) {
+            p++;
+        }
+        len = p - ip;
+        break;
 
     default: /* NGX_HTTP_REALIP_HEADER */

@@ -414,6 +411,7 @@

On 08/16/2011 12:46 PM, Anatoli Marinov wrote:
> Hello mates,
> I tried realip_module and I found it does not work as I expect.
> For example the header may look like this:
> X-Forwarded-For: client1, proxy1, proxy2
>
> Where client1 should be the real ip address of the client, proxy1
> should be the first proxy after the client and proxy2 should be the
> last proxy after the client and the first one before nginx. Nginx has
> the connection with proxy2.
> I think in this case realip_module should return the client1 ip
> address. It returns the last address in the field, proxy2.
> What do you think? Is the behaviour wrong or do I not understand the
> meaning of this header?
>
> p.s. http://en.wikipedia.org/wiki/X-Forwarded-For
>
> Thanks in advance.
> A. Marinov
>
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru  Tue Aug 16 12:39:11 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 16 Aug 2011 16:39:11 +0400
Subject: realip_module
In-Reply-To: <4E4A3C63.3070101@webforge.bg>
References: <4E4A3C63.3070101@webforge.bg>
Message-ID: <20110816123911.GO1137@mdounin.ru>

Hello!

On Tue, Aug 16, 2011 at 12:46:11PM +0300, Anatoli Marinov wrote:

> Hello mates,
> I tried realip_module and I found it does not work as I expect.
> For example the header may look like this:
> X-Forwarded-For: client1, proxy1, proxy2
>
> Where client1 should be the real ip address of the client, proxy1
> should be the first proxy after the client and proxy2 should be the
> last proxy after the client and the first one before nginx. Nginx
> has the connection with proxy2.

If the request flow looks like

client1 -> proxy1 -> proxy2 -> nginx

(that is, nginx sees a connection from proxy2), the X-Forwarded-For
header will be "client1, proxy1". The address added by proxy2 is
"proxy1". If we trust proxy2, we may only use "proxy1" as a client
address; everything else isn't trusted.

> I think in this case realip_module should return the client1 ip
> address. It returns the last address in the field, proxy2.
> What do you think? Is the behaviour wrong or do I not understand
> the meaning of this header?

Right now nginx is only able to take *one* address, the one which was
added by a trusted proxy which connected to nginx. As X-Forwarded-For
contains a chain of addresses, it's possible to pick the first
untrusted address. That is, in the above case we may pick "client1"
if we trust both proxy2 and proxy1. This is not currently done, see
http://trac.nginx.org/nginx/ticket/2.
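To illustrate what that could look like (note: the "real_ip_recursive"
flag below is hypothetical, such a directive does not exist yet;
addresses are placeholders):

    set_real_ip_from   192.0.2.1;        # proxy2, connects to nginx
    set_real_ip_from   192.0.2.2;        # proxy1, also trusted
    real_ip_header     X-Forwarded-For;
    real_ip_recursive  on;               # hypothetical directive

    # with "client1, proxy1" in X-Forwarded-For, nginx would skip the
    # trusted proxy1 and use client1 as the client address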
Maxim Dounin

From mdounin at mdounin.ru  Tue Aug 16 12:42:05 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 16 Aug 2011 16:42:05 +0400
Subject: realip_module
In-Reply-To: <4E4A572D.3080303@webforge.bg>
References: <4E4A3C63.3070101@webforge.bg> <4E4A572D.3080303@webforge.bg>
Message-ID: <20110816124205.GP1137@mdounin.ru>

Hello!

On Tue, Aug 16, 2011 at 02:40:29PM +0300, Anatoli Marinov wrote:

> My patch for this issue was:
> @@ -157,16 +157,13 @@
>          len = r->headers_in.x_forwarded_for->value.len;
>          ip = r->headers_in.x_forwarded_for->value.data;
>
> -        for (p = ip + len - 1; p > ip; p--) {
> -            if (*p == ' ' || *p == ',') {
> -                p++;
> -                len -= p - ip;
> -                ip = p;
> -                break;
> -            }
> -        }
> +        p = ip;
>
> -        break;
> +        while (*p != ',' && *p != ' ' && p < ip + len) {
> +            p++;
> +        }
> +        len = p - ip;
> +        break;
>
>     default: /* NGX_HTTP_REALIP_HEADER */

This patch is just wrong: it picks the first address from
X-Forwarded-For, which may be easily forged.

Maxim Dounin

> @@ -414,6 +411,7 @@
>
> On 08/16/2011 12:46 PM, Anatoli Marinov wrote:
> > Hello mates,
> > I tried realip_module and I found it does not work as I expect.
> > For example the header may look like this:
> > X-Forwarded-For: client1, proxy1, proxy2
> >
> > Where client1 should be the real ip address of the client, proxy1
> > should be the first proxy after the client and proxy2 should be
> > the last proxy after the client and the first one before nginx.
> > Nginx has the connection with proxy2.
> > I think in this case realip_module should return the client1 ip
> > address. It returns the last address in the field, proxy2.
> > What do you think? Is the behaviour wrong or do I not understand
> > the meaning of this header?
> >
> > p.s. http://en.wikipedia.org/wiki/X-Forwarded-For
> >
> > Thanks in advance.
> > A. Marinov
> >
> >
> > _______________________________________________
> > nginx-devel mailing list
> > nginx-devel at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

From toli at webforge.bg  Tue Aug 16 13:17:53 2011
From: toli at webforge.bg (Anatoli Marinov)
Date: Tue, 16 Aug 2011 16:17:53 +0300
Subject: realip_module
In-Reply-To: <20110816124205.GP1137@mdounin.ru>
References: <4E4A3C63.3070101@webforge.bg> <4E4A572D.3080303@webforge.bg>
	<20110816124205.GP1137@mdounin.ru>
Message-ID: <4E4A6E01.2060807@webforge.bg>

Thanks, Mr. Dounin,
I understood your point of view. The patch is not correct.
Thank you.

On 08/16/2011 03:42 PM, Maxim Dounin wrote:
> Hello!
>
> On Tue, Aug 16, 2011 at 02:40:29PM +0300, Anatoli Marinov wrote:
>
>> My patch for this issue was:
>> @@ -157,16 +157,13 @@
>>          len = r->headers_in.x_forwarded_for->value.len;
>>          ip = r->headers_in.x_forwarded_for->value.data;
>>
>> -        for (p = ip + len - 1; p > ip; p--) {
>> -            if (*p == ' ' || *p == ',') {
>> -                p++;
>> -                len -= p - ip;
>> -                ip = p;
>> -                break;
>> -            }
>> -        }
>> +        p = ip;
>>
>> -        break;
>> +        while (*p != ',' && *p != ' ' && p < ip + len) {
>> +            p++;
>> +        }
>> +        len = p - ip;
>> +        break;
>>
>>     default: /* NGX_HTTP_REALIP_HEADER */
> This patch is just wrong: it picks the first address from
> X-Forwarded-For, which may be easily forged.
>
> Maxim Dounin
>
>> @@ -414,6 +411,7 @@
>>
>> On 08/16/2011 12:46 PM, Anatoli Marinov wrote:
>>> Hello mates,
>>> I tried realip_module and I found it does not work as I expect.
>>> For example the header may look like this:
>>> X-Forwarded-For: client1, proxy1, proxy2
>>>
>>> Where client1 should be the real ip address of the client, proxy1
>>> should be the first proxy after the client and proxy2 should be
>>> the last proxy after the client and the first one before nginx.
>>> Nginx has the connection with proxy2.
>>> I think in this case realip_module should return the client1 ip
>>> address. It returns the last address in the field, proxy2.
>>> What do you think? Is the behaviour wrong or do I not understand
>>> the meaning of this header?
>>>
>>> p.s. http://en.wikipedia.org/wiki/X-Forwarded-For
>>>
>>> Thanks in advance.
>>> A. Marinov
>>>
>>>
>>> _______________________________________________
>>> nginx-devel mailing list
>>> nginx-devel at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

From Iry.Witham at jax.org  Wed Aug 17 12:01:18 2011
From: Iry.Witham at jax.org (Iry Witham)
Date: Wed, 17 Aug 2011 12:01:18 +0000
Subject: Proxy configuration
Message-ID:

I am working on an issue with Nginx, using it as the proxy server for
our internal Galaxy server. We are attempting to use the IGV module in
Galaxy, and when we use the option to view BAM files in the browser we
get a Java Runtime error. So far the only way around the error is to
modify the URL to be http://galaxy:8080/. I am able to use the other
available options just fine. I have been involved in conversations
with both Galaxy-Dev and IGV support. The latter suggested that I
mention the following:

"The issue I have seen with some proxy servers is that they rewrite
the request header and in the process remove a 'range-byte' request
header. This is critical and should not be removed."

Here is a copy of our nginx.conf file. I removed the trailing portion
since it is commented out:

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
error_log  logs/error.log  debug;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

user galaxy galaxy;

http {
    include       mime.types;
    default_type  application/octet-stream;

    # Added for compression and caching.
    # According to the Galaxy wiki page:
    # http://bitbucket.org/galaxy/galaxy-central/wiki/Config/nginxProxy
    # This will decrease download and page load times for clients
    gzip  on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml text/javascript application/json;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6].(?!.*SV1)";

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    # Added to support proxying to galaxy server
    upstream galaxy_app {
        server localhost:8080;
        server localhost:8081;
        server localhost:8082;
    }

    server {
        listen       80;
        *** listen       localhost:80; ***
        server_name  localhost;

        # Line added for galaxy support
        client_max_body_size 10G;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        # Added to allow nginx to handle file uploads
        location /_upload {
            upload_store /hpcdata/galaxy-setup/galaxy-dist/database/tmp/upload_store;
            upload_pass_form_field "";
            upload_set_form_field "__${upload_field_name}__is_composite" "true";
            upload_set_form_field "__${upload_field_name}__keys" "name path";
            upload_set_form_field "${upload_field_name}_name" "$upload_file_name";
            upload_set_form_field "${upload_field_name}_path" "$upload_tmp_path";
            upload_pass_args on;
            upload_pass /_upload_done;
        }

        location /_upload_done {
            set $dst /tool_runner/index;
            if ($args ~ nginx_redir=([^&]+)) {
                set $dst $1;
            }
            rewrite "" $dst;
        }

        # added to allow nginx to handle file downloads
        location /_x_accel_redirect/ {
            internal;
            alias /;
        }

        # The following series of "locations" are for off-loading static
        # content from the galaxy server.
        location /static {
            alias /hpcdata/galaxy-setup/galaxy-dist/static;
            # This will decrease download and page load times for clients
            expires 24h;
        }

        location /static/style {
            alias /hpcdata/galaxy-setup/galaxy-dist/static/june_2007_style/blue;
        }

        location /static/scripts {
            alias /hpcdata/galaxy-setup/galaxy-dist/static/scripts/packed;
            # This will decrease download and page load times for clients
            expires 24h;
        }

        location /favicon.ico {
            alias /hpcdata/galaxy-setup/galaxy-dist/static/favicon.ico;
        }

        location /robots.txt {
            alias /hpcdata/galaxy-setup/galaxy-dist/static/robots.txt;
        }
        # End of static content

        location / {
            # next two lines commented out for galaxy redirect
            #root   html;
            #index  index.html index.htm;
            proxy_pass http://galaxy_app;
            *** proxy_pass http://galaxy_app:8080; ***
            proxy_read_timeout 500;
            proxy_next_upstream error;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

Is there something that I am missing? Any assistance would be greatly appreciated.

Regards,

Iry Witham
Applications Administrator
Scientific Computing Group
Computational Sciences Dept.
The Jackson Laboratory
600 Main Street
Bar Harbor, ME  04609
Phone: 207-288-6744
email: iry.witham at jax.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Wed Aug 17 14:29:06 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 17 Aug 2011 18:29:06 +0400
Subject: Proxy configuration
In-Reply-To: 
References: 
Message-ID: <20110817142906.GY1137@mdounin.ru>

Hello!

On Wed, Aug 17, 2011 at 12:01:18PM +0000, Iry Witham wrote:

> I am working on an issue with Nginx, using it as the proxy
> server for our internal Galaxy server. We are attempting to
> use the IGV module in Galaxy, and when we use the option to
> view BAM files in the browser we get a Java Runtime error.
> So far the only way around the error is to modify the URL to
> be http://galaxy:8080/. I am able to use the other available
> options just fine. I have been involved in conversations
> with both Galaxy-Dev and IGV support. The latter suggested
> that I mention the following:
>
> "The issue I have seen with some proxy servers is that
> they rewrite the request header and in the process remove a "range-byte"
> request header. This is critical and should not be removed."

There is no such thing as a "range-byte" request header. Probably they were talking about the "Range" request header. It is not removed by nginx by default (unless you use proxy_cache, and your config suggests you aren't).

On the other hand, this means that the app in question fails to use HTTP correctly. HTTP clients may not rely on a range request being satisfied by a range response: a full response is perfectly valid per the HTTP protocol, and clients should handle it.

> Here is a copy of our nginx.conf file. I removed the
> trailing portion since it is commented out:

[...]

> Is there something that I am missing? Any assistance would be
> greatly appreciated.

The config looks ok. You may try to produce a debug log as described at [1], but I don't expect it to show anything. Any perfectly valid thing may cause the app to misbehave (especially keeping in mind that the app is known to not handle HTTP properly, see above), and it's not really possible to say which one without debugging the app in question. You may also want to trace what's happening on the wire with nginx compared to a direct connection.

Quick things to try also include:

1. Disable keepalive with "keepalive_timeout 0;". There have been reports here that some apps (Java apps, if I recall correctly) don't understand chunked responses despite sending HTTP/1.1 requests. (Disabling keepalive will also disable the use of chunked encoding.)

2. Disable gzip with "gzip off;".

[1] http://wiki.nginx.org/Debugging

Maxim Dounin
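The range-handling point above is worth a concrete illustration: a well-behaved client sends Range but must be prepared for either a 206 partial response or a plain 200 full response. A minimal libcurl sketch of that check (the URL is a placeholder and this is illustrative, not IGV's actual code):

#include <stdio.h>
#include <curl/curl.h>

/* Swallow the body; only the status code matters for this check. */
static size_t
discard(void *ptr, size_t size, size_t nmemb, void *userdata)
{
    (void) ptr; (void) userdata;
    return size * nmemb;
}

int
main(void)
{
    CURL *curl = curl_easy_init();
    long  status = 0;

    if (curl == NULL) {
        return 1;
    }

    curl_easy_setopt(curl, CURLOPT_URL, "http://galaxy.example/data.bam");
    curl_easy_setopt(curl, CURLOPT_RANGE, "0-65535");  /* Range: bytes=0-65535 */
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);

    if (curl_easy_perform(curl) == CURLE_OK) {
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);

        if (status == 206) {
            puts("server honoured the range request");
        } else if (status == 200) {
            /* Perfectly legal: the server sent the whole file.
             * A client that errors out here is the broken party. */
            puts("full response; client must cope");
        }
    }

    curl_easy_cleanup(curl);
    return 0;
}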
From mdounin at mdounin.ru  Wed Aug 17 15:43:15 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 17 Aug 2011 19:43:15 +0400
Subject: [PATCH] Lower optimization level for Sun Studio before 12.1
Message-ID: <0879b1bfe58dbccc21ac.1313595795@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1313530448 -14400
# Node ID 0879b1bfe58dbccc21accbf32f25306ecacb26e4
# Parent  561a37709f6d7f31424a04d7e2c4855a7464a933
Lower optimization level for Sun Studio before 12.1.

At least Sun Studio 12 has problems with bit-fields exposed by nginx code (caught by the test suite). They seem to be fixed in Sun Studio 12.1. As a workaround, use "-fast -xalias_level=any" for older versions; it resolves the problem.

diff --git a/auto/cc/sunc b/auto/cc/sunc
--- a/auto/cc/sunc
+++ b/auto/cc/sunc
@@ -6,6 +6,7 @@
 # Sun C 5.8 2005/10/13    Sun Studio 11
 # Sun C 5.9 SunOS_i386 2007/05/03   Sun Studio 12
 # Sun C 5.9 SunOS_sparc 2007/05/03
+# Sun C 5.10 SunOS_i386 2009/06/03  Sun Studio 12.1

 NGX_SUNC_VER=`$CC -V 2>&1 | grep 'Sun C' 2>&1 \
                           | sed -e 's/^.* Sun C \(.*\)/\1/'`
@@ -57,9 +58,19 @@ esac

 # optimizations

+# 20736 == 0x5100, Sun Studio 12.1
+
+if [ "$ngx_sunc_ver" -ge 20736 ]; then
+    FAST="-fast"
+
+else
+    # older versions had problems with bit-fields
+    FAST="-fast -xalias_level=any"
+fi
+
 IPO=-xipo
-CFLAGS="$CFLAGS -fast $IPO"
-CORE_LINK="$CORE_LINK -fast $IPO"
+CFLAGS="$CFLAGS $FAST $IPO"
+CORE_LINK="$CORE_LINK $FAST $IPO"

 case $CPU in
@@ -126,15 +137,15 @@ CFLAGS="$CFLAGS $CPU_OPT"

 if [ ".$PCRE_OPT" = "." ]; then
-    PCRE_OPT="-fast $IPO $CPU_OPT"
+    PCRE_OPT="$FAST $IPO $CPU_OPT"
 fi

 if [ ".$MD5_OPT" = "." ]; then
-    MD5_OPT="-fast $IPO $CPU_OPT"
+    MD5_OPT="$FAST $IPO $CPU_OPT"
 fi

 if [ ".$ZLIB_OPT" = "." ]; then
-    ZLIB_OPT="-fast $IPO $CPU_OPT"
+    ZLIB_OPT="$FAST $IPO $CPU_OPT"
 fi

From thibault.koechlin at nbs-system.com  Thu Aug 18 16:45:49 2011
From: thibault.koechlin at nbs-system.com (Thibault Koechlin)
Date: Thu, 18 Aug 2011 18:45:49 +0200
Subject: Persistent data across rewrite
Message-ID: <1313685949.7486.23.camel@zeroed.int.nbs-system.com>

Hello List,

I'm currently working on a module that needs to parse every incoming request, but only once. In the past I was relying on r->internal to decide whether I needed to parse the request or not, but my assumption was wrong (I was naive and thought I could always catch the request before it goes "internal").

I found that if there is a rewrite (and probably in some other cases), then, as my module is registered in the ACCESS phase, the r->internal flag will already be set when the request reaches the ACCESS phase (the rewrite phase is processed earlier and sets the r->internal flag). As well, the module's ctx doesn't survive a rewrite, so I cannot use that either.

I am looking for a way to keep "persistent" data across the whole request life, some data that will not disappear with rewrites etc. Actually, I just want to know whether I have already parsed the request, to be sure to parse it only once. This data can be accessible only to my module; that's not an issue.

I had a look at ngx_http_variable, but I'm afraid it's not suited to my needs, as it's persistent and aimed at being accessed from the config file, while I just need a flag that says "this request has already been parsed, don't parse it" or "this request has not been parsed: parse it, and set the flag to say it has been parsed".

Apart from some (terribly wrong and dirty) tricks (like adding a field to ngx_http_request_t, or using global variables...), I didn't find any way to do this. Can someone point me to something that might allow me to keep a 'flag' for the whole request life?

Regards,

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: This is a digitally signed message part
URL: 
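One workaround for the question above, sketched below under stated assumptions: module ctx slots are cleared on every internal redirect, but pool cleanups registered on the main request's pool are not, so a cleanup entry can double as a per-request flag. This is an illustrative sketch against the nginx module API, not a recommended or official pattern; the handler does nothing and exists only so the entry can be found again, and the my_* names are hypothetical.

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* The entry itself is the "already parsed" flag; nothing to clean. */
static void
ngx_my_dummy_cleanup(void *data)
{
}

static ngx_int_t
ngx_my_mark_parsed(ngx_http_request_t *r)
{
    ngx_pool_cleanup_t  *cln;

    /* register on the main request's pool so the flag also covers
     * subrequests, which share the same pool */
    cln = ngx_pool_cleanup_add(r->main->pool, 0);
    if (cln == NULL) {
        return NGX_ERROR;
    }

    cln->handler = ngx_my_dummy_cleanup;

    return NGX_OK;
}

static ngx_uint_t
ngx_my_already_parsed(ngx_http_request_t *r)
{
    ngx_pool_cleanup_t  *cln;

    /* internal redirects do not touch the pool, so this survives them */
    for (cln = r->main->pool->cleanup; cln; cln = cln->next) {
        if (cln->handler == ngx_my_dummy_cleanup) {
            return 1;
        }
    }

    return 0;
}

The lookup is a linear scan of the cleanup list, which is short in practice; if that ever mattered, cln->data could point at a richer per-request structure instead of serving as a bare flag.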
From Brian.Akins at turner.com  Fri Aug 19 14:50:22 2011
From: Brian.Akins at turner.com (Akins, Brian)
Date: Fri, 19 Aug 2011 10:50:22 -0400
Subject: Context for connections
Message-ID: 

I'd like to be able to store some per-module, per-connection data, like the ctx in the requests. I was wondering if I should just patch nginx, or if someone else had done something similar.

Thanks.

-- 
Brian Akins

From ngx.eugaia at gmail.com  Fri Aug 19 22:18:36 2011
From: ngx.eugaia at gmail.com (Eugaia)
Date: Sat, 20 Aug 2011 01:18:36 +0300
Subject: Context for connections
In-Reply-To: 
References: 
Message-ID: <4E4EE13C.5070002@gmail.com>

Hi Brian,

On 19/08/2011 17:50, Akins, Brian wrote:
> I'd like to be able to store some per-module, per-connection data, like the
> ctx in the requests. I was wondering if I should just patch nginx, or if
> someone else had done something similar.

Can I assume that in your case just storing the data in the c->data slot isn't going to work (easily/nicely)? e.g.

typedef struct {
    ngx_http_request_t   *r;    /* or whatever other info you need */
    void                **ctx;
} ngx_my_connection_data_t;

Assuming you're talking about hooking into http requests somehow, you probably don't want to be playing around with the main connection in this way, because there are loads of functions that assume that c->data is some ngx_http_request_t * (either the parent or a subrequest) - it's doable, but messy. However, if you're creating your own connections, then there's no reason you can't use such a construction. You'd have to make sure that all your event handlers and other callbacks were able to handle this data struct, of course.

If you are talking about doing this for connections including 'standard' ones, like the main http request, I'd patch the ngx_connection_t struct to just add an extra ctx to it. It's a very minor addition, and would be much easier than trying to deal with changing event handlers all over the place.

Cheers,

Marcus.
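For reference, the "very minor addition" Marcus describes might look roughly like the fragment below. This is a hypothetical, untested sketch, not an official patch: it adds a per-module slot array to ngx_connection_t and indexes it the way request ctx slots are indexed.

/* Hypothetical sketch of a per-module ctx on ngx_connection_t,
 * mirroring the r->ctx convention.  Not actual nginx source. */

struct ngx_connection_s {
    void               *data;
    /* ... existing members ... */
    void              **ctx;    /* new: per-module connection context */
};

/* accessor macros, analogous to ngx_http_get_module_ctx() */
#define ngx_conn_get_module_ctx(c, module)   (c)->ctx[module.ctx_index]
#define ngx_conn_set_module_ctx(c, ctx, module)                              \
    (c)->ctx[module.ctx_index] = ctx

/* the array would have to be allocated once when the connection is
 * set up, e.g.:
 *
 *     c->ctx = ngx_pcalloc(c->pool, sizeof(void *) * ngx_http_max_module);
 */

The trade-off is one extra pointer per connection (plus the array allocation) against not having to thread a wrapper struct through every event handler.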
From mdounin at mdounin.ru  Sat Aug 20 11:31:19 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 20 Aug 2011 15:31:19 +0400
Subject: [PATCH] Move SO_ACCEPTFILTER and TCP_DEFER_ACCEPT checks into configure
Message-ID: <1343bc568a27e99d3f5b.1313839879@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1313839727 -14400
# Node ID 1343bc568a27e99d3f5b7d9bcbefefc1a26cf565
# Parent  808804a512eff99261b7c0643256f8824866afa7
Move SO_ACCEPTFILTER and TCP_DEFER_ACCEPT checks into configure.

NetBSD 5.0+ has SO_ACCEPTFILTER support merged from FreeBSD, and having the accept filter check in the FreeBSD-specific ngx_freebsd_config.h prevents it from being used on NetBSD. Therefore move the check into configure (and do the same for the Linux-specific TCP_DEFER_ACCEPT, just to be in line).

diff --git a/auto/unix b/auto/unix
--- a/auto/unix
+++ b/auto/unix
@@ -295,6 +295,7 @@ if [ $ngx_found != yes ]; then
     fi
 fi

+
 ngx_feature="SO_SETFIB"
 ngx_feature_name="NGX_HAVE_SETFIB"
 ngx_feature_run=no
@@ -305,6 +306,28 @@ ngx_feature_test="setsockopt(0, SOL_SOCK
 . auto/feature


+ngx_feature="SO_ACCEPTFILTER"
+ngx_feature_name="NGX_HAVE_DEFERRED_ACCEPT"
+ngx_feature_run=no
+ngx_feature_incs="#include <sys/socket.h>"
+ngx_feature_path=
+ngx_feature_libs=
+ngx_feature_test="setsockopt(0, SOL_SOCKET, SO_ACCEPTFILTER, NULL, 0)"
+. auto/feature
+
+
+ngx_feature="TCP_DEFER_ACCEPT"
+ngx_feature_name="NGX_HAVE_DEFERRED_ACCEPT"
+ngx_feature_run=no
+ngx_feature_incs="#include <sys/socket.h>
+                  #include <netinet/in.h>
+                  #include <netinet/tcp.h>"
+ngx_feature_path=
+ngx_feature_libs=
+ngx_feature_test="setsockopt(0, IPPROTO_TCP, TCP_DEFER_ACCEPT, NULL, 0)"
+. auto/feature
+
+
 ngx_feature="accept4()"
 ngx_feature_name="NGX_HAVE_ACCEPT4"
 ngx_feature_run=no

diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c
--- a/src/core/ngx_connection.c
+++ b/src/core/ngx_connection.c
@@ -580,7 +580,7 @@ ngx_configure_listening_sockets(ngx_cycl
             {
                 ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno,
                               "setsockopt(SO_ACCEPTFILTER, \"%s\") "
-                              " for %V failed, ignored",
+                              "for %V failed, ignored",
                               ls[i].accept_filter, &ls[i].addr_text);
                 continue;
             }

diff --git a/src/os/unix/ngx_freebsd_config.h b/src/os/unix/ngx_freebsd_config.h
--- a/src/os/unix/ngx_freebsd_config.h
+++ b/src/os/unix/ngx_freebsd_config.h
@@ -92,11 +92,6 @@ typedef struct aiocb  ngx_aiocb_t;
 #define NGX_LISTEN_BACKLOG        -1


-#if (defined SO_ACCEPTFILTER && !defined NGX_HAVE_DEFERRED_ACCEPT)
-#define NGX_HAVE_DEFERRED_ACCEPT  1
-#endif
-
-
 #if (__FreeBSD_version < 430000 || __FreeBSD_version < 500012)

 pid_t rfork_thread(int flags, void *stack, int (*func)(void *arg), void *arg);

diff --git a/src/os/unix/ngx_linux_config.h b/src/os/unix/ngx_linux_config.h
--- a/src/os/unix/ngx_linux_config.h
+++ b/src/os/unix/ngx_linux_config.h
@@ -96,11 +96,6 @@ typedef struct iocb  ngx_aiocb_t;
 #define NGX_LISTEN_BACKLOG        511


-#if defined TCP_DEFER_ACCEPT && !defined NGX_HAVE_DEFERRED_ACCEPT
-#define NGX_HAVE_DEFERRED_ACCEPT  1
-#endif
-
-
 #ifndef NGX_HAVE_SO_SNDLOWAT
 /* setsockopt(SO_SNDLOWAT) returns  ENOPROTOOPT */
 #define NGX_HAVE_SO_SNDLOWAT  0
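As context for the patch above: both options arrange for accept() to be delayed until request data has arrived, just through different platform interfaces. A hedged, standalone sketch of what code guarded by NGX_HAVE_DEFERRED_ACCEPT ends up calling on each platform (illustrative, not the nginx source):

#include <string.h>
#include <sys/socket.h>     /* SO_ACCEPTFILTER (FreeBSD, NetBSD 5.0+) */
#include <netinet/in.h>
#include <netinet/tcp.h>    /* TCP_DEFER_ACCEPT (Linux) */

static int
set_deferred_accept(int fd)
{
#if defined(SO_ACCEPTFILTER)
    /* BSD accept filters: the kernel waits for a complete HTTP request
     * head before accept() returns. */
    struct accept_filter_arg  af;

    memset(&af, 0, sizeof(af));
    strcpy(af.af_name, "httpready");

    return setsockopt(fd, SOL_SOCKET, SO_ACCEPTFILTER, &af, sizeof(af));

#elif defined(TCP_DEFER_ACCEPT)
    /* Linux: wake the accepting process only when data has arrived;
     * the value is a timeout in seconds. */
    int  timeout = 1;

    return setsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &timeout,
                      sizeof(int));

#else
    (void) fd;
    return 0;   /* no deferred-accept support on this platform */
#endif
}

Moving the detection into configure, as the patch does, means the macro reflects what the build host actually provides instead of which OS-specific header happened to define it.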
From sb at waeme.net  Wed Aug 24 08:21:41 2011
From: sb at waeme.net (Sergey Budnevitch)
Date: Wed, 24 Aug 2011 12:21:41 +0400
Subject: svn.nginx.org repository changes
Message-ID: <9B24920E-060F-43A1-B823-54A1E572FF97@waeme.net>

FYI

The svn://svn.nginx.org structure has changed. Previously there was one repository, svn.nginx.org, and the /nginx directory was inside this repository. Since we need to add one more repository with the nginx.org site sources, the /nginx directory was stripped from the original repository.

New structure:

svn://svn.nginx.org/nginx     - repo with nginx sources
svn://svn.nginx.org/nginx.org - repo with nginx.org site sources

Those who need to fix an old checkout may run this simple script:

for dir in `find . -name ".svn" -type d`; do
    find $dir -type f -exec perl -pi -e \
        's|svn://svn.nginx.org$|svn://svn.nginx.org/nginx|' {} \;
done

From thias at spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net  Wed Aug 24 12:41:58 2011
From: thias at spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net (Matthias Saou)
Date: Wed, 24 Aug 2011 14:41:58 +0200
Subject: svn.nginx.org repository changes
In-Reply-To: <9B24920E-060F-43A1-B823-54A1E572FF97@waeme.net>
References: <9B24920E-060F-43A1-B823-54A1E572FF97@waeme.net>
Message-ID: <20110824144158.60ba6c79@fusion>

Sergey Budnevitch wrote:

> Those who need to fix an old checkout may run this simple script:
>
> for dir in `find . -name ".svn" -type d`; do
>     find $dir -type f -exec perl -pi -e \
>         's|svn://svn.nginx.org$|svn://svn.nginx.org/nginx|' {} \;
> done

There should be a cleaner way. Something like this should work:

svn switch --relocate svn://svn.nginx.org svn://svn.nginx.org/nginx

Matthias

From sb at waeme.net  Wed Aug 24 15:02:07 2011
From: sb at waeme.net (Sergey Budnevitch)
Date: Wed, 24 Aug 2011 19:02:07 +0400
Subject: svn.nginx.org repository changes
In-Reply-To: <20110824144158.60ba6c79@fusion>
References: <9B24920E-060F-43A1-B823-54A1E572FF97@waeme.net> <20110824144158.60ba6c79@fusion>
Message-ID: 

On 24.08.2011, at 16:41, Matthias Saou wrote:

> Sergey Budnevitch wrote:
>
>> Those who need to fix an old checkout may run this simple script
>
> There should be a cleaner way. Something like this should work:
> svn switch --relocate svn://svn.nginx.org svn://svn.nginx.org/nginx

Unfortunately svn switch --relocate does not work in this case; I have already tried it with a slightly different repo. --relocate actually replaces the old repo path with the new one in s/oldpath/newpath/ manner, but there are actually two paths in each .svn/entries: the "repo path" and the "current path", for example svn://svn.nginx.org and svn://svn.nginx.org/nginx/trunk. So --relocate will transform the "repo path" correctly, but /nginx in the "current path" will be doubled.

From fin at xbhd.org  Fri Aug 26 11:40:09 2011
From: fin at xbhd.org (fin)
Date: Fri, 26 Aug 2011 13:40:09 +0200
Subject: busywaiting without blocking event loop OR timers
Message-ID: 

hello devel,

i'm working on a module that uses ngx_event_connect to connect to a 3rd-party service and parses its responses. the answer depends on the request hostname, so now i'm implementing a cache in shared memory to cache the responses on a per-hostname basis. i'm protecting access to every cache entry using an ngx_atomic_t.

now i'd like to prevent multiple concurrent connections to the 3rd-party service from requests for the same hostname. i see two ways to solve that:

* busy-wait on the ngx_atomic_t as long as a request to the service is open
* create an event and wait for it

now, how can i busy-wait on an ngx_atomic_t without blocking the handling of other requests in the same process? (these requests can last up to 10 seconds in the worst case) -> can i yield to the event loop while busy-waiting? can i create events independent of I/O, like timer events?

thanks in advance
-fin

PS: our howto on doing what we're doing in our module:
http://blog.efficientcloud.com/2011/08/24/nginx-upstream/
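On the last question above: nginx does have events that are independent of I/O, namely timer events. Rather than busy-waiting on the atomic (which would stall every other request in the worker), a handler can re-arm a timer and return to the event loop, polling the flag on each expiry. A rough sketch of the pattern, with the my_* checks and the 50 ms period being hypothetical placeholders, and error handling plus the surrounding module glue omitted:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* Poll a shared-memory flag periodically without blocking the worker. */
static void
my_wait_event_handler(ngx_event_t *ev)
{
    ngx_http_request_t  *r = ev->data;

    if (my_entry_is_ready(r)) {       /* hypothetical check of the
                                         shared cache entry */
        my_continue_processing(r);    /* hypothetical continuation */
        return;
    }

    ngx_add_timer(ev, 50);            /* not ready: check again in 50 ms */
}

static void
my_start_waiting(ngx_http_request_t *r, ngx_event_t *ev)
{
    ev->handler = my_wait_event_handler;
    ev->data = r;
    ev->log = r->connection->log;

    ngx_add_timer(ev, 50);            /* returns; the event loop keeps
                                         serving other requests */
}

The ngx_event_t itself can live in the module's request ctx; each expiry costs one pass through the timer tree, so even many concurrent waiters remain cheap compared to spinning.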
From zhu.qunying at gmail.com  Tue Aug 30 05:19:30 2011
From: zhu.qunying at gmail.com (Qun-Ying)
Date: Mon, 29 Aug 2011 22:19:30 -0700
Subject: Is it possible to define a common location handler for all servers?
Message-ID: 

Hi,

Let's say we have a sample configuration:

http {
    server {
        server_name "server1";
    }

    server {
        server_name "server2";
    }
    ..
}

Is it possible to define a single location entry (not using the include directive, which actually defines the entry for each server after include expansion)

location /error {
    error_handler
}

so that it could be used in all servers?

I tried to define it at the http level, but encountered an error saying a location could not be defined outside a server.

Thanks

-- 
Qun-Ying

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From igor at sysoev.ru  Tue Aug 30 05:24:27 2011
From: igor at sysoev.ru (Igor Sysoev)
Date: Tue, 30 Aug 2011 09:24:27 +0400
Subject: Is it possible to define a common location handler for all servers?
In-Reply-To: 
References: 
Message-ID: <20110830052427.GE65166@nginx.com>

On Mon, Aug 29, 2011 at 10:19:30PM -0700, Qun-Ying wrote:
> Hi,
>
> Let's say we have a sample configuration:
>
> http {
>     server {
>         server_name "server1";
>     }
>
>     server {
>         server_name "server2";
>     }
>     ..
> }
>
> Is it possible to define a single location entry (not using the include
> directive, which actually defines the entry for each server after include
> expansion)
>
> location /error {
>     error_handler
> }
>
> so that it could be used in all servers?
>
> I tried to define it at the http level, but encountered an error saying a
> location could not be defined outside a server.

It's not possible.

-- 
Igor Sysoev

From ximaera at highloadlab.com  Tue Aug 30 17:22:20 2011
From: ximaera at highloadlab.com (Artyom Gavrichenkov)
Date: Tue, 30 Aug 2011 21:22:20 +0400
Subject: [PATCH] Fixed Nginx 1.1.1 eating 100% CPU time on occasions
Message-ID: 

Problem:

2011/08/30 19:35:05 [debug] 3186#0: *5193 http upstream request: "/ximaera/images/whiting_buddh.png?"
2011/08/30 19:35:05 [debug] 3186#0: *5193 http upstream process header
2011/08/30 19:35:05 [debug] 3186#0: *5193 malloc: 0000000002239950:65536
2011/08/30 19:35:05 [debug] 3186#0: *5193 recv: fd:252 0 of 65536
2011/08/30 19:35:05 [error] 3186#0: *5193 upstream prematurely closed connection while reading response header from upstream, client: 217.26.0.104, server: , request: "GET /ximaera/images/whiting_buddh.png HTTP/1.1", upstream: "http://192.168.1.5:80/ximaera/images/whiting_buddh.png", host: "www.ximaera.name", referrer: "http://twitter.com/"
2011/08/30 19:35:05 [debug] 3186#0: *5193 http next upstream, 2
2011/08/30 19:35:05 [debug] 3186#0: *5193 free keepalive peer
2011/08/30 19:35:05 [debug] 3186#0: *5193 free rr peer 1 4
2011/08/30 19:35:05 [debug] 3186#0: *5193 free rr peer failed: 0 -1
2011/08/30 19:35:05 [debug] 3186#0: *5193 close http upstream connection: 252
2011/08/30 19:35:05 [debug] 3186#0: *5193 event timer del: 252: 1314718565804
2011/08/30 19:35:05 [debug] 3186#0: *5193 reusable connection: 0
2011/08/30 19:35:05 [debug] 3186#0: *5193 get keepalive peer
2011/08/30 19:35:05 [debug] 3186#0: *5193 get rr peer, try: 0
2011/08/30 19:35:05 [debug] 3186#0: *5193 [XIMAERA] before round_robin.c:505, try: 0
2011/08/30 19:35:05 [debug] 3186#0: *5193 [XIMAERA] before round_robin.c:508, try: 18446744073709551615
2011/08/30 19:35:05 [debug] 3186#0: *5193 [XIMAERA] before round_robin.c:514, try: 18446744073709551615

After an unsuccessful attempt to ngx_http_upstream_free_round_robin_peer() we had (pc->tries == 0). Then we tried to ngx_http_upstream_get_round_robin_peer() with (pc->tries == 0 && rrp->peers->number == 1). At ngx_http_upstream_round_robin.c:505 we did pc->tries-- and started to decrement 0xffffffffffffffff to zero.

A quick and probably dirty (but working) patch follows.

Signed-off-by: Artyom Gavrichenkov 

diff -Nurp nginx-1.1.1.orig/src/http/ngx_http_upstream_round_robin.c nginx-1.1.1/src/http/ngx_http_upstream_round_robin.c
--- nginx-1.1.1.orig/src/http/ngx_http_upstream_round_robin.c	2011-08-18 21:04:52.000000000 +0400
+++ nginx-1.1.1/src/http/ngx_http_upstream_round_robin.c	2011-08-30 21:22:40.000000000 +0400
@@ -473,6 +473,10 @@ ngx_http_upstream_get_round_robin_peer(n

     } else {

+        if (pc->tries == 0) {
+            goto failed;
+        }
+
         i = pc->tries;

         for ( ;; ) {
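The huge number in the log above is plain unsigned wrap-around: pc->tries is an unsigned integer, so decrementing it past zero wraps to the type's maximum, and the retry loop then effectively runs forever. A two-line illustration:

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
    uint64_t tries = 0;

    tries--;    /* unsigned underflow wraps around */
    printf("%llu\n", (unsigned long long) tries);
    /* prints 18446744073709551615 - the value seen in the debug log */
    return 0;
}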
From mdounin at mdounin.ru  Wed Aug 31 00:02:49 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 31 Aug 2011 04:02:49 +0400
Subject: [PATCH] Fixed Nginx 1.1.1 eating 100% CPU time on occasions
In-Reply-To: 
References: 
Message-ID: <20110831000249.GN1137@mdounin.ru>

Hello!

On Tue, Aug 30, 2011 at 09:22:20PM +0400, Artyom Gavrichenkov wrote:

> Problem:
>
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 http upstream request: "/ximaera/images/whiting_buddh.png?"
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 http upstream process header
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 malloc: 0000000002239950:65536
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 recv: fd:252 0 of 65536
> 2011/08/30 19:35:05 [error] 3186#0: *5193 upstream prematurely closed connection while reading response header from upstream, client: 217.26.0.104, server: , request: "GET /ximaera/images/whiting_buddh.png HTTP/1.1", upstream: "http://192.168.1.5:80/ximaera/images/whiting_buddh.png", host: "www.ximaera.name", referrer: "http://twitter.com/"
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 http next upstream, 2
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 free keepalive peer
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 free rr peer 1 4
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 free rr peer failed: 0 -1
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 close http upstream connection: 252
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 event timer del: 252: 1314718565804
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 reusable connection: 0
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 get keepalive peer
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 get rr peer, try: 0
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 [XIMAERA] before round_robin.c:505, try: 0
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 [XIMAERA] before round_robin.c:508, try: 18446744073709551615
> 2011/08/30 19:35:05 [debug] 3186#0: *5193 [XIMAERA] before round_robin.c:514, try: 18446744073709551615
>
> After an unsuccessful attempt to
> ngx_http_upstream_free_round_robin_peer() we had (pc->tries == 0).
> Then we tried to ngx_http_upstream_get_round_robin_peer() with
> (pc->tries == 0 && rrp->peers->number == 1).
> At ngx_http_upstream_round_robin.c:505 we did pc->tries-- and started
> to decrement 0xffffffffffffffff to zero.

This is an upstream-keepalive-related problem. Normally a connection should not be retried if peer.tries == 0, see ngx_http_upstream_next() in ngx_http_upstream.c. But in case of errors on a cached connection this check is bypassed, and this causes the CPU hog you observed.

I'll take a look at how to fix this properly. Though it looks like there is no easy fix, as e.g. in the case of ip_hash we need to retry the same upstream server in such situations...
Maxim Dounin

From speedfirst at gmail.com  Wed Aug 31 07:46:50 2011
From: speedfirst at gmail.com (Speed First)
Date: Wed, 31 Aug 2011 15:46:50 +0800
Subject: Fix the incorrect value of $host when IPv6 in Host header
Message-ID: 

This is a diff to fix the problem described at http://forum.nginx.org/read.php?2,214541

Can this be integrated into the main branch? Thanks.

--- ngx_http_request.c	2011-08-24 05:21:59.354049000 -0700
+++ ngx_http_request.c.backup	2011-08-24 05:05:33.244048997 -0700
@@ -1658,20 +1658,10 @@
     size_t  i, last;
     ngx_uint_t  dot;

-#if (NGX_HAVE_INET6)
-    ngx_uint_t  ipv6 = 0;
-#endif
-
     last = len;
     h = *host;
     dot = 0;

-#if (NGX_HAVE_INET6)
-    if (len > 0 && h[0] == '[') {
-        ipv6 = 1;
-    }
-#endif
-
     for (i = 0; i < len; i++) {
         ch = h[i];

@@ -1687,13 +1677,7 @@
         dot = 0;

         if (ch == ':') {
-#if (NGX_HAVE_INET6)
-            if (!ipv6) {
-                last = i;
-            }
-#else
             last = i;
-#endif
             continue;
         }

@@ -1704,11 +1688,6 @@
         if (ch >= 'A' || ch < 'Z') {
             alloc = 1;
         }
-#if (NGX_HAVE_INET6)
-        if (ch == ']') {
-            ipv6 = 0;
-        }
-#endif
     }

     if (dot) {

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thibault.koechlin at nbs-system.com  Wed Aug 31 08:21:31 2011
From: thibault.koechlin at nbs-system.com (Thibault Koechlin)
Date: Wed, 31 Aug 2011 10:21:31 +0200
Subject: Web Application Firewall module for NGINX
Message-ID: <1314778891.16017.1249.camel@zeroed.int.nbs-system.com>

Hello list,

Just a short mail to announce the release of Naxsi, a WAF (Web Application Firewall) for NGINX. Web Application Firewalls aim at protecting web sites from exploitation of vulnerabilities, such as SQL injection, Cross Site Scripting and so on. You can find more details (wiki, downloads, etc.) here: naxsi.googlecode.com

The project is now in version alpha 0.2 (read: young!), but we've already performed some tests on it (with various commercial web vulnerability scanning products, static analysis of its source code, and a few manual reviews).

On a side note - and I hope there are security enthusiasts among us - we set up a dedicated testing environment where nginx+naxsi is acting as a reverse proxy for three deliberately vulnerable websites. I hope that in this way people will play with naxsi and find vulnerabilities in it, ways to bypass it, or come to trust it ;) (Those three sites are usually used to test web application vulnerability scanners.) (Details here: http://code.google.com/p/naxsi/wiki/OnlyTrustWhatYouCanTest)

Regards,
PS: Feel free to contact me by mail, or on irc/freenode, nickname bui.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: This is a digitally signed message part
URL: 

From mdounin at mdounin.ru  Wed Aug 31 09:20:54 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 31 Aug 2011 13:20:54 +0400
Subject: Fix the incorrect value of $host when IPv6 in Host header
In-Reply-To: 
References: 
Message-ID: <20110831092054.GP1137@mdounin.ru>

Hello!

On Wed, Aug 31, 2011 at 03:46:50PM +0800, Speed First wrote:

> This is a diff to fix the problem described at
> http://forum.nginx.org/read.php?2,214541
>
> Can this be integrated into the main branch? Thanks.

This patch isn't enough; please see http://trac.nginx.org/nginx/ticket/1

Additionally, it's not really correct: it hides the logic under #if, while this should be supported even without ipv6 compiled in. The ticket in question links to a somewhat better patch (which is still not enough, though).

Maxim Dounin
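The behaviour both patches are after - keeping the port-stripping colon scan from truncating a bracketed IPv6 literal - can be sketched independently of the nginx source, as below. This is an illustrative standalone function, not the patch under review, and per Maxim's note the logic has to hold whether or not IPv6 support is compiled in:

#include <stdio.h>
#include <string.h>

/* Return the length of the host part of a Host header value,
 * stripping an optional :port but leaving "[::1]"-style literals
 * intact.  Illustrative sketch only. */
static size_t
host_len(const char *h, size_t len)
{
    size_t  i;
    int     bracketed = (len > 0 && h[0] == '[');

    for (i = 0; i < len; i++) {
        if (bracketed) {
            if (h[i] == ']') {
                bracketed = 0;   /* colons after this start the port */
            }
            continue;            /* colons inside brackets are kept */
        }

        if (h[i] == ':') {
            return i;            /* host ends before the port */
        }
    }

    return len;
}

int
main(void)
{
    const char  *v4 = "example.com:8080";
    const char  *v6 = "[2001:db8::1]:8080";

    printf("%.*s\n", (int) host_len(v4, strlen(v4)), v4);  /* example.com */
    printf("%.*s\n", (int) host_len(v6, strlen(v6)), v6);  /* [2001:db8::1] */
    return 0;
}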
Maxim Dounin

From phanquochien at gmail.com  Wed Aug 31 11:20:03 2011
From: phanquochien at gmail.com (Hien P)
Date: Wed, 31 Aug 2011 18:20:03 +0700
Subject: Web Application Firewall module for NGINX
In-Reply-To: <1314778891.16017.1249.camel@zeroed.int.nbs-system.com>
References: <1314778891.16017.1249.camel@zeroed.int.nbs-system.com>
Message-ID: 

Hello,

I've been waiting for this for a long time. Finally, a WAF for nginx has been released. Thanks for your great work.

On Wed, Aug 31, 2011 at 3:21 PM, Thibault Koechlin < thibault.koechlin at nbs-system.com> wrote:

> Hello list,
>
> Just a short mail to announce the release of Naxsi, a WAF (Web
> Application Firewall) for NGINX. Web Application Firewalls aim at
> protecting web sites from exploitation of vulnerabilities, such as SQL
> injection, Cross Site Scripting and so on.
> You can find more details (wiki, downloads, etc.) here:
> naxsi.googlecode.com
>
> The project is now in version alpha 0.2 (read: young!), but we've
> already performed some tests on it (with various commercial web
> vulnerability scanning products, static analysis of its source code,
> and a few manual reviews).
>
> On a side note - and I hope there are security enthusiasts among us - we
> set up a dedicated testing environment where nginx+naxsi is acting as a
> reverse proxy for three deliberately vulnerable websites. I hope that in
> this way people will play with naxsi and find vulnerabilities in it, ways
> to bypass it, or come to trust it ;) (Those three sites are usually used
> to test web application vulnerability scanners.) (Details here:
> http://code.google.com/p/naxsi/wiki/OnlyTrustWhatYouCanTest)
>
>
> Regards,
> PS: Feel free to contact me by mail, or on irc/freenode, nickname bui.
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>

-- 
Best regards,
Mr. Hien

-------------- next part --------------
An HTML attachment was scrubbed...
URL: