From serg.brester at sebres.de Mon May 4 08:02:33 2015
From: serg.brester at sebres.de (Sergey Brester)
Date: Mon, 04 May 2015 10:02:33 +0200
Subject: nginx http-core enhancement: named location in subrequests + directive use_location
In-Reply-To: <20150430185655.GG32429@mdounin.ru>
References: <5196ab90dc32b09880371ff3d29c2826@sebres.de> <20150429134828.GZ32429@mdounin.ru> <20150430135541.GD32429@mdounin.ru> <20150430185655.GG32429@mdounin.ru>
Message-ID: <790cb05618540854e8c8fe49b77e12d0@sebres.de>

On 30.04.2015 20:56, Maxim Dounin wrote:

> Hello!
>
> On Thu, Apr 30, 2015 at 07:13:01PM +0200, Sergey Brester wrote:
>
>> I think not for internal request?! if (r->internal && r->uri...
>
> You think it's not a problem, or you think it won't be illegal?

What I meant was: it is for internal requests only - so if I myself want to use a named location (with @), I must take care which URI will be passed to the backend.

> While it's not generally a problem for nginx if an URI in an
> internal request becomes illegal, it's certainly not a case we are
> going to promote by applying patches. If illegal URIs are ok for
> you, you may just use something like
>
>     auth_request @foo;
>
>     location = @foo {
>         ...
>     }
>
> And it will work right now out of the box.

Yes, but not as a NAMED location, with everything that belongs to it (slowly).

> What you are trying to do is to misuse named locations as static
> locations with some invalid URIs. This is wrong, named locations
> are different. They preserve URI of a request untouched. That's
> their main property and main advantage.

Now it is a little more understandable what you mean. What can I do to prevent changing the URI in this case? I have simply taken the existing code and packaged it as ngx_http_core_find_named_location.

> (BTW, please use plain text for further messages, I'm a bit bored
> to fix quoting in what your mail client produces as a plain text
> version. Thank you.)
Sorry, now as plain text (but it is more likely your client :)

From nowshek2 at gmail.com Mon May 4 12:11:49 2015
From: nowshek2 at gmail.com (Abhishek Kumar)
Date: Mon, 4 May 2015 17:41:49 +0530
Subject: Help in contributing changes
Message-ID:

Hi,
Can the github repo be used for committing the changes instead of the mercurial repo?

Regards,
Abhishek Kumar

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Mon May 4 13:23:20 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 4 May 2015 16:23:20 +0300
Subject: Help in contributing changes
In-Reply-To:
References:
Message-ID: <20150504132319.GO32429@mdounin.ru>

Hello!

On Mon, May 04, 2015 at 05:41:49PM +0530, Abhishek Kumar wrote:

> Hi,
> Can the github repo be used for committing
> the changes instead of the mercurial repo?

No. See also http://nginx.org/en/docs/contributing_changes.html.

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Mon May 4 14:17:21 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 4 May 2015 17:17:21 +0300
Subject: nginx http-core enhancement: named location in subrequests + directive use_location
In-Reply-To: <790cb05618540854e8c8fe49b77e12d0@sebres.de>
References: <5196ab90dc32b09880371ff3d29c2826@sebres.de> <20150429134828.GZ32429@mdounin.ru> <20150430135541.GD32429@mdounin.ru> <20150430185655.GG32429@mdounin.ru> <790cb05618540854e8c8fe49b77e12d0@sebres.de>
Message-ID: <20150504141721.GP32429@mdounin.ru>

Hello!

On Mon, May 04, 2015 at 10:02:33AM +0200, Sergey Brester wrote:

> On 30.04.2015 20:56, Maxim Dounin wrote:
>
>> Hello!
>>
>> On Thu, Apr 30, 2015 at 07:13:01PM +0200, Sergey Brester wrote:
>>
>>> I think not for internal request?! if (r->internal && r->uri...
>>
>> You think it's not a problem, or you think it won't be illegal?
>
> What I meant was: it is for internal requests only - so if I myself want
> to use a named location (with @), I must take care which URI will be
> passed to the backend.
As already explained, this is a wrong assumption.

>> While it's not generally a problem for nginx if an URI in an
>> internal request becomes illegal, it's certainly not a case we are
>> going to promote by applying patches. If illegal URIs are ok for
>> you, you may just use something like
>>
>>     auth_request @foo;
>>
>>     location = @foo {
>>         ...
>>     }
>>
>> And it will work right now out of the box.
>
> Yes, but not as a NAMED location, with everything that belongs to it (slowly).

As long as you don't use the only property of named locations, you don't need them and can use static locations instead.

>> What you are trying to do is to misuse named locations as static
>> locations with some invalid URIs. This is wrong, named locations
>> are different. They preserve URI of a request untouched. That's
>> their main property and main advantage.
>
> Now it is a little more understandable what you mean. What can I do to
> prevent changing the URI in this case? I have simply taken the existing
> code and packaged it as ngx_http_core_find_named_location.

I've reviewed the patch you've proposed and explained why it's obviously wrong. Proper implementation should be very different. I haven't looked into details, but maybe just an ngx_http_named_location() call after a subrequest was created will do the trick.

-- 
Maxim Dounin
http://nginx.org/

From tfransosi at gmail.com Mon May 4 18:24:00 2015
From: tfransosi at gmail.com (Thiago Farina)
Date: Mon, 4 May 2015 15:24:00 -0300
Subject: Help in contributing changes
In-Reply-To:
References:
Message-ID:

On Mon, May 4, 2015 at 9:11 AM, Abhishek Kumar wrote:
> Hi,
> Can the github repo be used for committing the changes instead of the
> mercurial repo?
>
This might be helpful: https://github.com/glandium/git-cinnabar, but I haven't tried it myself yet.
Best regards,

-- 
Thiago Farina

From gzchenym at 126.com Tue May 5 13:39:40 2015
From: gzchenym at 126.com (chen)
Date: Tue, 5 May 2015 21:39:40 +0800 (CST)
Subject: [RFC] event/openssl: Add dynamic record size support for serving ssl traffic
Message-ID: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com>

Hi list:

This is v1 of the patchset implementing SSL Dynamic Record Sizing, inspired by the Google Front End (https://www.igvita.com/2013/10/24/optimizing-tls-record-size-and-buffering-latency/).

There are 3 conditions which, if all true at the same time, trigger SSL_write to send small records over the link, hard-coded to 1400 bytes at this time to keep each record within the MTU. We send out at most 3 of these small records to limit framing overhead when serving large objects; that is enough for the browser to discover the other dependencies at the top of the HTML file. If the buffer chain is smaller than 4096 bytes, splitting does not justify the overhead of sending small records. After 60s of idleness (hard-coded at the moment), we start all over again.

Any comments are welcome.

Regards,
YM

hg export tip

# HG changeset patch
# User YM Chen
# Date 1430828974 -28800
# Node ID 31bfe6403c340bdc4c04e8e87721736c07bceef8
# Parent 162b2d27d4e1ce45bb9217d6958348c64f726a28
[RFC] event/openssl: Add dynamic record size support for serving ssl traffic

SSL Dynamic Record Sizing is a long-sought-after feature for websites that serve huge amounts of encrypted traffic. The rationale behind it is that an SSL record should not overflow the congestion window at the beginning of the slow-start period; by keeping records small, we let the browser decode the first SSL record within 1 RTT and establish other connections to fetch the resources that are referenced at the top of the HTML file.
diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c	Wed Apr 29 14:59:02 2015 +0300
+++ b/src/event/ngx_event_openssl.c	Tue May 05 20:29:34 2015 +0800
@@ -1508,6 +1508,11 @@
     ngx_uint_t   flush;
     ssize_t      send, size;
     ngx_buf_t   *buf;
+    ngx_msec_t   last_sent_timer_diff;
+    ngx_uint_t   loop_count;
+
+    last_sent_timer_diff = ngx_current_msec - c->ssl->last_write_msec;
+    loop_count = 0;
 
     if (!c->ssl->buffer) {
 
@@ -1517,7 +1522,13 @@
                 continue;
             }
 
-            n = ngx_ssl_write(c, in->buf->pos, in->buf->last - in->buf->pos);
+            size = in->buf->last - in->buf->pos;
+
+            if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > 4096) {
+                size = 1400;
+            }
+
+            n = ngx_ssl_write(c, in->buf->pos, size);
 
             if (n == NGX_ERROR) {
                 return NGX_CHAIN_ERROR;
@@ -1532,8 +1543,11 @@
             if (in->buf->pos == in->buf->last) {
                 in = in->next;
             }
+
+            loop_count ++;
         }
 
+        c->ssl->last_write_msec = ngx_current_msec;
         return in;
     }
 
@@ -1614,9 +1628,14 @@
         if (size == 0) {
             buf->flush = 0;
             c->buffered &= ~NGX_SSL_BUFFERED;
+            c->ssl->last_write_msec = ngx_current_msec;
             return in;
         }
 
+        if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > 4096) {
+            size = 1400;
+        }
+
         n = ngx_ssl_write(c, buf->pos, size);
 
         if (n == NGX_ERROR) {
@@ -1633,14 +1652,18 @@
             break;
         }
 
-        flush = 0;
-
-        buf->pos = buf->start;
-        buf->last = buf->start;
+        if(buf->last == buf->pos) {
+            flush = 0;
+
+            buf->pos = buf->start;
+            buf->last = buf->start;
+        }
 
         if (in == NULL || send == limit) {
             break;
         }
+
+        loop_count++;
     }
 
     buf->flush = flush;
@@ -1652,6 +1675,7 @@
         c->buffered &= ~NGX_SSL_BUFFERED;
     }
 
+    c->ssl->last_write_msec = ngx_current_msec;
     return in;
 }
 
diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.h
--- a/src/event/ngx_event_openssl.h	Wed Apr 29 14:59:02 2015 +0300
+++ b/src/event/ngx_event_openssl.h	Tue May 05 20:29:34 2015 +0800
@@ -51,6 +51,8 @@
     ngx_buf_t  *buf;
     size_t      buffer_size;
 
+    ngx_msec_t  last_write_msec;
+
     ngx_connection_handler_pt   handler;
     ngx_event_handler_pt        saved_read_handler;
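As a side note for reviewers: the gating logic that the patch inlines at both call sites can be isolated into a single pure helper. The sketch below uses hypothetical names (it is not part of the patch, which keeps the test inline) and mirrors the three conditions so they can be unit-tested on their own:

```c
#include <stddef.h>

#define SSL_SMALL_RECORD   1400u   /* fits one MTU-sized packet */
#define SSL_SMALL_MAX      3       /* at most 3 small records per burst */
#define SSL_MIN_PAYLOAD    4096u   /* below this, splitting is not worth it */
#define SSL_IDLE_MS        60000u  /* reset after 60s of write idleness */

/* Decide how many bytes to hand to SSL_write() for the next record.
 * Mirrors the patch's three conditions: the connection has been idle
 * long enough that the congestion window has likely collapsed, fewer
 * than the small-record quota was sent in this burst, and the pending
 * payload is large enough to justify the extra framing overhead. */
static size_t
ssl_record_size(size_t pending, unsigned long idle_ms, unsigned records_sent)
{
    if (idle_ms > SSL_IDLE_MS
        && records_sent < SSL_SMALL_MAX
        && pending > SSL_MIN_PAYLOAD)
    {
        return SSL_SMALL_RECORD;   /* clamp to one MTU-sized record */
    }

    return pending;                /* otherwise send everything pending */
}
```

With a helper like this, the `if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > 4096)` test would not need to be duplicated in both branches of ngx_ssl_send_chain().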
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hungnv at opensource.com.vn Wed May 6 07:35:35 2015
From: hungnv at opensource.com.vn (hungnv at opensource.com.vn)
Date: Wed, 6 May 2015 14:35:35 +0700
Subject: [1.8.0 stable] bug when install on old linux version
Message-ID: <56F16490-2C70-4A82-B45F-46A1DC98586D@opensource.com.vn>

Hello,

I tested the new stable version (1.8.0) with a simple option: ./configure --add-module=./ngx_enhance_mp4_module (https://github.com/whatvn/ngx_http_enhance_mp4_module), then started nginx, and it fails to spawn child processes (with an error similar to compiling with --with-file-aio on an old linux kernel):

2015/05/06 14:22:28 [emerg] 19004#0: eventfd() failed (38: Function not implemented)
2015/05/06 14:22:28 [emerg] 19005#0: eventfd() failed (38: Function not implemented)
2015/05/06 14:22:28 [emerg] 19006#0: eventfd() failed (38: Function not implemented)
2015/05/06 14:22:28 [alert] 18999#0: worker process 19000 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28 [alert] 18999#0: worker process 19002 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28 [alert] 18999#0: worker process 19003 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28 [alert] 18999#0: worker process 19004 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28 [alert] 18999#0: worker process 19005 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28 [alert] 18999#0: worker process 19006 exited with fatal code 2 and cannot be respawned

With the same configure options, nginx stable 1.6.3 works fine. Maybe a bug?

System details:
Centos 5, kernel: 2.6.18-164.el5

-- 
Hùng
Email: hungnv at opensource.com.vn

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fdasilva at ingima.com Wed May 6 08:53:54 2015
From: fdasilva at ingima.com (Filipe DA SILVA)
Date: Wed, 6 May 2015 08:53:54 +0000
Subject: [1.8.0 stable] bug when install on old linux version
In-Reply-To: <56F16490-2C70-4A82-B45F-46A1DC98586D@opensource.com.vn>
References: <56F16490-2C70-4A82-B45F-46A1DC98586D@opensource.com.vn>
Message-ID:

Hi,

I quickly reviewed your code. Please check this:

    while (1) {
        ...
        free(ftyp_atom);
        ftyp_atom = ngx_palloc(r->connection->pool, ftyp_atom_size);
        // ftyp_atom = malloc(ftyp_atom_size);

I see other ngx_palloc/free mix-ups. You may also merge the 6d468b45f40e change (rev 5807).

Regards,
Filipe

From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On behalf of hungnv at opensource.com.vn
Sent: Wednesday, May 6, 2015 09:36
To: nginx-devel at nginx.org
Subject: [1.8.0 stable] bug when install on old linux version

Hello,

I tested the new stable version (1.8.0) with a simple option: ./configure --add-module=./ngx_enhance_mp4_module (https://github.com/whatvn/ngx_http_enhance_mp4_module), then started nginx, and it fails to spawn child processes (with an error similar to compiling with --with-file-aio on an old linux kernel):

2015/05/06 14:22:28 [emerg] 19004#0: eventfd() failed (38: Function not implemented)
2015/05/06 14:22:28 [emerg] 19005#0: eventfd() failed (38: Function not implemented)
2015/05/06 14:22:28 [emerg] 19006#0: eventfd() failed (38: Function not implemented)
2015/05/06 14:22:28 [alert] 18999#0: worker process 19000 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28 [alert] 18999#0: worker process 19002 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28 [alert] 18999#0: worker process 19003 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28 [alert] 18999#0: worker process 19004 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28 [alert] 18999#0: worker process 19005 exited with fatal code 2 and cannot be respawned
2015/05/06 14:22:28
[alert] 18999#0: worker process 19006 exited with fatal code 2 and cannot be respawned

With the same configure options, nginx stable 1.6.3 works fine. Maybe a bug?

System details:
Centos 5, kernel: 2.6.18-164.el5

-- 
Hùng
Email: hungnv at opensource.com.vn

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ru at nginx.com Wed May 6 13:42:20 2015
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 6 May 2015 16:42:20 +0300
Subject: [1.8.0 stable] bug when install on old linux version
In-Reply-To: <56F16490-2C70-4A82-B45F-46A1DC98586D@opensource.com.vn>
References: <56F16490-2C70-4A82-B45F-46A1DC98586D@opensource.com.vn>
Message-ID: <20150506134220.GA31801@lo0.su>

On Wed, May 06, 2015 at 02:35:35PM +0700, hungnv at opensource.com.vn wrote:
> Hello,
>
> I tested the new stable version (1.8.0) with a simple option: ./configure
> --add-module=./ngx_enhance_mp4_module
> (https://github.com/whatvn/ngx_http_enhance_mp4_module), then started
> nginx, and it fails to spawn child processes (with an error similar to
> compiling with --with-file-aio on an old linux kernel):
>
> 2015/05/06 14:22:28 [emerg] 19004#0: eventfd() failed (38: Function not implemented)
> 2015/05/06 14:22:28 [emerg] 19005#0: eventfd() failed (38: Function not implemented)
> 2015/05/06 14:22:28 [emerg] 19006#0: eventfd() failed (38: Function not implemented)
> 2015/05/06 14:22:28 [alert] 18999#0: worker process 19000 exited with fatal code 2 and cannot be respawned
> 2015/05/06 14:22:28 [alert] 18999#0: worker process 19002 exited with fatal code 2 and cannot be respawned
> 2015/05/06 14:22:28 [alert] 18999#0: worker process 19003 exited with fatal code 2 and cannot be respawned
> 2015/05/06 14:22:28 [alert] 18999#0: worker process 19004 exited with fatal code 2 and cannot be respawned
> 2015/05/06 14:22:28 [alert] 18999#0: worker process 19005 exited with fatal code 2 and cannot be respawned
> 2015/05/06 14:22:28
> [alert] 18999#0: worker process 19006 exited with fatal code 2 and cannot be respawned
>
> With the same configure options, nginx stable 1.6.3 works fine. Maybe a bug?
>
> System details:
>
> Centos 5, kernel: 2.6.18-164.el5

Could you verify that this patch helps you?

diff --git a/src/event/modules/ngx_epoll_module.c b/src/event/modules/ngx_epoll_module.c
--- a/src/event/modules/ngx_epoll_module.c
+++ b/src/event/modules/ngx_epoll_module.c
@@ -329,7 +329,7 @@ ngx_epoll_init(ngx_cycle_t *cycle, ngx_m
 
 #if (NGX_HAVE_EVENTFD)
     if (ngx_epoll_notify_init(cycle->log) != NGX_OK) {
-        return NGX_ERROR;
+        ngx_epoll_module_ctx.actions.notify = NULL;
     }
 #endif

From sarah at nginx.com Wed May 6 19:04:24 2015
From: sarah at nginx.com (Sarah Novotny)
Date: Wed, 6 May 2015 12:04:24 -0700
Subject: nginx.conf 2015 CFP is open
Message-ID:

nginx.conf 2015

Join us at Fort Mason in San Francisco from September 22-24, 2015. Submit a proposal to nginx.conf 2015!

TL;DR
- Speaker proposals due: 11:59 PM PDT, June 2, 2015
- Speakers notified: early July, 2015
- Program schedule announced: late July, 2015

As a member of the NGINX community, you're probably passionate about web performance, security, reliability, and scale. We're excited to offer you the opportunity to teach (and learn from) your peers as a speaker at nginx.conf 2015 (September 22-24 in San Francisco). Please share with us how you and your company make the web speed along, instantly offering our always-on society highly personalized and ever more creative experiences. Tell us how you solved an intractable scaling problem or shaved milliseconds (or seconds) off an RTT.

Blog Post - http://nginx.com/blog/nginx-conf-2015-call-proposals-now-open/
CFP - https://nginxconf15.busyconf.com/proposals/new
Twitter - https://twitter.com/nginxorg/status/596012137610260481

We want to hear your NGINX story!
Sarah

From hungnv at opensource.com.vn Thu May 7 03:47:13 2015
From: hungnv at opensource.com.vn (hungnv at opensource.com.vn)
Date: Thu, 7 May 2015 10:47:13 +0700
Subject: [1.8.0 stable] bug when install on old linux version
In-Reply-To: <20150506134220.GA31801@lo0.su>
References: <56F16490-2C70-4A82-B45F-46A1DC98586D@opensource.com.vn> <20150506134220.GA31801@lo0.su>
Message-ID: <97DCF6E0-7A42-4ACE-8A25-35F74403316C@opensource.com.vn>

Hello,

This patch works, thanks.

@Filipe: Thanks, I will fix that. But I don't know where `6d468b45f40e` is? I don't see any pull request.

Thanks

-- 
Hùng
Email: hungnv at opensource.com.vn

> On May 6, 2015, at 8:42 PM, Ruslan Ermilov wrote:
>
> Could you verify that this patch helps you?
>
> diff --git a/src/event/modules/ngx_epoll_module.c b/src/event/modules/ngx_epoll_module.c
> --- a/src/event/modules/ngx_epoll_module.c
> +++ b/src/event/modules/ngx_epoll_module.c
> @@ -329,7 +329,7 @@ ngx_epoll_init(ngx_cycle_t *cycle, ngx_m
>
>  #if (NGX_HAVE_EVENTFD)
>      if (ngx_epoll_notify_init(cycle->log) != NGX_OK) {
> -        return NGX_ERROR;
> +        ngx_epoll_module_ctx.actions.notify = NULL;
>      }
>  #endif

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ru at nginx.com Thu May 7 05:21:06 2015
From: ru at nginx.com (Ruslan Ermilov)
Date: Thu, 07 May 2015 05:21:06 +0000
Subject: [nginx] Events: made a failure to create a notification channel ...
Message-ID:

details:   http://hg.nginx.org/nginx/rev/d0a84ae2fb48
branches:
changeset: 6144:d0a84ae2fb48
user:      Ruslan Ermilov
date:      Wed May 06 17:04:00 2015 +0300
description:
Events: made a failure to create a notification channel non-fatal.

This may happen if eventfd() returns ENOSYS, notably seen on CentOS 5.4.

Such a failure will now just disable the notification mechanism and let
the callers cope with it, instead of failing to start worker processes.

If thread pools are not configured, this can safely be ignored.
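For context, the graceful degradation this changeset introduces can be demonstrated outside nginx. The sketch below is a standalone illustration (not nginx code, hypothetical function name): it probes eventfd() once and lets the caller run without a notification channel when the kernel returns ENOSYS, as on CentOS 5 kernels:

```c
#include <errno.h>
#include <stdio.h>
#include <sys/eventfd.h>

/* Try to create an eventfd-based notification channel.  Returns the
 * descriptor on success, or -1 when the syscall is unavailable
 * (ENOSYS on old kernels such as CentOS 5's 2.6.18) or otherwise
 * fails; the caller is expected to keep running without the
 * notification mechanism instead of treating this as fatal. */
static int probe_notify_channel(void)
{
    int fd = eventfd(0, 0);

    if (fd == -1) {
        if (errno == ENOSYS) {
            fprintf(stderr, "eventfd() not implemented, "
                            "notifications disabled\n");
        }
        return -1;
    }

    return fd;
}
```

This is the same idea as the committed change: instead of `return NGX_ERROR`, the notify hook is simply cleared and the rest of the event module keeps working.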
diffstat:

 src/event/modules/ngx_epoll_module.c |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 162b2d27d4e1 -r d0a84ae2fb48 src/event/modules/ngx_epoll_module.c
--- a/src/event/modules/ngx_epoll_module.c	Wed Apr 29 14:59:02 2015 +0300
+++ b/src/event/modules/ngx_epoll_module.c	Wed May 06 17:04:00 2015 +0300
@@ -329,7 +329,7 @@ ngx_epoll_init(ngx_cycle_t *cycle, ngx_m
 
 #if (NGX_HAVE_EVENTFD)
     if (ngx_epoll_notify_init(cycle->log) != NGX_OK) {
-        return NGX_ERROR;
+        ngx_epoll_module_ctx.actions.notify = NULL;
     }
 #endif

From nowshek2 at gmail.com Thu May 7 10:46:36 2015
From: nowshek2 at gmail.com (Abhishek Kumar)
Date: Thu, 7 May 2015 16:16:36 +0530
Subject: New feature request: Docker files for Power platform (SLES, RHEL, Ubuntu)
Message-ID:

Hi,

I have written a dockerfile for building nginx from source. I have built and tested the source code available on git successfully through the dockerfile for the PPC64LE architecture. The dockerfile runs successfully on the following platforms:

Ubuntu 14.10
SUSE Linux 12.0
RHEL 7.1

Kindly suggest where (to which repository) I can contribute this dockerfile for nginx.

Regards,
Abhishek Kumar

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From serg.brester at sebres.de Thu May 7 10:51:33 2015
From: serg.brester at sebres.de (Sergey Brester)
Date: Thu, 07 May 2015 12:51:33 +0200
Subject: execution of post_action each time breaks a keepalive connection to upstream
Message-ID:

Hi all,

I've found that use of "post_action @named_post" always (each time) closes an upstream connection (despite keepalive). I've been using fastcgi in @named_post.

I think it is somehow related to "r->header_only = 1", because the fastcgi request does not wait for the end-request record from fastcgi, so the request ends and the connection closes immediately after logging of "http fastcgi header done".
Facts are (as debug shows):

- r->keepalive == 1, but in "ngx_http_upstream_free_keepalive_peer" u->keepalive == 0, so "goto invalid"; thus the connection will not be saved.

- I think that in fastcgi u->keepalive will be set to 1 only while processing the end-request record, and possibly neither ngx_http_fastcgi_input_filter nor ngx_http_fastcgi_non_buffered_filter will be executed; what is sure is that the line "u->keepalive = 1" will never be executed for a "post_action" request:

    if (f->state == ngx_http_fastcgi_st_padding) {
        if (f->type == NGX_HTTP_FASTCGI_END_REQUEST) {
            ...
            if (f->pos + f->padding == f->last) {
                ...
                u->keepalive = 1;

- I don't think this is fastcgi-only, or that it can never work: a similar execution plan is used by "auth_request" (header_only also, but all of that over a subrequest), and there "u->keepalive" is 1, so it holds resp. saves the connection and uses it again later. But as I said, it uses a subrequest, while "post_action" uses ngx_http_named_location.

Keep-alive is very, very important for me; unfortunately I cannot give it up. I could rewrite post_action over a subrequest, but imho that is not a correct solution for this.

In which direction should I dig to fix this issue? Possibly it is not "r->header_only", but one of the "if (r->post_action) ..." checks or something else that prevents execution of the input filter...

Any suggestions are welcome...

Regards,
sebres.

From serg.brester at sebres.de Thu May 7 11:34:33 2015
From: serg.brester at sebres.de (Sergey Brester)
Date: Thu, 07 May 2015 13:34:33 +0200
Subject: New feature request: Docker files for Power platform (SLES, RHEL, Ubuntu)
In-Reply-To:
References:
Message-ID: <3084c2061c84deb9e3e40e7afbc58153@sebres.de>

Hi,

It is a mercurial (hg) repo; for contribution to it please read here: http://nginx.org/en/docs/contributing_changes.html

In short, it should be a changeset (created with hg export)...
BTW: I don't know whether the nginx developers will want it, but even if not (and you possibly have a github account), please make a pull request in: https://github.com/sebres/nginx

Or if you have your own repository on github, please let me know.

Thx, sebres.

07.05.2015 12:46, Abhishek Kumar:

> Hi,
> I have written a dockerfile for building nginx from source. I have built
> and tested the source code available on git successfully through the
> dockerfile for the PPC64LE architecture. The dockerfile runs successfully
> on the following platforms:
> Ubuntu 14.10
> SUSE Linux 12.0
> RHEL 7.1
>
> Kindly suggest where (to which repository) I can contribute this
> dockerfile for nginx
>
> Regards,
> Abhishek Kumar
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1]

Links:
------
[1] http://mailman.nginx.org/mailman/listinfo/nginx-devel

From mdounin at mdounin.ru Thu May 7 12:30:31 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 7 May 2015 15:30:31 +0300
Subject: execution of post_action each time breaks a keepalive connection to upstream
In-Reply-To:
References:
Message-ID: <20150507123031.GL98215@mdounin.ru>

Hello!

On Thu, May 07, 2015 at 12:51:33PM +0200, Sergey Brester wrote:

> Hi all,
>
> I've found that use of "post_action @named_post" always (each time) closes
> an upstream connection (despite keepalive).

In short:

- post_action is a dirty hack and undocumented on purpose, avoid using it;

- as long as an upstream response is complete after receiving a header, upstream keepalive should work even with post_action; it might be tricky to ensure this with fastcgi though.
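[Editorial note: the second point above can be illustrated with a config sketch. Addresses and URIs are hypothetical, and it assumes a proxied HTTP backend whose post_action response is complete after the header (e.g. it sends "Content-Length: 0" or a 204), which is the condition under which the upstream connection can be reused.]

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 8;                         # cache up to 8 idle upstream connections
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # upstream keepalive needs HTTP/1.1 ...
        proxy_set_header Connection "";  # ... and no "Connection: close" header
        post_action /log;                # undocumented; see the caveats above
    }

    location = /log {
        internal;
        proxy_pass http://backend/log;   # backend should answer with
        proxy_http_version 1.1;          # Content-Length: 0 (or 204) so the
        proxy_set_header Connection "";  # connection is complete after the header
    }
}
```

With a FastCGI backend in place of proxy_pass, the same setup would not keep the connection alive, for the reasons discussed later in this thread (the FCGI_END_REQUEST record is still outstanding when the post_action request is finalized).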
-- 
Maxim Dounin
http://nginx.org/

From serg.brester at sebres.de Thu May 7 15:37:44 2015
From: serg.brester at sebres.de (Sergey Brester)
Date: Thu, 07 May 2015 17:37:44 +0200
Subject: execution of post_action each time breaks a keepalive connection to upstream
In-Reply-To: <20150507123031.GL98215@mdounin.ru>
References: <20150507123031.GL98215@mdounin.ru>
Message-ID:

> Hello!
>
> On Thu, May 07, 2015 at 12:51:33PM +0200, Sergey Brester wrote:
>
>> Hi all, I've found that use of "post_action @named_post" always (each
>> time) closes an upstream connection (despite keepalive).
>
> In short:
>
> - post_action is a dirty hack and undocumented on purpose, avoid
> using it;

Undocumented for the long term, or just not yet? Because meanwhile it is used by "half" the world... I know at least a dozen companies using it.

> - as long as an upstream response is complete after receiving a
> header, upstream keepalive should work even with post_action; it
> might be tricky to ensure this with fastcgi though.

What confuses me is that a header_only subrequest to fastcgi works fine!

And what I meant was, it may not be "post_action" itself, but possibly all "header_only" requests via ngx_http_named_location etc., in which case many things are affected (including third-party modules).

Thx, Serg.

From mdounin at mdounin.ru Thu May 7 16:17:44 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 7 May 2015 19:17:44 +0300
Subject: execution of post_action each time breaks a keepalive connection to upstream
In-Reply-To:
References: <20150507123031.GL98215@mdounin.ru>
Message-ID: <20150507161744.GR98215@mdounin.ru>

Hello!

On Thu, May 07, 2015 at 05:37:44PM +0200, Sergey Brester wrote:

>> Hello!
>>
>> On Thu, May 07, 2015 at 12:51:33PM +0200, Sergey Brester wrote:
>>
>>> Hi all, I've found that use of "post_action @named_post" always (each
>>> time) closes an upstream connection (despite keepalive).
>> In short:
>>
>> - post_action is a dirty hack and undocumented on purpose, avoid
>> using it;
>
> Undocumented for the long term, or just not yet?
> Because meanwhile it is used by "half" the world... I know at least a
> dozen companies using it.

It was never documented, and will never be documented. Well, maybe we'll add something like "post_action: don't use it unless you understand what you are doing" to let people know that this directive should not be used.

>> - as long as an upstream response is complete after receiving a
>> header, upstream keepalive should work even with post_action; it
>> might be tricky to ensure this with fastcgi though.
>
> What confuses me is that a header_only subrequest to fastcgi works fine!
> And what I meant was, it may not be "post_action" itself, but possibly all
> "header_only" requests via ngx_http_named_location etc., in which case
> many things are affected (including third-party modules).

A connection to an upstream server can only be kept alive if it is in some consistent state and no outstanding data are expected on it. On the other hand, nginx doesn't try to read anything in addition to what it has already read during normal upstream response parsing.

As a result, if sending of a response is stopped once nginx got a header (this happens in case of post_action and in some cases with r->header_only), nginx will only be able to cache a connection if it's already in a consistent state. This may be the case with HTTP if Content-Length is explicitly set to 0 in the response headers, and in some other cases (see ngx_http_proxy_process_header() for details).

A quick look suggests that with FastCGI it doesn't seem to be possible at all, at least with the current code, as nginx parses headers from FCGI_STDOUT records, but at least a FCGI_END_REQUEST record is additionally expected. Therefore, FastCGI connections are always closed after a post_action request.
We even have a test for this:
http://hg.nginx.org/nginx-tests/file/f7bc1f74970a/fastcgi_keepalive.t#l66

If you think that a header_only subrequest works for you - well, you've probably missed something. E.g., a header_only subrequest can work because of a configured cache.

-- 
Maxim Dounin
http://nginx.org/

From serg.brester at sebres.de Thu May 7 16:48:09 2015
From: serg.brester at sebres.de (Sergey Brester)
Date: Thu, 07 May 2015 18:48:09 +0200
Subject: execution of post_action each time breaks a keepalive connection to upstream
In-Reply-To: <20150507161744.GR98215@mdounin.ru>
References: <20150507123031.GL98215@mdounin.ru> <20150507161744.GR98215@mdounin.ru>
Message-ID:

> It was never documented, and will never be documented. Well, maybe
> we'll add something like "post_action: don't use it unless you
> understand what you are doing" to let people know that this
> directive should not be used.
> Quick look suggests that with FastCGI it doesn't seems to be > possible at all, at least with current code, as nginx parses > headers from FCGI_STDOUT records, but at least a FCGI_END_REQUEST > record is additionally expected. I know that all... And both fsgi upstreams work proper (header only also) and sends end-request record hereafter. (Just checked again). The problem is that it does not "wait" (I know it's not really wait) for proper endrequest and does not set u-keepalive to 1, so worker closes a connection. But I will find it out and fix anyway :) From jefftk at google.com Tue May 12 12:28:16 2015 From: jefftk at google.com (Jeff Kaufman) Date: Tue, 12 May 2015 08:28:16 -0400 Subject: Debugging net::ERR_SPDY_PROTOCOL_ERROR Message-ID: I'm trying to debug a SPDY 3.1 protocol error that you get when combining nginx, Chrome, and ngx_pagespeed. Chrome sends a SPDY request for /pagespeed_static/1.JiBnMqyl6S.gif as: :host = www.kluisstore.nl :method = GET :path = /pagespeed_static/1.JiBnMqyl6S.gif :scheme = https :version = HTTP/1.1 accept = image/webp,*/*;q=0.8 accept-encoding = gzip, deflate, sdch accept-language = en-US,en;q=0.8 cookie = frontend=csmoib69rn9324hhfilka78ab4 referer = https://www.kluisstore.nl/ user-agent = Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2399.0 Safari/537.36 This is handled by ngx_pagespeed where the (tiny) static file is part of its binary. It should simply send headers and body, but Chrome receives a RST_STREAM from nginx, which then drops the connection. This was reported to ngx_pagespeed as https://github.com/pagespeed/ngx_pagespeed/issues/962 and then to Chrome as https://code.google.com/p/chromium/issues/detail?id=484793 I'm trying to figure out if this is a bug in nginx, ngx_pagespeed, or Chrome. I can't reproduce this locally, but I can reproduce it maybe 25% of the time on Chrome requests with a clean cache to https://www.kluisstore.nl/ (the original reporter). 
From igrigorik at gmail.com Tue May 12 14:57:49 2015
From: igrigorik at gmail.com (Ilya Grigorik)
Date: Tue, 12 May 2015 07:57:49 -0700
Subject: [RFC] event/openssl: Add dynamic record size support for serving ssl trafic
In-Reply-To:
References: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com>
Message-ID:

Awesome, thanks for putting this together!

On Tue, May 5, 2015 at 6:39 AM, chen wrote:

> There are 3 conditions, if true at the same time, may trigger SSL_write to
> send small record over the link, hard coded 1400 bytes at this time to keep
> it fit into MTU size. We just send out 3 of this small record at most to
> reduce framing overhead when serving large object, that is enough for
> browser to discovery other dependency of the page at top of html file. If
> the buffer chain is smaller than 4096 bytes, it will not justify the
> overhead of sending small record. After idle for 60s (hard coded at this
> moment), start all over again.

A few followup notes and questions...

1) "small record" size should be closer to 1300 bytes to account for various overhead, see [1].
2) any way to guarantee that packets are flushed at record boundaries?
3) why just 3 packets? I'd suggest emitting the first CWND's worth.. aka, 10.

ig

[1] https://issues.apache.org/jira/browse/TS-2503

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gzchenym at 126.com Wed May 13 02:28:16 2015
From: gzchenym at 126.com (chen)
Date: Wed, 13 May 2015 10:28:16 +0800 (CST)
Subject: [RFC] event/openssl: Add dynamic record size support for serving ssl trafic
In-Reply-To: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com>
References: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com>
Message-ID:

1) we will have that fixed
2) no API is exposed by OpenSSL that we could use to trigger a FLUSH; using SSL_write is all we can do. If we inspect the data using wireshark, you will find that one SSL_write results in one ssl record.
3) there are some old linux boxes that are still using IW4,

To Q2 specifically, would BIO_flush disrupt the internal state of the ssl layer? It would be better if we let the ssl layer itself handle the bio stuff.

At 2015-05-05 21:39:40, "chen" wrote:

Hi list:

This is v1 of the patchset implementing the feature SSL Dynamic Record Sizing, inspired by Google Front End (https://www.igvita.com/2013/10/24/optimizing-tls-record-size-and-buffering-latency/).

There are 3 conditions which, if true at the same time, may trigger SSL_write to send a small record over the link, hard coded to 1400 bytes at this time to keep it fitting into the MTU size. We send out at most 3 of these small records, to reduce framing overhead when serving a large object; that is enough for the browser to discover other dependencies at the top of the html file.
If the buffer chain is smaller than 4096 bytes, it will not justify the overhead of sending small record. After idle for 60s(hard coded at this moment), start all over again. Any comments is welcome. Regard YM hg export tip # HG changeset patch # User YM Chen # Date 1430828974 -28800 # Node ID 31bfe6403c340bdc4c04e8e87721736c07bceef8 # Parent 162b2d27d4e1ce45bb9217d6958348c64f726a28 [RFC] event/openssl: Add dynamic record size support for serving ssl trafic SSL Dynamic Record Sizing is a long sought after feature for website that serving huge amount of encrypted traffic. The rational behide this is that SSL record should not overflow the congestion window at the beginning of slow-start period and by doing so, we can let the browser decode the first ssl record within 1 rtt and establish other connections to fetch other resources that are referenced at the top of the html file. diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Wed Apr 29 14:59:02 2015 +0300 +++ b/src/event/ngx_event_openssl.c Tue May 05 20:29:34 2015 +0800 @@ -1508,6 +1508,11 @@ ngx_uint_t flush; ssize_t send, size; ngx_buf_t *buf; + ngx_msec_t last_sent_timer_diff; + ngx_uint_t loop_count; + + last_sent_timer_diff = ngx_current_msec - c->ssl->last_write_msec; + loop_count = 0; if (!c->ssl->buffer) { @@ -1517,7 +1522,13 @@ continue; } - n = ngx_ssl_write(c, in->buf->pos, in->buf->last - in->buf->pos); + size = in->buf->last - in->buf->pos; + + if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > 4096) { + size = 1400; + } + + n = ngx_ssl_write(c, in->buf->pos, size); if (n == NGX_ERROR) { return NGX_CHAIN_ERROR; @@ -1532,8 +1543,11 @@ if (in->buf->pos == in->buf->last) { in = in->next; } + + loop_count ++; } + c->ssl->last_write_msec = ngx_current_msec; return in; } @@ -1614,9 +1628,14 @@ if (size == 0) { buf->flush = 0; c->buffered &= ~NGX_SSL_BUFFERED; + c->ssl->last_write_msec = ngx_current_msec; return in; } + if(last_sent_timer_diff > 
1000*60 && loop_count < 3 && size > 4096) { + size = 1400; + } + n = ngx_ssl_write(c, buf->pos, size); if (n == NGX_ERROR) { @@ -1633,14 +1652,18 @@ break; } - flush = 0; - - buf->pos = buf->start; - buf->last = buf->start; + if(buf->last == buf->pos) { + flush = 0; + + buf->pos = buf->start; + buf->last = buf->start; + } if (in == NULL || send == limit) { break; } + + loop_count++; } buf->flush = flush; @@ -1652,6 +1675,7 @@ c->buffered &= ~NGX_SSL_BUFFERED; } + c->ssl->last_write_msec = ngx_current_msec; return in; } diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Wed Apr 29 14:59:02 2015 +0300 +++ b/src/event/ngx_event_openssl.h Tue May 05 20:29:34 2015 +0800 @@ -51,6 +51,8 @@ ngx_buf_t *buf; size_t buffer_size; + ngx_msec_t last_write_msec; + ngx_connection_handler_pt handler; ngx_event_handler_pt saved_read_handler; -------------- next part -------------- An HTML attachment was scrubbed... URL: From mat999 at gmail.com Wed May 13 04:57:00 2015 From: mat999 at gmail.com (SplitIce) Date: Wed, 13 May 2015 14:57:00 +1000 Subject: [RFC] event/openssl: Add dynamic record size support for serving ssl trafic In-Reply-To: References: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com> Message-ID: Good Job. Perhaps rather than changing the constants, they could be exposed as configuration options? On Wed, May 13, 2015 at 12:28 PM, chen wrote: > 1) we will have that fixed > 2) no api is exposed by openssl that we can use to trigger a FLUSH, use > SSL_write is what we can do. If we inspect the data using wireshark, you > will find out that one SSL_write we result in one ssl record. > 3) there are some old linux box that are still using IW4, > > To Q2 specifically, BIO_flush we disrupt the internal state of ssl > layer? And it will be better if we let ssl layer itself handle the bio > stuff. 
> > > > > > At 2015-05-05 21:39:40, "chen" wrote: > > Hi list: > This is v1 of the patchset the implementing the feature SSL Dynamic Record > Sizing, inspiring by Google Front End ( > https://www.igvita.com/2013/10/24/optimizing-tls-record-size-and-buffering-latency/ > ) . > There are 3 conditions, if true at the same time, may trigger SSL_write to > send small record over the link, hard coded 1400 bytes at this time to keep > it fit into MTU size. We just send out 3 of this small record at most to > reduce framing overhead when serving large object, that is enough for > browser to discovery other dependency of the page at top of html file. If > the buffer chain is smaller than 4096 bytes, it will not justify the > overhead of sending small record. After idle for 60s(hard coded at this > moment), start all over again. > > Any comments is welcome. > > Regard > YM > > hg export tip > # HG changeset patch > # User YM Chen > # Date 1430828974 -28800 > # Node ID 31bfe6403c340bdc4c04e8e87721736c07bceef8 > # Parent 162b2d27d4e1ce45bb9217d6958348c64f726a28 > [RFC] event/openssl: Add dynamic record size support for serving ssl trafic > > SSL Dynamic Record Sizing is a long sought after feature for website that > serving > huge amount of encrypted traffic. The rational behide this is that SSL > record should > not overflow the congestion window at the beginning of slow-start period > and by doing > so, we can let the browser decode the first ssl record within 1 rtt and > establish other > connections to fetch other resources that are referenced at the top of the > html file. 
> > diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c Wed Apr 29 14:59:02 2015 +0300 > +++ b/src/event/ngx_event_openssl.c Tue May 05 20:29:34 2015 +0800 > @@ -1508,6 +1508,11 @@ > ngx_uint_t flush; > ssize_t send, size; > ngx_buf_t *buf; > + ngx_msec_t last_sent_timer_diff; > + ngx_uint_t loop_count; > + > + last_sent_timer_diff = ngx_current_msec - c->ssl->last_write_msec; > + loop_count = 0; > > if (!c->ssl->buffer) { > > @@ -1517,7 +1522,13 @@ > continue; > } > > - n = ngx_ssl_write(c, in->buf->pos, in->buf->last - > in->buf->pos); > + size = in->buf->last - in->buf->pos; > + > + if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > > 4096) { > + size = 1400; > + } > + > + n = ngx_ssl_write(c, in->buf->pos, size); > > if (n == NGX_ERROR) { > return NGX_CHAIN_ERROR; > @@ -1532,8 +1543,11 @@ > if (in->buf->pos == in->buf->last) { > in = in->next; > } > + > + loop_count ++; > } > > + c->ssl->last_write_msec = ngx_current_msec; > return in; > } > > @@ -1614,9 +1628,14 @@ > if (size == 0) { > buf->flush = 0; > c->buffered &= ~NGX_SSL_BUFFERED; > + c->ssl->last_write_msec = ngx_current_msec; > return in; > } > > + if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > > 4096) { > + size = 1400; > + } > + > n = ngx_ssl_write(c, buf->pos, size); > > if (n == NGX_ERROR) { > @@ -1633,14 +1652,18 @@ > break; > } > > - flush = 0; > - > - buf->pos = buf->start; > - buf->last = buf->start; > + if(buf->last == buf->pos) { > + flush = 0; > + > + buf->pos = buf->start; > + buf->last = buf->start; > + } > > if (in == NULL || send == limit) { > break; > } > + > + loop_count++; > } > > buf->flush = flush; > @@ -1652,6 +1675,7 @@ > c->buffered &= ~NGX_SSL_BUFFERED; > } > > + c->ssl->last_write_msec = ngx_current_msec; > return in; > } > > diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.h > --- a/src/event/ngx_event_openssl.h Wed Apr 29 14:59:02 2015 +0300 > +++ b/src/event/ngx_event_openssl.h Tue 
May 05 20:29:34 2015 +0800 > @@ -51,6 +51,8 @@ > ngx_buf_t *buf; > size_t buffer_size; > > + ngx_msec_t last_write_msec; > + > ngx_connection_handler_pt handler; > > ngx_event_handler_pt saved_read_handler; > > > > > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gzchenym at 126.com Wed May 13 05:11:30 2015 From: gzchenym at 126.com (chen) Date: Wed, 13 May 2015 13:11:30 +0800 (GMT+08:00) Subject: [RFC] event/openssl: Add dynamic record size support for serving ssl trafic In-Reply-To: References: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com> Message-ID: <6c0ea560.98f6.14d4baf5218.Coremail.gzchenym@126.com> the constant value here is optimal for most cases, like another patch that optimizing ssl initial write buffer size for large certificate, I think having a knob at conf file did not make any difference. On 2015-05-13 12:57 , SplitIce Wrote: Good Job. Perhaps rather than changing the constants, they could be exposed as configuration options? On Wed, May 13, 2015 at 12:28 PM, chen wrote: 1) we will have that fixed 2) no api is exposed by openssl that we can use to trigger a FLUSH, use SSL_write is what we can do. If we inspect the data using wireshark, you will find out that one SSL_write we result in one ssl record. 3) there are some old linux box that are still using IW4, To Q2 specifically, BIO_flush we disrupt the internal state of ssl layer? And it will be better if we let ssl layer itself handle the bio stuff. At 2015-05-05 21:39:40, "chen" wrote: Hi list: This is v1 of the patchset the implementing the feature SSL Dynamic Record Sizing, inspiring by Google Front End (https://www.igvita.com/2013/10/24/optimizing-tls-record-size-and-buffering-latency/) . 
There are 3 conditions, if true at the same time, may trigger SSL_write to send small record over the link, hard coded 1400 bytes at this time to keep it fit into MTU size. We just send out 3 of this small record at most to reduce framing overhead when serving large object, that is enough for browser to discovery other dependency of the page at top of html file. If the buffer chain is smaller than 4096 bytes, it will not justify the overhead of sending small record. After idle for 60s(hard coded at this moment), start all over again. Any comments is welcome. Regard YM hg export tip # HG changeset patch # User YM Chen # Date 1430828974 -28800 # Node ID 31bfe6403c340bdc4c04e8e87721736c07bceef8 # Parent 162b2d27d4e1ce45bb9217d6958348c64f726a28 [RFC] event/openssl: Add dynamic record size support for serving ssl trafic SSL Dynamic Record Sizing is a long sought after feature for website that serving huge amount of encrypted traffic. The rational behide this is that SSL record should not overflow the congestion window at the beginning of slow-start period and by doing so, we can let the browser decode the first ssl record within 1 rtt and establish other connections to fetch other resources that are referenced at the top of the html file. 
diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Wed Apr 29 14:59:02 2015 +0300 +++ b/src/event/ngx_event_openssl.c Tue May 05 20:29:34 2015 +0800 @@ -1508,6 +1508,11 @@ ngx_uint_t flush; ssize_t send, size; ngx_buf_t *buf; + ngx_msec_t last_sent_timer_diff; + ngx_uint_t loop_count; + + last_sent_timer_diff = ngx_current_msec - c->ssl->last_write_msec; + loop_count = 0; if (!c->ssl->buffer) { @@ -1517,7 +1522,13 @@ continue; } - n = ngx_ssl_write(c, in->buf->pos, in->buf->last - in->buf->pos); + size = in->buf->last - in->buf->pos; + + if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > 4096) { + size = 1400; + } + + n = ngx_ssl_write(c, in->buf->pos, size); if (n == NGX_ERROR) { return NGX_CHAIN_ERROR; @@ -1532,8 +1543,11 @@ if (in->buf->pos == in->buf->last) { in = in->next; } + + loop_count ++; } + c->ssl->last_write_msec = ngx_current_msec; return in; } @@ -1614,9 +1628,14 @@ if (size == 0) { buf->flush = 0; c->buffered &= ~NGX_SSL_BUFFERED; + c->ssl->last_write_msec = ngx_current_msec; return in; } + if(last_sent_timer_diff > 1000*60 && loop_count < 3 && size > 4096) { + size = 1400; + } + n = ngx_ssl_write(c, buf->pos, size); if (n == NGX_ERROR) { @@ -1633,14 +1652,18 @@ break; } - flush = 0; - - buf->pos = buf->start; - buf->last = buf->start; + if(buf->last == buf->pos) { + flush = 0; + + buf->pos = buf->start; + buf->last = buf->start; + } if (in == NULL || send == limit) { break; } + + loop_count++; } buf->flush = flush; @@ -1652,6 +1675,7 @@ c->buffered &= ~NGX_SSL_BUFFERED; } + c->ssl->last_write_msec = ngx_current_msec; return in; } diff -r 162b2d27d4e1 -r 31bfe6403c34 src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Wed Apr 29 14:59:02 2015 +0300 +++ b/src/event/ngx_event_openssl.h Tue May 05 20:29:34 2015 +0800 @@ -51,6 +51,8 @@ ngx_buf_t *buf; size_t buffer_size; + ngx_msec_t last_write_msec; + ngx_connection_handler_pt handler; ngx_event_handler_pt saved_read_handler; 
_______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From igrigorik at gmail.com Wed May 13 07:15:42 2015 From: igrigorik at gmail.com (Ilya Grigorik) Date: Wed, 13 May 2015 00:15:42 -0700 Subject: [RFC] event/openssl: Add dynamic record size support for serving ssl trafic In-Reply-To: References: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com> Message-ID: On Tue, May 12, 2015 at 7:28 PM, chen wrote: > 3) there are some old linux box that are still using IW4, We shouldn't penalize modern systems because there are some laggards. IW10 has been default setting in Linux since 2.6.38... That said, can we expose this as a configuration option? ig -------------- next part -------------- An HTML attachment was scrubbed... URL: From gzchenym at 126.com Wed May 13 08:27:32 2015 From: gzchenym at 126.com (chen) Date: Wed, 13 May 2015 16:27:32 +0800 (GMT+08:00) Subject: [RFC] event/openssl: Add dynamic record size support for serving ssl trafic In-Reply-To: References: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com> Message-ID: <28668fb8.7928.14d4c62ccd9.Coremail.gzchenym@126.com> yes, I will add a knob for this feature in v2 of the patch.And by the way, did you think it is necessary to add bio_flush call after each ssl_write? On 2015-05-13 15:15 , Ilya Grigorik Wrote: On Tue, May 12, 2015 at 7:28 PM, chen wrote: 3) there are some old linux box that are still using IW4, We shouldn't penalize modern systems because there are some laggards. IW10 has been default setting in Linux since 2.6.38... That said, can we expose this as a configuration option? ig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igrigorik at gmail.com Wed May 13 16:06:15 2015 From: igrigorik at gmail.com (Ilya Grigorik) Date: Wed, 13 May 2015 09:06:15 -0700 Subject: [RFC] event/openssl: Add dynamic record size support for serving ssl trafic In-Reply-To: <28668fb8.7928.14d4c62ccd9.Coremail.gzchenym@126.com> References: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com> <28668fb8.7928.14d4c62ccd9.Coremail.gzchenym@126.com> Message-ID: On Wed, May 13, 2015 at 1:27 AM, chen wrote: > And by the way, did you think it is necessary to add bio_flush call after > each ssl_write? Good question, not sure. Perhaps someone else on the list can chime in.. ig -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Fri May 15 14:16:06 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 15 May 2015 14:16:06 +0000 Subject: [nginx] Events: ngx_event_t size reduction by grouping bit fields. Message-ID: details: http://hg.nginx.org/nginx/rev/0b8f6f75245d branches: changeset: 6145:0b8f6f75245d user: Igor Sysoev date: Fri May 15 17:15:33 2015 +0300 description: Events: ngx_event_t size reduction by grouping bit fields. 
diffstat: src/event/ngx_event.h | 17 ++++++++--------- 1 files changed, 8 insertions(+), 9 deletions(-) diffs (34 lines): diff -r d0a84ae2fb48 -r 0b8f6f75245d src/event/ngx_event.h --- a/src/event/ngx_event.h Wed May 06 17:04:00 2015 +0300 +++ b/src/event/ngx_event.h Fri May 15 17:15:33 2015 +0300 @@ -68,6 +68,14 @@ struct ngx_event_s { unsigned posted:1; + unsigned closed:1; + + /* to test on worker exit */ + unsigned channel:1; + unsigned resolver:1; + + unsigned cancelable:1; + #if (NGX_WIN32) /* setsockopt(SO_UPDATE_ACCEPT_CONTEXT) was successful */ unsigned accept_context_updated:1; @@ -116,15 +124,6 @@ struct ngx_event_s { /* the posted queue */ ngx_queue_t queue; - unsigned closed:1; - - /* to test on worker exit */ - unsigned channel:1; - unsigned resolver:1; - - unsigned cancelable:1; - - #if 0 /* the threads support */ From george at ucdn.com Fri May 15 14:45:44 2015 From: george at ucdn.com (George .) Date: Fri, 15 May 2015 17:45:44 +0300 Subject: wrong $bytes_sent on nginx-1.8.0 if aio threads is enabled Message-ID: Hi, I found following bug in nginx-1.8.0: if aio is configured with threads support - sometime (one in thousands requests) $bytes_sent contains only length of the header. I'm attaching my nginx.conf, build params and simple python script I'm using the reproduce this issue. Here is the output of test script when the problem appears: . . . received: 101700000 from access_log : 101700000 on 26 iteration 127.0.0.1 - - [15/May/2015 17:27:45] "GET /test HTTP/1.0" 200 - 127.0.0.1 - - [15/May/2015 17:27:47] "GET /test HTTP/1.0" 200 - received: 101700000 from access_log : 101700000 on 27 iteration 127.0.0.1 - - [15/May/2015 17:27:58] "GET /test HTTP/1.0" 200 - 127.0.0.1 - - [15/May/2015 17:28:00] "GET /test HTTP/1.0" 200 - received: 101700000 from access_log : 101690000 on 28 iteration test failed!! also in access_log file" . . . 
10170 GET /test HTTP/1.1 10170 GET /test HTTP/1.1 10170 GET /test HTTP/1.1 170 GET /test HTTP/1.1 10170 GET /test HTTP/1.1 10170 GET /test HTTP/1.1 . . Best regards, George -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 869 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: my_configure Type: application/octet-stream Size: 604 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: thread_bad_sent_bytes.py Type: text/x-python Size: 2354 bytes Desc: not available URL: From vbart at nginx.com Fri May 15 20:37:29 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 15 May 2015 23:37:29 +0300 Subject: wrong $bytes_sent on nginx-1.8.0 if aio threads is enabled In-Reply-To: References: Message-ID: <5552150.qNeIUJE5qs@vbart-workstation> On Friday 15 May 2015 17:45:44 George . wrote: > Hi, > > I found following bug in nginx-1.8.0: > > if aio is configured with threads support - sometime (one in thousands > requests) $bytes_sent contains only length of the header. I'm attaching my > nginx.conf, build params and simple python script I'm using the reproduce > this issue. > > Here is the output of test script when the problem appears: > . > . > . > received: 101700000 from access_log : 101700000 on 26 iteration > 127.0.0.1 - - [15/May/2015 17:27:45] "GET /test HTTP/1.0" 200 - > 127.0.0.1 - - [15/May/2015 17:27:47] "GET /test HTTP/1.0" 200 - > received: 101700000 from access_log : 101700000 on 27 iteration > 127.0.0.1 - - [15/May/2015 17:27:58] "GET /test HTTP/1.0" 200 - > 127.0.0.1 - - [15/May/2015 17:28:00] "GET /test HTTP/1.0" 200 - > received: 101700000 from access_log : 101690000 on 28 iteration > test failed!! > > also in access_log file" > > . > . > . 
> 10170 GET /test HTTP/1.1
> 10170 GET /test HTTP/1.1
> 10170 GET /test HTTP/1.1
> 170 GET /test HTTP/1.1
> 10170 GET /test HTTP/1.1
> 10170 GET /test HTTP/1.1
> .
> .

Thank you for the report. It is caused by a race condition between sendfile() task completion and connection close notifications. If the latter comes first, nginx logs that the client prematurely closed the connection.

Unfortunately, it's not easy to fix. I'll look at it later.

wbr, Valentin V. Bartenev

From teward at dark-net.net Fri May 15 20:51:38 2015
From: teward at dark-net.net (Thomas Ward (Dark-Net))
Date: Fri, 15 May 2015 16:51:38 -0400
Subject: Possible bug - hostname-defined upstream locations, bind addresses, etc. from /etc/hosts not referenced at some load times
Message-ID:

Long subject, I know, but this has been noticed in NGINX 1.6.x, 1.7.x, 1.8.x, and my 1.9.x from-source builds.

The documentation for 'listen', 'upstream' blocks, and many other things does state that hostnames are usable in binding and proxying. While this is very understandable, I think we have a few issues in there, perhaps even bugs.

At some system boots, /etc/hosts is not read or interpreted by nginx. Therefore, I run into resolution issues for addresses that are defined in the local /etc/hosts file. Running nginx as a service *after* a reboot seems to make the binding work, but this is an issue that has cropped up multiple times downstream on bugs on my radar in Debian and Ubuntu, and even on my own packaging of the nginx source code. I've also replicated it on the nginx.org repository as well.
Thomas From ru at nginx.com Fri May 15 22:32:42 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 15 May 2015 22:32:42 +0000 Subject: [nginx] Upstream: times to obtain header/response are stored as ... Message-ID: details: http://hg.nginx.org/nginx/rev/59fc60585f1e branches: changeset: 6146:59fc60585f1e user: Ruslan Ermilov date: Sat May 16 01:31:04 2015 +0300 description: Upstream: times to obtain header/response are stored as ngx_msec_t. diffstat: src/http/ngx_http_upstream.c | 37 +++++++++++-------------------------- src/http/ngx_http_upstream.h | 6 ++---- 2 files changed, 13 insertions(+), 30 deletions(-) diffs (111 lines): diff -r 0b8f6f75245d -r 59fc60585f1e src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri May 15 17:15:33 2015 +0300 +++ b/src/http/ngx_http_upstream.c Sat May 16 01:31:04 2015 +0300 @@ -1300,15 +1300,12 @@ static void ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) { ngx_int_t rc; - ngx_time_t *tp; ngx_connection_t *c; r->connection->log->action = "connecting to upstream"; - if (u->state && u->state->response_sec) { - tp = ngx_timeofday(); - u->state->response_sec = tp->sec - u->state->response_sec; - u->state->response_msec = tp->msec - u->state->response_msec; + if (u->state && u->state->response_time) { + u->state->response_time = ngx_current_msec - u->state->response_time; } u->state = ngx_array_push(r->upstream_states); @@ -1320,10 +1317,8 @@ ngx_http_upstream_connect(ngx_http_reque ngx_memzero(u->state, sizeof(ngx_http_upstream_state_t)); - tp = ngx_timeofday(); - u->state->response_sec = tp->sec; - u->state->response_msec = tp->msec; - u->state->header_sec = (time_t) NGX_ERROR; + u->state->response_time = ngx_current_msec; + u->state->header_time = (ngx_msec_t) -1; rc = ngx_event_connect_peer(&u->peer); @@ -2017,7 +2012,6 @@ ngx_http_upstream_process_header(ngx_htt { ssize_t n; ngx_int_t rc; - ngx_time_t *tp; ngx_connection_t *c; c = u->peer.connection; @@ -2138,9 +2132,7 @@ 
ngx_http_upstream_process_header(ngx_htt /* rc == NGX_OK */ - tp = ngx_timeofday(); - u->state->header_sec = tp->sec - u->state->response_sec; - u->state->header_msec = tp->msec - u->state->response_msec; + u->state->header_time = ngx_current_msec - u->state->response_time; if (u->headers_in.status_n >= NGX_HTTP_SPECIAL_RESPONSE) { @@ -3923,8 +3915,7 @@ static void ngx_http_upstream_finalize_request(ngx_http_request_t *r, ngx_http_upstream_t *u, ngx_int_t rc) { - ngx_uint_t flush; - ngx_time_t *tp; + ngx_uint_t flush; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "finalize http upstream request: %i", rc); @@ -3943,10 +3934,8 @@ ngx_http_upstream_finalize_request(ngx_h u->resolved->ctx = NULL; } - if (u->state && u->state->response_sec) { - tp = ngx_timeofday(); - u->state->response_sec = tp->sec - u->state->response_sec; - u->state->response_msec = tp->msec - u->state->response_msec; + if (u->state && u->state->response_time) { + u->state->response_time = ngx_current_msec - u->state->response_time; if (u->pipe && u->pipe->read_length) { u->state->response_length = u->pipe->read_length; @@ -5020,15 +5009,11 @@ ngx_http_upstream_response_time_variable for ( ;; ) { if (state[i].status) { - if (data - && state[i].header_sec != (time_t) NGX_ERROR) - { - ms = (ngx_msec_int_t) - (state[i].header_sec * 1000 + state[i].header_msec); + if (data && state[i].header_time != (ngx_msec_t) -1) { + ms = state[i].header_time; } else { - ms = (ngx_msec_int_t) - (state[i].response_sec * 1000 + state[i].response_msec); + ms = state[i].response_time; } ms = ngx_max(ms, 0); diff -r 0b8f6f75245d -r 59fc60585f1e src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h Fri May 15 17:15:33 2015 +0300 +++ b/src/http/ngx_http_upstream.h Sat May 16 01:31:04 2015 +0300 @@ -58,10 +58,8 @@ typedef struct { ngx_uint_t bl_state; ngx_uint_t status; - time_t response_sec; - ngx_uint_t response_msec; - time_t header_sec; - ngx_uint_t header_msec; + ngx_msec_t response_time; + 
ngx_msec_t header_time; off_t response_length; ngx_str_t *peer; From ru at nginx.com Fri May 15 22:32:45 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 15 May 2015 22:32:45 +0000 Subject: [nginx] Upstream: $upstream_connect_time. Message-ID: details: http://hg.nginx.org/nginx/rev/74b6ef56ea56 branches: changeset: 6147:74b6ef56ea56 user: Ruslan Ermilov date: Sat May 16 01:32:27 2015 +0300 description: Upstream: $upstream_connect_time. The variable keeps time spent on establishing a connection with the upstream server. diffstat: src/http/ngx_http_upstream.c | 14 +++++++++++++- src/http/ngx_http_upstream.h | 1 + 2 files changed, 14 insertions(+), 1 deletions(-) diffs (58 lines): diff -r 59fc60585f1e -r 74b6ef56ea56 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Sat May 16 01:31:04 2015 +0300 +++ b/src/http/ngx_http_upstream.c Sat May 16 01:32:27 2015 +0300 @@ -363,6 +363,10 @@ static ngx_http_variable_t ngx_http_ups ngx_http_upstream_status_variable, 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("upstream_connect_time"), NULL, + ngx_http_upstream_response_time_variable, 2, + NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("upstream_header_time"), NULL, ngx_http_upstream_response_time_variable, 1, NGX_HTTP_VAR_NOCACHEABLE, 0 }, @@ -1318,6 +1322,7 @@ ngx_http_upstream_connect(ngx_http_reque ngx_memzero(u->state, sizeof(ngx_http_upstream_state_t)); u->state->response_time = ngx_current_msec; + u->state->connect_time = (ngx_msec_t) -1; u->state->header_time = (ngx_msec_t) -1; rc = ngx_event_connect_peer(&u->peer); @@ -1760,6 +1765,10 @@ ngx_http_upstream_send_request(ngx_http_ ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http upstream send request"); + if (u->state->connect_time == (ngx_msec_t) -1) { + u->state->connect_time = ngx_current_msec - u->state->response_time; + } + if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); return; @@ -5009,9 +5018,12 @@ 
ngx_http_upstream_response_time_variable for ( ;; ) { if (state[i].status) { - if (data && state[i].header_time != (ngx_msec_t) -1) { + if (data == 1 && state[i].header_time != (ngx_msec_t) -1) { ms = state[i].header_time; + } else if (data == 2 && state[i].connect_time != (ngx_msec_t) -1) { + ms = state[i].connect_time; + } else { ms = state[i].response_time; } diff -r 59fc60585f1e -r 74b6ef56ea56 src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h Sat May 16 01:31:04 2015 +0300 +++ b/src/http/ngx_http_upstream.h Sat May 16 01:32:27 2015 +0300 @@ -59,6 +59,7 @@ typedef struct { ngx_uint_t status; ngx_msec_t response_time; + ngx_msec_t connect_time; ngx_msec_t header_time; off_t response_length; From mdounin at mdounin.ru Sun May 17 04:18:26 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 17 May 2015 07:18:26 +0300 Subject: Possible bug - hostname-defined upstream locations, bind addresses, etc. from /etc/hosts not referenced at some load times In-Reply-To: References: Message-ID: <20150517041826.GB72766@mdounin.ru> Hello! On Fri, May 15, 2015 at 04:51:38PM -0400, Thomas Ward (Dark-Net) wrote: > Long subject, I know, but this has been noticed in NGINX 1.6.x, 1.7.x, > 1.8.x, and my 1.9.x from-source builds. > > The documentation for 'listen', 'upstream' blocks, and many other > things does state hostnames are usable in binding and proxying. While > this is very understandable, I think we have a few issues in there, > perhaps even bugs. > > At some system boots, /etc/hosts is not read or interpreted by nginx. > Therefore, I run into resolution issues for addresses that are defined > in teh local /etc/hosts file. Running nginx as a service *after* a > reboot seems to make the binding work, but this is an issue that has > cropped up multiple times downstream on bugs on my radar in Debian and > Ubuntu, and even on my own packaging of the nginx source code. I've > also replicated it on the nginx.org repository as well. 
> > Is there some way to enforce nginx checking the /etc/hosts? Or is > this just a system boot time race condition where nginx tries to load > before /etc/hosts and the underlying system/kernel DNS handling (which > states that /etc/hosts has resolution or such) isn't initiated yet? To resolve names during configuration parsing nginx uses the gethostbyname() or getaddrinfo() functions. It's up to the OS to provide the appropriate service. That is, what you describe looks like a system startup race condition. I'm not really familiar with Debian/Ubuntu, but maybe adding $named to the Required-Start list in the init script will fix things. -- Maxim Dounin http://nginx.org/ From teward at dark-net.net Sun May 17 05:30:01 2015 From: teward at dark-net.net (Thomas Ward) Date: Sun, 17 May 2015 01:30:01 -0400 Subject: Possible bug - hostname-defined upstream locations, bind addresses, etc. from /etc/hosts not referenced at some load times In-Reply-To: <20150517041826.GB72766@mdounin.ru> References: <20150517041826.GB72766@mdounin.ru> Message-ID: Maxim: > On May 17, 2015, at 00:18, Maxim Dounin wrote: > It's up to the > OS to provide the appropriate service. That is, what you describe > looks like a system startup race condition. That's what I thought, but wasn't entirely sure. > I'm not really > familiar with Debian/Ubuntu, but maybe adding $named to the > Required-Start list in the init script will fix things. I'll have to check the three init systems to see how to do this in all versions... some use sysvinit, some use upstart, and some use systemd. Thanks for the pointers on potential solutions, Maxim, and for confirming what I thought: that this is a system startup race condition and not really a bug. 
Thomas From nowshek2 at gmail.com Mon May 18 07:28:56 2015 From: nowshek2 at gmail.com (Abhishek Kumar) Date: Mon, 18 May 2015 12:58:56 +0530 Subject: New feature request: Docker files for Power platform (SLES, RHEL, Ubuntu) In-Reply-To: <3084c2061c84deb9e3e40e7afbc58153@sebres.de> References: <3084c2061c84deb9e3e40e7afbc58153@sebres.de> Message-ID: Hi all, I have written a Dockerfile for nginx. It is for Ubuntu on PPC64LE. I have some questions as stated below: - What is the procedure for checking dockerfiles into the https://github.com/nginxinc/docker-nginx repository? - Do I need to sign any contributor license agreement before checking anything into https://github.com/nginxinc/docker-nginx? Regards, Abhishek Kumar On Thu, May 7, 2015 at 5:04 PM, Sergey Brester wrote: > Hi, > > It is a mercurial (hg) repo; for contributions to it please read here: > > http://nginx.org/en/docs/contributing_changes.html > > In short, it should be a changeset (created with hg export)... > > BTW: I don't know whether the nginx developers will want it, but even if not (and you > possibly have a github account), please make a pull request at: > > https://github.com/sebres/nginx > > Or if you have your own repository on github, please let me know. > > Thx, > sebres. > > 07.05.2015 12:46, Abhishek Kumar: > > Hi, >> I have written a dockerfile for building nginx from source. I have built >> and tested the source code available on git successfully through the >> dockerfile for PPC64LE architecture. 
The dockerfile runs successfully on >> the following platforms: >> Ubuntu 14.10 >> SUSE Linux 12.0 >> RHEL 7.1 >> >> Kindly suggest where I can (to which repository) contribute this >> dockerfile for nginx >> >> Regards, >> Abhishek Kumar >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel [1] >> > > > Links: > ------ > [1] http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Mon May 18 13:06:16 2015 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 18 May 2015 13:06:16 +0000 Subject: [nginx] Upstream hash: consistency across little/big endianness. Message-ID: details: http://hg.nginx.org/nginx/rev/bf8b6534db3a branches: changeset: 6148:bf8b6534db3a user: Sergey Kandaurov date: Mon May 18 16:05:44 2015 +0300 description: Upstream hash: consistency across little/big endianness. 
diffstat: src/http/modules/ngx_http_upstream_hash_module.c | 19 +++++++++++++++---- src/stream/ngx_stream_upstream_hash_module.c | 19 +++++++++++++++---- 2 files changed, 30 insertions(+), 8 deletions(-) diffs (106 lines): diff -r 74b6ef56ea56 -r bf8b6534db3a src/http/modules/ngx_http_upstream_hash_module.c --- a/src/http/modules/ngx_http_upstream_hash_module.c Sat May 16 01:32:27 2015 +0300 +++ b/src/http/modules/ngx_http_upstream_hash_module.c Mon May 18 16:05:44 2015 +0300 @@ -277,13 +277,17 @@ ngx_http_upstream_init_chash(ngx_conf_t { u_char *host, *port, c; size_t host_len, port_len, size; - uint32_t hash, base_hash, prev_hash; + uint32_t hash, base_hash; ngx_str_t *server; ngx_uint_t npoints, i, j; ngx_http_upstream_rr_peer_t *peer; ngx_http_upstream_rr_peers_t *peers; ngx_http_upstream_chash_points_t *points; ngx_http_upstream_hash_srv_conf_t *hcf; + union { + uint32_t value; + u_char byte[4]; + } prev_hash; if (ngx_http_upstream_init_round_robin(cf, us) != NGX_OK) { return NGX_ERROR; @@ -350,20 +354,27 @@ ngx_http_upstream_init_chash(ngx_conf_t ngx_crc32_update(&base_hash, (u_char *) "", 1); ngx_crc32_update(&base_hash, port, port_len); - prev_hash = 0; + prev_hash.value = 0; npoints = peer->weight * 160; for (j = 0; j < npoints; j++) { hash = base_hash; - ngx_crc32_update(&hash, (u_char *) &prev_hash, sizeof(uint32_t)); + ngx_crc32_update(&hash, prev_hash.byte, 4); ngx_crc32_final(hash); points->point[points->number].hash = hash; points->point[points->number].server = server; points->number++; - prev_hash = hash; +#if (NGX_HAVE_LITTLE_ENDIAN) + prev_hash.value = hash; +#else + prev_hash.byte[0] = (u_char) (hash & 0xff); + prev_hash.byte[1] = (u_char) ((hash >> 8) & 0xff); + prev_hash.byte[2] = (u_char) ((hash >> 16) & 0xff); + prev_hash.byte[3] = (u_char) ((hash >> 24) & 0xff); +#endif } } diff -r 74b6ef56ea56 -r bf8b6534db3a src/stream/ngx_stream_upstream_hash_module.c --- a/src/stream/ngx_stream_upstream_hash_module.c Sat May 16 01:32:27 2015 +0300 +++ 
b/src/stream/ngx_stream_upstream_hash_module.c Mon May 18 16:05:44 2015 +0300 @@ -271,13 +271,17 @@ ngx_stream_upstream_init_chash(ngx_conf_ { u_char *host, *port, c; size_t host_len, port_len, size; - uint32_t hash, base_hash, prev_hash; + uint32_t hash, base_hash; ngx_str_t *server; ngx_uint_t npoints, i, j; ngx_stream_upstream_rr_peer_t *peer; ngx_stream_upstream_rr_peers_t *peers; ngx_stream_upstream_chash_points_t *points; ngx_stream_upstream_hash_srv_conf_t *hcf; + union { + uint32_t value; + u_char byte[4]; + } prev_hash; if (ngx_stream_upstream_init_round_robin(cf, us) != NGX_OK) { return NGX_ERROR; @@ -344,20 +348,27 @@ ngx_stream_upstream_init_chash(ngx_conf_ ngx_crc32_update(&base_hash, (u_char *) "", 1); ngx_crc32_update(&base_hash, port, port_len); - prev_hash = 0; + prev_hash.value = 0; npoints = peer->weight * 160; for (j = 0; j < npoints; j++) { hash = base_hash; - ngx_crc32_update(&hash, (u_char *) &prev_hash, sizeof(uint32_t)); + ngx_crc32_update(&hash, prev_hash.byte, 4); ngx_crc32_final(hash); points->point[points->number].hash = hash; points->point[points->number].server = server; points->number++; - prev_hash = hash; +#if (NGX_HAVE_LITTLE_ENDIAN) + prev_hash.value = hash; +#else + prev_hash.byte[0] = (u_char) (hash & 0xff); + prev_hash.byte[1] = (u_char) ((hash >> 8) & 0xff); + prev_hash.byte[2] = (u_char) ((hash >> 16) & 0xff); + prev_hash.byte[3] = (u_char) ((hash >> 24) & 0xff); +#endif } } From vbart at nginx.com Tue May 19 16:27:38 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 19 May 2015 16:27:38 +0000 Subject: [nginx] Core: properly initialized written bytes counter in memo... Message-ID: details: http://hg.nginx.org/nginx/rev/2c21bfe3da89 branches: changeset: 6149:2c21bfe3da89 user: Valentin Bartenev date: Tue May 19 19:27:07 2015 +0300 description: Core: properly initialized written bytes counter in memory log. 
diffstat: src/core/ngx_log.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r bf8b6534db3a -r 2c21bfe3da89 src/core/ngx_log.c --- a/src/core/ngx_log.c Mon May 18 16:05:44 2015 +0300 +++ b/src/core/ngx_log.c Tue May 19 19:27:07 2015 +0300 @@ -609,7 +609,7 @@ ngx_log_set_log(ngx_conf_t *cf, ngx_log_ return NGX_CONF_ERROR; } - buf = ngx_palloc(cf->pool, sizeof(ngx_log_memory_buf_t)); + buf = ngx_pcalloc(cf->pool, sizeof(ngx_log_memory_buf_t)); if (buf == NULL) { return NGX_CONF_ERROR; } From mdounin at mdounin.ru Wed May 20 12:59:49 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 May 2015 12:59:49 +0000 Subject: [nginx] Configure: style. Message-ID: details: http://hg.nginx.org/nginx/rev/0371ef1c24a9 branches: changeset: 6150:0371ef1c24a9 user: Maxim Dounin date: Wed May 20 15:51:13 2015 +0300 description: Configure: style. diffstat: auto/unix | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/auto/unix b/auto/unix --- a/auto/unix +++ b/auto/unix @@ -304,7 +304,7 @@ ngx_feature_run=no ngx_feature_incs="#include " ngx_feature_path= ngx_feature_libs= -ngx_feature_test="setsockopt(0, SOL_SOCKET, SO_SETFIB, NULL, 4)" +ngx_feature_test="setsockopt(0, SOL_SOCKET, SO_SETFIB, NULL, 0)" . auto/feature From mdounin at mdounin.ru Wed May 20 12:59:51 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 May 2015 12:59:51 +0000 Subject: [nginx] Introduced worker number, ngx_worker. Message-ID: details: http://hg.nginx.org/nginx/rev/b4cc553aafeb branches: changeset: 6151:b4cc553aafeb user: Maxim Dounin date: Wed May 20 15:51:21 2015 +0300 description: Introduced worker number, ngx_worker. 
diffstat: src/os/unix/ngx_process_cycle.c | 2 ++ src/os/unix/ngx_process_cycle.h | 1 + src/os/win32/ngx_process_cycle.c | 1 + src/os/win32/ngx_process_cycle.h | 1 + 4 files changed, 5 insertions(+), 0 deletions(-) diffs (52 lines): diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c +++ b/src/os/unix/ngx_process_cycle.c @@ -29,6 +29,7 @@ static void ngx_cache_loader_process_han ngx_uint_t ngx_process; +ngx_uint_t ngx_worker; ngx_pid_t ngx_pid; sig_atomic_t ngx_reap; @@ -731,6 +732,7 @@ ngx_worker_process_cycle(ngx_cycle_t *cy ngx_connection_t *c; ngx_process = NGX_PROCESS_WORKER; + ngx_worker = worker; ngx_worker_process_init(cycle, worker); diff --git a/src/os/unix/ngx_process_cycle.h b/src/os/unix/ngx_process_cycle.h --- a/src/os/unix/ngx_process_cycle.h +++ b/src/os/unix/ngx_process_cycle.h @@ -39,6 +39,7 @@ void ngx_single_process_cycle(ngx_cycle_ extern ngx_uint_t ngx_process; +extern ngx_uint_t ngx_worker; extern ngx_pid_t ngx_pid; extern ngx_pid_t ngx_new_binary; extern ngx_uint_t ngx_inherited; diff --git a/src/os/win32/ngx_process_cycle.c b/src/os/win32/ngx_process_cycle.c --- a/src/os/win32/ngx_process_cycle.c +++ b/src/os/win32/ngx_process_cycle.c @@ -29,6 +29,7 @@ static ngx_thread_value_t __stdcall ngx_ ngx_uint_t ngx_process; +ngx_uint_t ngx_worker; ngx_pid_t ngx_pid; ngx_uint_t ngx_inherited; diff --git a/src/os/win32/ngx_process_cycle.h b/src/os/win32/ngx_process_cycle.h --- a/src/os/win32/ngx_process_cycle.h +++ b/src/os/win32/ngx_process_cycle.h @@ -25,6 +25,7 @@ void ngx_close_handle(HANDLE h); extern ngx_uint_t ngx_process; +extern ngx_uint_t ngx_worker; extern ngx_pid_t ngx_pid; extern ngx_uint_t ngx_exiting; From mdounin at mdounin.ru Wed May 20 12:59:54 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 May 2015 12:59:54 +0000 Subject: [nginx] Simplified ngx_http_init_listening(). 
Message-ID: details: http://hg.nginx.org/nginx/rev/3c344ea7d88b branches: changeset: 6152:3c344ea7d88b user: Maxim Dounin date: Wed May 20 15:51:28 2015 +0300 description: Simplified ngx_http_init_listening(). There is no need to set "i" to 0, as it's expected to be 0 assuming the bindings are properly sorted, and we already rely on this when explicitly set hport->naddrs to 1. Remaining conditional code is replaced with identical "hport->naddrs = i + 1". Identical modifications are done in the mail and stream modules, in the ngx_mail_optimize_servers() and ngx_stream_optimize_servers() functions, respectively. No functional changes. diffstat: src/http/ngx_http.c | 8 +------- src/mail/ngx_mail.c | 8 +------- src/stream/ngx_stream.c | 8 +------- 3 files changed, 3 insertions(+), 21 deletions(-) diffs (54 lines): diff --git a/src/http/ngx_http.c b/src/http/ngx_http.c --- a/src/http/ngx_http.c +++ b/src/http/ngx_http.c @@ -1719,13 +1719,7 @@ ngx_http_init_listening(ngx_conf_t *cf, ls->servers = hport; - if (i == last - 1) { - hport->naddrs = last; - - } else { - hport->naddrs = 1; - i = 0; - } + hport->naddrs = i + 1; switch (ls->sockaddr->sa_family) { diff --git a/src/mail/ngx_mail.c b/src/mail/ngx_mail.c --- a/src/mail/ngx_mail.c +++ b/src/mail/ngx_mail.c @@ -392,13 +392,7 @@ ngx_mail_optimize_servers(ngx_conf_t *cf ls->servers = mport; - if (i == last - 1) { - mport->naddrs = last; - - } else { - mport->naddrs = 1; - i = 0; - } + mport->naddrs = i + 1; switch (ls->sockaddr->sa_family) { #if (NGX_HAVE_INET6) diff --git a/src/stream/ngx_stream.c b/src/stream/ngx_stream.c --- a/src/stream/ngx_stream.c +++ b/src/stream/ngx_stream.c @@ -393,13 +393,7 @@ ngx_stream_optimize_servers(ngx_conf_t * ls->servers = stport; - if (i == last - 1) { - stport->naddrs = last; - - } else { - stport->naddrs = 1; - i = 0; - } + stport->naddrs = i + 1; switch (ls->sockaddr->sa_family) { #if (NGX_HAVE_INET6) From mdounin at mdounin.ru Wed May 20 12:59:57 2015 From: mdounin at mdounin.ru 
(Maxim Dounin) Date: Wed, 20 May 2015 12:59:57 +0000 Subject: [nginx] The "reuseport" option of the "listen" directive. Message-ID: details: http://hg.nginx.org/nginx/rev/4f6efabcb09b branches: changeset: 6153:4f6efabcb09b user: Maxim Dounin date: Wed May 20 15:51:56 2015 +0300 description: The "reuseport" option of the "listen" directive. When configured, an individual listen socket on a given address is created for each worker process. This allows to reduce in-kernel lock contention on configurations with high accept rates, resulting in better performance. As of now it works on Linux and DragonFly BSD. Note that on Linux incoming connection requests are currently tied up to a specific listen socket, and if some sockets are closed, connection requests will be reset, see https://lwn.net/Articles/542629/. With nginx, this may happen if the number of worker processes is reduced. There is no such problem on DragonFly BSD. Based on previous work by Sepherosa Ziehau and Yingqi Lu. diffstat: auto/unix | 10 +++ src/core/ngx_connection.c | 110 ++++++++++++++++++++++++++++++++++++ src/core/ngx_connection.h | 7 ++ src/core/ngx_cycle.c | 11 +++ src/event/ngx_event.c | 6 + src/event/ngx_event_accept.c | 25 ++++++- src/http/ngx_http.c | 8 ++ src/http/ngx_http_core_module.c | 13 ++++ src/http/ngx_http_core_module.h | 3 + src/stream/ngx_stream.c | 4 + src/stream/ngx_stream.h | 3 + src/stream/ngx_stream_core_module.c | 12 +++ 12 files changed, 206 insertions(+), 6 deletions(-) diffs (truncated from 426 to 300 lines): diff --git a/auto/unix b/auto/unix --- a/auto/unix +++ b/auto/unix @@ -308,6 +308,16 @@ ngx_feature_test="setsockopt(0, SOL_SOCK . auto/feature +ngx_feature="SO_REUSEPORT" +ngx_feature_name="NGX_HAVE_REUSEPORT" +ngx_feature_run=no +ngx_feature_incs="#include " +ngx_feature_path= +ngx_feature_libs= +ngx_feature_test="setsockopt(0, SOL_SOCKET, SO_REUSEPORT, NULL, 0)" +. 
auto/feature + + ngx_feature="SO_ACCEPTFILTER" ngx_feature_name="NGX_HAVE_DEFERRED_ACCEPT" ngx_feature_run=no diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c --- a/src/core/ngx_connection.c +++ b/src/core/ngx_connection.c @@ -91,6 +91,43 @@ ngx_create_listening(ngx_conf_t *cf, voi ngx_int_t +ngx_clone_listening(ngx_conf_t *cf, ngx_listening_t *ls) +{ +#if (NGX_HAVE_REUSEPORT) + + ngx_int_t n; + ngx_core_conf_t *ccf; + ngx_listening_t ols; + + if (!ls->reuseport) { + return NGX_OK; + } + + ols = *ls; + + ccf = (ngx_core_conf_t *) ngx_get_conf(cf->cycle->conf_ctx, + ngx_core_module); + + for (n = 1; n < ccf->worker_processes; n++) { + + /* create a socket for each worker process */ + + ls = ngx_array_push(&cf->cycle->listening); + if (ls == NULL) { + return NGX_ERROR; + } + + *ls = ols; + ls->worker = n; + } + +#endif + + return NGX_OK; +} + + +ngx_int_t ngx_set_inherited_sockets(ngx_cycle_t *cycle) { size_t len; @@ -106,6 +143,9 @@ ngx_set_inherited_sockets(ngx_cycle_t *c #if (NGX_HAVE_DEFERRED_ACCEPT && defined TCP_DEFER_ACCEPT) int timeout; #endif +#if (NGX_HAVE_REUSEPORT) + int reuseport; +#endif ls = cycle->listening.elts; for (i = 0; i < cycle->listening.nelts; i++) { @@ -215,6 +255,25 @@ ngx_set_inherited_sockets(ngx_cycle_t *c #endif #endif +#if (NGX_HAVE_REUSEPORT) + + reuseport = 0; + olen = sizeof(int); + + if (getsockopt(ls[i].fd, SOL_SOCKET, SO_REUSEPORT, + (void *) &reuseport, &olen) + == -1) + { + ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_socket_errno, + "getsockopt(SO_REUSEPORT) %V failed, ignored", + &ls[i].addr_text); + + } else { + ls[i].reuseport = reuseport ? 
1 : 0; + } + +#endif + #if (NGX_HAVE_TCP_FASTOPEN) olen = sizeof(int); @@ -332,6 +391,31 @@ ngx_open_listening_sockets(ngx_cycle_t * continue; } +#if (NGX_HAVE_REUSEPORT) + + if (ls[i].add_reuseport) { + + /* + * to allow transition from a socket without SO_REUSEPORT + * to multiple sockets with SO_REUSEPORT, we have to set + * SO_REUSEPORT on the old socket before opening new ones + */ + + int reuseport = 1; + + if (setsockopt(ls[i].fd, SOL_SOCKET, SO_REUSEPORT, + (const void *) &reuseport, sizeof(int)) + == -1) + { + ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_socket_errno, + "setsockopt(SO_REUSEPORT) %V failed, ignored", + &ls[i].addr_text); + } + + ls[i].add_reuseport = 0; + } +#endif + if (ls[i].fd != (ngx_socket_t) -1) { continue; } @@ -370,6 +454,32 @@ ngx_open_listening_sockets(ngx_cycle_t * return NGX_ERROR; } +#if (NGX_HAVE_REUSEPORT) + + if (ls[i].reuseport) { + int reuseport; + + reuseport = 1; + + if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT, + (const void *) &reuseport, sizeof(int)) + == -1) + { + ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno, + "setsockopt(SO_REUSEPORT) %V failed, ignored", + &ls[i].addr_text); + + if (ngx_close_socket(s) == -1) { + ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno, + ngx_close_socket_n " %V failed", + &ls[i].addr_text); + } + + return NGX_ERROR; + } + } +#endif + #if (NGX_HAVE_INET6 && defined IPV6_V6ONLY) if (ls[i].sockaddr->sa_family == AF_INET6) { diff --git a/src/core/ngx_connection.h b/src/core/ngx_connection.h --- a/src/core/ngx_connection.h +++ b/src/core/ngx_connection.h @@ -51,6 +51,8 @@ struct ngx_listening_s { ngx_listening_t *previous; ngx_connection_t *connection; + ngx_uint_t worker; + unsigned open:1; unsigned remain:1; unsigned ignore:1; @@ -66,6 +68,10 @@ struct ngx_listening_s { #if (NGX_HAVE_INET6 && defined IPV6_V6ONLY) unsigned ipv6only:1; #endif +#if (NGX_HAVE_REUSEPORT) + unsigned reuseport:1; + unsigned add_reuseport:1; +#endif unsigned keepalive:2; #if (NGX_HAVE_DEFERRED_ACCEPT) @@ 
-203,6 +209,7 @@ struct ngx_connection_s { ngx_listening_t *ngx_create_listening(ngx_conf_t *cf, void *sockaddr, socklen_t socklen); +ngx_int_t ngx_clone_listening(ngx_conf_t *cf, ngx_listening_t *ls); ngx_int_t ngx_set_inherited_sockets(ngx_cycle_t *cycle); ngx_int_t ngx_open_listening_sockets(ngx_cycle_t *cycle); void ngx_configure_listening_sockets(ngx_cycle_t *cycle); diff --git a/src/core/ngx_cycle.c b/src/core/ngx_cycle.c --- a/src/core/ngx_cycle.c +++ b/src/core/ngx_cycle.c @@ -493,6 +493,10 @@ ngx_init_cycle(ngx_cycle_t *old_cycle) continue; } + if (ls[i].remain) { + continue; + } + if (ngx_cmp_sockaddr(nls[n].sockaddr, nls[n].socklen, ls[i].sockaddr, ls[i].socklen, 1) == NGX_OK) @@ -540,6 +544,13 @@ ngx_init_cycle(ngx_cycle_t *old_cycle) nls[n].add_deferred = 1; } #endif + +#if (NGX_HAVE_REUSEPORT) + if (nls[n].reuseport && !ls[i].reuseport) { + nls[n].add_reuseport = 1; + } +#endif + break; } } diff --git a/src/event/ngx_event.c b/src/event/ngx_event.c --- a/src/event/ngx_event.c +++ b/src/event/ngx_event.c @@ -725,6 +725,12 @@ ngx_event_process_init(ngx_cycle_t *cycl ls = cycle->listening.elts; for (i = 0; i < cycle->listening.nelts; i++) { +#if (NGX_HAVE_REUSEPORT) + if (ls[i].reuseport && ls[i].worker != ngx_worker) { + continue; + } +#endif + c = ngx_get_connection(ls[i].fd, cycle->log); if (c == NULL) { diff --git a/src/event/ngx_event_accept.c b/src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c +++ b/src/event/ngx_event_accept.c @@ -11,7 +11,7 @@ static ngx_int_t ngx_enable_accept_events(ngx_cycle_t *cycle); -static ngx_int_t ngx_disable_accept_events(ngx_cycle_t *cycle); +static ngx_int_t ngx_disable_accept_events(ngx_cycle_t *cycle, ngx_uint_t all); static void ngx_close_accepted_connection(ngx_connection_t *c); @@ -109,7 +109,7 @@ ngx_event_accept(ngx_event_t *ev) } if (err == NGX_EMFILE || err == NGX_ENFILE) { - if (ngx_disable_accept_events((ngx_cycle_t *) ngx_cycle) + if (ngx_disable_accept_events((ngx_cycle_t *) ngx_cycle, 1) != 
NGX_OK) { return; @@ -390,7 +390,7 @@ ngx_trylock_accept_mutex(ngx_cycle_t *cy "accept mutex lock failed: %ui", ngx_accept_mutex_held); if (ngx_accept_mutex_held) { - if (ngx_disable_accept_events(cycle) == NGX_ERROR) { + if (ngx_disable_accept_events(cycle, 0) == NGX_ERROR) { return NGX_ERROR; } @@ -413,7 +413,7 @@ ngx_enable_accept_events(ngx_cycle_t *cy c = ls[i].connection; - if (c->read->active) { + if (c == NULL || c->read->active) { continue; } @@ -427,7 +427,7 @@ ngx_enable_accept_events(ngx_cycle_t *cy static ngx_int_t -ngx_disable_accept_events(ngx_cycle_t *cycle) +ngx_disable_accept_events(ngx_cycle_t *cycle, ngx_uint_t all) { ngx_uint_t i; ngx_listening_t *ls; @@ -438,10 +438,23 @@ ngx_disable_accept_events(ngx_cycle_t *c c = ls[i].connection; - if (!c->read->active) { + if (c == NULL || !c->read->active) { continue; } From ru at nginx.com Wed May 20 19:44:49 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 20 May 2015 19:44:49 +0000 Subject: [nginx] Upstream: report to error_log when max_fails is reached. Message-ID: details: http://hg.nginx.org/nginx/rev/cca856715722 branches: changeset: 6154:cca856715722 user: Ruslan Ermilov date: Wed May 20 22:44:00 2015 +0300 description: Upstream: report to error_log when max_fails is reached. This can be useful to understand why "no live upstreams" happens, in particular. 
diffstat: src/http/ngx_http_upstream_round_robin.c | 5 +++++ src/stream/ngx_stream_upstream_round_robin.c | 5 +++++ 2 files changed, 10 insertions(+), 0 deletions(-) diffs (30 lines): diff -r 4f6efabcb09b -r cca856715722 src/http/ngx_http_upstream_round_robin.c --- a/src/http/ngx_http_upstream_round_robin.c Wed May 20 15:51:56 2015 +0300 +++ b/src/http/ngx_http_upstream_round_robin.c Wed May 20 22:44:00 2015 +0300 @@ -622,6 +622,11 @@ ngx_http_upstream_free_round_robin_peer( if (peer->max_fails) { peer->effective_weight -= peer->weight / peer->max_fails; + + if (peer->fails >= peer->max_fails) { + ngx_log_error(NGX_LOG_WARN, pc->log, 0, + "upstream server temporarily disabled"); + } } ngx_log_debug2(NGX_LOG_DEBUG_HTTP, pc->log, 0, diff -r 4f6efabcb09b -r cca856715722 src/stream/ngx_stream_upstream_round_robin.c --- a/src/stream/ngx_stream_upstream_round_robin.c Wed May 20 15:51:56 2015 +0300 +++ b/src/stream/ngx_stream_upstream_round_robin.c Wed May 20 22:44:00 2015 +0300 @@ -495,6 +495,11 @@ ngx_stream_upstream_free_round_robin_pee if (peer->max_fails) { peer->effective_weight -= peer->weight / peer->max_fails; + + if (peer->fails >= peer->max_fails) { + ngx_log_error(NGX_LOG_WARN, pc->log, 0, + "upstream server temporarily disabled"); + } } ngx_log_debug2(NGX_LOG_DEBUG_STREAM, pc->log, 0, From mdounin at mdounin.ru Thu May 21 18:33:44 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 May 2015 18:33:44 +0000 Subject: [nginx] Fixed reuseport with accept_mutex. Message-ID: details: http://hg.nginx.org/nginx/rev/193bbc006d5e branches: changeset: 6155:193bbc006d5e user: Maxim Dounin date: Thu May 21 19:39:11 2015 +0300 description: Fixed reuseport with accept_mutex. 
diffstat: src/event/ngx_event.c | 7 ++++++- 1 files changed, 6 insertions(+), 1 deletions(-) diffs (17 lines): diff --git a/src/event/ngx_event.c b/src/event/ngx_event.c --- a/src/event/ngx_event.c +++ b/src/event/ngx_event.c @@ -811,7 +811,12 @@ ngx_event_process_init(ngx_cycle_t *cycl rev->handler = ngx_event_accept; - if (ngx_use_accept_mutex) { + if (ngx_use_accept_mutex +#if (NGX_HAVE_REUSEPORT) + && !ls[i].reuseport +#endif + ) + { continue; } From mdounin at mdounin.ru Mon May 25 15:00:55 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 May 2015 15:00:55 +0000 Subject: [nginx] Configure: GNU Hurd properly recognized. Message-ID: details: http://hg.nginx.org/nginx/rev/a88e309f839b branches: changeset: 6156:a88e309f839b user: Maxim Dounin date: Mon May 25 17:58:13 2015 +0300 description: Configure: GNU Hurd properly recognized. With this change it's no longer needed to pass -D_GNU_SOURCE manually, and -D_FILE_OFFSET_BITS=64 is set to use 64-bit off_t. Note that nginx currently fails to work properly with master process enabled on GNU Hurd, as fcntl(F_SETOWN) returns EOPNOTSUPP for sockets as of GNU Hurd 0.6. Additionally, our strerror() preloading doesn't work well with GNU Hurd, as it uses large numbers for most errors. diffstat: auto/os/conf | 9 +++++++++ src/os/unix/ngx_posix_config.h | 8 ++++++++ 2 files changed, 17 insertions(+), 0 deletions(-) diffs (37 lines): diff --git a/auto/os/conf b/auto/os/conf --- a/auto/os/conf +++ b/auto/os/conf @@ -60,6 +60,15 @@ case "$NGX_PLATFORM" in CORE_SRCS="$UNIX_SRCS" ;; + GNU:*) + # GNU Hurd + have=NGX_GNU_HURD . 
auto/have_headers + CORE_INCS="$UNIX_INCS" + CORE_DEPS="$UNIX_DEPS $POSIX_DEPS" + CORE_SRCS="$UNIX_SRCS" + CC_AUX_FLAGS="$CC_AUX_FLAGS -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64" + ;; + *) CORE_INCS="$UNIX_INCS" CORE_DEPS="$UNIX_DEPS $POSIX_DEPS" diff --git a/src/os/unix/ngx_posix_config.h b/src/os/unix/ngx_posix_config.h --- a/src/os/unix/ngx_posix_config.h +++ b/src/os/unix/ngx_posix_config.h @@ -21,6 +21,14 @@ #endif +#if (NGX_GNU_HURD) +#ifndef _GNU_SOURCE +#define _GNU_SOURCE /* accept4() */ +#endif +#define _FILE_OFFSET_BITS 64 +#endif + + #ifdef __CYGWIN__ #define timezonevar /* timezone is variable */ #define NGX_BROKEN_SCM_RIGHTS 1 From mdounin at mdounin.ru Mon May 25 15:00:57 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 May 2015 15:00:57 +0000 Subject: [nginx] Disabled SSLv3 by default (ticket #653). Message-ID: details: http://hg.nginx.org/nginx/rev/b2899e7d0ef8 branches: changeset: 6157:b2899e7d0ef8 user: Maxim Dounin date: Mon May 25 17:58:20 2015 +0300 description: Disabled SSLv3 by default (ticket #653). 
diffstat: src/http/modules/ngx_http_proxy_module.c | 5 ++--- src/http/modules/ngx_http_ssl_module.c | 2 +- src/http/modules/ngx_http_uwsgi_module.c | 5 ++--- src/mail/ngx_mail_ssl_module.c | 2 +- src/stream/ngx_stream_proxy_module.c | 5 ++--- src/stream/ngx_stream_ssl_module.c | 2 +- 6 files changed, 9 insertions(+), 12 deletions(-) diffs (81 lines): diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -3168,9 +3168,8 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t prev->upstream.ssl_session_reuse, 1); ngx_conf_merge_bitmask_value(conf->ssl_protocols, prev->ssl_protocols, - (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3 - |NGX_SSL_TLSv1|NGX_SSL_TLSv1_1 - |NGX_SSL_TLSv1_2)); + (NGX_CONF_BITMASK_SET|NGX_SSL_TLSv1 + |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, "DEFAULT"); diff --git a/src/http/modules/ngx_http_ssl_module.c b/src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c +++ b/src/http/modules/ngx_http_ssl_module.c @@ -561,7 +561,7 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * prev->prefer_server_ciphers, 0); ngx_conf_merge_bitmask_value(conf->protocols, prev->protocols, - (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3|NGX_SSL_TLSv1 + (NGX_CONF_BITMASK_SET|NGX_SSL_TLSv1 |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); ngx_conf_merge_size_value(conf->buffer_size, prev->buffer_size, diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -1724,9 +1724,8 @@ ngx_http_uwsgi_merge_loc_conf(ngx_conf_t prev->upstream.ssl_session_reuse, 1); ngx_conf_merge_bitmask_value(conf->ssl_protocols, prev->ssl_protocols, - (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3 - |NGX_SSL_TLSv1|NGX_SSL_TLSv1_1 - |NGX_SSL_TLSv1_2)); + (NGX_CONF_BITMASK_SET|NGX_SSL_TLSv1 + 
|NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, "DEFAULT"); diff --git a/src/mail/ngx_mail_ssl_module.c b/src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c +++ b/src/mail/ngx_mail_ssl_module.c @@ -284,7 +284,7 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, prev->prefer_server_ciphers, 0); ngx_conf_merge_bitmask_value(conf->protocols, prev->protocols, - (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3|NGX_SSL_TLSv1 + (NGX_CONF_BITMASK_SET|NGX_SSL_TLSv1 |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); ngx_conf_merge_uint_value(conf->verify, prev->verify, 0); diff --git a/src/stream/ngx_stream_proxy_module.c b/src/stream/ngx_stream_proxy_module.c --- a/src/stream/ngx_stream_proxy_module.c +++ b/src/stream/ngx_stream_proxy_module.c @@ -1139,9 +1139,8 @@ ngx_stream_proxy_merge_srv_conf(ngx_conf prev->ssl_session_reuse, 1); ngx_conf_merge_bitmask_value(conf->ssl_protocols, prev->ssl_protocols, - (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3 - |NGX_SSL_TLSv1|NGX_SSL_TLSv1_1 - |NGX_SSL_TLSv1_2)); + (NGX_CONF_BITMASK_SET|NGX_SSL_TLSv1 + |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, "DEFAULT"); diff --git a/src/stream/ngx_stream_ssl_module.c b/src/stream/ngx_stream_ssl_module.c --- a/src/stream/ngx_stream_ssl_module.c +++ b/src/stream/ngx_stream_ssl_module.c @@ -211,7 +211,7 @@ ngx_stream_ssl_merge_conf(ngx_conf_t *cf prev->prefer_server_ciphers, 0); ngx_conf_merge_bitmask_value(conf->protocols, prev->protocols, - (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3|NGX_SSL_TLSv1 + (NGX_CONF_BITMASK_SET|NGX_SSL_TLSv1 |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); ngx_conf_merge_str_value(conf->certificate, prev->certificate, ""); From tim at bastelstu.be Mon May 25 22:59:11 2015 From: tim at bastelstu.be (=?UTF-8?Q?Tim_D=c3=bcsterhus?=) Date: Tue, 26 May 2015 00:59:11 +0200 Subject: Trac broken? Message-ID: <5563A93F.4070108@bastelstu.be> Hi I just wanted to access nginx' trac. 
It automatically redirects me to the TLS encrypted version at: https://trac.nginx.org/. Unfortunately both Firefox and Google Chrome are unable to establish a connection due to a cipher type mismatch. I asked a friend of mine, it does not work for him either. SSL Labs SSL tester fails with an Internal Error. Is Trac still in use? If not: Where to report a bug? Tim From tim at bastelstu.be Mon May 25 23:05:56 2015 From: tim at bastelstu.be (=?UTF-8?Q?Tim_D=c3=bcsterhus?=) Date: Tue, 26 May 2015 01:05:56 +0200 Subject: Trac broken? In-Reply-To: <5563A93F.4070108@bastelstu.be> References: <5563A93F.4070108@bastelstu.be> Message-ID: <5563AAD4.9070204@bastelstu.be> Hi On 26.05.2015 00:59, Tim Düsterhus wrote: > I just wanted to access nginx' trac. It automatically redirects me to > the TLS encrypted version at: https://trac.nginx.org/ It was HTTPS Everywhere's fault. Sorry for the noise! Tim From mdounin at mdounin.ru Tue May 26 14:00:30 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 May 2015 14:00:30 +0000 Subject: [nginx] nginx-1.9.1-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/884a967c369f branches: changeset: 6158:884a967c369f user: Maxim Dounin date: Tue May 26 16:49:50 2015 +0300 description: nginx-1.9.1-RELEASE diffstat: docs/xml/nginx/changes.xml | 74 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 74 insertions(+), 0 deletions(-) diffs (84 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,80 @@ + + + + +?????? ???????? SSLv3 ?? ????????? ????????. + + +now SSLv3 protocol is disabled by default. + + + + + +????????? ????? ?????????? ????????? ?????? ?? ??????????????. + + +some long deprecated directives are not supported anymore. + + + + + +???????? reuseport ????????? listen.
+??????? Sepherosa Ziehau ? Yingqi Lu. +
+ +the "reuseport" parameter of the "listen" directive.
+Thanks to Sepherosa Ziehau and Yingqi Lu. +
+
+ + + +?????????? $upstream_connect_time. + + +the $upstream_connect_time variable. + + + + + +? ????????? hash ?? big-endian ??????????. + + +in the "hash" directive on big-endian platforms. + + + + + +nginx ??? ?? ??????????? ?? ????????? ?????? ??????? Linux; +?????? ????????? ? 1.7.11. + + +nginx might fail to start on some old Linux variants; +the bug had appeared in 1.7.11. + + + + + +? ???????? IP-???????.
+??????? ?????? ???????. +
+ +in IP address parsing.
+Thanks to Sergey Polovko. +
+
+ +
+ + From mdounin at mdounin.ru Tue May 26 14:00:33 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 May 2015 14:00:33 +0000 Subject: [nginx] release-1.9.1 tag Message-ID: details: http://hg.nginx.org/nginx/rev/0a096e2e51fc branches: changeset: 6159:0a096e2e51fc user: Maxim Dounin date: Tue May 26 16:49:51 2015 +0300 description: release-1.9.1 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -383,3 +383,4 @@ 860cfbcc4606ee36d898a9cd0c5ae8858db984d6 2b3b737b5456c05cd63d3d834f4fb4d3776953d0 release-1.7.11 3ef00a71f56420a9c3e9cec311c9a2109a015d67 release-1.7.12 53d850fe292f157d2fb999c52788ec1dc53c91ed release-1.9.0 +884a967c369f73ab16ea859670d690fb094d3850 release-1.9.1 From wmark+nginx at hurrikane.de Wed May 27 16:43:01 2015 From: wmark+nginx at hurrikane.de (W-Mark Kubacki) Date: Wed, 27 May 2015 18:05:45 +0200 Subject: [RFC] event/openssl: Add dynamic record size support for serving ssl traffic In-Reply-To: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com> References: <34abb03e.16155.14d244dae6f.Coremail.gzchenym@126.com> Message-ID: 2015-05-05 15:39 GMT+02:00 chen : > > This is v1 of the patchset the implementing the feature SSL Dynamic Record > Sizing, inspiring by Google Front End [?] > > Any comments is welcome. Nice! I've implemented that for Golang in the past and have ported it to C for you today. Although a single initial packet might seem more attractive in benchmarks, I found that sending two results in better catching parts of HEAD, which is what we want. Then you will notice some dancing around IW4, by which we've already sent about 5683 octets. Enough for me to make a tradeoff here. 16k as ssl->buffer_size results in partially filled packets. A better default value could minimize the overhead (<0.5%) for those trailing PDUs. SSL libraries really should provide a function for computing overhead. 
-- Mark -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-1.9.1-SSL-dynamic-record-size-redux.patch Type: application/octet-stream Size: 7441 bytes Desc: not available URL: From carlos-eduardo-rodrigues at telecom.pt Wed May 27 17:05:45 2015 From: carlos-eduardo-rodrigues at telecom.pt (Carlos Eduardo Ferreira Rodrigues) Date: Wed, 27 May 2015 18:05:45 +0100 Subject: Strange status 500 with empty response Message-ID: Hi, Since upgrading to nginx 1.8.0, we started seeing some requests being logged with status 500 and a response of 0 bytes ($bytes_sent). Not many, only one for about 36000 requests. I've been trying to reliably reproduce this and/or figure out the cause without success, at least to know if this is an nginx bug or some module's. Any ideas or suggestions of how I can go about doing this? These requests use proxy_pass (with cache) and header_filter_by_lua, so I've been able to figure out some things for these failed requests: * They are always a cache hit; * Headers created by the Lua code are correctly logged with $sent_http_*; * Errors in Lua code result in status 500 with a non-empty response (our custom error page); Also, the error log shows nothing along with these errors. Thanks in advance, -- Carlos Rodrigues From greearb at candelatech.com Thu May 28 19:26:55 2015 From: greearb at candelatech.com (Ben Greear) Date: Thu, 28 May 2015 12:26:55 -0700 Subject: Kernel stall while testing high-speed HTTPS traffic. Message-ID: <55676BFF.9020800@candelatech.com> We are seeing problems with Nginx (mostly) locking up the server when running high loads of HTTPS traffic. In this scenario we had nginx configured to bind to eth3 but our ssh sessions on eth0 were frozen during this condition as well. The system restores itself after a few minutes (the load generation would have stopped after a minute or two of lockup, that may be what lets things recover). 
We tested different kernels (4.0.4+, 4.0.0+, 3.17.8+ with local patches, and stock 3.14.27-100.fc19.x86_64, all with the same results), different NICs (Intel 10G, Intel 40G), and Apache as web server. Apache can sustain about 10.8Gbps of HTTPS traffic and shows no instability/lockups. nginx maxes out at 2.2Gbps (until it locks up the machine). Some kernel splats indicated some files writing to the file system journal were blocked > 180 seconds, but they recover, so it is not a hard lock. The system should not be doing any heavy disk access since we have 32GB RAM. Swap shows no usage. === Scenario === Load testing box has a direct connection to eth3->eth3 over 10Gbps port. Curl clients using https, keepalive, requesting a 1MB file: 1000 clients @ 0.25 req/sec = 243 req/sec, 2.2Gbps tx, load 8.3 400 clients @ 0.65 req/sec = 260 req/sec, 2.2Gbps tx, load 9.2 === Environment === processor : 7 vendor_id : GenuineIntel cpu family : 6 model : 63 model name : Intel(R) Xeon(R) CPU E5-1630 v3 @ 3.70GHz > free total used free shared buffers cached Mem: 32840296 1394884 31445412 0 132792 632068 -/+ buffers/cache: 630024 32210272 Swap: 16457724 0 16457724 > cat /etc/issue Fedora release 19 (Schrödinger's Cat) Kernel \r on an \m (\l) # uname -a Linux e5-1630-v3-qc 3.14.27-100.fc19.x86_64 #1 SMP Wed Dec 17 19:36:34 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # /usr/local/lanforge/nginx/sbin/nginx -v nginx version: nginx/1.9.1 We are running a small patch to allow nginx to bind to a particular interface. We tried with this option disabled, and that causes the same trouble. 
The exact source is found below: https://github.com/greearb/nginx/commits/master We are compiling nginx with these options: ./configure --prefix=/usr/local/lanforge/nginx/ --with-http_ssl_module --with-ipv6 --without-http_rewrite_module === Nginx Config === worker_processes auto; worker_rlimit_nofile 100000; error_log logs/eth3_error.log; pid /home/lanforge/vr_conf/nginx_eth3.pid; events { use epoll; worker_connections 8096; multi_accept on; } http { include /usr/local/lanforge/nginx/conf/mime.types; default_type application/octet-stream; access_log off; sendfile on; directio 1m; disable_symlinks on; gzip off; tcp_nopush on; tcp_nodelay on; open_file_cache max=1000 inactive=10s; open_file_cache_valid 600s; open_file_cache_min_uses 2000; open_file_cache_errors off; etag off; server { listen 1.1.1.1:80 so_keepalive=on bind_dev=eth3; server_name nginx.local nginx web.local web; location / { root /var/www/html; index index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } server { listen 1.1.1.1:443 so_keepalive=on ssl bind_dev=eth3; server_name nginx.local nginx web.local web; ssl_certificate /usr/local/lanforge/apache.crt; ssl_certificate_key /usr/local/lanforge/apache.key; location / { root /var/www/html; index index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } Any help or suggestions is appreciated. Thanks, Ben -- Ben Greear Candela Technologies Inc http://www.candelatech.com From code at bluebot.org Thu May 28 21:39:08 2015 From: code at bluebot.org (Jon Nalley) Date: Thu, 28 May 2015 16:39:08 -0500 Subject: [PATCH] Adds $orig_remote_addr in realip module Message-ID: <4e59c130d468ee6757b5.1432849148@metis.bluebot> # HG changeset patch # User Jon Nalley # Date 1432848566 18000 # Thu May 28 16:29:26 2015 -0500 # Node ID 4e59c130d468ee6757b5ba97f912b6c72c3f7c0d # Parent 0a096e2e51fcbb536007d94bf3edfc308e214f56 Adds $orig_remote_addr in realip module. 
When the realip module sets $remote_addr, the connecting IP is no longer available for logging etc. This change preserves the connecting IP as $orig_remote_addr. diff -r 0a096e2e51fc -r 4e59c130d468 src/http/modules/ngx_http_realip_module.c --- a/src/http/modules/ngx_http_realip_module.c Tue May 26 16:49:51 2015 +0300 +++ b/src/http/modules/ngx_http_realip_module.c Thu May 28 16:29:26 2015 -0500 @@ -33,6 +33,10 @@ } ngx_http_realip_ctx_t; +static ngx_int_t + ngx_http_realip_orig_remote_addr_variable(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_realip_add_variables(ngx_conf_t *cf); static ngx_int_t ngx_http_realip_handler(ngx_http_request_t *r); static ngx_int_t ngx_http_realip_set_addr(ngx_http_request_t *r, ngx_addr_t *addr); @@ -75,7 +79,7 @@ static ngx_http_module_t ngx_http_realip_module_ctx = { - NULL, /* preconfiguration */ + ngx_http_realip_add_variables, /* preconfiguration */ ngx_http_realip_init, /* postconfiguration */ NULL, /* create main configuration */ @@ -105,6 +109,15 @@ }; +static ngx_http_variable_t ngx_http_realip_vars[] = { + + { ngx_string("orig_remote_addr"), NULL, + ngx_http_realip_orig_remote_addr_variable, 0, NGX_HTTP_VAR_NOHASH, 0 }, + + { ngx_null_string, NULL, NULL, 0, 0, 0 } +}; + + static ngx_int_t ngx_http_realip_handler(ngx_http_request_t *r) { @@ -369,6 +382,48 @@ } +static ngx_int_t +ngx_http_realip_orig_remote_addr_variable(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + ngx_http_realip_ctx_t *ctx; + + ctx = ngx_http_get_module_ctx(r, ngx_http_realip_module); + + if (ctx == NULL) { + v->not_found = 1; + return NGX_OK; + } + + v->len = ctx->addr_text.len; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = ctx->addr_text.data; + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_realip_add_variables(ngx_conf_t *cf) +{ + ngx_http_variable_t *var, *v; + + for (v = ngx_http_realip_vars; v->name.len; v++) { + var = 
ngx_http_add_variable(cf, &v->name, v->flags); + if (var == NULL) { + return NGX_ERROR; + } + + var->get_handler = v->get_handler; + var->data = v->data; + } + + return NGX_OK; +} + + static void * ngx_http_realip_create_loc_conf(ngx_conf_t *cf) { From ru at nginx.com Thu May 28 21:56:00 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 29 May 2015 00:56:00 +0300 Subject: [PATCH] Adds $orig_remote_addr in realip module In-Reply-To: <4e59c130d468ee6757b5.1432849148@metis.bluebot> References: <4e59c130d468ee6757b5.1432849148@metis.bluebot> Message-ID: <20150528215600.GC10130@lo0.su> On Thu, May 28, 2015 at 04:39:08PM -0500, Jon Nalley wrote: > # HG changeset patch > # User Jon Nalley > # Date 1432848566 18000 > # Thu May 28 16:29:26 2015 -0500 > # Node ID 4e59c130d468ee6757b5ba97f912b6c72c3f7c0d > # Parent 0a096e2e51fcbb536007d94bf3edfc308e214f56 > Adds $orig_remote_addr in realip module. > > When the realip module sets $remote_addr, the connecting IP is > no longer available for logging etc. This change preserves the > connecting IP as $orig_remote_addr. 
> > diff -r 0a096e2e51fc -r 4e59c130d468 src/http/modules/ngx_http_realip_module.c > --- a/src/http/modules/ngx_http_realip_module.c Tue May 26 16:49:51 2015 +0300 > +++ b/src/http/modules/ngx_http_realip_module.c Thu May 28 16:29:26 2015 -0500 > @@ -33,6 +33,10 @@ > } ngx_http_realip_ctx_t; > > > +static ngx_int_t > + ngx_http_realip_orig_remote_addr_variable(ngx_http_request_t *r, > + ngx_http_variable_value_t *v, uintptr_t data); > +static ngx_int_t ngx_http_realip_add_variables(ngx_conf_t *cf); > static ngx_int_t ngx_http_realip_handler(ngx_http_request_t *r); > static ngx_int_t ngx_http_realip_set_addr(ngx_http_request_t *r, > ngx_addr_t *addr); > @@ -75,7 +79,7 @@ > > > static ngx_http_module_t ngx_http_realip_module_ctx = { > - NULL, /* preconfiguration */ > + ngx_http_realip_add_variables, /* preconfiguration */ > ngx_http_realip_init, /* postconfiguration */ > > NULL, /* create main configuration */ > @@ -105,6 +109,15 @@ > }; > > > +static ngx_http_variable_t ngx_http_realip_vars[] = { > + > + { ngx_string("orig_remote_addr"), NULL, > + ngx_http_realip_orig_remote_addr_variable, 0, NGX_HTTP_VAR_NOHASH, 0 }, > + > + { ngx_null_string, NULL, NULL, 0, 0, 0 } > +}; > + > + > static ngx_int_t > ngx_http_realip_handler(ngx_http_request_t *r) > { > @@ -369,6 +382,48 @@ > } > > > +static ngx_int_t > +ngx_http_realip_orig_remote_addr_variable(ngx_http_request_t *r, > + ngx_http_variable_value_t *v, uintptr_t data) > +{ > + ngx_http_realip_ctx_t *ctx; > + > + ctx = ngx_http_get_module_ctx(r, ngx_http_realip_module); Contexts are lost with at least internal redirects and redirects to named locations, so this approach won't work, unfortunately. 
/* clear the modules contexts */ ngx_memzero(r->ctx, sizeof(void *) * ngx_http_max_module); > + > + if (ctx == NULL) { > + v->not_found = 1; > + return NGX_OK; > + } > + > + v->len = ctx->addr_text.len; > + v->valid = 1; > + v->no_cacheable = 0; > + v->not_found = 0; > + v->data = ctx->addr_text.data; > + > + return NGX_OK; > +} > + > + > +static ngx_int_t > +ngx_http_realip_add_variables(ngx_conf_t *cf) > +{ > + ngx_http_variable_t *var, *v; > + > + for (v = ngx_http_realip_vars; v->name.len; v++) { > + var = ngx_http_add_variable(cf, &v->name, v->flags); > + if (var == NULL) { > + return NGX_ERROR; > + } > + > + var->get_handler = v->get_handler; > + var->data = v->data; > + } > + > + return NGX_OK; > +} > + > + > static void * > ngx_http_realip_create_loc_conf(ngx_conf_t *cf) > { From greearb at candelatech.com Thu May 28 22:24:23 2015 From: greearb at candelatech.com (Ben Greear) Date: Thu, 28 May 2015 15:24:23 -0700 Subject: Kernel stall while testing high-speed HTTPS traffic. In-Reply-To: <55676BFF.9020800@candelatech.com> References: <55676BFF.9020800@candelatech.com> Message-ID: <55679597.3030305@candelatech.com> Some additional info was requested: [root at e5-1630-v3-qc lanforge]# openssl engine -tt (rdrand) Intel RDRAND engine [ available ] (dynamic) Dynamic engine loading support [ unavailable ] [root at e5-1630-v3-qc lanforge]# openssl version OpenSSL 1.0.1e-fips 11 Feb 2013 [root at e5-1630-v3-qc lanforge]# openssl speed -multi ^C # NOTE: My CPU supports AES-NI instructions...do I need to do anything # special to enable that with nginx, or should it be working by default? 
[root at e5-1630-v3-qc lanforge]# openssl speed -multi 4 rsa2048 ecdsap256 Forked child 0 Forked child 1 Forked child 2 Forked child 3 +DTP:2048:private:rsa:10 +DTP:2048:private:rsa:10 +DTP:2048:private:rsa:10 +DTP:2048:private:rsa:10 +R1:10253:2048:10.00 +DTP:2048:public:rsa:10 +R1:10345:2048:10.00 +DTP:2048:public:rsa:10 +R1:5385:2048:10.00 +DTP:2048:public:rsa:10 +R1:5387:2048:10.00 +DTP:2048:public:rsa:10 +R2:334855:2048:10.00 +R2:336207:2048:10.00 +DTP:256:sign:ecdsa:10 +DTP:256:sign:ecdsa:10 +R2:185283:2048:10.00 +R2:185265:2048:10.00 +DTP:256:sign:ecdsa:10 +DTP:256:sign:ecdsa:10 +R5:115623:256:10.00 +R5:116966:256:10.00 +DTP:256:verify:ecdsa:10 +DTP:256:verify:ecdsa:10 +R5:64033:256:10.00 +R5:64223:256:10.00 +DTP:256:verify:ecdsa:10 +DTP:256:verify:ecdsa:10 +R6:29783:256:10.00 +R6:30572:256:10.00 Got: +F2:2:2048:0.000967:0.000030 from 0 Got: +F4:3:256:0.000085:0.000327 from 0 +R6:15179:256:10.00 +R6:15196:256:10.00 Got: +F2:2:2048:0.001857:0.000054 from 1 Got: +F4:3:256:0.000156:0.000658 from 1 Got: +F2:2:2048:0.000975:0.000030 from 2 Got: +F4:3:256:0.000086:0.000336 from 2 Got: +F2:2:2048:0.001856:0.000054 from 3 Got: +F4:3:256:0.000156:0.000659 from 3 OpenSSL 1.0.1e-fips 11 Feb 2013 built on: Thu Oct 16 11:09:39 UTC 2014 options:bn(64,64) md2(int) rc4(16x,int) des(idx,cisc,16,int) aes(partial) idea(int) blowfish(idx) compiler: gcc -fPIC -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -DKRB5_MIT -m64 -DL_ENDIAN -DTERMIO -Wall -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -Wa,--noexecstack -DPURIFY -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM sign verify sign/s verify/s rsa 2048 bits 0.000319s 0.000010s 3137.1 103703.7 sign verify sign/s verify/s 256 bit ecdsa (nistp256) 0.0000s 
0.0001s 36213.1 9071.5 # NOTE on the below ldd info: the /home/lanforge/libssl.so.10 and libcrypto.so.10 are # just copies of the same files from /usr/lib64/ [root at e5-1630-v3-qc lanforge]# ldd /usr/local/lanforge/nginx/sbin/nginx linux-vdso.so.1 => (0x00007fff5d7fe000) libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003a9d800000) libcrypt.so.1 => /lib64/libcrypt.so.1 (0x0000003aae000000) libssl.so.10 => /home/lanforge/libssl.so.10 (0x00000033fe000000) libcrypto.so.10 => /home/lanforge/libcrypto.so.10 (0x00000033f8000000) libdl.so.2 => /lib64/libdl.so.2 (0x0000003a9d400000) libz.so.1 => /lib64/libz.so.1 (0x0000003a9dc00000) libc.so.6 => /lib64/libc.so.6 (0x0000003a9d000000) /lib64/ld-linux-x86-64.so.2 (0x0000003a9c800000) libfreebl3.so => /lib64/libfreebl3.so (0x0000003aac800000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x0000003aae400000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x0000003ab2400000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x0000003aadc00000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x0000003aae800000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x0000003ab1800000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x0000003aaf400000) libresolv.so.2 => /lib64/libresolv.so.2 (0x0000003a9f400000) libselinux.so.1 => /lib64/libselinux.so.1 (0x0000003a9e800000) libpcre.so.1 => /lib64/libpcre.so.1 (0x0000003a9e400000) [root at e5-1630-v3-qc lanforge]# lspci|grep -F Eth 02:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) 02:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) 03:00.0 Ethernet controller: Intel Corporation Ethernet Controller LX710 for 40GbE QSFP+ (rev 01) 03:00.1 Ethernet controller: Intel Corporation Ethernet Controller LX710 for 40GbE QSFP+ (rev 01) 07:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) 07:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) I 
am using in-kernel drivers, and I am quite sure it is not a NIC issue since this same system can sustain 10.8Gbps of HTTPS traffic served by Apache, and the 40G NICs can sustain 20+Gbps of UDP traffic. So, I skiped the NIC stats that were requested. If they really seem to be needed, I can gather that info. Thanks, Ben On 05/28/2015 12:26 PM, Ben Greear wrote: > We are seeing problems with Nginx (mostly)locking up the server when > running high loads of HTTPS traffic. > > This scenario we had nginx configured to > bind to eth3 but our ssh sessions on eth0 were frozen during this condition as well. > The system restores itself after a few minutes, (the load generation would > have stopped after a minute or two of lockup, that may be what lets things > recover). > > We tested different kernels (4.0.4+, 4.0.0+, 3.17.8+ with local patches, > and stock 3.14.27-100.fc19.x86_64, all with same results), different NICs (Intel 10G, Intel 40G), > and Apache as web server. > > Apache can sustain about 10.8Gbps of HTTPS traffic and shows no > instability/lockups. nginx maxes out at 2.2Gbps (until it locks up machine). > > Some kernel splats indicated some files writing to the file system > journal were blocked > 180 seconds, but they recover, so it is not > a hard lock. The system should not be doing any heavy disk access > since we have 32GB RAM. Swap shows no usage. > > === Scenario === > Load testing box has a direct connection to eth3->eth3 over 10Gbps port. 
> > Curl clients using https, keepalive, requesting a 1MB file: > 1000 clients @ 0.25 req/sec = 243 req/sec, 2.2Gbps tx, load 8.3 > 400 clients @ 0.65 req/sec = 260 req/sec, 2.2Gbps tx, load 9.2 > > > > === Environment === > processor : 7 > vendor_id : GenuineIntel > cpu family : 6 > model : 63 > model name : Intel(R) Xeon(R) CPU E5-1630 v3 @ 3.70GHz > >> free > total used free shared buffers cached > Mem: 32840296 1394884 31445412 0 132792 632068 > -/+ buffers/cache: 630024 32210272 > Swap: 16457724 0 16457724 > >> cat /etc/issue > Fedora release 19 (Schr?dinger?s Cat) > Kernel \r on an \m (\l) > > # uname -a > Linux e5-1630-v3-qc 3.14.27-100.fc19.x86_64 #1 SMP Wed Dec 17 19:36:34 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux > > # /usr/local/lanforge/nginx/sbin/nginx -v > nginx version: nginx/1.9.1 > > We are running small patch to allow nginx to bind to a particular interface. We > tried with this option disabled, and that causes the same trouble. The exact > source is found below: > > https://github.com/greearb/nginx/commits/master > > We are compiling nginx with these options: > > ./configure --prefix=/usr/local/lanforge/nginx/ --with-http_ssl_module --with-ipv6 --without-http_rewrite_module > > === Nginx Config === > > worker_processes auto; > worker_rlimit_nofile 100000; > error_log logs/eth3_error.log; > pid /home/lanforge/vr_conf/nginx_eth3.pid; > events { > use epoll; > worker_connections 8096; > multi_accept on; > } > http { > include /usr/local/lanforge/nginx/conf/mime.types; > default_type application/octet-stream; > access_log off; > sendfile on; > directio 1m; > disable_symlinks on; > gzip off; > tcp_nopush on; > tcp_nodelay on; > > open_file_cache max=1000 inactive=10s; > open_file_cache_valid 600s; > open_file_cache_min_uses 2000; > open_file_cache_errors off; > etag off; > > server { > listen 1.1.1.1:80 so_keepalive=on bind_dev=eth3; > server_name nginx.local nginx web.local web; > > location / { > root /var/www/html; > index index.html index.htm; > } > 
error_page 500 502 503 504 /50x.html; > location = /50x.html { > root html; > } > } > server { > listen 1.1.1.1:443 so_keepalive=on ssl bind_dev=eth3; > server_name nginx.local nginx web.local web; > ssl_certificate /usr/local/lanforge/apache.crt; > ssl_certificate_key /usr/local/lanforge/apache.key; > location / { > root /var/www/html; > index index.html index.htm; > } > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root html; > } > } > } > > > Any help or suggestions is appreciated. > > Thanks, > Ben > -- Ben Greear Candela Technologies Inc http://www.candelatech.com From linsu at feinno.com Fri May 29 03:37:13 2015 From: linsu at feinno.com (=?gb2312?B?wdba1Q==?=) Date: Fri, 29 May 2015 11:37:13 +0800 Subject: problems when use fastcgi_pass to deliver request to backend Message-ID: <35AFFAB0BF4E8B41BD040AD35CA32BEC4806B8A6D9@mailbox.feinno.com> Hi, I write a fastcgi server and use nginx to pass request to my server. It works till now. But I find a problem. Nginx always set requestId = 1 when sending fastcgi record. I was a little upset for this, cause according to fastcgi protocol, web server can send fastcgi records belonging to different request simultaneously, and requestIds are different and keep unique. I really need this feature, because requests can be handled simultaneously just over one connetion. Can I find a way out? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Fri May 29 06:27:37 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 29 May 2015 06:27:37 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/8edec63bd14d branches: changeset: 6160:8edec63bd14d user: Ruslan Ermilov date: Fri May 29 09:26:27 2015 +0300 description: Version bump. 
diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 0a096e2e51fc -r 8edec63bd14d src/core/nginx.h --- a/src/core/nginx.h Tue May 26 16:49:51 2015 +0300 +++ b/src/core/nginx.h Fri May 29 09:26:27 2015 +0300 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1009001 -#define NGINX_VERSION "1.9.1" +#define nginx_version 1009002 +#define NGINX_VERSION "1.9.2" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From ru at nginx.com Fri May 29 06:27:40 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 29 May 2015 06:27:40 +0000 Subject: [nginx] Fixed bullying style of comments. Message-ID: details: http://hg.nginx.org/nginx/rev/e034af368274 branches: changeset: 6161:e034af368274 user: Ruslan Ermilov date: Fri May 29 09:26:33 2015 +0300 description: Fixed bullying style of comments. diffstat: src/core/ngx_log.h | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diffs (39 lines): diff -r 8edec63bd14d -r e034af368274 src/core/ngx_log.h --- a/src/core/ngx_log.h Fri May 29 09:26:27 2015 +0300 +++ b/src/core/ngx_log.h Fri May 29 09:26:33 2015 +0300 @@ -111,7 +111,7 @@ void ngx_log_error_core(ngx_uint_t level /*********************************/ -#else /* NO VARIADIC MACROS */ +#else /* no variadic macros */ #define NGX_HAVE_VARIADIC_MACROS 0 @@ -123,7 +123,7 @@ void ngx_cdecl ngx_log_debug_core(ngx_lo const char *fmt, ...); -#endif /* VARIADIC MACROS */ +#endif /* variadic macros */ /*********************************/ @@ -166,7 +166,7 @@ void ngx_cdecl ngx_log_debug_core(ngx_lo arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8) -#else /* NO VARIADIC MACROS */ +#else /* no variadic macros */ #define ngx_log_debug0(level, log, err, fmt) \ if ((log)->log_level & level) \ @@ -211,7 +211,7 @@ void ngx_cdecl ngx_log_debug_core(ngx_lo #endif -#else /* NO NGX_DEBUG */ +#else /* !NGX_DEBUG */ #define ngx_log_debug0(level, log, err, fmt) #define ngx_log_debug1(level, log, err, fmt, arg1) From 
linsu at feinno.com Fri May 29 07:58:46 2015 From: linsu at feinno.com (=?gb2312?B?wdba1Q==?=) Date: Fri, 29 May 2015 15:58:46 +0800 Subject: =?UTF-8?Q?=E7=AD=94=E5=A4=8D=3A_problems_when_use_fastcgi=5Fpass_to_delive?= =?UTF-8?Q?r_request_to_backend?= Message-ID: <35AFFAB0BF4E8B41BD040AD35CA32BEC4806B8A6DD@mailbox.feinno.com> /* we support the single request per connection */ 2573 2574 case ngx_http_fastcgi_st_request_id_hi: 2575 if (ch != 0) { 2576 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, 2577 "upstream sent unexpected FastCGI " 2578 "request id high byte: %d", ch); 2579 return NGX_ERROR; 2580 } 2581 state = ngx_http_fastcgi_st_request_id_lo; 2582 break; 2583 2584 case ngx_http_fastcgi_st_request_id_lo: 2585 if (ch != 1) { 2586 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, 2587 "upstream sent unexpected FastCGI " 2588 "request id low byte: %d", ch); 2589 return NGX_ERROR; 2590 } 2591 state = ngx_http_fastcgi_st_content_length_hi; 2592 break; By reading source code, I saw the reason , so can nginx support multi request per connection in future? ???: ?? ????: 2015?5?29? 11:37 ???: 'nginx-devel at nginx.org' ??: problems when use fastcgi_pass to deliver request to backend Hi, I write a fastcgi server and use nginx to pass request to my server. It works till now. But I find a problem. Nginx always set requestId = 1 when sending fastcgi record. I was a little upset for this, cause according to fastcgi protocol, web server can send fastcgi records belonging to different request simultaneously, and requestIds are different and keep unique. I really need this feature, because requests can be handled simultaneously just over one connetion. Can I find a way out? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From serg.brester at sebres.de Fri May 29 08:40:05 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Fri, 29 May 2015 10:40:05 +0200 Subject: =?UTF-8?Q?Re=3A_=E7=AD=94=E5=A4=8D=3A_problems_when_use_fastcgi=5Fpass_to_?= =?UTF-8?Q?deliver_request_to_backend?= In-Reply-To: <35AFFAB0BF4E8B41BD040AD35CA32BEC4806B8A6DD@mailbox.feinno.com> References: <35AFFAB0BF4E8B41BD040AD35CA32BEC4806B8A6DD@mailbox.feinno.com> Message-ID: Hi, It's called fastcgi multiplexing and nginx currently does not implement that (and I don't know whether it will). There were already several discussions about that, so read here, please. [22] Short, very fast fastcgi processing may be implemented without multiplexing (should be event-driven also). Regards, sebres. On 29.05.2015 09:58, 林谢 wrote: > /* we support the single request per connection */ > > 2573 [2] > > 2574 [3] > > case ngx_http_fastcgi_st_request_id_hi: > > 2575 [4] > > if (ch != 0) { > > 2576 [5] > > ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > > 2577 [6] > > "upstream sent unexpected FastCGI " > > 2578 [7] > > "request id high byte: %d", ch); > > 2579 [8] > > return NGX_ERROR; > > 2580 [9] > > } > > 2581 [10] > > state = ngx_http_fastcgi_st_request_id_lo; > > 2582 [11] > > break; > > 2583 [12] > > 2584 [13] > > case ngx_http_fastcgi_st_request_id_lo: > > 2585 [14] > > if (ch != 1) { > > 2586 [15] > > ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > > 2587 [16] > > "upstream sent unexpected FastCGI " > > 2588 [17] > > "request id low byte: %d", ch); > > 2589 [18] > > return NGX_ERROR; > > 2590 [19] > > } > > 2591 [20] > > state = ngx_http_fastcgi_st_content_length_hi; > > 2592 [21] > > break; > > By reading the source code, I saw the reason, so can nginx support multiple requests per connection in the future? > > From: 林谢 > Sent: 2015-05-29 11:37 > To: 'nginx-devel at nginx.org' > Subject: problems when use fastcgi_pass to deliver request to backend > > Hi, > > I write a fastcgi server and use nginx to pass request to my server. 
It works till now. > > But I find a problem. Nginx always set requestId = 1 when sending fastcgi record. > > I was a little upset for this, cause according to fastcgi protocol, web server can send fastcgi records belonging to different request simultaneously, and requestIds are different and keep unique. I really need this feature, because requests can be handled simultaneously just over one connetion. > > Can I find a way out? > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel [1] Links: ------ [1] http://mailman.nginx.org/mailman/listinfo/nginx-devel [2] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2573 [3] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2574 [4] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2575 [5] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2576 [6] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2577 [7] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2578 [8] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2579 [9] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2580 [10] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2581 [11] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2582 [12] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2583 [13] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2584 [14] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2585 [15] 
http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2586 [16] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2587 [17] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2588 [18] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2589 [19] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2590 [20] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2591 [21] http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_fastcgi_module.c#L2592 [22] http://forum.nginx.org/read.php?2,237158 -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlos-eduardo-rodrigues at telecom.pt Fri May 29 10:21:28 2015 From: carlos-eduardo-rodrigues at telecom.pt (Carlos Eduardo Ferreira Rodrigues) Date: Fri, 29 May 2015 11:21:28 +0100 Subject: zero size buf in output In-Reply-To: References: Message-ID: Hi, Another issue I started seeing after moving to 1.8.0 is messages like these in the error log: 2015/05/29 09:29:40 [alert] 16116#0: *82677257 zero size buf in output t:0 r:0 f:0 00007FF33C18F6D0 00007FF33C18F6D0-00007FF33C19F2B6 0000000000000000 0-0 while sending to client [...] I've seen a mention of this as having been fixed a few versions ago, but it seems to have returned. Best regards, -- Carlos Rodrigues ________________________________________ From: nginx-devel-bounces at nginx.org [nginx-devel-bounces at nginx.org] Sent: Wednesday, May 27, 2015 18:05 To: nginx-devel at nginx.org Subject: Strange status 500 with empty response Hi, Since upgrading to nginx 1.8.0, we started seeing some requests being logged with status 500 and a response of 0 bytes ($bytes_sent). Not many, only one for about 36000 requests. 
I've been trying to reliably reproduce this and/or figure out the cause, without success, at least to know whether this is an nginx bug or some module's. Any ideas or suggestions on how I can go about doing this?

These requests use proxy_pass (with cache) and header_filter_by_lua, so I've been able to figure out some things about these failed requests:

* They are always a cache hit;
* Headers created by the Lua code are correctly logged with $sent_http_*;
* Errors in the Lua code result in status 500 with a non-empty response (our custom error page).

Also, the error log shows nothing along with these errors.

Thanks in advance,
--
Carlos Rodrigues

From linsu at feinno.com Fri May 29 10:48:30 2015
From: linsu at feinno.com (林谡)
Date: Fri, 29 May 2015 18:48:30 +0800
Subject: Re: Re: problems when use fastcgi_pass to deliver request to backend
In-Reply-To:
References: <35AFFAB0BF4E8B41BD040AD35CA32BEC4806B8A6DD@mailbox.feinno.com>
Message-ID: <35AFFAB0BF4E8B41BD040AD35CA32BEC4806B8A6DF@mailbox.feinno.com>

Thanks for the reply. I had read all the discussions you suggested.

The main argument there is that multiplexing seems useless when using the keep-alive feature and the backend is fast enough. That's true! But the real world is more sophisticated.

Our system is very big: over 5k machines are providing services. In our system, nginx proxies HTTP requests to HTTP applications using keep-alive, and it works well: over 10k requests are processed per second, and the number of TCP connections between nginx and the backends stays below 100.

But sometimes response times grow to 1-10s or more for a while, perhaps because a DB server fails over or the network degrades. Then, as we have seen, over 10k TCP connections need to be set up.
Our backend is written in Java; connections cannot all be set up at once, the memory needed is large, and GC becomes the bottleneck. GC keeps running even after the DB server or the network returns to normal, and the backend no longer works properly. I have observed this several times.

With multiplexing, no additional connections would be needed, and far less memory would be required under such circumstances. We use multiplexing everywhere in our Java applications, which supports this point.

nginx is certainly needed for client HTTP access, so I studied FastCGI to solve the above problem, but nginx does not support FastCGI multiplexing, so the same problem can be triggered.

In conclusion, a big production system really needs nginx to pass requests to the backend with multiplexing. Can you get the nginx development team to work on it?

From: Sergey Brester [mailto:serg.brester at sebres.de]
Sent: May 29, 2015, 16:40
To: nginx-devel at nginx.org
Cc: 林谡
Subject: Re: Re: problems when use fastcgi_pass to deliver request to backend

Hi,

It's called FastCGI multiplexing, and nginx currently does not implement it. There were already several discussions about that, so please read here.

In short, very fast FastCGI processing may be implemented without multiplexing (it should also be event-driven).

Regards,
sebres.
On 29.05.2015 09:58, 林谡 wrote:

          /* we support the single request per connection */
2573
2574      case ngx_http_fastcgi_st_request_id_hi:
2575          if (ch != 0) {
2576              ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
2577                            "upstream sent unexpected FastCGI "
2578                            "request id high byte: %d", ch);
2579              return NGX_ERROR;
2580          }
2581          state = ngx_http_fastcgi_st_request_id_lo;
2582          break;
2583
2584      case ngx_http_fastcgi_st_request_id_lo:
2585          if (ch != 1) {
2586              ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
2587                            "upstream sent unexpected FastCGI "
2588                            "request id low byte: %d", ch);
2589              return NGX_ERROR;
2590          }
2591          state = ngx_http_fastcgi_st_content_length_hi;
2592          break;

By reading the source code, I saw the reason. So, can nginx support multiple requests per connection in the future?

From: 林谡
Sent: May 29, 2015, 11:37
To: 'nginx-devel at nginx.org'
Subject: problems when use fastcgi_pass to deliver request to backend

Hi,

I wrote a FastCGI server and use nginx to pass requests to it. It works till now.

But I find a problem: nginx always sets requestId = 1 when sending FastCGI records.

I was a little upset by this, because according to the FastCGI protocol a web server can send FastCGI records belonging to different requests simultaneously, with requestIds that are distinct and unique. I really need this feature, because requests could then be handled simultaneously over a single connection.

Can I find a way out?

_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From code at bluebot.org Sun May 31 01:45:57 2015
From: code at bluebot.org (Jon Nalley)
Date: Sat, 30 May 2015 20:45:57 -0500
Subject: [PATCH] Adds $orig_remote_addr in realip module
In-Reply-To: <20150528215600.GC10130@lo0.su>
References: <4e59c130d468ee6757b5.1432849148@metis.bluebot> <20150528215600.GC10130@lo0.su>
Message-ID:

Thanks, I didn't realize the contexts were cleared.

It should be OK to copy the context data, correct? e.g.:

    v->data = ngx_pnalloc(r->pool, ctx->addr_text.len);
    ngx_memcpy(v->data, ctx->addr_text.data, ctx->addr_text.len);

Or is it unsafe to reference the context at all in the handler?