From sepherosa at gmail.com Fri Aug 2 05:16:53 2013
From: sepherosa at gmail.com (Sepherosa Ziehau)
Date: Fri, 2 Aug 2013 13:16:53 +0800
Subject: [PATCH] SO_REUSEPORT support for listen sockets (round 3)
Message-ID: 

Hi all,

Here is another round of SO_REUSEPORT support. The plot is changed a little bit to allow smooth configuration reloading and binary upgrading. Here is what happens when so_reuseport is enabled (this does not affect the single-process model):

- The master creates the listen sockets w/ SO_REUSEPORT, but does not configure them.
- The first worker process inherits the listen sockets created by the master and configures them.
- After the master has forked the first worker process, all listen sockets are closed.
- The rest of the workers create their own listen sockets w/ SO_REUSEPORT.
- During binary upgrade, listen sockets are no longer passed through environment variables, since the new master will create its own listen sockets. Well, the old master actually does not have any listen sockets opened :).

The idea behind this plot is that at any given time there is always one listen socket left, which could inherit the syncaches and pending sockets on the to-be-closed listen sockets. The inheritance itself is handled by the kernel; I implemented this inheritance for DragonFlyBSD recently (http://gitweb.dragonflybsd.org/dragonfly.git/commit/02ad2f0b874fb0a45eb69750219f79f5e8982272). I am not tracking Linux's code, but I think the Linux side will eventually get (or already got) the proper fix.

The patch itself:
http://leaf.dragonflybsd.org/~sephe/ngx_soreuseport3.diff

Configuration reloading and binary upgrading will not be interfered with as w/ the first 2 patches. Binary upgrade reverting method 1 ("Send the HUP signal to the old master process. ...") will not be interfered with as w/ the first 2 patches. There still could be some glitch (but not as bad as w/ the first 2 patches) if binary upgrade reverting method 2 ("Send the TERM signal to the new master process. ...") is used.
I think we probably just need to mention that in the document.

Best Regards,
sephe

--
Tomorrow Will Never Die

From jzefip at gmail.com Fri Aug 2 16:44:48 2013
From: jzefip at gmail.com (Julien Zefi)
Date: Fri, 2 Aug 2013 10:44:48 -0600
Subject: Looking for developer to fix a NginX test case module
In-Reply-To: 
References: <20130726133136.GP90722@mdounin.ru> <20130730093454.GB2130@mdounin.ru>
Message-ID: 

On Wed, Jul 31, 2013 at 3:33 PM, Julien Zefi wrote:
>
> On Tue, Jul 30, 2013 at 3:34 AM, Maxim Dounin wrote:
>
>> Hello!
>>
>> On Mon, Jul 29, 2013 at 07:07:10PM -0600, Julien Zefi wrote:
>>
>> > hi Maxim,
>> >
>> > thanks so much for the code provided, i have merged that code in my
>> module
>> > and it worked as expected!. Would you please send me the details to send
>> > you the money ?
>>
>> Please use donations form here:
>>
>> http://nginx.org/en/donation.html
>>
>
> thanks, i will be transferring the money this Friday.
>

the donation has been sent, please confirm if it was received. thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From manowar at gsc-game.kiev.ua Sun Aug 4 21:15:04 2013
From: manowar at gsc-game.kiev.ua (Serguei I. Ivantsov)
Date: Mon, 5 Aug 2013 00:15:04 +0300
Subject: Dead code in accept
Message-ID: 

Hi,

While researching nginx, I found that there is a lot of dead code, mostly in ngx_event_accept.c. One of those blocks:

    if (ngx_event_flags & NGX_USE_RTSIG_EVENT) {
        ev->available = 1;

    } else if (!(ngx_event_flags & NGX_USE_KQUEUE_EVENT)) {
        ev->available = ecf->multi_accept;
    }

I have EPOLL and have neither RTSIG nor KQUEUE on my Linux, but these conditionals are executed on every accept. This wastes CPU cycles and can cause branch mispredictions. I see two ways to address this.

1. nginx-style - wrap these code blocks with pre-processor #if/#endif the same way it is done in other parts of nginx code.
Resulting code will look like this:

#if (NGX_HAVE_RTSIG)
    if (ngx_event_flags & NGX_USE_RTSIG_EVENT) {
        ev->available = 1;

    } else
#endif
#if (NGX_HAVE_KQUEUE)
    if (!(ngx_event_flags & NGX_USE_KQUEUE_EVENT)) {
#endif
        ev->available = ecf->multi_accept;
#if (NGX_HAVE_KQUEUE)
    }
#endif

Yep, some kind of spaghetti.

2. A little "hack". In ngx_event.h we can conditionally define the NGX_USE_*_EVENT constants to zero if we have no support for the specific event module. Thus, we do not need to touch the code, and an optimizing compiler will remove these code blocks, because the condition expression is constant and available at compile time.

A little test with a high volume of simple requests shows a 0.5% overall speed improvement.

I have patches for both, so I just need to know which approach is better in terms of nginx ideology.

From mdounin at mdounin.ru Sun Aug 4 22:18:31 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 5 Aug 2013 02:18:31 +0400
Subject: Dead code in accept
In-Reply-To: 
References: 
Message-ID: <20130804221831.GQ2130@mdounin.ru>

Hello!

On Mon, Aug 05, 2013 at 12:15:04AM +0300, Serguei I. Ivantsov wrote:

[...]

> A little test with high volume of simple requests shows 0.5% overall
> speed
> improvement.

Are you sure the numbers are significant? Doing a ministat(1)
analysis or similar is a good idea.

http://www.freebsd.org/cgi/man.cgi?query=ministat&sektion=1

--
Maxim Dounin
http://nginx.org/en/donation.html

From ru at nginx.com Mon Aug 5 06:56:18 2013
From: ru at nginx.com (Ruslan Ermilov)
Date: Mon, 05 Aug 2013 06:56:18 +0000
Subject: [nginx] Core: only resolve address families configured on the lo...
Message-ID: 

details: http://hg.nginx.org/nginx/rev/ec8594b9bf11
branches:
changeset: 5312:ec8594b9bf11
user: Ruslan Ermilov
date: Mon Aug 05 10:55:59 2013 +0400
description:
Core: only resolve address families configured on the local system.

This is done by passing AI_ADDRCONFIG to getaddrinfo(). On Linux,
setting net.ipv6.conf.all.disable_ipv6 to 1 will now be respected.
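The AI_ADDRCONFIG behavior discussed in this thread can be checked with a small standalone sketch (not nginx code; the #ifdef guard assumes, as later messages point out, that not every libc defines AI_ADDRCONFIG). On a host with no IPv6 addresses configured, AF_INET6 entries drop out of the results:

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

/* Resolve a host for SOCK_STREAM, printing each returned address family.
 * Returns the number of results, or -1 on failure. */
static int resolve_stream_families(const char *host)
{
    struct addrinfo hints, *res, *ai;
    int n = 0;

    memset(&hints, 0, sizeof(hints));   /* the hints struct must be zeroed */
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
#ifdef AI_ADDRCONFIG
    hints.ai_flags = AI_ADDRCONFIG;     /* only families configured locally */
#endif

    if (getaddrinfo(host, NULL, &hints, &res) != 0) {
        return -1;
    }

    for (ai = res; ai != NULL; ai = ai->ai_next) {
        printf("family: %s\n", ai->ai_family == AF_INET6 ? "AF_INET6"
                             : ai->ai_family == AF_INET  ? "AF_INET"
                                                         : "other");
        n++;
    }

    freeaddrinfo(res);
    return n;
}

int main(void)
{
    return resolve_stream_families("localhost") > 0 ? 0 : 1;
}
```

Toggling net.ipv6.conf.all.disable_ipv6 on a Linux box and rerunning this program is a quick way to see the filtering the changeset relies on.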
On FreeBSD, AI_ADDRCONFIG filtering is currently implemented by attempting to create a datagram socket for the corresponding family, which succeeds even if the system doesn't in fact have any addresses of that family configured. That is, if the system with IPv6 support in the kernel doesn't have IPv6 addresses configured, AI_ADDRCONFIG will filter out IPv6 only inside a jail without IPv6 addresses or with IPv6 disabled. diffstat: auto/unix | 8 ++++++-- src/core/ngx_inet.c | 1 + 2 files changed, 7 insertions(+), 2 deletions(-) diffs (28 lines): diff -r ae3fd1ca62e0 -r ec8594b9bf11 auto/unix --- a/auto/unix Wed Jul 31 23:40:46 2013 +0400 +++ b/auto/unix Mon Aug 05 10:55:59 2013 +0400 @@ -788,7 +788,11 @@ ngx_feature_incs="#include #include " ngx_feature_path= ngx_feature_libs= -ngx_feature_test='struct addrinfo *res; - if (getaddrinfo("localhost", NULL, NULL, &res) != 0) return 1; +ngx_feature_test='struct addrinfo hints, *res; + hints.ai_family = AF_UNSPEC; + hints.ai_socktype = SOCK_STREAM; + hints.ai_flags = AI_ADDRCONFIG; + if (getaddrinfo("localhost", NULL, &hints, &res) != 0) + return 1; freeaddrinfo(res)' . auto/feature diff -r ae3fd1ca62e0 -r ec8594b9bf11 src/core/ngx_inet.c --- a/src/core/ngx_inet.c Wed Jul 31 23:40:46 2013 +0400 +++ b/src/core/ngx_inet.c Mon Aug 05 10:55:59 2013 +0400 @@ -963,6 +963,7 @@ ngx_inet_resolve_host(ngx_pool_t *pool, ngx_memzero(&hints, sizeof(struct addrinfo)); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_STREAM; + hints.ai_flags = AI_ADDRCONFIG; if (getaddrinfo((char *) host, NULL, &hints, &res) != 0) { u->err = "host not found"; From manowar at gsc-game.kiev.ua Mon Aug 5 06:58:01 2013 From: manowar at gsc-game.kiev.ua (Serguei I. 
Ivantsov)
Date: Mon, 5 Aug 2013 09:58:01 +0300
Subject: Dead code in accept
In-Reply-To: <20130804221831.GQ2130@mdounin.ru>
References: <20130804221831.GQ2130@mdounin.ru>
Message-ID: <9ab22c740d9fad5e5b5a0d516035c4f9.squirrel@webmail.gsc-game.kiev.ua>

I think it is not a matter of trade-offs - "dead code" is bad programming technique, and it should be eliminated in a fast web server. Of course, the general impact on overall performance is not significant, but at the function level it will be much more noticeable. I can make a perf profiling test to get exact counters.

> Hello!
>
> On Mon, Aug 05, 2013 at 12:15:04AM +0300, Serguei I. Ivantsov wrote:
>
> [...]
>
>> A little test with high volume of simple requests shows 0.5% overall
>> speed
>> improvement.
>
> Are you sure the numbers are significant? Doing a ministat(1)
> analysis or similar is a good idea.
>
> http://www.freebsd.org/cgi/man.cgi?query=ministat&sektion=1
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>

From info at tvdw.eu Mon Aug 5 07:15:19 2013
From: info at tvdw.eu (Tom van der Woerdt)
Date: Mon, 5 Aug 2013 09:15:19 +0200
Subject: Dead code in accept
In-Reply-To: <9ab22c740d9fad5e5b5a0d516035c4f9.squirrel@webmail.gsc-game.kiev.ua>
References: <20130804221831.GQ2130@mdounin.ru> <9ab22c740d9fad5e5b5a0d516035c4f9.squirrel@webmail.gsc-game.kiev.ua>
Message-ID: <8E18D2FB-AAB2-439C-A824-448CA7F2F83D@tvdw.eu>

Frankly, I don't see the need. We're talking about eliminating 4 or 5 CPU instructions per accept() call, which only seems relevant if that's the only thing nginx does; and with HTTP pipelining this optimization is very insignificant, as pipelining already reduces the number of accept() calls a lot. Also, you're talking about adding extra code which might add bugs (in extensions or the main code), and that's probably not worth it.

About your benchmark: I assume you turned off keepalive and served empty pages? Your 0.5% improvement will be more like 0.05% or less in reality.
(I'm not a nginx developer though, so this post is only intended as advice) Tom > On 5 aug. 2013, at 08:58, "Serguei I. Ivantsov" wrote: > > I think it is not a matter for trade - "dead code" - is bad programming > technique. And it should be eliminated in a fast web server. > Of course, general impact on overall performance is not significant, but > on function level it will be much noticeable. I can make a perf profiling > test to get exact counters. > >> Hello! >> >> On Mon, Aug 05, 2013 at 12:15:04AM +0300, Serguei I. Ivantsov wrote: >> >> [...] >> >>> A little test with high volume of simple requests shows 0.5% overall >>> speed >>> improvement. >> >> Are you sure the numbers are significant? Doing a ministat(1) >> analysis or similar is a good idea. >> >> http://www.freebsd.org/cgi/man.cgi?query=ministat&sektion=1 >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From manowar at gsc-game.kiev.ua Mon Aug 5 07:23:58 2013 From: manowar at gsc-game.kiev.ua (Serguei I. Ivantsov) Date: Mon, 5 Aug 2013 10:23:58 +0300 Subject: Dead code in accept In-Reply-To: <8E18D2FB-AAB2-439C-A824-448CA7F2F83D@tvdw.eu> References: <20130804221831.GQ2130@mdounin.ru> <9ab22c740d9fad5e5b5a0d516035c4f9.squirrel@webmail.gsc-game.kiev.ua> <8E18D2FB-AAB2-439C-A824-448CA7F2F83D@tvdw.eu> Message-ID: <0ba998775db36f4b1f6bb922837ea129.squirrel@webmail.gsc-game.kiev.ua> 1. Amount of instructions is somewhat bigger - about 500 bytes. 2. In high load you have thousands of accept()s per second. 3. No, I do not talk about adding some code, conversely - actually I want remove dead code! :) > Frankly, I don't see the need. 
We're talking about eliminating 4 or 5 CPU > instructions per accept() call which only seems relevant if that's the > only thing nginx does: but with HTTP pipelining this optimization is very > insignificant as that already reduces the amount of accept() calls a lot. > Also, you're talking about adding extra code which might add bugs (in > extensions or the main code) and that's probably not worth it. > > About your benchmark: I assume you turned off keepalive and serve empty > pages? Your 0.5% improvement will be more like 0.05% or less in reality. > > (I'm not a nginx developer though, so this post is only intended as > advice) > > Tom > > >> On 5 aug. 2013, at 08:58, "Serguei I. Ivantsov" >> wrote: >> >> I think it is not a matter for trade - "dead code" - is bad programming >> technique. And it should be eliminated in a fast web server. >> Of course, general impact on overall performance is not significant, but >> on function level it will be much noticeable. I can make a perf >> profiling >> test to get exact counters. >> >>> Hello! >>> >>> On Mon, Aug 05, 2013 at 12:15:04AM +0300, Serguei I. Ivantsov wrote: >>> >>> [...] >>> >>>> A little test with high volume of simple requests shows 0.5% overall >>>> speed >>>> improvement. >>> >>> Are you sure the numbers are significant? Doing a ministat(1) >>> analysis or similar is a good idea. 
>>> >>> http://www.freebsd.org/cgi/man.cgi?query=ministat&sektion=1 >>> >>> -- >>> Maxim Dounin >>> http://nginx.org/en/donation.html >> >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > From maxim at nginx.com Mon Aug 5 07:28:28 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 05 Aug 2013 11:28:28 +0400 Subject: Looking for developer to fix a NginX test case module In-Reply-To: References: <20130726133136.GP90722@mdounin.ru> <20130730093454.GB2130@mdounin.ru> Message-ID: <51FF541C.8070800@nginx.com> On 8/2/13 8:44 PM, Julien Zefi wrote: > On Wed, Jul 31, 2013 at 3:33 PM, Julien Zefi > wrote: > > > On Tue, Jul 30, 2013 at 3:34 AM, Maxim Dounin > > wrote: > > Hello! > > On Mon, Jul 29, 2013 at 07:07:10PM -0600, Julien Zefi wrote: > > > hi Maxim, > > > > thanks so much for the code provided, i have merged that > code in my module > > and it worked as expected!. Would you please send me the > details to send > > you the money ? > > Please use donations form here: > > http://nginx.org/en/donation.html > > > thanks, i will be transferring the money this Friday. > > > the donation have been sent, please confirm if it was received. > We indeed received a donation but a name of the donor is different. Please check the donors list here http://nginx.org/en/donation.html -- Maxim Konovalov http://nginx.com From mdounin at mdounin.ru Mon Aug 5 07:42:15 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 05 Aug 2013 07:42:15 +0000 Subject: [nginx] Fixed build with signed socklen_t and unix sockets. Message-ID: details: http://hg.nginx.org/nginx/rev/1fe5f7fb6ced branches: changeset: 5313:1fe5f7fb6ced user: Maxim Dounin date: Mon Aug 05 11:40:33 2013 +0400 description: Fixed build with signed socklen_t and unix sockets. This seems to be the case at least under Cygwin, where build was broken by 05ba5bce31e0 (1.5.3). 
Reported by Kevin Worthington, http://mailman.nginx.org/pipermail/nginx/2013-August/040028.html. diffstat: src/core/ngx_inet.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/src/core/ngx_inet.c b/src/core/ngx_inet.c --- a/src/core/ngx_inet.c +++ b/src/core/ngx_inet.c @@ -233,7 +233,7 @@ ngx_sock_ntop(struct sockaddr *sa, sockl /* on Linux sockaddr might not include sun_path at all */ - if (socklen <= offsetof(struct sockaddr_un, sun_path)) { + if (socklen <= (socklen_t) offsetof(struct sockaddr_un, sun_path)) { p = ngx_snprintf(text, len, "unix:%Z"); } else { From piotr at cloudflare.com Mon Aug 5 07:45:11 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 5 Aug 2013 00:45:11 -0700 Subject: [PATCH] Core: guard use of AI_ADDRCONFIG Message-ID: # HG changeset patch # User Piotr Sikora # Date 1375688648 0 # Mon Aug 05 07:44:08 2013 +0000 # Node ID c9e0a2f54810335ba91b86fdb92ef63571680dae # Parent ec8594b9bf11de3599af15de8e73e41bf7a8b42c Core: guard use of AI_ADDRCONFIG. AI_ADDRCONFIG is not available on all operating systems (e.g. OpenBSD) and using it without a guard results in dropped getaddrinfo() support. 
Signed-off-by: Piotr Sikora diff -r ec8594b9bf11 -r c9e0a2f54810 auto/unix --- a/auto/unix Mon Aug 05 10:55:59 2013 +0400 +++ b/auto/unix Mon Aug 05 07:44:08 2013 +0000 @@ -791,7 +791,9 @@ ngx_feature_libs= ngx_feature_test='struct addrinfo hints, *res; hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_STREAM; + #ifdef AI_ADDRCONFIG hints.ai_flags = AI_ADDRCONFIG; + #endif if (getaddrinfo("localhost", NULL, &hints, &res) != 0) return 1; freeaddrinfo(res)' diff -r ec8594b9bf11 -r c9e0a2f54810 src/core/ngx_inet.c --- a/src/core/ngx_inet.c Mon Aug 05 10:55:59 2013 +0400 +++ b/src/core/ngx_inet.c Mon Aug 05 07:44:08 2013 +0000 @@ -963,7 +963,9 @@ ngx_inet_resolve_host(ngx_pool_t *pool, ngx_memzero(&hints, sizeof(struct addrinfo)); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_STREAM; +#ifdef AI_ADDRCONFIG hints.ai_flags = AI_ADDRCONFIG; +#endif if (getaddrinfo((char *) host, NULL, &hints, &res) != 0) { u->err = "host not found"; From mat999 at gmail.com Mon Aug 5 09:03:07 2013 From: mat999 at gmail.com (SplitIce) Date: Mon, 5 Aug 2013 18:33:07 +0930 Subject: Dead code in accept In-Reply-To: <9ab22c740d9fad5e5b5a0d516035c4f9.squirrel@webmail.gsc-game.kiev.ua> References: <20130804221831.GQ2130@mdounin.ru> <9ab22c740d9fad5e5b5a0d516035c4f9.squirrel@webmail.gsc-game.kiev.ua> Message-ID: I agree, I view dead code at run-time as a liability. Something that can go wrong. At compile time, you know everything about the sytem, the config script makes assumptions and the binaries shouldnt work if moved to a different OS etc anyway. I am not a nginx developer just a C++ developer. On Mon, Aug 5, 2013 at 4:28 PM, Serguei I. Ivantsov < manowar at gsc-game.kiev.ua> wrote: > I think it is not a matter for trade - "dead code" - is bad programming > technique. And it should be eliminated in a fast web server. > Of course, general impact on overall performance is not significant, but > on function level it will be much noticeable. 
I can make a perf profiling > test to get exact counters. > > > Hello! > > > > On Mon, Aug 05, 2013 at 12:15:04AM +0300, Serguei I. Ivantsov wrote: > > > > [...] > > > >> A little test with high volume of simple requests shows 0.5% overall > >> speed > >> improvement. > > > > Are you sure the numbers are significant? Doing a ministat(1) > > analysis or similar is a good idea. > > > > http://www.freebsd.org/cgi/man.cgi?query=ministat&sektion=1 > > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Mon Aug 5 09:28:48 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 5 Aug 2013 13:28:48 +0400 Subject: [PATCH] Core: guard use of AI_ADDRCONFIG In-Reply-To: References: Message-ID: <20130805092848.GK3866@lo0.su> On Mon, Aug 05, 2013 at 12:45:11AM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1375688648 0 > # Mon Aug 05 07:44:08 2013 +0000 > # Node ID c9e0a2f54810335ba91b86fdb92ef63571680dae > # Parent ec8594b9bf11de3599af15de8e73e41bf7a8b42c > Core: guard use of AI_ADDRCONFIG. > > AI_ADDRCONFIG is not available on all operating systems (e.g. OpenBSD) > and using it without a guard results in dropped getaddrinfo() support. > > Signed-off-by: Piotr Sikora Thanks. I suggest a different patch instead: # HG changeset patch # User Ruslan Ermilov # Date 1375694198 -14400 # Mon Aug 05 13:16:38 2013 +0400 # Node ID 294cead2bb846e1f6cf1469af14c9221adac74d3 # Parent ec8594b9bf11de3599af15de8e73e41bf7a8b42c Core: guard use of AI_ADDRCONFIG. Some systems (notably NetBSD and OpenBSD) lack AI_ADDRCONFIG support. Reported by Piotr Sikora. 
diff --git a/auto/unix b/auto/unix --- a/auto/unix +++ b/auto/unix @@ -788,11 +788,7 @@ ngx_feature_incs="#include #include " ngx_feature_path= ngx_feature_libs= -ngx_feature_test='struct addrinfo hints, *res; - hints.ai_family = AF_UNSPEC; - hints.ai_socktype = SOCK_STREAM; - hints.ai_flags = AI_ADDRCONFIG; - if (getaddrinfo("localhost", NULL, &hints, &res) != 0) - return 1; +ngx_feature_test='struct addrinfo *res; + if (getaddrinfo("localhost", NULL, NULL, &res) != 0) return 1; freeaddrinfo(res)' . auto/feature diff --git a/src/core/ngx_inet.c b/src/core/ngx_inet.c --- a/src/core/ngx_inet.c +++ b/src/core/ngx_inet.c @@ -963,7 +963,9 @@ ngx_inet_resolve_host(ngx_pool_t *pool, ngx_memzero(&hints, sizeof(struct addrinfo)); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_STREAM; +#ifdef AI_ADDRCONFIG hints.ai_flags = AI_ADDRCONFIG; +#endif if (getaddrinfo((char *) host, NULL, &hints, &res) != 0) { u->err = "host not found"; From piotr at cloudflare.com Mon Aug 5 09:38:06 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 5 Aug 2013 02:38:06 -0700 Subject: [PATCH] Core: guard use of AI_ADDRCONFIG In-Reply-To: <20130805092848.GK3866@lo0.su> References: <20130805092848.GK3866@lo0.su> Message-ID: Hey Ruslan, > Thanks. I suggest a different patch instead: > (...) > diff --git a/auto/unix b/auto/unix > --- a/auto/unix > +++ b/auto/unix > @@ -788,11 +788,7 @@ ngx_feature_incs="#include > #include " > ngx_feature_path= > ngx_feature_libs= > -ngx_feature_test='struct addrinfo hints, *res; > - hints.ai_family = AF_UNSPEC; > - hints.ai_socktype = SOCK_STREAM; > - hints.ai_flags = AI_ADDRCONFIG; > - if (getaddrinfo("localhost", NULL, &hints, &res) != 0) > - return 1; > +ngx_feature_test='struct addrinfo *res; > + if (getaddrinfo("localhost", NULL, NULL, &res) != 0) return 1; > freeaddrinfo(res)' > . 
auto/feature I was just about to send an updated patch because I've noticed that hints struct wasn't zero'ed in the feature test, but reverting those changes altogether looks like a better solution... OK from me. Best regards, Piotr Sikora From ru at nginx.com Mon Aug 5 09:46:38 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 05 Aug 2013 09:46:38 +0000 Subject: [nginx] Core: guard use of AI_ADDRCONFIG. Message-ID: details: http://hg.nginx.org/nginx/rev/0300d97c6084 branches: changeset: 5314:0300d97c6084 user: Ruslan Ermilov date: Mon Aug 05 13:44:56 2013 +0400 description: Core: guard use of AI_ADDRCONFIG. Some systems (notably NetBSD and OpenBSD) lack AI_ADDRCONFIG support. Reported by Piotr Sikora. diffstat: auto/unix | 8 ++------ src/core/ngx_inet.c | 2 ++ 2 files changed, 4 insertions(+), 6 deletions(-) diffs (30 lines): diff -r 1fe5f7fb6ced -r 0300d97c6084 auto/unix --- a/auto/unix Mon Aug 05 11:40:33 2013 +0400 +++ b/auto/unix Mon Aug 05 13:44:56 2013 +0400 @@ -788,11 +788,7 @@ ngx_feature_incs="#include #include " ngx_feature_path= ngx_feature_libs= -ngx_feature_test='struct addrinfo hints, *res; - hints.ai_family = AF_UNSPEC; - hints.ai_socktype = SOCK_STREAM; - hints.ai_flags = AI_ADDRCONFIG; - if (getaddrinfo("localhost", NULL, &hints, &res) != 0) - return 1; +ngx_feature_test='struct addrinfo *res; + if (getaddrinfo("localhost", NULL, NULL, &res) != 0) return 1; freeaddrinfo(res)' . 
auto/feature diff -r 1fe5f7fb6ced -r 0300d97c6084 src/core/ngx_inet.c --- a/src/core/ngx_inet.c Mon Aug 05 11:40:33 2013 +0400 +++ b/src/core/ngx_inet.c Mon Aug 05 13:44:56 2013 +0400 @@ -963,7 +963,9 @@ ngx_inet_resolve_host(ngx_pool_t *pool, ngx_memzero(&hints, sizeof(struct addrinfo)); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_STREAM; +#ifdef AI_ADDRCONFIG hints.ai_flags = AI_ADDRCONFIG; +#endif if (getaddrinfo((char *) host, NULL, &hints, &res) != 0) { u->err = "host not found"; From vbart at nginx.com Mon Aug 5 10:33:18 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 05 Aug 2013 10:33:18 +0000 Subject: [nginx] Image filter: use "application/json" MIME type for JSON ... Message-ID: details: http://hg.nginx.org/nginx/rev/31932b5464f0 branches: changeset: 5315:31932b5464f0 user: Valentin Bartenev date: Mon Aug 05 14:30:03 2013 +0400 description: Image filter: use "application/json" MIME type for JSON output. As it is defined by RFC 4627, and allows for various browser tools like JSONView to display JSON well-formatted. diffstat: src/http/modules/ngx_http_image_filter_module.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 0300d97c6084 -r 31932b5464f0 src/http/modules/ngx_http_image_filter_module.c --- a/src/http/modules/ngx_http_image_filter_module.c Mon Aug 05 13:44:56 2013 +0400 +++ b/src/http/modules/ngx_http_image_filter_module.c Mon Aug 05 14:30:03 2013 +0400 @@ -567,7 +567,7 @@ ngx_http_image_json(ngx_http_request_t * ngx_http_clean_header(r); r->headers_out.status = NGX_HTTP_OK; - ngx_str_set(&r->headers_out.content_type, "text/plain"); + ngx_str_set(&r->headers_out.content_type, "application/json"); r->headers_out.content_type_lowcase = NULL; if (ctx == NULL) { From manowar at gsc-game.kiev.ua Mon Aug 5 18:06:57 2013 From: manowar at gsc-game.kiev.ua (Serguei I. Ivantsov) Date: Mon, 5 Aug 2013 21:06:57 +0300 Subject: cache for ngx_http_time() Message-ID: Hi! 
ngx_http_time() is called once per request, and it calls the heavy ngx_sprintf() function. Why not cache the output for one second (the resolution of time_t)? I found a nice time-caching framework in ngx_times.c, with slots and memory barriers, but ngx_http_time() is not using it for some reason. In this case, probably, it is easier to cache just one string buffer and time_t value per process.

From manowar at gsc-game.kiev.ua Mon Aug 5 19:36:39 2013
From: manowar at gsc-game.kiev.ua (Serguei I. Ivantsov)
Date: Mon, 5 Aug 2013 22:36:39 +0300
Subject: cache for ngx_http_time()
In-Reply-To: 
References: 
Message-ID: <0fabdad5a72df58599afe37064de9fb1.squirrel@webmail.gsc-game.kiev.ua>

Actually, this is the "Last-Modified:" header. With only a few files, caching will improve processing speed, but that is useful for benchmarks only. In a real-life environment we have tons of files.

From piotr at cloudflare.com Mon Aug 5 20:52:33 2013
From: piotr at cloudflare.com (Piotr Sikora)
Date: Mon, 5 Aug 2013 13:52:33 -0700
Subject: [PATCH] SSL: support ALPN (IETF's successor to NPN)
Message-ID: 

# HG changeset patch
# User Piotr Sikora
# Date 1375735383 25200
# Mon Aug 05 13:43:03 2013 -0700
# Node ID 997b00c5c7f377a6c18874311fe39f22655616f6
# Parent 31932b5464f0230d5039fafed94c33f495da35f6
SSL: support ALPN (IETF's successor to NPN).
Signed-off-by: Piotr Sikora diff -r 31932b5464f0 -r 997b00c5c7f3 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Mon Aug 05 14:30:03 2013 +0400 +++ b/src/http/modules/ngx_http_ssl_module.c Mon Aug 05 13:43:03 2013 -0700 @@ -17,6 +17,17 @@ typedef ngx_int_t (*ngx_ssl_variable_han #define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5" #define NGX_DEFAULT_ECDH_CURVE "prime256v1" +#if (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + || defined TLSEXT_TYPE_next_proto_neg) +#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1" +#endif + + +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation +static int ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, + const unsigned char **out, unsigned char *outlen, + const unsigned char *in, unsigned int inlen, void *arg); +#endif #ifdef TLSEXT_TYPE_next_proto_neg static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, @@ -267,10 +278,64 @@ static ngx_http_variable_t ngx_http_ssl static ngx_str_t ngx_http_ssl_sess_id_ctx = ngx_string("HTTP"); +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + +static int +ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, + unsigned char *outlen, const unsigned char *in, unsigned int inlen, + void *arg) +{ + unsigned int srvlen; + unsigned char *srv; +#if (NGX_DEBUG) + unsigned int i; +#endif +#if (NGX_HTTP_SPDY || NGX_DEBUG) + ngx_connection_t *c; + + c = ngx_ssl_get_connection(ssl_conn); +#endif +#if (NGX_DEBUG) + for (i = 0; i < inlen; i += in[i] + 1) { + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "SSL ALPN supported by client: %*s", in[i], &in[i + 1]); + } +#endif + +#if (NGX_HTTP_SPDY) + ngx_http_connection_t *hc; + + hc = c->data; + + if (hc->addr_conf->spdy) { + srv = (unsigned char *) NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; + srvlen = sizeof(NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; + + } else +#endif + { + srv = (unsigned char *) NGX_HTTP_NPN_ADVERTISE; + srvlen = 
sizeof(NGX_HTTP_NPN_ADVERTISE) - 1; + } + + if (SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen, + in, inlen) + != OPENSSL_NPN_NEGOTIATED) + { + return SSL_TLSEXT_ERR_NOACK; + } + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "SSL ALPN selected: %*s", *outlen, *out); + + return SSL_TLSEXT_ERR_OK; +} + +#endif + + #ifdef TLSEXT_TYPE_next_proto_neg -#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1" - static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, unsigned int *outlen, void *arg) @@ -534,6 +599,10 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * #endif +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + SSL_CTX_set_alpn_select_cb(conf->ssl.ctx, ngx_http_ssl_alpn_select, NULL); +#endif + #ifdef TLSEXT_TYPE_next_proto_neg SSL_CTX_set_next_protos_advertised_cb(conf->ssl.ctx, ngx_http_ssl_npn_advertised, NULL); diff -r 31932b5464f0 -r 997b00c5c7f3 src/http/ngx_http.c --- a/src/http/ngx_http.c Mon Aug 05 14:30:03 2013 +0400 +++ b/src/http/ngx_http.c Mon Aug 05 13:43:03 2013 -0700 @@ -1346,11 +1346,12 @@ ngx_http_add_address(ngx_conf_t *cf, ngx } } -#if (NGX_HTTP_SPDY && NGX_HTTP_SSL && !defined TLSEXT_TYPE_next_proto_neg) +#if (NGX_HTTP_SPDY && NGX_HTTP_SSL && !defined TLSEXT_TYPE_next_proto_neg \ + && !defined TLSEXT_TYPE_application_layer_protocol_negotiation) if (lsopt->spdy && lsopt->ssl) { ngx_conf_log_error(NGX_LOG_WARN, cf, 0, - "nginx was built without OpenSSL NPN support, " - "SPDY is not enabled for %s", lsopt->addr); + "nginx was built without OpenSSL ALPN and NPN " + "support, SPDY is not enabled for %s", lsopt->addr); } #endif diff -r 31932b5464f0 -r 997b00c5c7f3 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Mon Aug 05 14:30:03 2013 +0400 +++ b/src/http/ngx_http_request.c Mon Aug 05 13:43:03 2013 -0700 @@ -727,18 +727,31 @@ ngx_http_ssl_handshake_handler(ngx_conne c->ssl->no_wait_shutdown = 1; -#if (NGX_HTTP_SPDY && defined TLSEXT_TYPE_next_proto_neg) +#if (NGX_HTTP_SPDY \ + && 
(defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + || defined TLSEXT_TYPE_next_proto_neg)) { unsigned int len; const unsigned char *data; static const ngx_str_t spdy = ngx_string(NGX_SPDY_NPN_NEGOTIATED); - SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); +#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation + SSL_get0_alpn_selected(c->ssl->connection, &data, &len); if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) { ngx_http_spdy_init(c->read); return; } +#endif + +#ifdef TLSEXT_TYPE_next_proto_neg + SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); + + if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) { + ngx_http_spdy_init(c->read); + return; + } +#endif } #endif diff -r 31932b5464f0 -r 997b00c5c7f3 src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Mon Aug 05 14:30:03 2013 +0400 +++ b/src/http/ngx_http_spdy.h Mon Aug 05 13:43:03 2013 -0700 @@ -17,7 +17,8 @@ #define NGX_SPDY_VERSION 2 -#ifdef TLSEXT_TYPE_next_proto_neg +#if (defined TLSEXT_TYPE_application_layer_protocol_negotiation \ + || defined TLSEXT_TYPE_next_proto_neg) #define NGX_SPDY_NPN_ADVERTISE "\x06spdy/2" #define NGX_SPDY_NPN_NEGOTIATED "spdy/2" #endif From piotr at cloudflare.com Mon Aug 5 20:53:02 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 5 Aug 2013 13:53:02 -0700 Subject: [PATCH] SSL: support automatic selection of ECDH temporary key parameters Message-ID: # HG changeset patch # User Piotr Sikora # Date 1375735677 25200 # Mon Aug 05 13:47:57 2013 -0700 # Node ID bff5a43ea1596c1b0d2bb0b2fe698c7c79d8348a # Parent 997b00c5c7f377a6c18874311fe39f22655616f6 SSL: support automatic selection of ECDH temporary key parameters. 
Signed-off-by: Piotr Sikora diff -r 997b00c5c7f3 -r bff5a43ea159 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Mon Aug 05 13:43:03 2013 -0700 +++ b/src/event/ngx_event_openssl.c Mon Aug 05 13:47:57 2013 -0700 @@ -630,6 +630,19 @@ ngx_ssl_ecdh_curve(ngx_conf_t *cf, ngx_s * maximum interoperability. */ + if (ngx_strcmp(name->data, "auto") == 0) { +#ifdef SSL_CTRL_SET_ECDH_AUTO + SSL_CTX_set_ecdh_auto(ssl->ctx, 1); + return NGX_OK; +#else + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "nginx was built without OpenSSL support for " + "automatic selection of ECDH temporary key " + "parameters"); + return NGX_ERROR; +#endif + } + nid = OBJ_sn2nid((const char *) name->data); if (nid == 0) { ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, From jzefip at gmail.com Mon Aug 5 21:59:14 2013 From: jzefip at gmail.com (Julien Zefi) Date: Mon, 5 Aug 2013 15:59:14 -0600 Subject: Looking for developer to fix a NginX test case module In-Reply-To: <51FF541C.8070800@nginx.com> References: <20130726133136.GP90722@mdounin.ru> <20130730093454.GB2130@mdounin.ru> <51FF541C.8070800@nginx.com> Message-ID: On Mon, Aug 5, 2013 at 1:28 AM, Maxim Konovalov wrote: > On 8/2/13 8:44 PM, Julien Zefi wrote: > > On Wed, Jul 31, 2013 at 3:33 PM, Julien Zefi > > wrote: > > > > > > On Tue, Jul 30, 2013 at 3:34 AM, Maxim Dounin > > > wrote: > > > > Hello! > > > > On Mon, Jul 29, 2013 at 07:07:10PM -0600, Julien Zefi wrote: > > > > > hi Maxim, > > > > > > thanks so much for the code provided, i have merged that > > code in my module > > > and it worked as expected!. Would you please send me the > > details to send > > > you the money ? > > > > Please use donations form here: > > > > http://nginx.org/en/donation.html > > > > > > thanks, i will be transferring the money this Friday. > > > > > > the donation have been sent, please confirm if it was received. > > > We indeed received a donation but a name of the donor is different. 
> > Please check the donors list here http://nginx.org/en/donation.html > > the donor name is ok, it was sent by a partner :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Aug 6 16:01:22 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 06 Aug 2013 16:01:22 +0000 Subject: [nginx] Fixed memory leaks in the root and auth_basic_user_file ... Message-ID: details: http://hg.nginx.org/nginx/rev/12dd27b74117 branches: changeset: 5316:12dd27b74117 user: Valentin Bartenev date: Tue Aug 06 19:58:40 2013 +0400 description: Fixed memory leaks in the root and auth_basic_user_file directives. If a relative path is set by variables, then the ngx_conf_full_name() function was called while processing requests, which causes allocations from the cycle pool. A new function that takes pool as an argument was introduced. diffstat: src/core/ngx_conf_file.c | 94 +--------------------------------------- src/core/ngx_file.c | 91 +++++++++++++++++++++++++++++++++++++++ src/core/ngx_file.h | 3 + src/http/ngx_http_core_module.c | 4 +- src/http/ngx_http_script.c | 6 ++- src/http/ngx_http_variables.c | 8 ++- 6 files changed, 112 insertions(+), 94 deletions(-) diffs (285 lines): diff -r 31932b5464f0 -r 12dd27b74117 src/core/ngx_conf_file.c --- a/src/core/ngx_conf_file.c Mon Aug 05 14:30:03 2013 +0400 +++ b/src/core/ngx_conf_file.c Tue Aug 06 19:58:40 2013 +0400 @@ -12,7 +12,6 @@ static ngx_int_t ngx_conf_handler(ngx_conf_t *cf, ngx_int_t last); static ngx_int_t ngx_conf_read_token(ngx_conf_t *cf); -static ngx_int_t ngx_conf_test_full_name(ngx_str_t *name); static void ngx_conf_flush_files(ngx_cycle_t *cycle); @@ -801,95 +800,10 @@ ngx_conf_include(ngx_conf_t *cf, ngx_com ngx_int_t ngx_conf_full_name(ngx_cycle_t *cycle, ngx_str_t *name, ngx_uint_t conf_prefix) { - size_t len; - u_char *p, *n, *prefix; - ngx_int_t rc; - - rc = ngx_conf_test_full_name(name); - - if (rc == NGX_OK) { - return rc; - } - - if (conf_prefix) { - len 
= cycle->conf_prefix.len; - prefix = cycle->conf_prefix.data; - - } else { - len = cycle->prefix.len; - prefix = cycle->prefix.data; - } - -#if (NGX_WIN32) - - if (rc == 2) { - len = rc; - } - -#endif - - n = ngx_pnalloc(cycle->pool, len + name->len + 1); - if (n == NULL) { - return NGX_ERROR; - } - - p = ngx_cpymem(n, prefix, len); - ngx_cpystrn(p, name->data, name->len + 1); - - name->len += len; - name->data = n; - - return NGX_OK; -} - - -static ngx_int_t -ngx_conf_test_full_name(ngx_str_t *name) -{ -#if (NGX_WIN32) - u_char c0, c1; - - c0 = name->data[0]; - - if (name->len < 2) { - if (c0 == '/') { - return 2; - } - - return NGX_DECLINED; - } - - c1 = name->data[1]; - - if (c1 == ':') { - c0 |= 0x20; - - if ((c0 >= 'a' && c0 <= 'z')) { - return NGX_OK; - } - - return NGX_DECLINED; - } - - if (c1 == '/') { - return NGX_OK; - } - - if (c0 == '/') { - return 2; - } - - return NGX_DECLINED; - -#else - - if (name->data[0] == '/') { - return NGX_OK; - } - - return NGX_DECLINED; - -#endif + return ngx_get_full_name(cycle->pool, + conf_prefix ? 
&cycle->conf_prefix: + &cycle->prefix, + name); } diff -r 31932b5464f0 -r 12dd27b74117 src/core/ngx_file.c --- a/src/core/ngx_file.c Mon Aug 05 14:30:03 2013 +0400 +++ b/src/core/ngx_file.c Tue Aug 06 19:58:40 2013 +0400 @@ -9,11 +9,102 @@ #include +static ngx_int_t ngx_test_full_name(ngx_str_t *name); + + static ngx_atomic_t temp_number = 0; ngx_atomic_t *ngx_temp_number = &temp_number; ngx_atomic_int_t ngx_random_number = 123456; +ngx_int_t +ngx_get_full_name(ngx_pool_t *pool, ngx_str_t *prefix, ngx_str_t *name) +{ + size_t len; + u_char *p, *n; + ngx_int_t rc; + + rc = ngx_test_full_name(name); + + if (rc == NGX_OK) { + return rc; + } + + len = prefix->len; + +#if (NGX_WIN32) + + if (rc == 2) { + len = rc; + } + +#endif + + n = ngx_pnalloc(pool, len + name->len + 1); + if (n == NULL) { + return NGX_ERROR; + } + + p = ngx_cpymem(n, prefix->data, len); + ngx_cpystrn(p, name->data, name->len + 1); + + name->len += len; + name->data = n; + + return NGX_OK; +} + + +static ngx_int_t +ngx_test_full_name(ngx_str_t *name) +{ +#if (NGX_WIN32) + u_char c0, c1; + + c0 = name->data[0]; + + if (name->len < 2) { + if (c0 == '/') { + return 2; + } + + return NGX_DECLINED; + } + + c1 = name->data[1]; + + if (c1 == ':') { + c0 |= 0x20; + + if ((c0 >= 'a' && c0 <= 'z')) { + return NGX_OK; + } + + return NGX_DECLINED; + } + + if (c1 == '/') { + return NGX_OK; + } + + if (c0 == '/') { + return 2; + } + + return NGX_DECLINED; + +#else + + if (name->data[0] == '/') { + return NGX_OK; + } + + return NGX_DECLINED; + +#endif +} + + ssize_t ngx_write_chain_to_temp_file(ngx_temp_file_t *tf, ngx_chain_t *chain) { diff -r 31932b5464f0 -r 12dd27b74117 src/core/ngx_file.h --- a/src/core/ngx_file.h Mon Aug 05 14:30:03 2013 +0400 +++ b/src/core/ngx_file.h Tue Aug 06 19:58:40 2013 +0400 @@ -122,6 +122,9 @@ struct ngx_tree_ctx_s { }; +ngx_int_t ngx_get_full_name(ngx_pool_t *pool, ngx_str_t *prefix, + ngx_str_t *name); + ssize_t ngx_write_chain_to_temp_file(ngx_temp_file_t *tf, ngx_chain_t *chain); 
ngx_int_t ngx_create_temp_file(ngx_file_t *file, ngx_path_t *path, ngx_pool_t *pool, ngx_uint_t persistent, ngx_uint_t clean, diff -r 31932b5464f0 -r 12dd27b74117 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Mon Aug 05 14:30:03 2013 +0400 +++ b/src/http/ngx_http_core_module.c Tue Aug 06 19:58:40 2013 +0400 @@ -2016,7 +2016,9 @@ ngx_http_map_uri_to_path(ngx_http_reques return NULL; } - if (ngx_conf_full_name((ngx_cycle_t *) ngx_cycle, path, 0) != NGX_OK) { + if (ngx_get_full_name(r->pool, (ngx_str_t *) &ngx_cycle->prefix, path) + != NGX_OK) + { return NULL; } diff -r 31932b5464f0 -r 12dd27b74117 src/http/ngx_http_script.c --- a/src/http/ngx_http_script.c Mon Aug 05 14:30:03 2013 +0400 +++ b/src/http/ngx_http_script.c Tue Aug 06 19:58:40 2013 +0400 @@ -1334,7 +1334,11 @@ ngx_http_script_full_name_code(ngx_http_ value.data = e->buf.data; value.len = e->pos - e->buf.data; - if (ngx_conf_full_name((ngx_cycle_t *) ngx_cycle, &value, code->conf_prefix) + if (ngx_get_full_name(e->request->pool, + code->conf_prefix + ? 
(ngx_str_t *) &ngx_cycle->conf_prefix: + (ngx_str_t *) &ngx_cycle->prefix, + &value) != NGX_OK) { e->ip = ngx_http_script_exit; diff -r 31932b5464f0 -r 12dd27b74117 src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Mon Aug 05 14:30:03 2013 +0400 +++ b/src/http/ngx_http_variables.c Tue Aug 06 19:58:40 2013 +0400 @@ -1374,7 +1374,9 @@ ngx_http_variable_document_root(ngx_http return NGX_ERROR; } - if (ngx_conf_full_name((ngx_cycle_t *) ngx_cycle, &path, 0) != NGX_OK) { + if (ngx_get_full_name(r->pool, (ngx_str_t *) &ngx_cycle->prefix, &path) + != NGX_OK) + { return NGX_ERROR; } @@ -1416,7 +1418,9 @@ ngx_http_variable_realpath_root(ngx_http path.data[path.len - 1] = '\0'; - if (ngx_conf_full_name((ngx_cycle_t *) ngx_cycle, &path, 0) != NGX_OK) { + if (ngx_get_full_name(r->pool, (ngx_str_t *) &ngx_cycle->prefix, &path) + != NGX_OK) + { return NGX_ERROR; } } From vbart at nginx.com Tue Aug 6 16:01:24 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 06 Aug 2013 16:01:24 +0000 Subject: [nginx] Replaced ngx_conf_full_name() with ngx_get_full_name(). Message-ID: details: http://hg.nginx.org/nginx/rev/f1a91825730a branches: changeset: 5317:f1a91825730a user: Valentin Bartenev date: Tue Aug 06 19:58:40 2013 +0400 description: Replaced ngx_conf_full_name() with ngx_get_full_name(). The ngx_get_full_name() function takes more readable arguments list. 
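The mechanical change at each call site is small; this before/after pair is assembled from the hunks in this changeset (the `file` variable stands for whichever name is being resolved):

```c
/* before: cycle-based, always allocates from cycle->pool */
if (ngx_conf_full_name(cf->cycle, &file, 1) != NGX_OK) {
    return NGX_CONF_ERROR;
}

/* after: the pool and the prefix are passed explicitly */
if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, &file) != NGX_OK) {
    return NGX_CONF_ERROR;
}
```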
diffstat: src/core/nginx.c | 10 +++++++--- src/core/ngx_conf_file.c | 4 ++-- src/core/ngx_file.c | 8 ++++++-- src/event/ngx_event_openssl.c | 12 ++++++------ src/event/ngx_event_openssl_stapling.c | 2 +- src/http/modules/ngx_http_geo_module.c | 2 +- src/http/modules/ngx_http_log_module.c | 4 +++- src/http/modules/ngx_http_xslt_filter_module.c | 2 +- src/http/modules/perl/ngx_http_perl_module.c | 4 +++- src/http/ngx_http_core_module.c | 8 ++++++-- src/http/ngx_http_file_cache.c | 4 +++- src/http/ngx_http_script.c | 7 ++++++- 12 files changed, 45 insertions(+), 22 deletions(-) diffs (257 lines): diff -r 12dd27b74117 -r f1a91825730a src/core/nginx.c --- a/src/core/nginx.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/core/nginx.c Tue Aug 06 19:58:40 2013 +0400 @@ -897,7 +897,9 @@ ngx_process_options(ngx_cycle_t *cycle) ngx_str_set(&cycle->conf_file, NGX_CONF_PATH); } - if (ngx_conf_full_name(cycle, &cycle->conf_file, 0) != NGX_OK) { + if (ngx_get_full_name(cycle->pool, &cycle->prefix, &cycle->conf_file) + != NGX_OK) + { return NGX_ERROR; } @@ -1013,7 +1015,7 @@ ngx_core_module_init_conf(ngx_cycle_t *c ngx_str_set(&ccf->pid, NGX_PID_PATH); } - if (ngx_conf_full_name(cycle, &ccf->pid, 0) != NGX_OK) { + if (ngx_get_full_name(cycle->pool, &cycle->prefix, &ccf->pid) != NGX_OK) { return NGX_CONF_ERROR; } @@ -1061,7 +1063,9 @@ ngx_core_module_init_conf(ngx_cycle_t *c ngx_str_set(&ccf->lock_file, NGX_LOCK_PATH); } - if (ngx_conf_full_name(cycle, &ccf->lock_file, 0) != NGX_OK) { + if (ngx_get_full_name(cycle->pool, &cycle->prefix, &ccf->lock_file) + != NGX_OK) + { return NGX_CONF_ERROR; } diff -r 12dd27b74117 -r f1a91825730a src/core/ngx_conf_file.c --- a/src/core/ngx_conf_file.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/core/ngx_conf_file.c Tue Aug 06 19:58:40 2013 +0400 @@ -747,7 +747,7 @@ ngx_conf_include(ngx_conf_t *cf, ngx_com ngx_log_debug1(NGX_LOG_DEBUG_CORE, cf->log, 0, "include %s", file.data); - if (ngx_conf_full_name(cf->cycle, &file, 1) != NGX_OK) { + if 
(ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, &file) != NGX_OK) { return NGX_CONF_ERROR; } @@ -822,7 +822,7 @@ ngx_conf_open_file(ngx_cycle_t *cycle, n if (name->len) { full = *name; - if (ngx_conf_full_name(cycle, &full, 0) != NGX_OK) { + if (ngx_get_full_name(cycle->pool, &cycle->prefix, &full) != NGX_OK) { return NULL; } diff -r 12dd27b74117 -r f1a91825730a src/core/ngx_file.c --- a/src/core/ngx_file.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/core/ngx_file.c Tue Aug 06 19:58:40 2013 +0400 @@ -355,7 +355,9 @@ ngx_conf_set_path_slot(ngx_conf_t *cf, n path->name.len--; } - if (ngx_conf_full_name(cf->cycle, &path->name, 0) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &path->name) + != NGX_OK) + { return NULL; } @@ -409,7 +411,9 @@ ngx_conf_merge_path_value(ngx_conf_t *cf (*path)->name = init->name; - if (ngx_conf_full_name(cf->cycle, &(*path)->name, 0) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &(*path)->name) + != NGX_OK) + { return NGX_CONF_ERROR; } diff -r 12dd27b74117 -r f1a91825730a src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/event/ngx_event_openssl.c Tue Aug 06 19:58:40 2013 +0400 @@ -240,7 +240,7 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ X509 *x509; u_long n; - if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, cert) != NGX_OK) { return NGX_ERROR; } @@ -319,7 +319,7 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ BIO_free(bio); - if (ngx_conf_full_name(cf->cycle, key, 1) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, key) != NGX_OK) { return NGX_ERROR; } @@ -350,7 +350,7 @@ ngx_ssl_client_certificate(ngx_conf_t *c return NGX_OK; } - if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, cert) != NGX_OK) { return NGX_ERROR; } @@ -394,7 +394,7 @@ ngx_ssl_trusted_certificate(ngx_conf_t * return NGX_OK; 
} - if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, cert) != NGX_OK) { return NGX_ERROR; } @@ -421,7 +421,7 @@ ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *s return NGX_OK; } - if (ngx_conf_full_name(cf->cycle, crl, 1) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, crl) != NGX_OK) { return NGX_ERROR; } @@ -587,7 +587,7 @@ ngx_ssl_dhparam(ngx_conf_t *cf, ngx_ssl_ return NGX_OK; } - if (ngx_conf_full_name(cf->cycle, file, 1) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, file) != NGX_OK) { return NGX_ERROR; } diff -r 12dd27b74117 -r f1a91825730a src/event/ngx_event_openssl_stapling.c --- a/src/event/ngx_event_openssl_stapling.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/event/ngx_event_openssl_stapling.c Tue Aug 06 19:58:40 2013 +0400 @@ -197,7 +197,7 @@ ngx_ssl_stapling_file(ngx_conf_t *cf, ng staple = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_stapling_index); - if (ngx_conf_full_name(cf->cycle, file, 1) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, file) != NGX_OK) { return NGX_ERROR; } diff -r 12dd27b74117 -r f1a91825730a src/http/modules/ngx_http_geo_module.c --- a/src/http/modules/ngx_http_geo_module.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/http/modules/ngx_http_geo_module.c Tue Aug 06 19:58:40 2013 +0400 @@ -1327,7 +1327,7 @@ ngx_http_geo_include(ngx_conf_t *cf, ngx ngx_sprintf(file.data, "%V.bin%Z", name); - if (ngx_conf_full_name(cf->cycle, &file, 1) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, &file) != NGX_OK) { return NGX_CONF_ERROR; } diff -r 12dd27b74117 -r f1a91825730a src/http/modules/ngx_http_log_module.c --- a/src/http/modules/ngx_http_log_module.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/http/modules/ngx_http_log_module.c Tue Aug 06 19:58:40 2013 +0400 @@ -1134,7 +1134,9 @@ ngx_http_log_set_log(ngx_conf_t *cf, ngx } } else { - if (ngx_conf_full_name(cf->cycle, &value[1], 0) != NGX_OK) { + if 
(ngx_get_full_name(cf->pool, &cf->cycle->prefix, &value[1]) + != NGX_OK) + { return NGX_CONF_ERROR; } diff -r 12dd27b74117 -r f1a91825730a src/http/modules/ngx_http_xslt_filter_module.c --- a/src/http/modules/ngx_http_xslt_filter_module.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/http/modules/ngx_http_xslt_filter_module.c Tue Aug 06 19:58:40 2013 +0400 @@ -892,7 +892,7 @@ ngx_http_xslt_stylesheet(ngx_conf_t *cf, ngx_memzero(sheet, sizeof(ngx_http_xslt_sheet_t)); - if (ngx_conf_full_name(cf->cycle, &value[1], 0) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &value[1]) != NGX_OK) { return NGX_CONF_ERROR; } diff -r 12dd27b74117 -r f1a91825730a src/http/modules/perl/ngx_http_perl_module.c --- a/src/http/modules/perl/ngx_http_perl_module.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/http/modules/perl/ngx_http_perl_module.c Tue Aug 06 19:58:40 2013 +0400 @@ -485,7 +485,9 @@ ngx_http_perl_init_interpreter(ngx_conf_ if (pmcf->modules != NGX_CONF_UNSET_PTR) { m = pmcf->modules->elts; for (i = 0; i < pmcf->modules->nelts; i++) { - if (ngx_conf_full_name(cf->cycle, &m[i], 0) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &m[i]) + != NGX_OK) + { return NGX_CONF_ERROR; } } diff -r 12dd27b74117 -r f1a91825730a src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/http/ngx_http_core_module.c Tue Aug 06 19:58:40 2013 +0400 @@ -3686,7 +3686,9 @@ ngx_http_core_merge_loc_conf(ngx_conf_t if (prev->root.data == NULL) { ngx_str_set(&conf->root, "html"); - if (ngx_conf_full_name(cf->cycle, &conf->root, 0) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &conf->root) + != NGX_OK) + { return NGX_CONF_ERROR; } } @@ -4428,7 +4430,9 @@ ngx_http_core_root(ngx_conf_t *cf, ngx_c } if (clcf->root.data[0] != '$') { - if (ngx_conf_full_name(cf->cycle, &clcf->root, 0) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &clcf->root) + != NGX_OK) + { return NGX_CONF_ERROR; } } 
diff -r 12dd27b74117 -r f1a91825730a src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/http/ngx_http_file_cache.c Tue Aug 06 19:58:40 2013 +0400 @@ -1626,7 +1626,9 @@ ngx_http_file_cache_set_slot(ngx_conf_t cache->path->name.len--; } - if (ngx_conf_full_name(cf->cycle, &cache->path->name, 0) != NGX_OK) { + if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &cache->path->name) + != NGX_OK) + { return NGX_CONF_ERROR; } diff -r 12dd27b74117 -r f1a91825730a src/http/ngx_http_script.c --- a/src/http/ngx_http_script.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/http/ngx_http_script.c Tue Aug 06 19:58:40 2013 +0400 @@ -131,7 +131,12 @@ ngx_http_compile_complex_value(ngx_http_ if ((v->len == 0 || v->data[0] != '$') && (ccv->conf_prefix || ccv->root_prefix)) { - if (ngx_conf_full_name(ccv->cf->cycle, v, ccv->conf_prefix) != NGX_OK) { + if (ngx_get_full_name(ccv->cf->pool, + ccv->conf_prefix ? &ccv->cf->cycle->conf_prefix: + &ccv->cf->cycle->prefix, + v) + != NGX_OK) + { return NGX_ERROR; } From vbart at nginx.com Tue Aug 6 16:01:25 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 06 Aug 2013 16:01:25 +0000 Subject: [nginx] API change: removed the ngx_conf_full_name() function. Message-ID: details: http://hg.nginx.org/nginx/rev/7094bd12c1ff branches: changeset: 5318:7094bd12c1ff user: Valentin Bartenev date: Tue Aug 06 19:58:40 2013 +0400 description: API change: removed the ngx_conf_full_name() function. The ngx_get_full_name() should be used instead. 
diffstat: src/core/ngx_conf_file.c | 10 ---------- src/core/ngx_conf_file.h | 2 -- 2 files changed, 0 insertions(+), 12 deletions(-) diffs (32 lines): diff -r f1a91825730a -r 7094bd12c1ff src/core/ngx_conf_file.c --- a/src/core/ngx_conf_file.c Tue Aug 06 19:58:40 2013 +0400 +++ b/src/core/ngx_conf_file.c Tue Aug 06 19:58:40 2013 +0400 @@ -797,16 +797,6 @@ ngx_conf_include(ngx_conf_t *cf, ngx_com } -ngx_int_t -ngx_conf_full_name(ngx_cycle_t *cycle, ngx_str_t *name, ngx_uint_t conf_prefix) -{ - return ngx_get_full_name(cycle->pool, - conf_prefix ? &cycle->conf_prefix: - &cycle->prefix, - name); -} - - ngx_open_file_t * ngx_conf_open_file(ngx_cycle_t *cycle, ngx_str_t *name) { diff -r f1a91825730a -r 7094bd12c1ff src/core/ngx_conf_file.h --- a/src/core/ngx_conf_file.h Tue Aug 06 19:58:40 2013 +0400 +++ b/src/core/ngx_conf_file.h Tue Aug 06 19:58:40 2013 +0400 @@ -311,8 +311,6 @@ char *ngx_conf_parse(ngx_conf_t *cf, ngx char *ngx_conf_include(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); -ngx_int_t ngx_conf_full_name(ngx_cycle_t *cycle, ngx_str_t *name, - ngx_uint_t conf_prefix); ngx_open_file_t *ngx_conf_open_file(ngx_cycle_t *cycle, ngx_str_t *name); void ngx_cdecl ngx_conf_log_error(ngx_uint_t level, ngx_conf_t *cf, ngx_err_t err, const char *fmt, ...); From john at disqus.com Wed Aug 7 18:59:13 2013 From: john at disqus.com (John Watson) Date: Wed, 7 Aug 2013 11:59:13 -0700 Subject: Help with shared memory usage Message-ID: I've been running a node with this patch on a production machine for 5 days and am seeing marked improvements. The instance hasn't needed to be restarted due to "ngx_slab_alloc() failed: no memory". The shared memory usage has been growing at a far slower rate compared to a node without the patch. Also, not seeing any significant increase in CPU usage. 
nginx/1.2.9 Mixture of HTTP/HTTPS traffic 400,000 to 700,000 concurrent connections Linux 3.2.0-40-generic x86_64 It's a dedicated instance for Wandenberg's https://github.com/wandenberg/nginx-push-stream-module On Tue, Aug 6, 2013 at 1:19 PM, Wandenberg Peixoto wrote: > > Hello! > > Thanks for your help. I hope that the patch be OK now. > I don't know if the function and variable names are on nginx pattern. > Feel free to change the patch. > If you have any other point before accept it, will be a pleasure to fix it. > > > --- src/core/ngx_slab.c 2013-05-06 07:27:10.000000000 -0300 > +++ src/core/ngx_slab.c 2013-07-31 00:21:08.043034442 -0300 > @@ -615,6 +615,26 @@ fail: > > > static ngx_slab_page_t * > +ngx_slab_merge_with_neighbour(ngx_slab_pool_t *pool, ngx_slab_page_t *page) > +{ > > + ngx_slab_page_t *neighbour = &page[page->slab]; > + if (((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) { > + page->slab += neighbour->slab; > > + > + ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next; > + neighbour->next->prev = neighbour->prev; > + > + neighbour->slab = NGX_SLAB_PAGE_FREE; > + neighbour->prev = (uintptr_t) &pool->free; > + neighbour->next = &pool->free; > + > + return page; > + } > + return NULL; > +} > + > + > +static ngx_slab_page_t * > ngx_slab_alloc_pages(ngx_slab_pool_t *pool, ngx_uint_t pages) > { > ngx_slab_page_t *page, *p; > @@ -657,6 +677,19 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po > } > } > > + ngx_flag_t retry = 0; > + for (page = pool->free.next; page != &pool->free;) { > + if (ngx_slab_merge_with_neighbour(pool, page)) { > + retry = 1; > + } else { > + page = page->next; > + } > + } > + > + if (retry) { > + return ngx_slab_alloc_pages(pool, pages); > + } > + > ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no memory"); > > return NULL; > @@ -687,6 +720,8 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo > > page->next->prev = (uintptr_t) 
> page; > > pool->free.next = page; > + > + ngx_slab_merge_with_neighbour(pool, page); > } > > > > > > > On Tue, Jul 30, 2013 at 7:09 AM, Maxim Dounin wrote: >> >> Hello! >> >> On Mon, Jul 29, 2013 at 04:01:37PM -0300, Wandenberg Peixoto wrote: >> >> [...] >> >> > What would be an alternative to not loop on pool->pages? >> >> Free memory blocks are linked in the pool->free list; it should be >> enough to look there. >> >> [...] >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > From sb at waeme.net Thu Aug 8 10:08:17 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Thu, 08 Aug 2013 10:08:17 +0000 Subject: [nginx] Fixed misleading example SSL config. Message-ID: details: http://hg.nginx.org/nginx/rev/50f531a55b73 branches: changeset: 5319:50f531a55b73 user: Sergey Budnevitch date: Wed Aug 07 20:01:43 2013 +0400 description: Fixed misleading example SSL config. a) the "ssl" listen parameter is preferable. b) the ssl_protocols defaults are better because they do not forbid TLS versions 1.1 and 1.2. c) ssl_session_timeout makes sense only with an SSL cache.
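Assembled from the hunk in this changeset, the commented-out HTTPS server example in conf/nginx.conf then reads as follows (the location block is assumed to be the stock placeholder that follows it in the shipped file):

```nginx
server {
    listen       443 ssl;
    server_name  localhost;

    ssl_certificate      cert.pem;
    ssl_certificate_key  cert.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    # ssl_protocols is left at its default, which allows TLSv1.1/1.2
    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
        root   html;
        index  index.html index.htm;
    }
}
```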
diffstat: conf/nginx.conf | 5 ++--- 1 files changed, 2 insertions(+), 3 deletions(-) diffs (22 lines): diff -r 7094bd12c1ff -r 50f531a55b73 conf/nginx.conf --- a/conf/nginx.conf Tue Aug 06 19:58:40 2013 +0400 +++ b/conf/nginx.conf Wed Aug 07 20:01:43 2013 +0400 @@ -96,16 +96,15 @@ http { # HTTPS server # #server { - # listen 443; + # listen 443 ssl; # server_name localhost; - # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; + # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; - # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; From glebius at nginx.com Thu Aug 8 11:37:16 2013 From: glebius at nginx.com (Gleb Smirnoff) Date: Thu, 08 Aug 2013 11:37:16 +0000 Subject: [nginx] Don't lose pointer to first nonempty buf in ngx_*_sendfi... Message-ID: details: http://hg.nginx.org/nginx/rev/ad137a80919f branches: changeset: 5320:ad137a80919f user: Gleb Smirnoff date: Thu Aug 08 15:06:39 2013 +0400 description: Don't lose pointer to first nonempty buf in ngx_*_sendfile_chain(). In ngx_*_sendfile_chain() when calculating pointer to a first non-zero sized buf, use "in" as iterator. This fixes processing of zero sized buf(s) after EINTR. Otherwise function can return zero sized buf to caller, and later ngx_http_write_filter() logs warning. 
diffstat: src/os/unix/ngx_darwin_sendfile_chain.c | 30 ++++++++++++-------------- src/os/unix/ngx_freebsd_sendfile_chain.c | 34 ++++++++++++++---------------- src/os/unix/ngx_linux_sendfile_chain.c | 30 ++++++++++++-------------- src/os/unix/ngx_solaris_sendfilev_chain.c | 30 ++++++++++++-------------- 4 files changed, 58 insertions(+), 66 deletions(-) diffs (298 lines): diff -r 50f531a55b73 -r ad137a80919f src/os/unix/ngx_darwin_sendfile_chain.c --- a/src/os/unix/ngx_darwin_sendfile_chain.c Wed Aug 07 20:01:43 2013 +0400 +++ b/src/os/unix/ngx_darwin_sendfile_chain.c Thu Aug 08 15:06:39 2013 +0400 @@ -317,9 +317,9 @@ ngx_darwin_sendfile_chain(ngx_connection c->sent += sent; - for (cl = in; cl; cl = cl->next) { + for ( /* void */ ; in; in = in->next) { - if (ngx_buf_special(cl->buf)) { + if (ngx_buf_special(in->buf)) { continue; } @@ -327,28 +327,28 @@ ngx_darwin_sendfile_chain(ngx_connection break; } - size = ngx_buf_size(cl->buf); + size = ngx_buf_size(in->buf); if (sent >= size) { sent -= size; - if (ngx_buf_in_memory(cl->buf)) { - cl->buf->pos = cl->buf->last; + if (ngx_buf_in_memory(in->buf)) { + in->buf->pos = in->buf->last; } - if (cl->buf->in_file) { - cl->buf->file_pos = cl->buf->file_last; + if (in->buf->in_file) { + in->buf->file_pos = in->buf->file_last; } continue; } - if (ngx_buf_in_memory(cl->buf)) { - cl->buf->pos += (size_t) sent; + if (ngx_buf_in_memory(in->buf)) { + in->buf->pos += (size_t) sent; } - if (cl->buf->in_file) { - cl->buf->file_pos += sent; + if (in->buf->in_file) { + in->buf->file_pos += sent; } break; @@ -360,13 +360,11 @@ ngx_darwin_sendfile_chain(ngx_connection if (!complete) { wev->ready = 0; - return cl; + return in; } - if (send >= limit || cl == NULL) { - return cl; + if (send >= limit || in == NULL) { + return in; } - - in = cl; } } diff -r 50f531a55b73 -r ad137a80919f src/os/unix/ngx_freebsd_sendfile_chain.c --- a/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Aug 07 20:01:43 2013 +0400 +++ 
b/src/os/unix/ngx_freebsd_sendfile_chain.c Thu Aug 08 15:06:39 2013 +0400 @@ -368,9 +368,9 @@ ngx_freebsd_sendfile_chain(ngx_connectio c->sent += sent; - for (cl = in; cl; cl = cl->next) { + for ( /* void */ ; in; in = in->next) { - if (ngx_buf_special(cl->buf)) { + if (ngx_buf_special(in->buf)) { continue; } @@ -378,28 +378,28 @@ ngx_freebsd_sendfile_chain(ngx_connectio break; } - size = ngx_buf_size(cl->buf); + size = ngx_buf_size(in->buf); if (sent >= size) { sent -= size; - if (ngx_buf_in_memory(cl->buf)) { - cl->buf->pos = cl->buf->last; + if (ngx_buf_in_memory(in->buf)) { + in->buf->pos = in->buf->last; } - if (cl->buf->in_file) { - cl->buf->file_pos = cl->buf->file_last; + if (in->buf->in_file) { + in->buf->file_pos = in->buf->file_last; } continue; } - if (ngx_buf_in_memory(cl->buf)) { - cl->buf->pos += (size_t) sent; + if (ngx_buf_in_memory(in->buf)) { + in->buf->pos += (size_t) sent; } - if (cl->buf->in_file) { - cl->buf->file_pos += sent; + if (in->buf->in_file) { + in->buf->file_pos += sent; } break; @@ -407,7 +407,7 @@ ngx_freebsd_sendfile_chain(ngx_connectio #if (NGX_HAVE_AIO_SENDFILE) if (c->busy_sendfile) { - return cl; + return in; } #endif @@ -421,7 +421,7 @@ ngx_freebsd_sendfile_chain(ngx_connectio */ wev->ready = 0; - return cl; + return in; } if (eintr) { @@ -430,13 +430,11 @@ ngx_freebsd_sendfile_chain(ngx_connectio if (!complete) { wev->ready = 0; - return cl; + return in; } - if (send >= limit || cl == NULL) { - return cl; + if (send >= limit || in == NULL) { + return in; } - - in = cl; } } diff -r 50f531a55b73 -r ad137a80919f src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Wed Aug 07 20:01:43 2013 +0400 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Thu Aug 08 15:06:39 2013 +0400 @@ -325,9 +325,9 @@ ngx_linux_sendfile_chain(ngx_connection_ c->sent += sent; - for (cl = in; cl; cl = cl->next) { + for ( /* void */ ; in; in = in->next) { - if (ngx_buf_special(cl->buf)) { + if (ngx_buf_special(in->buf)) { 
continue; } @@ -335,28 +335,28 @@ ngx_linux_sendfile_chain(ngx_connection_ break; } - size = ngx_buf_size(cl->buf); + size = ngx_buf_size(in->buf); if (sent >= size) { sent -= size; - if (ngx_buf_in_memory(cl->buf)) { - cl->buf->pos = cl->buf->last; + if (ngx_buf_in_memory(in->buf)) { + in->buf->pos = in->buf->last; } - if (cl->buf->in_file) { - cl->buf->file_pos = cl->buf->file_last; + if (in->buf->in_file) { + in->buf->file_pos = in->buf->file_last; } continue; } - if (ngx_buf_in_memory(cl->buf)) { - cl->buf->pos += (size_t) sent; + if (ngx_buf_in_memory(in->buf)) { + in->buf->pos += (size_t) sent; } - if (cl->buf->in_file) { - cl->buf->file_pos += sent; + if (in->buf->in_file) { + in->buf->file_pos += sent; } break; @@ -368,13 +368,11 @@ ngx_linux_sendfile_chain(ngx_connection_ if (!complete) { wev->ready = 0; - return cl; + return in; } - if (send >= limit || cl == NULL) { - return cl; + if (send >= limit || in == NULL) { + return in; } - - in = cl; } } diff -r 50f531a55b73 -r ad137a80919f src/os/unix/ngx_solaris_sendfilev_chain.c --- a/src/os/unix/ngx_solaris_sendfilev_chain.c Wed Aug 07 20:01:43 2013 +0400 +++ b/src/os/unix/ngx_solaris_sendfilev_chain.c Thu Aug 08 15:06:39 2013 +0400 @@ -207,9 +207,9 @@ ngx_solaris_sendfilev_chain(ngx_connecti c->sent += sent; - for (cl = in; cl; cl = cl->next) { + for ( /* void */ ; in; in = in->next) { - if (ngx_buf_special(cl->buf)) { + if (ngx_buf_special(in->buf)) { continue; } @@ -217,28 +217,28 @@ ngx_solaris_sendfilev_chain(ngx_connecti break; } - size = ngx_buf_size(cl->buf); + size = ngx_buf_size(in->buf); if ((off_t) sent >= size) { sent = (size_t) ((off_t) sent - size); - if (ngx_buf_in_memory(cl->buf)) { - cl->buf->pos = cl->buf->last; + if (ngx_buf_in_memory(in->buf)) { + in->buf->pos = in->buf->last; } - if (cl->buf->in_file) { - cl->buf->file_pos = cl->buf->file_last; + if (in->buf->in_file) { + in->buf->file_pos = in->buf->file_last; } continue; } - if (ngx_buf_in_memory(cl->buf)) { - cl->buf->pos += sent; + 
if (ngx_buf_in_memory(in->buf)) { + in->buf->pos += sent; } - if (cl->buf->in_file) { - cl->buf->file_pos += sent; + if (in->buf->in_file) { + in->buf->file_pos += sent; } break; @@ -250,13 +250,11 @@ ngx_solaris_sendfilev_chain(ngx_connecti if (!complete) { wev->ready = 0; - return cl; + return in; } - if (send >= limit || cl == NULL) { - return cl; + if (send >= limit || in == NULL) { + return in; } - - in = cl; } } From postmaster at softsearch.ru Fri Aug 9 18:21:25 2013 From: postmaster at softsearch.ru (=?Windows-1251?B?zOj14OjrIMzu7eD4uOI=?=) Date: Fri, 9 Aug 2013 22:21:25 +0400 Subject: libgd recent version Message-ID: <1414613783.20130809222125@softsearch.ru> Hello. On the page http://nginx.org/ru/docs/http/ngx_http_image_filter_module.html please mention the latest libgd version. -- Best regards, Mikhail mailto:postmaster at softsearch.ru From B22173 at freescale.com Tue Aug 13 00:11:43 2013 From: B22173 at freescale.com (Myla John-B22173) Date: Tue, 13 Aug 2013 00:11:43 +0000 Subject: Handlers Message-ID: Hi, I am VERY new to NGINX and trying to understand how it works. I understand there are three main components: handlers, filters, and load balancers, and there can be multiple "Filters". My question is, can we have multiple "Handlers"? I want to develop a module which needs to capture all requests, record them, and let normal processing continue. Regards, John From wandenberg at gmail.com Tue Aug 13 14:51:25 2013 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Tue, 13 Aug 2013 11:51:25 -0300 Subject: Handlers In-Reply-To: References: Message-ID: You can use an access phase handler to do that.
Something like this: ngx_http_handler_pt *h; ngx_http_core_main_conf_t *cmcf; cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module); h = ngx_array_push(&cmcf->phases[NGX_HTTP_ACCESS_PHASE].handlers); if (h == NULL) { return NGX_ERROR; } *h = your_handler_function; and in your_handler_function you record the request and return NGX_OK, which allows the request to continue to the next handler. On Mon, Aug 12, 2013 at 9:11 PM, Myla John-B22173 wrote: > Hi, > > I am VERY new to NGINX and trying to understand how it works. I understand > there are three main components: handlers, filters, and load balancers, and > there can be multiple "Filters". > > My question is, can we have multiple "Handlers"? I want to develop a > module which needs to capture all requests, record them, and let normal > processing continue. > > Regards, > John > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From pluknet at nginx.com Tue Aug 13 15:15:21 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 13 Aug 2013 15:15:21 +0000 Subject: [nginx] Referer module: fixed regex matching against HTTPS refer... Message-ID: details: http://hg.nginx.org/nginx/rev/9806f7932474 branches: changeset: 5321:9806f7932474 user: Sergey Kandaurov date: Tue Aug 13 17:47:04 2013 +0400 description: Referer module: fixed regex matching against HTTPS referers. When matching a compiled regex against the value of the "Referer" header field, the length was calculated incorrectly for strings that start with "https://". This might cause matching to fail for regexes with end-of-line anchors. Patch by Liangbin Li.
diffstat: src/http/modules/ngx_http_referer_module.c | 4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diffs (25 lines): diff -r ad137a80919f -r 9806f7932474 src/http/modules/ngx_http_referer_module.c --- a/src/http/modules/ngx_http_referer_module.c Thu Aug 08 15:06:39 2013 +0400 +++ b/src/http/modules/ngx_http_referer_module.c Tue Aug 13 17:47:04 2013 +0400 @@ -147,10 +147,12 @@ ngx_http_referer_variable(ngx_http_reque if (ngx_strncasecmp(ref, (u_char *) "http://", 7) == 0) { ref += 7; + len -= 7; goto valid_scheme; } else if (ngx_strncasecmp(ref, (u_char *) "https://", 8) == 0) { ref += 8; + len -= 8; goto valid_scheme; } } @@ -191,7 +193,7 @@ valid_scheme: ngx_int_t rc; ngx_str_t referer; - referer.len = len - 7; + referer.len = len; referer.data = ref; rc = ngx_regex_exec_array(rlcf->regex, &referer, r->connection->log); From B22173 at freescale.com Wed Aug 14 00:04:49 2013 From: B22173 at freescale.com (Myla John-B22173) Date: Wed, 14 Aug 2013 00:04:49 +0000 Subject: Handlers In-Reply-To: References: Message-ID: Hi Wandenberg Peixoto, I appreciate your response. I have a simple follow-up question. I have looked at the function ngx_http_access_handler and it is checking for access privileges. Do you suggest to overwrite this functionality or add my code here (calling my routine before exiting the function)? Regards, John From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Wandenberg Peixoto Sent: Tuesday, August 13, 2013 7:51 AM To: nginx-devel at nginx.org Subject: Re: Handlers You can use an accept handler to do that.
something like this: ngx_http_handler_pt *h; ngx_http_core_main_conf_t *cmcf; cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module); h = ngx_array_push(&cmcf->phases[NGX_HTTP_ACCESS_PHASE].handlers); if (h == NULL) { return NGX_ERROR; } *h = your_handler_function; and in your_handler_function you record the request and return NGX_OK, which allows the request to continue to the next handler. On Mon, Aug 12, 2013 at 9:11 PM, Myla John-B22173 > wrote: Hi, I am VERY new to NGINX and trying to understand how it works. I understand there are 3 main components, handlers, filters and load balancers, and there can be multiple "Filters". My question is, can we have multiple "Handlers"? I want to develop a module, which needs to capture all the requests, record these requests and continue the normal processing. Regards, John _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From wandenberg at gmail.com Wed Aug 14 12:54:22 2013 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Wed, 14 Aug 2013 09:54:22 -0300 Subject: Handlers In-Reply-To: References: Message-ID: Hi John, no, you don't have to change any code in the nginx core, if that was your doubt. You have to create your own module (take a look at this page as a guide, http://www.evanmiller.org/nginx-modules-guide.html). In the "handler installation" part, instead of doing what the guide says, use the code I sent. In this code you are registering one more handler at the access phase (pushing one more position onto the array with "h = ngx_array_push") and telling nginx that this handler will be the function you have developed. With that, nginx will execute any access handler configured by other modules, including yours, and then execute the final handler.
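For illustration, the body of such a handler might look like this (a minimal sketch that only logs the request line; "your_handler_function" is the placeholder name from the registration snippet above, not a real nginx symbol):

```c
/* hypothetical access-phase handler: record the request and let
 * processing continue */
static ngx_int_t
your_handler_function(ngx_http_request_t *r)
{
    /* "record" the request; here we simply log the request line */
    ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
                  "recorded request \"%V\"", &r->request_line);

    /* NGX_OK lets the request proceed; returning NGX_DECLINED would
     * also pass control on, without counting as an access decision */
    return NGX_OK;
}
```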
Regards, Wandenberg On Tue, Aug 13, 2013 at 9:04 PM, Myla John-B22173 wrote: > Hi Wandenberg Peixoto, > > I appreciate your response. > > I have a simple follow-up question. I have looked at the function > ngx_http_access_handler and it is checking for access privileges. > > Do you suggest to overwrite this functionality or add my code here > (calling my routine before exiting the function)? > > Regards, > John > > From: nginx-devel-bounces at nginx.org [mailto: > nginx-devel-bounces at nginx.org] On Behalf Of Wandenberg Peixoto > Sent: Tuesday, August 13, 2013 7:51 AM > To: nginx-devel at nginx.org > Subject: Re: Handlers > > You can use an accept handler to do that. > something like this: > > ngx_http_handler_pt *h; > ngx_http_core_main_conf_t *cmcf; > > cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module); > > h = ngx_array_push(&cmcf->phases[NGX_HTTP_ACCESS_PHASE].handlers); > if (h == NULL) { > return NGX_ERROR; > } > > *h = your_handler_function; > > and in your_handler_function you record the request and return NGX_OK, > which allows the request to continue to the next handler. > > On Mon, Aug 12, 2013 at 9:11 PM, Myla John-B22173 > wrote: > > Hi, > > I am VERY new to NGINX and trying to understand how it works. I understand > there are 3 main components, handlers, filters and load balancers, and there > can be multiple "Filters". > > My question is, can we have multiple "Handlers"? 
I want to develop a > module, which needs to capture all the requests, record these > requests and continue the normal processing. > > Regards, > John > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Thu Aug 15 10:29:10 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 15 Aug 2013 14:29:10 +0400 Subject: libgd recent version In-Reply-To: <1414613783.20130809222125@softsearch.ru> References: <1414613783.20130809222125@softsearch.ru> Message-ID: <20130815102910.GD64735@lo0.su> On Fri, Aug 09, 2013 at 10:21:25PM +0400, ?????? ??????? wrote: > ????????????. > > ?? ???????? > http://nginx.org/ru/docs/http/ngx_http_image_filter_module.html > ??????? ?????????? ????????? ?????? libgd. ? ?????? ??? ?????????? ?????? ?????? ???????????? ??? ?????????? ? ?????-???? ?????????? ?????? ?????????? libgd. From vbart at nginx.com Thu Aug 15 15:18:26 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 15 Aug 2013 15:18:26 +0000 Subject: [nginx] Unbreak building on Linux without sendfile64() support. Message-ID: details: http://hg.nginx.org/nginx/rev/bdb3588681c9 branches: changeset: 5322:bdb3588681c9 user: Valentin Bartenev date: Thu Aug 15 19:14:33 2013 +0400 description: Unbreak building on Linux without sendfile64() support. It was broken in 8e446a2daf48, when an NGX_SENDFILE_LIMIT constant was added to ngx_linux_sendfile_chain.c with the same name as the one already defined in ngx_linux_config.h. The newer one is needed to overcome a bug in old Linux kernels by limiting the number of bytes to send per sendfile() syscall.
The older one is used with sendfile() on ancient kernels that work with 32-bit offsets only. The newer one has been renamed to NGX_SENDFILE_MAXSIZE. diffstat: src/os/unix/ngx_linux_sendfile_chain.c | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diffs (23 lines): diff -r 9806f7932474 -r bdb3588681c9 src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Tue Aug 13 17:47:04 2013 +0400 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Thu Aug 15 19:14:33 2013 +0400 @@ -24,7 +24,7 @@ * so we limit it to 2G-1 bytes. */ -#define NGX_SENDFILE_LIMIT 2147483647L +#define NGX_SENDFILE_MAXSIZE 2147483647L #if (IOV_MAX > 64) @@ -63,8 +63,8 @@ ngx_linux_sendfile_chain(ngx_connection_ /* the maximum limit size is 2G-1 - the page size */ - if (limit == 0 || limit > (off_t) (NGX_SENDFILE_LIMIT - ngx_pagesize)) { - limit = NGX_SENDFILE_LIMIT - ngx_pagesize; + if (limit == 0 || limit > (off_t) (NGX_SENDFILE_MAXSIZE - ngx_pagesize)) { + limit = NGX_SENDFILE_MAXSIZE - ngx_pagesize; } From vbart at nginx.com Thu Aug 15 15:18:28 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 15 Aug 2013 15:18:28 +0000 Subject: [nginx] SPDY: fixed corruption of headers with names longer than... Message-ID: details: http://hg.nginx.org/nginx/rev/2be1a9ce9d8e branches: changeset: 5323:2be1a9ce9d8e user: Valentin Bartenev date: Thu Aug 15 19:14:58 2013 +0400 description: SPDY: fixed corruption of headers with names longer than 255. It is a bad idea to put a zero byte in the position where the length of the next header name is stored before it has been parsed.
diffstat: src/http/ngx_http_spdy.c | 18 ++++++++++++++++-- 1 files changed, 16 insertions(+), 2 deletions(-) diffs (69 lines): diff -r bdb3588681c9 -r 2be1a9ce9d8e src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Thu Aug 15 19:14:33 2013 +0400 +++ b/src/http/ngx_http_spdy.c Thu Aug 15 19:14:58 2013 +0400 @@ -809,6 +809,8 @@ ngx_http_spdy_state_headers(ngx_http_spd sc->zstream_in.next_in = pos; sc->zstream_in.avail_in = size; sc->zstream_in.next_out = buf->last; + + /* one byte is reserved for null-termination of the last header value */ sc->zstream_in.avail_out = buf->end - buf->last - 1; z = inflate(&sc->zstream_in, Z_NO_FLUSH); @@ -912,9 +914,14 @@ ngx_http_spdy_state_headers(ngx_http_spd return ngx_http_spdy_state_headers_error(sc, pos, end); } + /* null-terminate the last processed header name or value */ + *buf->pos = '\0'; + buf = r->header_in; sc->zstream_in.next_out = buf->last; + + /* one byte is reserved for null-termination */ sc->zstream_in.avail_out = buf->end - buf->last - 1; z = inflate(&sc->zstream_in, Z_NO_FLUSH); @@ -996,6 +1003,9 @@ ngx_http_spdy_state_headers(ngx_http_spd ngx_http_spdy_state_headers); } + /* null-terminate the last header value */ + *buf->pos = '\0'; + ngx_http_spdy_run_request(r); return ngx_http_spdy_state_complete(sc, pos, end); @@ -1936,6 +1946,9 @@ ngx_http_spdy_parse_header(ngx_http_requ return NGX_HTTP_PARSE_INVALID_HEADER; } + /* null-terminate the previous header value */ + *p = '\0'; + p += NGX_SPDY_NV_NLEN_SIZE; r->header_name_end = p + len; @@ -2005,6 +2018,9 @@ ngx_http_spdy_parse_header(ngx_http_requ return NGX_ERROR; } + /* null-terminate header name */ + *p = '\0'; + p += NGX_SPDY_NV_VLEN_SIZE; r->header_end = p + len; @@ -2163,11 +2179,9 @@ ngx_http_spdy_handle_request_header(ngx_ h->key.len = r->lowcase_index; h->key.data = r->header_name_start; - h->key.data[h->key.len] = '\0'; h->value.len = r->header_size; h->value.data = r->header_start; - h->value.data[h->value.len] = '\0'; h->lowcase_key = 
h->key.data; From vbart at nginx.com Thu Aug 15 15:18:29 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 15 Aug 2013 15:18:29 +0000 Subject: [nginx] SPDY: do not reject headers with empty value (ticket #396). Message-ID: details: http://hg.nginx.org/nginx/rev/8ef1722143dc branches: changeset: 5324:8ef1722143dc user: Valentin Bartenev date: Thu Aug 15 19:16:09 2013 +0400 description: SPDY: do not reject headers with empty value (ticket #396). A quote from SPDY draft 2 specification: "The length of each name and value must be greater than zero. A receiver of a zero-length name or value must send a RST_STREAM with code PROTOCOL error." But it appears that Chrome browser allows sending requests over SPDY/2 connection using JavaScript that contain headers with empty values. For better compatibility across SPDY clients and to be compliant with HTTP, such headers are no longer rejected. Also, it is worth noting that in SPDY draft 3 the statement has been changed so that it permits empty values for headers. diffstat: src/http/ngx_http_spdy.c | 4 ---- 1 files changed, 0 insertions(+), 4 deletions(-) diffs (14 lines): diff -r 2be1a9ce9d8e -r 8ef1722143dc src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Thu Aug 15 19:14:58 2013 +0400 +++ b/src/http/ngx_http_spdy.c Thu Aug 15 19:16:09 2013 +0400 @@ -2014,10 +2014,6 @@ ngx_http_spdy_parse_header(ngx_http_requ len = ngx_spdy_frame_parse_uint16(p); - if (!len) { - return NGX_ERROR; - } - /* null-terminate header name */ *p = '\0'; From vbart at nginx.com Thu Aug 15 15:18:30 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 15 Aug 2013 15:18:30 +0000 Subject: [nginx] SPDY: alert about activated fake events instead of delet... Message-ID: details: http://hg.nginx.org/nginx/rev/abf7813b927e branches: changeset: 5325:abf7813b927e user: Valentin Bartenev date: Thu Aug 15 19:16:12 2013 +0400 description: SPDY: alert about activated fake events instead of deleting them. 
They refer to the same socket descriptor as our real connection, and deleting them will stop processing of the connection. Events of fake connections must not be activated, and if it happened there is nothing we can do. The whole processing should be terminated as soon as possible, but it is not obvious how to do this safely. diffstat: src/http/ngx_http_spdy.c | 6 ++++-- 1 files changed, 4 insertions(+), 2 deletions(-) diffs (23 lines): diff -r 8ef1722143dc -r abf7813b927e src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Thu Aug 15 19:16:09 2013 +0400 +++ b/src/http/ngx_http_spdy.c Thu Aug 15 19:16:12 2013 +0400 @@ -2663,7 +2663,8 @@ ngx_http_spdy_close_stream(ngx_http_spdy ev = fc->read; if (ev->active || ev->disabled) { - ngx_del_event(ev, NGX_READ_EVENT, 0); + ngx_log_error(NGX_LOG_ALERT, sc->connection->log, 0, + "spdy fake read event was activated"); } if (ev->timer_set) { @@ -2677,7 +2678,8 @@ ngx_http_spdy_close_stream(ngx_http_spdy ev = fc->write; if (ev->active || ev->disabled) { - ngx_del_event(ev, NGX_WRITE_EVENT, 0); + ngx_log_error(NGX_LOG_ALERT, sc->connection->log, 0, + "spdy fake write event was activated"); } if (ev->timer_set) { From B22173 at freescale.com Thu Aug 15 16:54:51 2013 From: B22173 at freescale.com (Myla John-B22173) Date: Thu, 15 Aug 2013 16:54:51 +0000 Subject: SAML2.0 support in NGINX Message-ID: Hi, Is there any SAML2.0 module available for NGINX? Regards, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From drew.abbot at acision.com Thu Aug 15 21:25:24 2013 From: drew.abbot at acision.com (Abbot, Drew) Date: Thu, 15 Aug 2013 23:25:24 +0200 Subject: help with issue related to ngx_peer_connection_t, and ngx_unix_recv() returning NGX_AGAIN Message-ID: Greetings nginx developers, I work at Acision, and we make use of nginx, especially its mail module, which we have added considerable code to. 
I'm currently experiencing issues related to non-blocking sockets and ngx_unix_recv() returning NGX_AGAIN which I can't make sense of, and I was wondering if anyone could help. Forgive me if these questions are answered somewhere in this list, or elsewhere online -- if so, I cannot find them. So, my chief goal is to know the proper way to simply create a new ngx_connection_t (specifically, with non-blocking sockets and in the mail module), which gets properly scheduled via the "nginx event queue" -- that is, to create an ngx_connection_t such that, when data arrives on the socket, the event engine calls my read handler, and when data is to be written, the event engine calls my write handler. To achieve that goal (to not only create an ngx_connection_t, but also have it "scheduled" in the "proper nginx style"), I have thus far used ngx_event_connect_peer(), passing it the address of a local ngx_peer_connection_t variable, like so: ngx_peer_connection_t peer; ngx_str_t peer_name = ngx_string("MyName"); ... /* build peer */ peer.sockaddr = (struct sockaddr *) saddr; peer.socklen = sizeof(struct sockaddr_in); peer.name = &peer_name; peer.get = ngx_event_get_peer; peer.log = s->connection->log; peer.log_error = NGX_ERROR; rc = ngx_event_connect_peer(&peer); if (rc == NGX_ERROR || rc == NGX_BUSY || rc == NGX_DECLINED) /* error */ peer.connection->data = MyData; peer.connection->pool = s->connection->pool; peer.connection->read->handler = my_read_handler; peer.connection->write->handler = my_write_handler; Using that approach, ngx_event_connect_peer() creates the ngx_connection_t as peer.connection, and by using ngx_post_event() on peer.connection->read and peer.connection->write, I've been able to force my handlers to hit, and also, the appropriate handler (read or write) seems to be called when data is to be sent or received. That approach seemed ok at first, but I've been noticing strange behavior on the non-blocking sockets within my_read_handler(). 
In particular, I call ngx_unix_recv() in my_read_handler() to actually receive data, but in any given invocation of my_read_handler(), the first call to ngx_unix_recv() never ends up reading more than 128 bytes, and the second call always returns NGX_AGAIN, as if no more data were available at that time. However, I know more data is available! For example, if I continue to call ngx_unix_recv() in a loop, ngx_unix_recv() will return NGX_AGAIN forever, even though more data will definitely be available! It's not until a separate invocation of my_read_handler() in the future occurs that ngx_unix_recv() doesn't return NGX_AGAIN, but again, the first call returns at most 128 bytes and the second call returns NGX_AGAIN. And, this pattern continues. The result is that for very large transfers, the data transfer is very slow! Is there something I can do to avoid that behavior? What am I doing wrong? Do I really want to be using ngx_event_connect_peer() and an ngx_peer_connection_t to achieve my goal of creating a new nginx connection, which gets scheduled via the event queue? Thanks a bunch, Drew Abbot, Acision ________________________________ This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you for understanding. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From drew.abbot at acision.com Fri Aug 16 15:56:43 2013 From: drew.abbot at acision.com (Abbot, Drew) Date: Fri, 16 Aug 2013 17:56:43 +0200 Subject: help with issue related to ngx_peer_connection_t, and ngx_unix_recv() returning NGX_AGAIN In-Reply-To: References: Message-ID: Update: I've realized that by setting peer.rcvbuf such that the setsockopt(s, SOL_SOCKET, SO_RCVBUF, ..) call in the beginning of ngx_event_connect_peer() is hit, my socket buffer size can be made larger, and the effect of NGX_AGAIN being returned on the 2nd call isn't really significant anymore, as an acceptable data transfer rate can now be achieved. But, just curious, if anyone can explain why NGX_AGAIN (and, internally, EAGAIN from recv()) is being returned on the 2nd call, I'd love to know! From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Abbot, Drew Sent: Thursday, August 15, 2013 5:25 PM To: nginx-devel at nginx.org Subject: help with issue related to ngx_peer_connection_t, and ngx_unix_recv() returning NGX_AGAIN Greetings nginx developers, I work at Acision, and we make use of nginx, especially its mail module, which we have added considerable code to. 
To achieve that goal (to not only create an ngx_connection_t, but also have it "scheduled" in the "proper ngnix style"), I have thus far used ngx_event_connect_peer(), passing it the address of a local ngx_peer_connection_t variable, like so: ngx_peer_connection_t peer; ngx_str_t peer_name = {13, (u_char*)"MyName"}; ... /* build peer */ peer.sockaddr = (struct sockaddr *) saddr; peer.socklen = sizeof(struct sockaddr_in); peer.name = &peer_name; peer.get = ngx_event_get_peer; peer.log = s->connection->log; peer.log_error = NGX_ERROR; rc = ngx_event_connect_peer(&peer); if (rc == NGX_ERROR || rc == NGX_BUSY || rc == NGX_DECLINED) /* error */ peer.connection->data = MyData; peer.connection->pool = s->connection->pool; peer.connection->read->handler = my_read_handler; peer.connection->write->handler = my_write_handler; Using that approach, ngx_event_connect_peer() creates the ngx_connection_t as peer.connection, and by using ngx_post_event() on peer.connection->read and peer.connection->write, I've been able to force my handlers to hit, and also, the appropriate handler (read or write) seems to be called when data is to be sent or received. That approach seemed ok at first, but I've been noticing strange behavior on the non-blocking sockets within my_read_handler(). In particular, I call ngx_unix_recv() in my_read_handler() to actually receive data, but in any given invocation of my_read_handler(), the first call to ngx_unix_recv() never ends up reading more than 128 bytes, and the second call always returns NGX_AGAIN, as if no more data were available at that time. However, I know more data is available! For example, if I continue to call ngx_unix_recv() is a loop, ngx_unix_recv() will return NGX_AGAIN forever, even though more data will definitely be available! 
It's not until a separate invocation of my_read_handler() in the future occurs that ngx_unix_recv() doesn't return NGX_AGAIN, but again, the first call returns at most 128 bytes and the second call returns NGX_AGAIN. And, this pattern continues. The result is that for very large transfers, the data transfer is very slow! Is there something I can do to avoid that behavior? What am I doing wrong? Do I really want to be using ngx_event_connect_peer() and an ngx_peer_connection_t to achieve my goal of creating a new nginx connection, which gets scheduled via the event queue? Thanks a bunch, Drew Abbot, Acision -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Aug 17 00:46:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Aug 2013 04:46:55 +0400 Subject: cache for ngx_http_time() In-Reply-To: References: Message-ID: <20130817004654.GU2130@mdounin.ru> Hello! On Mon, Aug 05, 2013 at 09:06:57PM +0300, Serguei I. 
Ivantsov wrote: > ngx_http_time() is called once per request and it calls a heavy > ngx_sprintf() function. > Why not cache the output for one second (the resolution of time_t)? > I found a nice time caching framework in ngx_times.c, with slots and memory > barriers, but ngx_http_time() is not using it for some reason. > In this case, probably, it is easier to cache just one string buffer and > time_t value per process. The cache is for the current time, and it's used for the Date header (see ngx_cached_http_time). The ngx_http_time() function is for printing arbitrary times, like Expires or Last-Modified, and these are usually non-cacheable. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Aug 17 01:59:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Aug 2013 05:59:25 +0400 Subject: help with issue related to ngx_peer_connection_t, and ngx_unix_recv() returning NGX_AGAIN In-Reply-To: References: Message-ID: <20130817015925.GX2130@mdounin.ru> Hello! On Thu, Aug 15, 2013 at 11:25:24PM +0200, Abbot, Drew wrote: [...] > That approach seemed ok at first, but I've been noticing strange > behavior on the non-blocking sockets within my_read_handler(). > In particular, I call ngx_unix_recv() in my_read_handler() to > actually receive data, but in any given invocation of > my_read_handler(), the first call to ngx_unix_recv() never ends > up reading more than 128 bytes, and the second call always As your followup message suggests, 128 bytes is probably due to the socket buffer sizes you use. > returns NGX_AGAIN, as if no more data were available at that > time. However, I know more data is available! For example, if > I continue to call ngx_unix_recv() in a loop, ngx_unix_recv() > will return NGX_AGAIN forever, even though more data will > definitely be available! It's not until a separate invocation First of all, you shouldn't call ngx_unix_recv() directly - you should call c->recv() instead. 
Second, you are not expected to call c->recv() again after it returns NGX_AGAIN - you should call ngx_handle_read_event() instead, and wait for a read handler to be called again. If a call returns NGX_AGAIN, nginx in some cases will just return NGX_AGAIN on subsequent calls, without an actual recv() syscall - in particular, this happens with kqueue event method where number of bytes available is reported by kevent(). See src/os/unix/ngx_recv.c for details. -- Maxim Dounin http://nginx.org/en/donation.html From aaron.bedra at gmail.com Sat Aug 17 05:28:57 2013 From: aaron.bedra at gmail.com (Aaron Bedra) Date: Sat, 17 Aug 2013 00:28:57 -0500 Subject: Only fire a handler once Message-ID: I'm looking for a way to make sure a handler only fires once. For instance, in Apache, you can use the guard: if (!ap_is_initial_req(r)) { skip handling } Is there anything like this? I couldn't find any documentation for it. Thanks, Aaron -------------- next part -------------- An HTML attachment was scrubbed... URL: From wandenberg at gmail.com Sat Aug 17 17:26:14 2013 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Sat, 17 Aug 2013 14:26:14 -0300 Subject: stop timers and close connections on Nginx reload Message-ID: Hi, is there a way to, inside a module, be notified when the Nginx process received a signal to reload? I need to stop some timers and do cleanup routines when a worker is on "worker process is shutting down" state to allow it to completely stop as fast as possible. I don't want to start a periodic timer to check this, if there is another way. Regards, Wandenberg -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Aug 19 11:15:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Aug 2013 15:15:38 +0400 Subject: stop timers and close connections on Nginx reload In-Reply-To: References: Message-ID: <20130819111538.GE705@mdounin.ru> Hello! 
On Sat, Aug 17, 2013 at 02:26:14PM -0300, Wandenberg Peixoto wrote: > is there a way to, inside a module, be notified when the Nginx process > received a signal to reload? > I need to stop some timers and do cleanup routines when a worker is in > the "worker process is shutting down" state, to allow it to completely stop as > fast as possible. > > I don't want to start a periodic timer to check this, if there is another > way. If a connection's c->idle flag is set, its read event handler will be called on shutdown with c->close set to 1. This might not be very convenient if your timers aren't connection-related though. -- Maxim Dounin http://nginx.org/en/donation.html From aviram at adallom.com Mon Aug 19 14:17:24 2013 From: aviram at adallom.com (Aviram Cohen) Date: Mon, 19 Aug 2013 17:17:24 +0300 Subject: Upstream error handling issue Message-ID: Hello! I have encountered a potential bug in Nginx's upstream module - when the upstream server is an SSL server, if an error occurs in ngx_http_upstream_ssl_handshake(), the function ngx_http_run_posted_requests() is never called. This happens because, when initiating an SSL connection, the SSL module handles the handshake, not the upstream module (meaning ngx_http_upstream_handler() is not involved in the process), and so if an error occurs, there is no one who calls ngx_http_run_posted_requests(). The effect of this issue is that requests which "spawn" subrequests using the upstream get stuck in case of an SSL error. I can suggest two possible fixes (in the file ngx_http_upstream.c): - Add a call to ngx_http_run_posted_requests() to the end of ngx_http_upstream_finalize_request(). - Add a call to ngx_http_run_posted_requests() after calling ngx_http_upstream_finalize_request() during error handling of the SSL connection establishment. Can anyone verify this issue and the suggested solution? If so, I'll be more than happy to submit a patch. 
Best regards, Aviram -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Aug 19 15:17:45 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Aug 2013 19:17:45 +0400 Subject: Upstream error handling issue In-Reply-To: References: Message-ID: <20130819151745.GN705@mdounin.ru> Hello! On Mon, Aug 19, 2013 at 05:17:24PM +0300, Aviram Cohen wrote: > Hello! > > I have encountered a potential bug in Nginx's upstream module - > When the upstream server is an SSL server, if an error occurs in > ngx_http_upstream_ssl_handshake() - the > function ngx_http_run_posted_requests() is never called. > This happens when initiating an SSL connection, the SSL module handles the > handshake, and not the upstream module (meaning ngx_http_upstream_handler() > is not involved in the process), and so if an error occurs, there's no one > who calls ngx_http_run_posted_requests(). > > The effect of this issue is the requests that "spawn" subrequests that use > the upstream error get stuck in case of an SSL error. > I can suggest two possible fixes (in the file ngx_http_upstream.c): > - Add a call to ngx_http_run_posted_requests() to the end > of ngx_http_upstream_finalize_request(). > - Add a call to ngx_http_run_posted_requests() after calling > ngx_http_upstream_finalize_request() during error handling of the SSL > connection establishment. > > Can anyone verify this issue and the suggested solution? If so, I'll be > more than happy to submit a patch. Yes, it looks like there is a problem. The ngx_http_run_posted_requests() is usually called by an event handler, so adding the call in ngx_http_upstream_ssl_handshake() might be more appropriate. 
Something like this should fix the problem: diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1343,8 +1343,11 @@ ngx_http_upstream_ssl_handshake(ngx_conn return; } + c = r->connection; + ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); + ngx_http_run_posted_requests(c); } #endif -- Maxim Dounin http://nginx.org/en/donation.html From aviram at adallom.com Mon Aug 19 15:36:17 2013 From: aviram at adallom.com (Aviram Cohen) Date: Mon, 19 Aug 2013 18:36:17 +0300 Subject: Upstream error handling issue In-Reply-To: <20130819151745.GN705@mdounin.ru> References: <20130819151745.GN705@mdounin.ru> Message-ID: Hello! Regarding the patch - it seems that if ngx_http_upstream_send_request() fails (the call to the function is in line 1342), ngx_http_run_posted_requests() is still not called. Should a call to this function be added there as well? Regards On Mon, Aug 19, 2013 at 6:17 PM, Maxim Dounin wrote: > Hello! > > On Mon, Aug 19, 2013 at 05:17:24PM +0300, Aviram Cohen wrote: > > > Hello! > > > > I have encountered a potential bug in Nginx's upstream module - > > When the upstream server is an SSL server, if an error occurs in > > ngx_http_upstream_ssl_handshake() - the > > function ngx_http_run_posted_requests() is never called. > > This happens when initiating an SSL connection, the SSL module handles > the > > handshake, and not the upstream module (meaning > ngx_http_upstream_handler() > > is not involved in the process), and so if an error occurs, there's no > one > > who calls ngx_http_run_posted_requests(). > > > > The effect of this issue is the requests that "spawn" subrequests that > use > > the upstream error get stuck in case of an SSL error. > > I can suggest two possible fixes (in the file ngx_http_upstream.c): > > - Add a call to ngx_http_run_posted_requests() to the end > > of ngx_http_upstream_finalize_request(). 
> > - Add a call to ngx_http_run_posted_requests() after calling > > ngx_http_upstream_finalize_request() during error handling of the SSL > > connection establishment. > > > > Can anyone verify this issue and the suggested solution? If so, I'll be > > more than happy to submit a patch. > > Yes, it looks like there is a problem. > > The ngx_http_run_posted_requests() is usually called by an event > handler, so adding a call to ngx_http_upstream_ssl_handshake() > might be more appropriate. Something like this should fix the > problem: > > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c > +++ b/src/http/ngx_http_upstream.c > @@ -1343,8 +1343,11 @@ ngx_http_upstream_ssl_handshake(ngx_conn > return; > } > > + c = r->connection; > + > ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); > > + ngx_http_run_posted_requests(c); > } > > #endif > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Aug 19 16:01:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Aug 2013 20:01:57 +0400 Subject: Upstream error handling issue In-Reply-To: References: <20130819151745.GN705@mdounin.ru> Message-ID: <20130819160157.GR705@mdounin.ru> Hello! On Mon, Aug 19, 2013 at 06:36:17PM +0300, Aviram Cohen wrote: > Hello! > > Regarding the patch - it seems that if ngx_http_upstream_send_request() > fails (the call to the function is in line 1342), > ngx_http_run_posted_requests() is still not called. Should a call to this > function be added there as well? 
Yes, thanks, something like this should be better: --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1338,13 +1338,19 @@ ngx_http_upstream_ssl_handshake(ngx_conn c->write->handler = ngx_http_upstream_handler; c->read->handler = ngx_http_upstream_handler; + c = r->connection; + ngx_http_upstream_send_request(r, u); + ngx_http_run_posted_requests(c); return; } + c = r->connection; + ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); + ngx_http_run_posted_requests(c); } #endif > > Regards > > > On Mon, Aug 19, 2013 at 6:17 PM, Maxim Dounin wrote: > > > Hello! > > > > On Mon, Aug 19, 2013 at 05:17:24PM +0300, Aviram Cohen wrote: > > > > > Hello! > > > > > > I have encountered a potential bug in Nginx's upstream module - > > > When the upstream server is an SSL server, if an error occurs in > > > ngx_http_upstream_ssl_handshake() - the > > > function ngx_http_run_posted_requests() is never called. > > > This happens when initiating an SSL connection, the SSL module handles > > the > > > handshake, and not the upstream module (meaning > > ngx_http_upstream_handler() > > > is not involved in the process), and so if an error occurs, there's no > > one > > > who calls ngx_http_run_posted_requests(). > > > > > > The effect of this issue is the requests that "spawn" subrequests that > > use > > > the upstream error get stuck in case of an SSL error. > > > I can suggest two possible fixes (in the file ngx_http_upstream.c): > > > - Add a call to ngx_http_run_posted_requests() to the end > > > of ngx_http_upstream_finalize_request(). > > > - Add a call to ngx_http_run_posted_requests() after calling > > > ngx_http_upstream_finalize_request() during error handling of the SSL > > > connection establishment. > > > > > > Can anyone verify this issue and the suggested solution? If so, I'll be > > > more than happy to submit a patch. > > > > Yes, it looks like there is a problem. 
> > > > The ngx_http_run_posted_requests() is usually called by an event > > handler, so adding a call to ngx_http_upstream_ssl_handshake() > > might be more appropriate. Something like this should fix the > > problem: > > > > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > > --- a/src/http/ngx_http_upstream.c > > +++ b/src/http/ngx_http_upstream.c > > @@ -1343,8 +1343,11 @@ ngx_http_upstream_ssl_handshake(ngx_conn > > return; > > } > > > > + c = r->connection; > > + > > ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); > > > > + ngx_http_run_posted_requests(c); > > } > > > > #endif > > > > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/en/donation.html From aviram at adallom.com Tue Aug 20 12:33:43 2013 From: aviram at adallom.com (Aviram Cohen) Date: Tue, 20 Aug 2013 15:33:43 +0300 Subject: [PATCH] Proxy remote server SSL certificate verification Message-ID: Hello! Nginx's reverse proxy doesn't verify the SSL certificate of the remote server (see http://trac.nginx.org/nginx/ticket/13). The following is a suggested patch for v1.4.1 that adds this feature. It is partially inspired by the patch for v1.1.3 that has been suggested in this list and in the ticket above, with some improvements (i.e. no need to add the "verification_failed" field to ngx_ssl_connection_t). Note that a directory of CA's should be provided as a configuration parameter ("CApath"), and that this patch is missing a Certificate Revocation List file feature. Feedback would be welcome. 
Best regards, Aviram diff -Nrpu nginx-1.4.1/src/event/ngx_event_openssl.c nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.c --- nginx-1.4.1/src/event/ngx_event_openssl.c 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.c 2013-08-20 14:53:31.465251759 +0300 @@ -337,6 +337,31 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ ngx_int_t +ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert, + ngx_int_t depth) +{ + if (cert->len == 0) { + return NGX_OK; + } + + SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, ngx_http_ssl_verify_callback); + + SSL_CTX_set_verify_depth(ssl->ctx, depth); + + if (SSL_CTX_load_verify_locations(ssl->ctx, NULL, (char *) cert->data) + == 0) + { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "SSL_CTX_load_verify_locations(\"%s\") failed", + cert->data); + return NGX_ERROR; + } + + return NGX_OK; +} + + +ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, ngx_int_t depth) { @@ -710,6 +735,17 @@ ngx_ssl_set_session(ngx_connection_t *c, return NGX_OK; } + +ngx_int_t +ngx_ssl_verify_result(ngx_connection_t *c) +{ + if (SSL_get_verify_result(c->ssl->connection) != X509_V_OK) { + ngx_ssl_error(NGX_LOG_EMERG, c->log, 0, "SSL_get_verify_result failed"); + return NGX_ERROR; + } + return NGX_OK; +} + ngx_int_t ngx_ssl_handshake(ngx_connection_t *c) diff -Nrpu nginx-1.4.1/src/event/ngx_event_openssl.h nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.h --- nginx-1.4.1/src/event/ngx_event_openssl.h 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.h 2013-08-20 14:54:37.933252402 +0300 @@ -100,6 +100,8 @@ ngx_int_t ngx_ssl_init(ngx_log_t *log); ngx_int_t ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data); ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, ngx_str_t *key); +ngx_int_t ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert, + ngx_int_t depth); ngx_int_t 
ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, ngx_int_t depth); ngx_int_t ngx_ssl_trusted_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, @@ -155,6 +157,7 @@ ngx_int_t ngx_ssl_get_client_verify(ngx_ ngx_str_t *s); +ngx_int_t ngx_ssl_verify_result(ngx_connection_t *c); ngx_int_t ngx_ssl_handshake(ngx_connection_t *c); ssize_t ngx_ssl_recv(ngx_connection_t *c, u_char *buf, size_t size); ssize_t ngx_ssl_write(ngx_connection_t *c, u_char *data, size_t size); diff -Nrpu nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c nginx-1.4.1-proxy-ssl-verify/src/http/modules/ngx_http_proxy_module.c --- nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verify/src/http/modules/ngx_http_proxy_module.c 2013-08-20 14:56:24.001251235 +0300 @@ -511,6 +511,26 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse), NULL }, + { ngx_string("proxy_ssl_verify_peer"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_peer), + NULL }, + + { ngx_string("proxy_ssl_verify_depth"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth), + NULL }, + + { ngx_string("proxy_ssl_ca_certificate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_ca_certificate), + NULL }, #endif ngx_null_command @@ -2421,6 +2441,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ conf->upstream.intercept_errors = NGX_CONF_UNSET; #if (NGX_HTTP_SSL) conf->upstream.ssl_session_reuse = NGX_CONF_UNSET; + conf->upstream.ssl_verify_peer = NGX_CONF_UNSET; + conf->upstream.ssl_verify_depth = 
NGX_CONF_UNSET_UINT; #endif /* "proxy_cyclic_temp_file" is disabled */ @@ -2697,6 +2719,22 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t #if (NGX_HTTP_SSL) ngx_conf_merge_value(conf->upstream.ssl_session_reuse, prev->upstream.ssl_session_reuse, 1); + ngx_conf_merge_value(conf->upstream.ssl_verify_peer, + prev->upstream.ssl_verify_peer, 0); + ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth, + prev->upstream.ssl_verify_depth, 1); + ngx_conf_merge_str_value(conf->upstream.ssl_ca_certificate, + prev->upstream.ssl_ca_certificate, ""); + + if (conf->upstream.ssl_verify_peer) { + if (conf->upstream.ssl_ca_certificate.len == 0) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "no \"proxy_ssl_ca_certificate\" is defined for " + "the \"proxy_ssl_verify_peer\" directive"); + + return NGX_CONF_ERROR; + } + } #endif ngx_conf_merge_value(conf->redirect, prev->redirect, 1); diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.c nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.c --- nginx-1.4.1/src/http/ngx_http_upstream.c 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.c 2013-08-20 14:59:29.437251122 +0300 @@ -1281,6 +1281,15 @@ ngx_http_upstream_ssl_init_connection(ng { ngx_int_t rc; + if (ngx_ssl_set_verify_options(u->conf->ssl, + &u->conf->ssl_ca_certificate, u->conf->ssl_verify_depth) + != NGX_OK) + { + ngx_http_upstream_finalize_request(r, u, + NGX_HTTP_INTERNAL_SERVER_ERROR); + return; + } + if (ngx_ssl_create_connection(u->conf->ssl, c, NGX_SSL_BUFFER|NGX_SSL_CLIENT) != NGX_OK) @@ -1324,6 +1333,12 @@ ngx_http_upstream_ssl_handshake(ngx_conn u = r->upstream; if (c->ssl->handshaked) { + if (u->conf->ssl_verify_peer && ngx_ssl_verify_result(c) != NGX_OK) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, "upstream ssl certificate validation failed"); + ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); + goto fail; + } + if (u->conf->ssl_session_reuse) { u->peer.save_session(&u->peer, u->peer.data); @@ -1334,6 +1349,11 
@@ ngx_http_upstream_ssl_handshake(ngx_conn ngx_http_upstream_send_request(r, u); +fail: + c = r->connection; + + ngx_http_run_posted_requests(c); + return; } diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.h nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.h --- nginx-1.4.1/src/http/ngx_http_upstream.h 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.h 2013-08-20 15:00:10.281251422 +0300 @@ -191,6 +191,9 @@ typedef struct { #if (NGX_HTTP_SSL) ngx_ssl_t *ssl; ngx_flag_t ssl_session_reuse; + ngx_flag_t ssl_verify_peer; + ngx_uint_t ssl_verify_depth; + ngx_str_t ssl_ca_certificate; #endif ngx_str_t module; -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Aug 20 14:09:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Aug 2013 18:09:12 +0400 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: References: Message-ID: <20130820140912.GF19334@mdounin.ru> Hello! On Tue, Aug 20, 2013 at 03:33:43PM +0300, Aviram Cohen wrote: > Hello! > > Nginx's reverse proxy doesn't verify the SSL certificate of the remote > server (see http://trac.nginx.org/nginx/ticket/13). > > The following is a suggested patch for v1.4.1 that adds this feature. It is > partially inspired by the patch for v1.1.3 that has been suggested in this > list and in the ticket above, with some improvements (i.e. no need to add > the "verification_failed" field to ngx_ssl_connection_t). > > Note that a directory of CAs should be provided as a configuration > parameter ("CApath"), and that this patch is missing a Certificate > Revocation List file feature. It's probably a good idea to line up the implementation with ssl_verify_client. It might also be a good idea to reuse the ssl_trusted_certificate file as a source of trusted CA certs, not sure though. In any case naming should be consistent (that is, proxy_ssl_ca_certificate is a bad name). 
See below for some more comments. > > Feedback would be welcome. > > Best regards, > Aviram > > > diff -Nrpu nginx-1.4.1/src/event/ngx_event_openssl.c > nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.c > --- nginx-1.4.1/src/event/ngx_event_openssl.c 2013-05-06 13:26:50.000000000 > +0300 > +++ nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.c 2013-08-20 > 14:53:31.465251759 +0300 > @@ -337,6 +337,31 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ > > > ngx_int_t > +ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert, > + ngx_int_t depth) > +{ > + if (cert->len == 0) { > + return NGX_OK; > + } > + > + SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, > ngx_http_ssl_verify_callback); Just a side note: your mail client corrupts patches. > + > + SSL_CTX_set_verify_depth(ssl->ctx, depth); > + > + if (SSL_CTX_load_verify_locations(ssl->ctx, NULL, (char *) cert->data) > + == 0) > + { > + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, > + "SSL_CTX_load_verify_locations(\"%s\") failed", > + cert->data); > + return NGX_ERROR; > + } > + > + return NGX_OK; > +} > + > + > +ngx_int_t > ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, > ngx_int_t depth) > { > @@ -710,6 +735,17 @@ ngx_ssl_set_session(ngx_connection_t *c, > return NGX_OK; > } > > + > +ngx_int_t > +ngx_ssl_verify_result(ngx_connection_t *c) > +{ > + if (SSL_get_verify_result(c->ssl->connection) != X509_V_OK) { > + ngx_ssl_error(NGX_LOG_EMERG, c->log, 0, "SSL_get_verify_result > failed"); > + return NGX_ERROR; > + } > + return NGX_OK; > +} > + > > ngx_int_t > ngx_ssl_handshake(ngx_connection_t *c) > diff -Nrpu nginx-1.4.1/src/event/ngx_event_openssl.h > nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.h > --- nginx-1.4.1/src/event/ngx_event_openssl.h 2013-05-06 13:26:50.000000000 > +0300 > +++ nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.h 2013-08-20 > 14:54:37.933252402 +0300 > @@ -100,6 +100,8 @@ ngx_int_t ngx_ssl_init(ngx_log_t *log); > ngx_int_t 
ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data); > ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, > ngx_str_t *cert, ngx_str_t *key); > +ngx_int_t ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert, > + ngx_int_t depth); > ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, > ngx_str_t *cert, ngx_int_t depth); > ngx_int_t ngx_ssl_trusted_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, > @@ -155,6 +157,7 @@ ngx_int_t ngx_ssl_get_client_verify(ngx_ > ngx_str_t *s); > > > +ngx_int_t ngx_ssl_verify_result(ngx_connection_t *c); > ngx_int_t ngx_ssl_handshake(ngx_connection_t *c); > ssize_t ngx_ssl_recv(ngx_connection_t *c, u_char *buf, size_t size); > ssize_t ngx_ssl_write(ngx_connection_t *c, u_char *data, size_t size); > diff -Nrpu nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c > nginx-1.4.1-proxy-ssl-verify/src/http/modules/ngx_http_proxy_module.c > --- nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c 2013-05-06 > 13:26:50.000000000 +0300 > +++ nginx-1.4.1-proxy-ssl-verify/src/http/modules/ngx_http_proxy_module.c > 2013-08-20 > 14:56:24.001251235 +0300 > @@ -511,6 +511,26 @@ static ngx_command_t ngx_http_proxy_com > offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse), > NULL }, > > + { ngx_string("proxy_ssl_verify_peer"), > + > NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_flag_slot, > + NGX_HTTP_LOC_CONF_OFFSET, > + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_peer), > + NULL }, Just "proxy_ssl_verify" is probably enough. 
> + > + { ngx_string("proxy_ssl_verify_depth"), > + > NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_num_slot, > + NGX_HTTP_LOC_CONF_OFFSET, > + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth), > + NULL }, > + > + { ngx_string("proxy_ssl_ca_certificate"), > + > NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_str_slot, > + NGX_HTTP_LOC_CONF_OFFSET, > + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_ca_certificate), > + NULL }, > #endif See above. > > ngx_null_command > @@ -2421,6 +2441,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ > conf->upstream.intercept_errors = NGX_CONF_UNSET; > #if (NGX_HTTP_SSL) > conf->upstream.ssl_session_reuse = NGX_CONF_UNSET; > + conf->upstream.ssl_verify_peer = NGX_CONF_UNSET; > + conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT; > #endif > > /* "proxy_cyclic_temp_file" is disabled */ > @@ -2697,6 +2719,22 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t > #if (NGX_HTTP_SSL) > ngx_conf_merge_value(conf->upstream.ssl_session_reuse, > prev->upstream.ssl_session_reuse, 1); > + ngx_conf_merge_value(conf->upstream.ssl_verify_peer, > + prev->upstream.ssl_verify_peer, 0); > + ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth, > + prev->upstream.ssl_verify_depth, 1); > + ngx_conf_merge_str_value(conf->upstream.ssl_ca_certificate, > + prev->upstream.ssl_ca_certificate, ""); > + > + if (conf->upstream.ssl_verify_peer) { > + if (conf->upstream.ssl_ca_certificate.len == 0) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "no \"proxy_ssl_ca_certificate\" is defined > for " > + "the \"proxy_ssl_verify_peer\" directive"); > + > + return NGX_CONF_ERROR; > + } > + } > #endif > > ngx_conf_merge_value(conf->redirect, prev->redirect, 1); > diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.c > nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.c > --- nginx-1.4.1/src/http/ngx_http_upstream.c 2013-05-06 13:26:50.000000000 > +0300 > +++ 
nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.c 2013-08-20 > 14:59:29.437251122 +0300 > @@ -1281,6 +1281,15 @@ ngx_http_upstream_ssl_init_connection(ng > { > ngx_int_t rc; > > + if (ngx_ssl_set_verify_options(u->conf->ssl, > + &u->conf->ssl_ca_certificate, u->conf->ssl_verify_depth) > + != NGX_OK) > + { > + ngx_http_upstream_finalize_request(r, u, > + NGX_HTTP_INTERNAL_SERVER_ERROR); > + return; > + } > + Calling this on every connection attempt is silly. > if (ngx_ssl_create_connection(u->conf->ssl, c, > NGX_SSL_BUFFER|NGX_SSL_CLIENT) > != NGX_OK) > @@ -1324,6 +1333,12 @@ ngx_http_upstream_ssl_handshake(ngx_conn > u = r->upstream; > > if (c->ssl->handshaked) { > + if (u->conf->ssl_verify_peer && ngx_ssl_verify_result(c) != > NGX_OK) { > + ngx_log_error(NGX_LOG_ERR, c->log, 0, "upstream ssl > certificate validation failed"); > + ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); > + goto fail; > + } > + > > if (u->conf->ssl_session_reuse) { > u->peer.save_session(&u->peer, u->peer.data); > @@ -1334,6 +1349,11 @@ ngx_http_upstream_ssl_handshake(ngx_conn > > ngx_http_upstream_send_request(r, u); > > +fail: > + c = r->connection; > + > + ngx_http_run_posted_requests(c); The "c = r->connection;" part should be before the ngx_http_upstream_next() call where a request could be freed. 
> + > return; > } > > diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.h > nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.h > --- nginx-1.4.1/src/http/ngx_http_upstream.h 2013-05-06 13:26:50.000000000 > +0300 > +++ nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.h 2013-08-20 > 15:00:10.281251422 +0300 > @@ -191,6 +191,9 @@ typedef struct { > #if (NGX_HTTP_SSL) > ngx_ssl_t *ssl; > ngx_flag_t ssl_session_reuse; > + ngx_flag_t ssl_verify_peer; > + ngx_uint_t ssl_verify_depth; > + ngx_str_t ssl_ca_certificate; > #endif > > ngx_str_t module; > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Aug 20 16:44:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Aug 2013 16:44:54 +0000 Subject: [nginx] Whitespace fix. Message-ID: details: http://hg.nginx.org/nginx/rev/d22eb224aedf branches: changeset: 5326:d22eb224aedf user: Maxim Dounin date: Sat Aug 17 16:54:55 2013 +0400 description: Whitespace fix. diffstat: conf/nginx.conf | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (17 lines): diff --git a/conf/nginx.conf b/conf/nginx.conf --- a/conf/nginx.conf +++ b/conf/nginx.conf @@ -102,11 +102,11 @@ http { # ssl_certificate cert.pem; # ssl_certificate_key cert.key; - # ssl_session_cache shared:SSL:1m; + # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; - # ssl_prefer_server_ciphers on; + # ssl_prefer_server_ciphers on; # location / { # root html; From pluknet at nginx.com Tue Aug 20 16:51:37 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 20 Aug 2013 16:51:37 +0000 Subject: [nginx] Format specifier fixes in error logging. 
Message-ID: details: http://hg.nginx.org/nginx/rev/6b479db5b52b branches: changeset: 5327:6b479db5b52b user: Sergey Kandaurov date: Tue Aug 20 20:47:16 2013 +0400 description: Format specifier fixes in error logging. diffstat: src/core/ngx_open_file_cache.c | 2 +- src/event/modules/ngx_devpoll_module.c | 4 ++-- src/http/ngx_http_special_response.c | 2 +- src/http/ngx_http_variables.c | 2 +- src/os/unix/ngx_channel.c | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diffs (69 lines): diff -r d22eb224aedf -r 6b479db5b52b src/core/ngx_open_file_cache.c --- a/src/core/ngx_open_file_cache.c Sat Aug 17 16:54:55 2013 +0400 +++ b/src/core/ngx_open_file_cache.c Tue Aug 20 20:47:16 2013 +0400 @@ -124,7 +124,7 @@ ngx_open_file_cache_cleanup(void *data) if (cache->current) { ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0, - "%d items still leave in open file cache", + "%ui items still leave in open file cache", cache->current); } diff -r d22eb224aedf -r 6b479db5b52b src/event/modules/ngx_devpoll_module.c --- a/src/event/modules/ngx_devpoll_module.c Sat Aug 17 16:54:55 2013 +0400 +++ b/src/event/modules/ngx_devpoll_module.c Tue Aug 20 20:47:16 2013 +0400 @@ -425,7 +425,7 @@ ngx_devpoll_process_events(ngx_cycle_t * case -1: ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - "ioctl(DP_ISPOLLED) failed for socket %d, event", + "ioctl(DP_ISPOLLED) failed for socket %d, event %04Xd", fd, revents); break; @@ -449,7 +449,7 @@ ngx_devpoll_process_events(ngx_cycle_t * != (ssize_t) sizeof(struct pollfd)) { ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - "write(/dev/poll) for %d failed, fd"); + "write(/dev/poll) for %d failed", fd); } if (close(fd) == -1) { diff -r d22eb224aedf -r 6b479db5b52b src/http/ngx_http_special_response.c --- a/src/http/ngx_http_special_response.c Sat Aug 17 16:54:55 2013 +0400 +++ b/src/http/ngx_http_special_response.c Tue Aug 20 20:47:16 2013 +0400 @@ -370,7 +370,7 @@ ngx_http_special_response_handler(ngx_ht ngx_http_core_loc_conf_t *clcf; 
ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, - "http special response: %d, \"%V?%V\"", + "http special response: %i, \"%V?%V\"", error, &r->uri, &r->args); r->err_status = error; diff -r d22eb224aedf -r 6b479db5b52b src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Sat Aug 17 16:54:55 2013 +0400 +++ b/src/http/ngx_http_variables.c Tue Aug 20 20:47:16 2013 +0400 @@ -487,7 +487,7 @@ ngx_http_get_indexed_variable(ngx_http_r if (cmcf->variables.nelts <= index) { ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, - "unknown variable index: %d", index); + "unknown variable index: %ui", index); return NULL; } diff -r d22eb224aedf -r 6b479db5b52b src/os/unix/ngx_channel.c --- a/src/os/unix/ngx_channel.c Sat Aug 17 16:54:55 2013 +0400 +++ b/src/os/unix/ngx_channel.c Tue Aug 20 20:47:16 2013 +0400 @@ -144,7 +144,7 @@ ngx_read_channel(ngx_socket_t s, ngx_cha if ((size_t) n < sizeof(ngx_channel_t)) { ngx_log_error(NGX_LOG_ALERT, log, 0, - "recvmsg() returned not enough data: %uz", n); + "recvmsg() returned not enough data: %z", n); return NGX_ERROR; } From kuhnhenn.nils at gmail.com Wed Aug 21 08:20:52 2013 From: kuhnhenn.nils at gmail.com (Nils Kuhnhenn) Date: Wed, 21 Aug 2013 10:20:52 +0200 Subject: Very small http parser tweak In-Reply-To: References: Message-ID: The HTTP parser performance for HTTP methods that are 5 bytes long could be improved a little by inserting 'break' after each 'r->method = ...' in lines 212-224 of src/http/ngx_http_parse.c. From looking at the other code it looks like someone just forgot to insert them. I'm on my phone right now so sorry for providing no diff. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Aug 21 10:55:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Aug 2013 10:55:17 +0000 Subject: [nginx] Minor ngx_http_parse_request_line() optimization. 
Message-ID: details: http://hg.nginx.org/nginx/rev/17291cb8c76e branches: changeset: 5328:17291cb8c76e user: Maxim Dounin date: Wed Aug 21 12:51:31 2013 +0400 description: Minor ngx_http_parse_request_line() optimization. Noted by Nils Kuhnhenn. diffstat: src/http/ngx_http_parse.c | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diffs (21 lines): diff --git a/src/http/ngx_http_parse.c b/src/http/ngx_http_parse.c --- a/src/http/ngx_http_parse.c +++ b/src/http/ngx_http_parse.c @@ -212,14 +212,17 @@ ngx_http_parse_request_line(ngx_http_req case 5: if (ngx_str5cmp(m, 'M', 'K', 'C', 'O', 'L')) { r->method = NGX_HTTP_MKCOL; + break; } if (ngx_str5cmp(m, 'P', 'A', 'T', 'C', 'H')) { r->method = NGX_HTTP_PATCH; + break; } if (ngx_str5cmp(m, 'T', 'R', 'A', 'C', 'E')) { r->method = NGX_HTTP_TRACE; + break; } break; From mdounin at mdounin.ru Wed Aug 21 10:55:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Aug 2013 14:55:26 +0400 Subject: Very small http parser tweak In-Reply-To: References: Message-ID: <20130821105526.GK19334@mdounin.ru> Hello! On Wed, Aug 21, 2013 at 10:20:52AM +0200, Nils Kuhnhenn wrote: > The http parser performance for http methods that are 5 bytes long could be > improved a little by inserting 'break' after each 'r->method =...' in the > lines 212-224 in src/http/ngx_http_parse.c > > From looking at the other code it looks like someone just forgot to insert > them. Sure, thanks for noting this. -- Maxim Dounin http://nginx.org/en/donation.html From aviram at adallom.com Wed Aug 21 11:45:55 2013 From: aviram at adallom.com (Aviram Cohen) Date: Wed, 21 Aug 2013 14:45:55 +0300 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: <20130820140912.GF19334@mdounin.ru> References: <20130820140912.GF19334@mdounin.ru> Message-ID: Hello! Thank you for the useful feedback! I think it's better not to use ssl_trusted_certificate (for a more extensible solution). The following is the fixed patch. 
I've also attached it in case Gmail corrupts it. Thanks!

diff -Nrpu nginx-1.4.1/src/event/ngx_event_openssl.c nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.c
--- nginx-1.4.1/src/event/ngx_event_openssl.c	2013-05-06 13:26:50.000000000 +0300
+++ nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.c	2013-08-21 14:18:58.529251404 +0300
@@ -337,6 +337,31 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_
 
 
 ngx_int_t
+ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert,
+    ngx_int_t depth)
+{
+    if (cert->len == 0) {
+        return NGX_OK;
+    }
+
+    SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, ngx_http_ssl_verify_callback);
+
+    SSL_CTX_set_verify_depth(ssl->ctx, depth);
+
+    if (SSL_CTX_load_verify_locations(ssl->ctx, (char *) cert->data, NULL)
+        == 0)
+    {
+        ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0,
+                      "SSL_CTX_load_verify_locations(\"%s\") failed",
+                      cert->data);
+        return NGX_ERROR;
+    }
+
+    return NGX_OK;
+}
+
+
+ngx_int_t
 ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
     ngx_int_t depth)
 {
@@ -710,6 +735,17 @@ ngx_ssl_set_session(ngx_connection_t *c,
     return NGX_OK;
 }
 
+
+ngx_int_t
+ngx_ssl_verify_result(ngx_connection_t *c)
+{
+    if (SSL_get_verify_result(c->ssl->connection) != X509_V_OK) {
+        ngx_ssl_error(NGX_LOG_EMERG, c->log, 0, "SSL_get_verify_result failed");
+        return NGX_ERROR;
+    }
+    return NGX_OK;
+}
+
 
 ngx_int_t
 ngx_ssl_handshake(ngx_connection_t *c)
diff -Nrpu nginx-1.4.1/src/event/ngx_event_openssl.h nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.h
--- nginx-1.4.1/src/event/ngx_event_openssl.h	2013-05-06 13:26:50.000000000 +0300
+++ nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.h	2013-08-21 14:18:58.529251404 +0300
@@ -100,6 +100,8 @@ ngx_int_t ngx_ssl_init(ngx_log_t *log);
 ngx_int_t ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data);
 ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
     ngx_str_t *cert, ngx_str_t *key);
+ngx_int_t ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert,
+    ngx_int_t depth);
 ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
     ngx_str_t *cert, ngx_int_t depth);
 ngx_int_t ngx_ssl_trusted_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
@@ -155,6 +157,7 @@ ngx_int_t ngx_ssl_get_client_verify(ngx_
     ngx_str_t *s);
 
 
+ngx_int_t ngx_ssl_verify_result(ngx_connection_t *c);
 ngx_int_t ngx_ssl_handshake(ngx_connection_t *c);
 ssize_t ngx_ssl_recv(ngx_connection_t *c, u_char *buf, size_t size);
 ssize_t ngx_ssl_write(ngx_connection_t *c, u_char *data, size_t size);
diff -Nrpu nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c nginx-1.4.1-proxy-ssl-verify/src/http/modules/ngx_http_proxy_module.c
--- nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c	2013-05-06 13:26:50.000000000 +0300
+++ nginx-1.4.1-proxy-ssl-verify/src/http/modules/ngx_http_proxy_module.c	2013-08-21 14:18:58.517251370 +0300
@@ -511,6 +511,26 @@ static ngx_command_t  ngx_http_proxy_com
       offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse),
       NULL },
 
+    { ngx_string("proxy_ssl_verify"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_flag_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify),
+      NULL },
+
+    { ngx_string("proxy_ssl_verify_depth"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_num_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth),
+      NULL },
+
+    { ngx_string("proxy_ssl_certificate"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_str_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_certificate),
+      NULL },
 #endif
 
       ngx_null_command
@@ -2421,6 +2441,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_
     conf->upstream.intercept_errors = NGX_CONF_UNSET;
 #if (NGX_HTTP_SSL)
     conf->upstream.ssl_session_reuse = NGX_CONF_UNSET;
+    conf->upstream.ssl_verify = NGX_CONF_UNSET;
+    conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT;
 #endif
 
     /* "proxy_cyclic_temp_file" is disabled */
@@ -2697,6 +2719,22 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t
 #if (NGX_HTTP_SSL)
     ngx_conf_merge_value(conf->upstream.ssl_session_reuse,
                          prev->upstream.ssl_session_reuse, 1);
+    ngx_conf_merge_value(conf->upstream.ssl_verify,
+                         prev->upstream.ssl_verify, 0);
+    ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth,
+                              prev->upstream.ssl_verify_depth, 1);
+    ngx_conf_merge_str_value(conf->upstream.ssl_certificate,
+                             prev->upstream.ssl_certificate, "");
+
+    if (conf->upstream.ssl_verify) {
+        if (conf->upstream.ssl_certificate.len == 0) {
+            ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+                               "no \"proxy_ssl_certificate\" is defined for "
+                               "the \"proxy_ssl_verify\" directive");
+
+            return NGX_CONF_ERROR;
+        }
+    }
 #endif
 
     ngx_conf_merge_value(conf->redirect, prev->redirect, 1);
@@ -3748,6 +3786,13 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n
         != NGX_OK)
     {
         return NGX_ERROR;
+    }
+
+    if (ngx_ssl_set_verify_options(plcf->upstream.ssl,
+        &plcf->upstream.ssl_certificate, plcf->upstream.ssl_verify_depth)
+        != NGX_OK)
+    {
+        return NGX_ERROR;
     }
 
     cln = ngx_pool_cleanup_add(cf->pool, 0);
diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.c nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.c
--- nginx-1.4.1/src/http/ngx_http_upstream.c	2013-05-06 13:26:50.000000000 +0300
+++ nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.c	2013-08-21 14:18:58.521251394 +0300
@@ -1324,6 +1324,13 @@ ngx_http_upstream_ssl_handshake(ngx_conn
     u = r->upstream;
 
     if (c->ssl->handshaked) {
+        if (u->conf->ssl_verify && ngx_ssl_verify_result(c) != NGX_OK) {
+            ngx_log_error(NGX_LOG_ERR, c->log, 0, "upstream ssl certificate validation failed");
+            c = r->connection;
+            ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
+            goto fail;
+        }
+
 
         if (u->conf->ssl_session_reuse) {
             u->peer.save_session(&u->peer, u->peer.data);
@@ -1334,6 +1341,11 @@ ngx_http_upstream_ssl_handshake(ngx_conn
 
         ngx_http_upstream_send_request(r, u);
 
+        c = r->connection;
+
+fail:
+        ngx_http_run_posted_requests(c);
+
         return;
     }
diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.h nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.h
--- nginx-1.4.1/src/http/ngx_http_upstream.h	2013-05-06 13:26:50.000000000 +0300
+++ nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.h	2013-08-21 14:18:58.521251394 +0300
@@ -191,6 +191,9 @@ typedef struct {
 #if (NGX_HTTP_SSL)
     ngx_ssl_t       *ssl;
     ngx_flag_t       ssl_session_reuse;
+    ngx_flag_t       ssl_verify;
+    ngx_uint_t       ssl_verify_depth;
+    ngx_str_t        ssl_certificate;
 #endif
 
     ngx_str_t        module;

On Tue, Aug 20, 2013 at 5:09 PM, Maxim Dounin wrote:
> Hello!
>
> On Tue, Aug 20, 2013 at 03:33:43PM +0300, Aviram Cohen wrote:
>
>> Hello!
>>
>> Nginx's reverse proxy doesn't verify the SSL certificate of the remote
>> server (see http://trac.nginx.org/nginx/ticket/13).
>>
>> The following is a suggested patch for v1.4.1 that adds this feature. It is
>> partially inspired by the patch for v1.1.3 that has been suggested in this
>> list and in the ticket above, with some improvements (i.e. no need to add
>> the "verification_failed" field to ngx_ssl_connection_t).
>>
>> Note that a directory of CA's should be provided as a configuration
>> parameter ("CApath"), and that this patch is missing a Certificate
>> Revocation List file feature.
>
> It's probably good idea to line up the implementation with
> ssl_verify_client.
>
> It might be also a good idea to reuse ssl_trusted_certificate file
> as a source of trusted CA certs, not sure though.  In any case
> naming should be consistent (that is, proxy_ssl_ca_certificate is
> a bad name).
>
> See below for some more comments.
>
>> >> Best regards, >> Aviram >> >> >> diff -Nrpu nginx-1.4.1/src/event/ngx_event_openssl.c >> nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.c >> --- nginx-1.4.1/src/event/ngx_event_openssl.c 2013-05-06 13:26:50.000000000 >> +0300 >> +++ nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.c 2013-08-20 >> 14:53:31.465251759 +0300 >> @@ -337,6 +337,31 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ >> >> >> ngx_int_t >> +ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert, >> + ngx_int_t depth) >> +{ >> + if (cert->len == 0) { >> + return NGX_OK; >> + } >> + >> + SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, >> ngx_http_ssl_verify_callback); > > Just a side note: your mail client corrupts patches. > >> + >> + SSL_CTX_set_verify_depth(ssl->ctx, depth); >> + >> + if (SSL_CTX_load_verify_locations(ssl->ctx, NULL, (char *) cert->data) >> + == 0) >> + { >> + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, >> + "SSL_CTX_load_verify_locations(\"%s\") failed", >> + cert->data); >> + return NGX_ERROR; >> + } >> + >> + return NGX_OK; >> +} >> + >> + >> +ngx_int_t >> ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, >> ngx_int_t depth) >> { >> @@ -710,6 +735,17 @@ ngx_ssl_set_session(ngx_connection_t *c, >> return NGX_OK; >> } >> >> + >> +ngx_int_t >> +ngx_ssl_verify_result(ngx_connection_t *c) >> +{ >> + if (SSL_get_verify_result(c->ssl->connection) != X509_V_OK) { >> + ngx_ssl_error(NGX_LOG_EMERG, c->log, 0, "SSL_get_verify_result >> failed"); >> + return NGX_ERROR; >> + } >> + return NGX_OK; >> +} >> + >> >> ngx_int_t >> ngx_ssl_handshake(ngx_connection_t *c) >> diff -Nrpu nginx-1.4.1/src/event/ngx_event_openssl.h >> nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.h >> --- nginx-1.4.1/src/event/ngx_event_openssl.h 2013-05-06 13:26:50.000000000 >> +0300 >> +++ nginx-1.4.1-proxy-ssl-verify/src/event/ngx_event_openssl.h 2013-08-20 >> 14:54:37.933252402 +0300 >> @@ -100,6 +100,8 @@ ngx_int_t ngx_ssl_init(ngx_log_t *log); >> 
ngx_int_t ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data); >> ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, >> ngx_str_t *cert, ngx_str_t *key); >> +ngx_int_t ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert, >> + ngx_int_t depth); >> ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, >> ngx_str_t *cert, ngx_int_t depth); >> ngx_int_t ngx_ssl_trusted_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, >> @@ -155,6 +157,7 @@ ngx_int_t ngx_ssl_get_client_verify(ngx_ >> ngx_str_t *s); >> >> >> +ngx_int_t ngx_ssl_verify_result(ngx_connection_t *c); >> ngx_int_t ngx_ssl_handshake(ngx_connection_t *c); >> ssize_t ngx_ssl_recv(ngx_connection_t *c, u_char *buf, size_t size); >> ssize_t ngx_ssl_write(ngx_connection_t *c, u_char *data, size_t size); >> diff -Nrpu nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c >> nginx-1.4.1-proxy-ssl-verify/src/http/modules/ngx_http_proxy_module.c >> --- nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c 2013-05-06 >> 13:26:50.000000000 +0300 >> +++ nginx-1.4.1-proxy-ssl-verify/src/http/modules/ngx_http_proxy_module.c >> 2013-08-20 >> 14:56:24.001251235 +0300 >> @@ -511,6 +511,26 @@ static ngx_command_t ngx_http_proxy_com >> offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse), >> NULL }, >> >> + { ngx_string("proxy_ssl_verify_peer"), >> + >> NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, >> + ngx_conf_set_flag_slot, >> + NGX_HTTP_LOC_CONF_OFFSET, >> + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_peer), >> + NULL }, > > Just "proxy_ssl_verify" is probably enough. 
> >> + >> + { ngx_string("proxy_ssl_verify_depth"), >> + >> NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, >> + ngx_conf_set_num_slot, >> + NGX_HTTP_LOC_CONF_OFFSET, >> + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth), >> + NULL }, >> + >> + { ngx_string("proxy_ssl_ca_certificate"), >> + >> NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, >> + ngx_conf_set_str_slot, >> + NGX_HTTP_LOC_CONF_OFFSET, >> + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_ca_certificate), >> + NULL }, >> #endif > > See above. > >> >> ngx_null_command >> @@ -2421,6 +2441,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ >> conf->upstream.intercept_errors = NGX_CONF_UNSET; >> #if (NGX_HTTP_SSL) >> conf->upstream.ssl_session_reuse = NGX_CONF_UNSET; >> + conf->upstream.ssl_verify_peer = NGX_CONF_UNSET; >> + conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT; >> #endif >> >> /* "proxy_cyclic_temp_file" is disabled */ >> @@ -2697,6 +2719,22 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t >> #if (NGX_HTTP_SSL) >> ngx_conf_merge_value(conf->upstream.ssl_session_reuse, >> prev->upstream.ssl_session_reuse, 1); >> + ngx_conf_merge_value(conf->upstream.ssl_verify_peer, >> + prev->upstream.ssl_verify_peer, 0); >> + ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth, >> + prev->upstream.ssl_verify_depth, 1); >> + ngx_conf_merge_str_value(conf->upstream.ssl_ca_certificate, >> + prev->upstream.ssl_ca_certificate, ""); >> + >> + if (conf->upstream.ssl_verify_peer) { >> + if (conf->upstream.ssl_ca_certificate.len == 0) { >> + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, >> + "no \"proxy_ssl_ca_certificate\" is defined >> for " >> + "the \"proxy_ssl_verify_peer\" directive"); >> + >> + return NGX_CONF_ERROR; >> + } >> + } >> #endif >> >> ngx_conf_merge_value(conf->redirect, prev->redirect, 1); >> diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.c >> nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.c >> --- 
nginx-1.4.1/src/http/ngx_http_upstream.c 2013-05-06 13:26:50.000000000 >> +0300 >> +++ nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.c 2013-08-20 >> 14:59:29.437251122 +0300 >> @@ -1281,6 +1281,15 @@ ngx_http_upstream_ssl_init_connection(ng >> { >> ngx_int_t rc; >> >> + if (ngx_ssl_set_verify_options(u->conf->ssl, >> + &u->conf->ssl_ca_certificate, u->conf->ssl_verify_depth) >> + != NGX_OK) >> + { >> + ngx_http_upstream_finalize_request(r, u, >> + NGX_HTTP_INTERNAL_SERVER_ERROR); >> + return; >> + } >> + > > Calling this on every connection attempt is silly. > >> if (ngx_ssl_create_connection(u->conf->ssl, c, >> NGX_SSL_BUFFER|NGX_SSL_CLIENT) >> != NGX_OK) >> @@ -1324,6 +1333,12 @@ ngx_http_upstream_ssl_handshake(ngx_conn >> u = r->upstream; >> >> if (c->ssl->handshaked) { >> + if (u->conf->ssl_verify_peer && ngx_ssl_verify_result(c) != >> NGX_OK) { >> + ngx_log_error(NGX_LOG_ERR, c->log, 0, "upstream ssl >> certificate validation failed"); >> + ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); >> + goto fail; >> + } >> + >> >> if (u->conf->ssl_session_reuse) { >> u->peer.save_session(&u->peer, u->peer.data); >> @@ -1334,6 +1349,11 @@ ngx_http_upstream_ssl_handshake(ngx_conn >> >> ngx_http_upstream_send_request(r, u); >> >> +fail: >> + c = r->connection; >> + >> + ngx_http_run_posted_requests(c); > > The "c = r->connection;" part should be before the > ngx_http_upstream_next() call where a request could be freed. 
> >> + >> return; >> } >> >> diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.h >> nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.h >> --- nginx-1.4.1/src/http/ngx_http_upstream.h 2013-05-06 13:26:50.000000000 >> +0300 >> +++ nginx-1.4.1-proxy-ssl-verify/src/http/ngx_http_upstream.h 2013-08-20 >> 15:00:10.281251422 +0300 >> @@ -191,6 +191,9 @@ typedef struct { >> #if (NGX_HTTP_SSL) >> ngx_ssl_t *ssl; >> ngx_flag_t ssl_session_reuse; >> + ngx_flag_t ssl_verify_peer; >> + ngx_uint_t ssl_verify_depth; >> + ngx_str_t ssl_ca_certificate; >> #endif >> >> ngx_str_t module; > >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-1.4.1-proxy-ssl-verify.patch Type: application/octet-stream Size: 7445 bytes Desc: not available URL: From mdounin at mdounin.ru Wed Aug 21 14:30:33 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Aug 2013 18:30:33 +0400 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: References: <20130820140912.GF19334@mdounin.ru> Message-ID: <20130821143033.GP19334@mdounin.ru> Hello! On Wed, Aug 21, 2013 at 02:45:55PM +0300, Aviram Cohen wrote: > Hello! > > Thank you for the useful feedback! > I think it's better not to use ssl_trusted_certificate (for a more > extensible solution). > The following is the fixed patch. I've also attached it in case Gmail > corrupts it. [...] 
> ngx_int_t
> +ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert,
> +    ngx_int_t depth)
> +{
> +    if (cert->len == 0) {
> +        return NGX_OK;
> +    }
> +
> +    SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, ngx_http_ssl_verify_callback);
> +
> +    SSL_CTX_set_verify_depth(ssl->ctx, depth);
> +
> +    if (SSL_CTX_load_verify_locations(ssl->ctx, (char *) cert->data, NULL)
> +        == 0)
> +    {
> +        ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0,
> +                      "SSL_CTX_load_verify_locations(\"%s\") failed",
> +                      cert->data);
> +        return NGX_ERROR;
> +    }
> +
> +    return NGX_OK;
> +}

Even if you don't want to reuse the ssl_trusted_certificate value,
reusing the ngx_ssl_trusted_certificate() function might be a good
idea.  In particular, it would have saved you from a bug with
relative certificate name handling.

> +
> +
> +ngx_int_t
> ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
>     ngx_int_t depth)
> {
> @@ -710,6 +735,17 @@ ngx_ssl_set_session(ngx_connection_t *c,
>     return NGX_OK;
> }
>
> +
> +ngx_int_t
> +ngx_ssl_verify_result(ngx_connection_t *c)
> +{
> +    if (SSL_get_verify_result(c->ssl->connection) != X509_V_OK) {
> +        ngx_ssl_error(NGX_LOG_EMERG, c->log, 0, "SSL_get_verify_result failed");
> +        return NGX_ERROR;
> +    }
> +    return NGX_OK;
> +}
> +

Note: SSL_get_verify_result() is currently called directly by the
ngx_http_request.c code, and introducing a wrapper function for the
proxy code might not be a good idea.

[...]

> +    { ngx_string("proxy_ssl_certificate"),
> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_str_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_certificate),
> +      NULL },
> #endif

Still a bad name.  Such a name will easily cause confusion with a
client certificate the proxy may in theory use as well.  Something
like "proxy_ssl_trusted_certificate" is probably good enough.

[...]
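[Editor's note: for context, the directives under discussion would be used along these lines. This is a hypothetical sketch based on the patch in this thread; the directive names were still being debated at this point, and `backend.example.com` and the file paths are illustrative.]

```nginx
location / {
    proxy_pass https://backend.example.com;

    # Verify the upstream server's certificate (names per the patch;
    # the reviewer suggests "proxy_ssl_trusted_certificate" for the
    # CA-bundle directive instead).
    proxy_ssl_verify        on;
    proxy_ssl_verify_depth  2;

    # File with trusted CA certificates used for the verification.
    proxy_ssl_certificate   /etc/nginx/certs/upstream-ca.pem;
}
```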
> @@ -3748,6 +3786,13 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n
>         != NGX_OK)
>     {
>         return NGX_ERROR;
> +    }
> +
> +    if (ngx_ssl_set_verify_options(plcf->upstream.ssl,
> +        &plcf->upstream.ssl_certificate, plcf->upstream.ssl_verify_depth)
> +        != NGX_OK)
> +    {
> +        return NGX_ERROR;
>     }

This is called before the options used are correctly set.

(There is also a style problem here, but it doesn't really matter
as you'll have to rewrite the code anyway.)

[...]

> @@ -1334,6 +1341,11 @@ ngx_http_upstream_ssl_handshake(ngx_conn
>
>         ngx_http_upstream_send_request(r, u);
>
> +        c = r->connection;
> +
> +fail:
> +        ngx_http_run_posted_requests(c);
> +
>         return;
>     }
>

You probably missed my previous comment: you have a use-after-free
problem here.  Try triggering an error in
ngx_http_upstream_send_request() with NGX_DEBUG_MALLOC defined; it
should segfault.

[...]

-- 
Maxim Dounin
http://nginx.org/en/donation.html

From mdounin at mdounin.ru  Wed Aug 21 15:44:28 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 21 Aug 2013 15:44:28 +0000
Subject: [nginx] Auth request module import.
Message-ID:

details:   http://hg.nginx.org/nginx/rev/00bdc9f08a16
branches:
changeset: 5329:00bdc9f08a16
user:      Maxim Dounin
date:      Wed Aug 21 19:19:47 2013 +0400
description:
Auth request module import.
diffstat: auto/modules | 5 + auto/options | 3 + auto/sources | 4 + src/http/modules/ngx_http_auth_request_module.c | 444 ++++++++++++++++++++++++ 4 files changed, 456 insertions(+), 0 deletions(-) diffs (truncated from 505 to 300 lines): diff --git a/auto/modules b/auto/modules --- a/auto/modules +++ b/auto/modules @@ -220,6 +220,11 @@ if [ $HTTP_RANDOM_INDEX = YES ]; then HTTP_SRCS="$HTTP_SRCS $HTTP_RANDOM_INDEX_SRCS" fi +if [ $HTTP_AUTH_REQUEST = YES ]; then + HTTP_MODULES="$HTTP_MODULES $HTTP_AUTH_REQUEST_MODULE" + HTTP_SRCS="$HTTP_SRCS $HTTP_AUTH_REQUEST_SRCS" +fi + if [ $HTTP_AUTH_BASIC = YES ]; then USE_MD5=YES USE_SHA1=YES diff --git a/auto/options b/auto/options --- a/auto/options +++ b/auto/options @@ -71,6 +71,7 @@ HTTP_ADDITION=NO HTTP_DAV=NO HTTP_ACCESS=YES HTTP_AUTH_BASIC=YES +HTTP_AUTH_REQUEST=NO HTTP_USERID=YES HTTP_AUTOINDEX=YES HTTP_RANDOM_INDEX=NO @@ -215,6 +216,7 @@ do --with-http_mp4_module) HTTP_MP4=YES ;; --with-http_gunzip_module) HTTP_GUNZIP=YES ;; --with-http_gzip_static_module) HTTP_GZIP_STATIC=YES ;; + --with-http_auth_request_module) HTTP_AUTH_REQUEST=YES ;; --with-http_random_index_module) HTTP_RANDOM_INDEX=YES ;; --with-http_secure_link_module) HTTP_SECURE_LINK=YES ;; --with-http_degradation_module) HTTP_DEGRADATION=YES ;; @@ -363,6 +365,7 @@ cat << END --with-http_mp4_module enable ngx_http_mp4_module --with-http_gunzip_module enable ngx_http_gunzip_module --with-http_gzip_static_module enable ngx_http_gzip_static_module + --with-http_auth_request_module enable ngx_http_auth_request_module --with-http_random_index_module enable ngx_http_random_index_module --with-http_secure_link_module enable ngx_http_secure_link_module --with-http_degradation_module enable ngx_http_degradation_module diff --git a/auto/sources b/auto/sources --- a/auto/sources +++ b/auto/sources @@ -386,6 +386,10 @@ HTTP_AUTH_BASIC_MODULE=ngx_http_auth_bas HTTP_AUTH_BASIC_SRCS=src/http/modules/ngx_http_auth_basic_module.c 
+HTTP_AUTH_REQUEST_MODULE=ngx_http_auth_request_module +HTTP_AUTH_REQUEST_SRCS=src/http/modules/ngx_http_auth_request_module.c + + HTTP_AUTOINDEX_MODULE=ngx_http_autoindex_module HTTP_AUTOINDEX_SRCS=src/http/modules/ngx_http_autoindex_module.c diff --git a/src/http/modules/ngx_http_auth_request_module.c b/src/http/modules/ngx_http_auth_request_module.c new file mode 100644 --- /dev/null +++ b/src/http/modules/ngx_http_auth_request_module.c @@ -0,0 +1,444 @@ + +/* + * Copyright (C) Maxim Dounin + * Copyright (C) Nginx, Inc. + */ + + +#include +#include +#include + + +typedef struct { + ngx_str_t uri; + ngx_array_t *vars; +} ngx_http_auth_request_conf_t; + + +typedef struct { + ngx_uint_t done; + ngx_uint_t status; + ngx_http_request_t *subrequest; +} ngx_http_auth_request_ctx_t; + + +typedef struct { + ngx_int_t index; + ngx_http_complex_value_t value; + ngx_http_set_variable_pt set_handler; +} ngx_http_auth_request_variable_t; + + +static ngx_int_t ngx_http_auth_request_handler(ngx_http_request_t *r); +static ngx_int_t ngx_http_auth_request_done(ngx_http_request_t *r, + void *data, ngx_int_t rc); +static ngx_int_t ngx_http_auth_request_set_variables(ngx_http_request_t *r, + ngx_http_auth_request_conf_t *arcf, ngx_http_auth_request_ctx_t *ctx); +static ngx_int_t ngx_http_auth_request_variable(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); +static void *ngx_http_auth_request_create_conf(ngx_conf_t *cf); +static char *ngx_http_auth_request_merge_conf(ngx_conf_t *cf, + void *parent, void *child); +static ngx_int_t ngx_http_auth_request_init(ngx_conf_t *cf); +static char *ngx_http_auth_request(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); +static char *ngx_http_auth_request_set(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); + + +static ngx_command_t ngx_http_auth_request_commands[] = { + + { ngx_string("auth_request"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_http_auth_request, + 
NGX_HTTP_LOC_CONF_OFFSET, + 0, + NULL }, + + { ngx_string("auth_request_set"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2, + ngx_http_auth_request_set, + NGX_HTTP_LOC_CONF_OFFSET, + 0, + NULL }, + + ngx_null_command +}; + + +static ngx_http_module_t ngx_http_auth_request_module_ctx = { + NULL, /* preconfiguration */ + ngx_http_auth_request_init, /* postconfiguration */ + + NULL, /* create main configuration */ + NULL, /* init main configuration */ + + NULL, /* create server configuration */ + NULL, /* merge server configuration */ + + ngx_http_auth_request_create_conf, /* create location configuration */ + ngx_http_auth_request_merge_conf /* merge location configuration */ +}; + + +ngx_module_t ngx_http_auth_request_module = { + NGX_MODULE_V1, + &ngx_http_auth_request_module_ctx, /* module context */ + ngx_http_auth_request_commands, /* module directives */ + NGX_HTTP_MODULE, /* module type */ + NULL, /* init master */ + NULL, /* init module */ + NULL, /* init process */ + NULL, /* init thread */ + NULL, /* exit thread */ + NULL, /* exit process */ + NULL, /* exit master */ + NGX_MODULE_V1_PADDING +}; + + +static ngx_int_t +ngx_http_auth_request_handler(ngx_http_request_t *r) +{ + ngx_table_elt_t *h, *ho; + ngx_http_request_t *sr; + ngx_http_post_subrequest_t *ps; + ngx_http_auth_request_ctx_t *ctx; + ngx_http_auth_request_conf_t *arcf; + + arcf = ngx_http_get_module_loc_conf(r, ngx_http_auth_request_module); + + if (arcf->uri.len == 0) { + return NGX_DECLINED; + } + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "auth request handler"); + + ctx = ngx_http_get_module_ctx(r, ngx_http_auth_request_module); + + if (ctx != NULL) { + if (!ctx->done) { + return NGX_AGAIN; + } + + /* + * as soon as we are done - explicitly set variables to make + * sure they will be available after internal redirects + */ + + if (ngx_http_auth_request_set_variables(r, arcf, ctx) != NGX_OK) { + return NGX_ERROR; + } + + /* return appropriate 
status */ + + if (ctx->status == NGX_HTTP_FORBIDDEN) { + return ctx->status; + } + + if (ctx->status == NGX_HTTP_UNAUTHORIZED) { + sr = ctx->subrequest; + + h = sr->headers_out.www_authenticate; + + if (!h && sr->upstream) { + h = sr->upstream->headers_in.www_authenticate; + } + + if (h) { + ho = ngx_list_push(&r->headers_out.headers); + if (ho == NULL) { + return NGX_ERROR; + } + + *ho = *h; + + r->headers_out.www_authenticate = ho; + } + + return ctx->status; + } + + if (ctx->status >= NGX_HTTP_OK + && ctx->status < NGX_HTTP_SPECIAL_RESPONSE) + { + return NGX_OK; + } + + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "auth request unexpected status: %d", ctx->status); + + return NGX_HTTP_INTERNAL_SERVER_ERROR; + } + + ctx = ngx_pcalloc(r->pool, sizeof(ngx_http_auth_request_ctx_t)); + if (ctx == NULL) { + return NGX_ERROR; + } + + ps = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t)); + if (ps == NULL) { + return NGX_ERROR; + } + + ps->handler = ngx_http_auth_request_done; + ps->data = ctx; + + if (ngx_http_subrequest(r, &arcf->uri, NULL, &sr, ps, + NGX_HTTP_SUBREQUEST_WAITED) + != NGX_OK) + { + return NGX_ERROR; + } + + /* + * allocate fake request body to avoid attempts to read it and to make + * sure real body file (if already read) won't be closed by upstream + */ + + sr->request_body = ngx_pcalloc(r->pool, sizeof(ngx_http_request_body_t)); + if (sr->request_body == NULL) { + return NGX_ERROR; + } + + sr->header_only = 1; + + ctx->subrequest = sr; + + ngx_http_set_ctx(r, ctx, ngx_http_auth_request_module); + + return NGX_AGAIN; +} + + +static ngx_int_t +ngx_http_auth_request_done(ngx_http_request_t *r, void *data, ngx_int_t rc) +{ + ngx_http_auth_request_ctx_t *ctx = data; + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "auth request done s:%d", r->headers_out.status); + + ctx->done = 1; + ctx->status = r->headers_out.status; + + return rc; +} + + +static ngx_int_t +ngx_http_auth_request_set_variables(ngx_http_request_t *r, + 
ngx_http_auth_request_conf_t *arcf, ngx_http_auth_request_ctx_t *ctx) +{ + ngx_str_t val; + ngx_http_variable_t *v; + ngx_http_variable_value_t *vv; + ngx_http_auth_request_variable_t *av, *last; + ngx_http_core_main_conf_t *cmcf; From mdounin at mdounin.ru Wed Aug 21 15:51:50 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Aug 2013 15:51:50 +0000 Subject: [nginx] Backed out f1a91825730a and 7094bd12c1ff. Message-ID: details: http://hg.nginx.org/nginx/rev/314c3d7cc3a5 branches: changeset: 5330:314c3d7cc3a5 user: Maxim Dounin date: Tue Aug 20 21:11:19 2013 +0400 description: Backed out f1a91825730a and 7094bd12c1ff. While ngx_get_full_name() might have a bit more descriptive arguments, the ngx_conf_full_name() is generally easier to use when parsing configuration and limits exposure of cycle->prefix / cycle->conf_prefix details. diffstat: src/core/nginx.c | 10 +++------- src/core/ngx_conf_file.c | 14 ++++++++++++-- src/core/ngx_conf_file.h | 2 ++ src/core/ngx_file.c | 8 ++------ src/event/ngx_event_openssl.c | 12 ++++++------ src/event/ngx_event_openssl_stapling.c | 2 +- src/http/modules/ngx_http_geo_module.c | 2 +- src/http/modules/ngx_http_log_module.c | 4 +--- src/http/modules/ngx_http_xslt_filter_module.c | 2 +- src/http/modules/perl/ngx_http_perl_module.c | 4 +--- src/http/ngx_http_core_module.c | 8 ++------ src/http/ngx_http_file_cache.c | 4 +--- src/http/ngx_http_script.c | 7 +------ 13 files changed, 34 insertions(+), 45 deletions(-) diffs (286 lines): diff --git a/src/core/nginx.c b/src/core/nginx.c --- a/src/core/nginx.c +++ b/src/core/nginx.c @@ -897,9 +897,7 @@ ngx_process_options(ngx_cycle_t *cycle) ngx_str_set(&cycle->conf_file, NGX_CONF_PATH); } - if (ngx_get_full_name(cycle->pool, &cycle->prefix, &cycle->conf_file) - != NGX_OK) - { + if (ngx_conf_full_name(cycle, &cycle->conf_file, 0) != NGX_OK) { return NGX_ERROR; } @@ -1015,7 +1013,7 @@ ngx_core_module_init_conf(ngx_cycle_t *c ngx_str_set(&ccf->pid, NGX_PID_PATH); } - if 
(ngx_get_full_name(cycle->pool, &cycle->prefix, &ccf->pid) != NGX_OK) { + if (ngx_conf_full_name(cycle, &ccf->pid, 0) != NGX_OK) { return NGX_CONF_ERROR; } @@ -1063,9 +1061,7 @@ ngx_core_module_init_conf(ngx_cycle_t *c ngx_str_set(&ccf->lock_file, NGX_LOCK_PATH); } - if (ngx_get_full_name(cycle->pool, &cycle->prefix, &ccf->lock_file) - != NGX_OK) - { + if (ngx_conf_full_name(cycle, &ccf->lock_file, 0) != NGX_OK) { return NGX_CONF_ERROR; } diff --git a/src/core/ngx_conf_file.c b/src/core/ngx_conf_file.c --- a/src/core/ngx_conf_file.c +++ b/src/core/ngx_conf_file.c @@ -747,7 +747,7 @@ ngx_conf_include(ngx_conf_t *cf, ngx_com ngx_log_debug1(NGX_LOG_DEBUG_CORE, cf->log, 0, "include %s", file.data); - if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, &file) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, &file, 1) != NGX_OK) { return NGX_CONF_ERROR; } @@ -797,6 +797,16 @@ ngx_conf_include(ngx_conf_t *cf, ngx_com } +ngx_int_t +ngx_conf_full_name(ngx_cycle_t *cycle, ngx_str_t *name, ngx_uint_t conf_prefix) +{ + return ngx_get_full_name(cycle->pool, + conf_prefix ? 
&cycle->conf_prefix: + &cycle->prefix, + name); +} + + ngx_open_file_t * ngx_conf_open_file(ngx_cycle_t *cycle, ngx_str_t *name) { @@ -812,7 +822,7 @@ ngx_conf_open_file(ngx_cycle_t *cycle, n if (name->len) { full = *name; - if (ngx_get_full_name(cycle->pool, &cycle->prefix, &full) != NGX_OK) { + if (ngx_conf_full_name(cycle, &full, 0) != NGX_OK) { return NULL; } diff --git a/src/core/ngx_conf_file.h b/src/core/ngx_conf_file.h --- a/src/core/ngx_conf_file.h +++ b/src/core/ngx_conf_file.h @@ -311,6 +311,8 @@ char *ngx_conf_parse(ngx_conf_t *cf, ngx char *ngx_conf_include(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +ngx_int_t ngx_conf_full_name(ngx_cycle_t *cycle, ngx_str_t *name, + ngx_uint_t conf_prefix); ngx_open_file_t *ngx_conf_open_file(ngx_cycle_t *cycle, ngx_str_t *name); void ngx_cdecl ngx_conf_log_error(ngx_uint_t level, ngx_conf_t *cf, ngx_err_t err, const char *fmt, ...); diff --git a/src/core/ngx_file.c b/src/core/ngx_file.c --- a/src/core/ngx_file.c +++ b/src/core/ngx_file.c @@ -355,9 +355,7 @@ ngx_conf_set_path_slot(ngx_conf_t *cf, n path->name.len--; } - if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &path->name) - != NGX_OK) - { + if (ngx_conf_full_name(cf->cycle, &path->name, 0) != NGX_OK) { return NULL; } @@ -411,9 +409,7 @@ ngx_conf_merge_path_value(ngx_conf_t *cf (*path)->name = init->name; - if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &(*path)->name) - != NGX_OK) - { + if (ngx_conf_full_name(cf->cycle, &(*path)->name, 0) != NGX_OK) { return NGX_CONF_ERROR; } diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -240,7 +240,7 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ X509 *x509; u_long n; - if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, cert) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) { return NGX_ERROR; } @@ -319,7 +319,7 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ BIO_free(bio); - if 
(ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, key) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, key, 1) != NGX_OK) { return NGX_ERROR; } @@ -350,7 +350,7 @@ ngx_ssl_client_certificate(ngx_conf_t *c return NGX_OK; } - if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, cert) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) { return NGX_ERROR; } @@ -394,7 +394,7 @@ ngx_ssl_trusted_certificate(ngx_conf_t * return NGX_OK; } - if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, cert) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) { return NGX_ERROR; } @@ -421,7 +421,7 @@ ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *s return NGX_OK; } - if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, crl) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, crl, 1) != NGX_OK) { return NGX_ERROR; } @@ -587,7 +587,7 @@ ngx_ssl_dhparam(ngx_conf_t *cf, ngx_ssl_ return NGX_OK; } - if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, file) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, file, 1) != NGX_OK) { return NGX_ERROR; } diff --git a/src/event/ngx_event_openssl_stapling.c b/src/event/ngx_event_openssl_stapling.c --- a/src/event/ngx_event_openssl_stapling.c +++ b/src/event/ngx_event_openssl_stapling.c @@ -197,7 +197,7 @@ ngx_ssl_stapling_file(ngx_conf_t *cf, ng staple = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_stapling_index); - if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, file) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, file, 1) != NGX_OK) { return NGX_ERROR; } diff --git a/src/http/modules/ngx_http_geo_module.c b/src/http/modules/ngx_http_geo_module.c --- a/src/http/modules/ngx_http_geo_module.c +++ b/src/http/modules/ngx_http_geo_module.c @@ -1327,7 +1327,7 @@ ngx_http_geo_include(ngx_conf_t *cf, ngx ngx_sprintf(file.data, "%V.bin%Z", name); - if (ngx_get_full_name(cf->pool, &cf->cycle->conf_prefix, &file) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, &file, 1) != NGX_OK) { return NGX_CONF_ERROR; } 
diff --git a/src/http/modules/ngx_http_log_module.c b/src/http/modules/ngx_http_log_module.c --- a/src/http/modules/ngx_http_log_module.c +++ b/src/http/modules/ngx_http_log_module.c @@ -1134,9 +1134,7 @@ ngx_http_log_set_log(ngx_conf_t *cf, ngx } } else { - if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &value[1]) - != NGX_OK) - { + if (ngx_conf_full_name(cf->cycle, &value[1], 0) != NGX_OK) { return NGX_CONF_ERROR; } diff --git a/src/http/modules/ngx_http_xslt_filter_module.c b/src/http/modules/ngx_http_xslt_filter_module.c --- a/src/http/modules/ngx_http_xslt_filter_module.c +++ b/src/http/modules/ngx_http_xslt_filter_module.c @@ -892,7 +892,7 @@ ngx_http_xslt_stylesheet(ngx_conf_t *cf, ngx_memzero(sheet, sizeof(ngx_http_xslt_sheet_t)); - if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &value[1]) != NGX_OK) { + if (ngx_conf_full_name(cf->cycle, &value[1], 0) != NGX_OK) { return NGX_CONF_ERROR; } diff --git a/src/http/modules/perl/ngx_http_perl_module.c b/src/http/modules/perl/ngx_http_perl_module.c --- a/src/http/modules/perl/ngx_http_perl_module.c +++ b/src/http/modules/perl/ngx_http_perl_module.c @@ -485,9 +485,7 @@ ngx_http_perl_init_interpreter(ngx_conf_ if (pmcf->modules != NGX_CONF_UNSET_PTR) { m = pmcf->modules->elts; for (i = 0; i < pmcf->modules->nelts; i++) { - if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &m[i]) - != NGX_OK) - { + if (ngx_conf_full_name(cf->cycle, &m[i], 0) != NGX_OK) { return NGX_CONF_ERROR; } } diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -3686,9 +3686,7 @@ ngx_http_core_merge_loc_conf(ngx_conf_t if (prev->root.data == NULL) { ngx_str_set(&conf->root, "html"); - if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &conf->root) - != NGX_OK) - { + if (ngx_conf_full_name(cf->cycle, &conf->root, 0) != NGX_OK) { return NGX_CONF_ERROR; } } @@ -4430,9 +4428,7 @@ ngx_http_core_root(ngx_conf_t *cf, ngx_c } if 
(clcf->root.data[0] != '$') { - if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &clcf->root) - != NGX_OK) - { + if (ngx_conf_full_name(cf->cycle, &clcf->root, 0) != NGX_OK) { return NGX_CONF_ERROR; } } diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -1626,9 +1626,7 @@ ngx_http_file_cache_set_slot(ngx_conf_t cache->path->name.len--; } - if (ngx_get_full_name(cf->pool, &cf->cycle->prefix, &cache->path->name) - != NGX_OK) - { + if (ngx_conf_full_name(cf->cycle, &cache->path->name, 0) != NGX_OK) { return NGX_CONF_ERROR; } diff --git a/src/http/ngx_http_script.c b/src/http/ngx_http_script.c --- a/src/http/ngx_http_script.c +++ b/src/http/ngx_http_script.c @@ -131,12 +131,7 @@ ngx_http_compile_complex_value(ngx_http_ if ((v->len == 0 || v->data[0] != '$') && (ccv->conf_prefix || ccv->root_prefix)) { - if (ngx_get_full_name(ccv->cf->pool, - ccv->conf_prefix ? &ccv->cf->cycle->conf_prefix: - &ccv->cf->cycle->prefix, - v) - != NGX_OK) - { + if (ngx_conf_full_name(ccv->cf->cycle, v, ccv->conf_prefix) != NGX_OK) { return NGX_ERROR; } From mdounin at mdounin.ru Wed Aug 21 15:51:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Aug 2013 15:51:51 +0000 Subject: [nginx] Style improved after 12dd27b74117. Message-ID: details: http://hg.nginx.org/nginx/rev/e04083b79335 branches: changeset: 5331:e04083b79335 user: Maxim Dounin date: Tue Aug 20 21:33:43 2013 +0400 description: Style improved after 12dd27b74117. 
diffstat: src/core/ngx_conf_file.c | 9 +++++---- src/http/ngx_http_script.c | 13 +++++-------- 2 files changed, 10 insertions(+), 12 deletions(-) diffs (48 lines): diff --git a/src/core/ngx_conf_file.c b/src/core/ngx_conf_file.c --- a/src/core/ngx_conf_file.c +++ b/src/core/ngx_conf_file.c @@ -800,10 +800,11 @@ ngx_conf_include(ngx_conf_t *cf, ngx_com ngx_int_t ngx_conf_full_name(ngx_cycle_t *cycle, ngx_str_t *name, ngx_uint_t conf_prefix) { - return ngx_get_full_name(cycle->pool, - conf_prefix ? &cycle->conf_prefix: - &cycle->prefix, - name); + ngx_str_t *prefix; + + prefix = conf_prefix ? &cycle->conf_prefix : &cycle->prefix; + + return ngx_get_full_name(cycle->pool, prefix, name); } diff --git a/src/http/ngx_http_script.c b/src/http/ngx_http_script.c --- a/src/http/ngx_http_script.c +++ b/src/http/ngx_http_script.c @@ -1327,20 +1327,17 @@ ngx_http_script_full_name_code(ngx_http_ { ngx_http_script_full_name_code_t *code; - ngx_str_t value; + ngx_str_t value, *prefix; code = (ngx_http_script_full_name_code_t *) e->ip; value.data = e->buf.data; value.len = e->pos - e->buf.data; - if (ngx_get_full_name(e->request->pool, - code->conf_prefix - ? (ngx_str_t *) &ngx_cycle->conf_prefix: - (ngx_str_t *) &ngx_cycle->prefix, - &value) - != NGX_OK) - { + prefix = code->conf_prefix ? (ngx_str_t *) &ngx_cycle->conf_prefix: + (ngx_str_t *) &ngx_cycle->prefix; + + if (ngx_get_full_name(e->request->pool, prefix, &value) != NGX_OK) { e->ip = ngx_http_script_exit; e->status = NGX_HTTP_INTERNAL_SERVER_ERROR; return; From pluknet at nginx.com Wed Aug 21 16:24:35 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 21 Aug 2013 16:24:35 +0000 Subject: [nginx] Autoindex: return NGX_ERROR on error if headers were sent. Message-ID: details: http://hg.nginx.org/nginx/rev/1a9700ef9725 branches: changeset: 5332:1a9700ef9725 user: Sergey Kandaurov date: Tue Jul 30 11:43:21 2013 +0400 description: Autoindex: return NGX_ERROR on error if headers were sent. 
This prevents ngx_http_finalize_request() from issuing ngx_http_special_response_handler() on a freed context. diffstat: src/http/modules/ngx_http_autoindex_module.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (21 lines): diff -r e04083b79335 -r 1a9700ef9725 src/http/modules/ngx_http_autoindex_module.c --- a/src/http/modules/ngx_http_autoindex_module.c Tue Aug 20 21:33:43 2013 +0400 +++ b/src/http/modules/ngx_http_autoindex_module.c Tue Jul 30 11:43:21 2013 +0400 @@ -388,7 +388,7 @@ ngx_http_autoindex_handler(ngx_http_requ b = ngx_create_temp_buf(r->pool, len); if (b == NULL) { - return NGX_HTTP_INTERNAL_SERVER_ERROR; + return NGX_ERROR; } if (entries.nelts > 1) { @@ -649,7 +649,7 @@ ngx_http_autoindex_error(ngx_http_reques ngx_close_dir_n " \"%V\" failed", name); } - return NGX_HTTP_INTERNAL_SERVER_ERROR; + return r->header_sent ? NGX_ERROR : NGX_HTTP_INTERNAL_SERVER_ERROR; } From pluknet at nginx.com Wed Aug 21 16:24:36 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 21 Aug 2013 16:24:36 +0000 Subject: [nginx] Autoindex: improved ngx_de_info() error handling. Message-ID: details: http://hg.nginx.org/nginx/rev/e8bca8397625 branches: changeset: 5333:e8bca8397625 user: Sergey Kandaurov date: Tue Jul 30 11:43:21 2013 +0400 description: Autoindex: improved ngx_de_info() error handling. This allows to build a directory listing whenever a loop exists in symbolic link resolution of the path argument. 
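As a standalone illustration of the error-classification pattern in this commit — treat both "no such entry" (ENOENT) and "symlink loop" (ELOOP) as ignorable while still reporting anything else — here is a minimal sketch in plain C. This is not nginx source and the helper name is hypothetical:

```c
#include <errno.h>

/* Sketch (not nginx code): when ngx_de_info() fails for a directory
 * entry, the autoindex patch keeps building the listing for ENOENT and
 * now also ELOOP, and logs only other errno values at "crit" level. */
static int autoindex_error_is_ignorable(int err)
{
    return err == ENOENT || err == ELOOP;
}
```

Note that on win32, where ELOOP has no equivalent, the patch defines NGX_ELOOP as 0, so the extra comparison never matches a real error code there.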
diffstat: src/http/modules/ngx_http_autoindex_module.c | 2 +- src/os/unix/ngx_errno.h | 2 +- src/os/win32/ngx_errno.h | 1 + 3 files changed, 3 insertions(+), 2 deletions(-) diffs (38 lines): diff -r 1a9700ef9725 -r e8bca8397625 src/http/modules/ngx_http_autoindex_module.c --- a/src/http/modules/ngx_http_autoindex_module.c Tue Jul 30 11:43:21 2013 +0400 +++ b/src/http/modules/ngx_http_autoindex_module.c Tue Jul 30 11:43:21 2013 +0400 @@ -304,7 +304,7 @@ ngx_http_autoindex_handler(ngx_http_requ if (ngx_de_info(filename, &dir) == NGX_FILE_ERROR) { err = ngx_errno; - if (err != NGX_ENOENT) { + if (err != NGX_ENOENT && err != NGX_ELOOP) { ngx_log_error(NGX_LOG_CRIT, r->connection->log, err, ngx_de_info_n " \"%s\" failed", filename); diff -r 1a9700ef9725 -r e8bca8397625 src/os/unix/ngx_errno.h --- a/src/os/unix/ngx_errno.h Tue Jul 30 11:43:21 2013 +0400 +++ b/src/os/unix/ngx_errno.h Tue Jul 30 11:43:21 2013 +0400 @@ -49,10 +49,10 @@ typedef int ngx_err_t; #define NGX_ECANCELED ECANCELED #define NGX_EILSEQ EILSEQ #define NGX_ENOMOREFILES 0 +#define NGX_ELOOP ELOOP #if (NGX_HAVE_OPENAT) #define NGX_EMLINK EMLINK -#define NGX_ELOOP ELOOP #endif #if (__hpux__) diff -r 1a9700ef9725 -r e8bca8397625 src/os/win32/ngx_errno.h --- a/src/os/win32/ngx_errno.h Tue Jul 30 11:43:21 2013 +0400 +++ b/src/os/win32/ngx_errno.h Tue Jul 30 11:43:21 2013 +0400 @@ -51,6 +51,7 @@ typedef DWORD ngx_e #define NGX_EHOSTUNREACH WSAEHOSTUNREACH #define NGX_ENOMOREFILES ERROR_NO_MORE_FILES #define NGX_EILSEQ ERROR_NO_UNICODE_TRANSLATION +#define NGX_ELOOP 0 #define NGX_EALREADY WSAEALREADY #define NGX_EINVAL WSAEINVAL From pluknet at nginx.com Wed Aug 21 16:24:37 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 21 Aug 2013 16:24:37 +0000 Subject: [nginx] Added safety belt for the case of sending header twice. 
Message-ID: details: http://hg.nginx.org/nginx/rev/03ff14058272 branches: changeset: 5334:03ff14058272 user: Sergey Kandaurov date: Tue Jul 30 15:04:46 2013 +0400 description: Added safety belt for the case of sending header twice. The aforementioned situation is abnormal per se and as such it now forces request termination with appropriate error message. diffstat: src/http/ngx_http_core_module.c | 6 ++++++ 1 files changed, 6 insertions(+), 0 deletions(-) diffs (16 lines): diff -r e8bca8397625 -r 03ff14058272 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Tue Jul 30 11:43:21 2013 +0400 +++ b/src/http/ngx_http_core_module.c Tue Jul 30 15:04:46 2013 +0400 @@ -1933,6 +1933,12 @@ ngx_http_send_response(ngx_http_request_ ngx_int_t ngx_http_send_header(ngx_http_request_t *r) { + if (r->header_sent) { + ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, + "header already sent"); + return NGX_ERROR; + } + if (r->err_status) { r->headers_out.status = r->err_status; r->headers_out.status_line.len = 0; From mdounin at mdounin.ru Wed Aug 21 16:47:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Aug 2013 16:47:43 +0000 Subject: [nginx] Added auth request to win32 builds. Message-ID: details: http://hg.nginx.org/nginx/rev/1d0523f54a9f branches: changeset: 5335:1d0523f54a9f user: Maxim Dounin date: Wed Aug 21 20:46:10 2013 +0400 description: Added auth request to win32 builds. 
diffstat: misc/GNUmakefile | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -76,6 +76,7 @@ win32: --with-http_mp4_module \ --with-http_gunzip_module \ --with-http_gzip_static_module \ + --with-http_auth_request_module \ --with-http_random_index_module \ --with-http_secure_link_module \ --with-mail \ From aviram at adallom.com Thu Aug 22 14:00:55 2013 From: aviram at adallom.com (Aviram Cohen) Date: Thu, 22 Aug 2013 17:00:55 +0300 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: <20130821143033.GP19334@mdounin.ru> References: <20130820140912.GF19334@mdounin.ru> <20130821143033.GP19334@mdounin.ru> Message-ID: Hello! I have a couple of questions regarding the two last comments: On Wed, Aug 21, 2013 at 5:30 PM, Maxim Dounin wrote: > Hello! > [..] >> @@ -3748,6 +3786,13 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n >> != NGX_OK) >> { >> return NGX_ERROR; >> + } >> + >> + if (ngx_ssl_set_verify_options(plcf->upstream.ssl, >> + &plcf->upstream.ssl_certificate, plcf->upstream.ssl_verify_depth) >> + != NGX_OK) >> + { >> + return NGX_ERROR; >> } > > This is called before options used are correctly set. Where do you think this call should be performed? Should we add a postconfiguration callback for the proxy module from which this would be called? (BTW, I'll remove ngx_ssl_set_verify_options() and use ngx_ssl_trusted_certificate() directly instead, as it is better) > > (There is also a style problem here, but it doesn't really matter > as you'll have to rewrite the code anyway.) > > [...] > >> @@ -1334,6 +1341,11 @@ ngx_http_upstream_ssl_handshake(ngx_conn >> >> ngx_http_upstream_send_request(r, u); >> >> + c = r->connection; >> + >> +fail: >> + ngx_http_run_posted_requests(c); >> + >> return; >> } >> > > You probably missed my previous comment. You have a use after > free problem here. 
Try triggering an error in > ngx_http_upstream_send_request() with NGX_DEBUG_MALLOC defined, it > should segfault. You're right, I've missed it... Should we check before ngx_http_send_request() is called whether or not the request has a parent request, and accordingly decide later whether to call ngx_http_run_posted_requests() or not? > > [...] > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel Best regards, Aviram From a.marinov at ucdn.com Thu Aug 22 14:49:04 2013 From: a.marinov at ucdn.com (Anatoli Marinov) Date: Thu, 22 Aug 2013 17:49:04 +0300 Subject: nginx dynamic configuration Message-ID: Mates, Is there any written info on how dynamic configuration for nginx works? I am wondering whether it is possible to add a new proxy_cache zone with it without reloading the worker processes. There are several examples of how to build dynamic configuration with lua and perl, but both approaches cannot dynamically create proxy_cache zones (because there is no simple method to transfer a shared memory segment from master to workers). Do you have more info about that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron.bedra at gmail.com Thu Aug 22 15:53:55 2013 From: aaron.bedra at gmail.com (Aaron Bedra) Date: Thu, 22 Aug 2013 10:53:55 -0500 Subject: Only fire a handler once In-Reply-To: References: Message-ID: Any ideas for this? On Sat, Aug 17, 2013 at 12:28 AM, Aaron Bedra wrote: > I'm looking for a way to make sure a handler only fires once. For > instance, in Apache, you can use the guard: > > if (!ap_is_initial_req(r)) { skip handling } > > Is there anything like this? I couldn't find any documentation for it. > > Thanks, > > Aaron > -------------- next part -------------- An HTML attachment was scrubbed...
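Regarding the question just above: nginx has no direct counterpart to Apache's ap_is_initial_req(), but the usual nginx idiom is to compare a request against its main request — r->main points back to the request itself unless r is a subrequest. Here is a minimal self-contained model of that idiom in plain C (a sketch with a stand-in struct, not nginx source):

```c
#include <stddef.h>

/* Model (not nginx source) of the nginx "is this the initial request?"
 * idiom: the main request's ->main pointer refers to itself, while a
 * subrequest's ->main points at the client-initiated main request. */
typedef struct request_s {
    struct request_s *main;   /* self for the main request */
} request_t;

static int is_initial_req(const request_t *r)
{
    return r == r->main;      /* rough analogue of ap_is_initial_req() */
}
```

In a real handler this becomes an early `if (r != r->main) { ... }` bail-out so the handler effectively fires once per client request.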
URL: From pluknet at nginx.com Fri Aug 23 12:26:08 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 23 Aug 2013 12:26:08 +0000 Subject: [nginx] MIME: added application/font-woff MIME type (ticket #292). Message-ID: details: http://hg.nginx.org/nginx/rev/aeabb6ae574d branches: changeset: 5336:aeabb6ae574d user: Sergey Kandaurov date: Fri Aug 23 16:24:23 2013 +0400 description: MIME: added application/font-woff MIME type (ticket #292). diffstat: conf/mime.types | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r 1d0523f54a9f -r aeabb6ae574d conf/mime.types --- a/conf/mime.types Wed Aug 21 20:46:10 2013 +0400 +++ b/conf/mime.types Fri Aug 23 16:24:23 2013 +0400 @@ -24,6 +24,7 @@ types { image/svg+xml svg svgz; image/webp webp; + application/font-woff woff; application/java-archive jar war ear; application/mac-binhex40 hqx; application/msword doc; From pluknet at nginx.com Fri Aug 23 12:26:09 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 23 Aug 2013 12:26:09 +0000 Subject: [nginx] MIME: added the most common OOXML MIME types (ticket #243). Message-ID: details: http://hg.nginx.org/nginx/rev/07ef29f69a54 branches: changeset: 5337:07ef29f69a54 user: Sergey Kandaurov date: Fri Aug 23 16:24:24 2013 +0400 description: MIME: added the most common OOXML MIME types (ticket #243). 
diffstat: conf/mime.types | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff -r aeabb6ae574d -r 07ef29f69a54 conf/mime.types --- a/conf/mime.types Fri Aug 23 16:24:23 2013 +0400 +++ b/conf/mime.types Fri Aug 23 16:24:24 2013 +0400 @@ -61,6 +61,10 @@ types { application/octet-stream iso img; application/octet-stream msi msp msm; + application/vnd.openxmlformats-officedocument.wordprocessingml.document docx; + application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx; + application/vnd.openxmlformats-officedocument.presentationml.presentation pptx; + audio/midi mid midi kar; audio/mpeg mp3; audio/ogg ogg; From pluknet at nginx.com Fri Aug 23 12:26:10 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 23 Aug 2013 12:26:10 +0000 Subject: [nginx] MIME: eot MIME type updated to follow IANA (ticket #306). Message-ID: details: http://hg.nginx.org/nginx/rev/010bb2e21f3f branches: changeset: 5338:010bb2e21f3f user: Sergey Kandaurov date: Fri Aug 23 16:24:24 2013 +0400 description: MIME: eot MIME type updated to follow IANA (ticket #306). 
diffstat: conf/mime.types | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (19 lines): diff -r 07ef29f69a54 -r 010bb2e21f3f conf/mime.types --- a/conf/mime.types Fri Aug 23 16:24:24 2013 +0400 +++ b/conf/mime.types Fri Aug 23 16:24:24 2013 +0400 @@ -32,6 +32,7 @@ types { application/postscript ps eps ai; application/rtf rtf; application/vnd.ms-excel xls; + application/vnd.ms-fontobject eot; application/vnd.ms-powerpoint ppt; application/vnd.wap.wmlc wmlc; application/vnd.google-earth.kml+xml kml; @@ -57,7 +58,6 @@ types { application/octet-stream bin exe dll; application/octet-stream deb; application/octet-stream dmg; - application/octet-stream eot; application/octet-stream iso img; application/octet-stream msi msp msm; From mdounin at mdounin.ru Fri Aug 23 18:20:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Aug 2013 18:20:05 +0000 Subject: [nginx] Fixed try_files with empty argument (ticket #390). Message-ID: details: http://hg.nginx.org/nginx/rev/ee2a4c68fb35 branches: changeset: 5339:ee2a4c68fb35 user: Maxim Dounin date: Fri Aug 23 22:18:39 2013 +0400 description: Fixed try_files with empty argument (ticket #390). diffstat: src/http/ngx_http_core_module.c | 4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diffs (14 lines): diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -4766,7 +4766,9 @@ ngx_http_core_try_files(ngx_conf_t *cf, tf[i].name = value[i + 1]; - if (tf[i].name.data[tf[i].name.len - 1] == '/') { + if (tf[i].name.len > 0 + && tf[i].name.data[tf[i].name.len - 1] == '/') + { tf[i].test_dir = 1; tf[i].name.len--; tf[i].name.data[tf[i].name.len] = '\0'; From mdounin at mdounin.ru Fri Aug 23 18:20:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Aug 2013 18:20:06 +0000 Subject: [nginx] Upstream: posted requests handling after ssl handshake e... 
Message-ID: details: http://hg.nginx.org/nginx/rev/13a5f4765887 branches: changeset: 5340:13a5f4765887 user: Maxim Dounin date: Fri Aug 23 22:18:46 2013 +0400 description: Upstream: posted requests handling after ssl handshake errors. Missing call to ngx_http_run_posted_request() resulted in a main request hang if subrequest's ssl handshake with an upstream server failed for some reason. Reported by Aviram Cohen. diffstat: src/http/ngx_http_upstream.c | 6 ++++++ 1 files changed, 6 insertions(+), 0 deletions(-) diffs (23 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1338,13 +1338,19 @@ ngx_http_upstream_ssl_handshake(ngx_conn c->write->handler = ngx_http_upstream_handler; c->read->handler = ngx_http_upstream_handler; + c = r->connection; + ngx_http_upstream_send_request(r, u); + ngx_http_run_posted_requests(c); return; } + c = r->connection; + ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); + ngx_http_run_posted_requests(c); } #endif From mdounin at mdounin.ru Fri Aug 23 18:20:07 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Aug 2013 18:20:07 +0000 Subject: [nginx] Cache: lock timeouts are now logged at info level. Message-ID: details: http://hg.nginx.org/nginx/rev/654c1631dc86 branches: changeset: 5341:654c1631dc86 user: Maxim Dounin date: Fri Aug 23 22:18:54 2013 +0400 description: Cache: lock timeouts are now logged at info level. 
diffstat: src/http/ngx_http_file_cache.c | 3 +-- 1 files changed, 1 insertions(+), 2 deletions(-) diffs (13 lines): diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -445,8 +445,7 @@ ngx_http_file_cache_lock_wait_handler(ng timer = c->wait_time - ngx_current_msec; if ((ngx_msec_int_t) timer <= 0) { - ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ev->log, 0, - "http file cache lock timeout"); + ngx_log_error(NGX_LOG_INFO, ev->log, 0, "cache lock timeout"); c->lock = 0; goto wakeup; } From mdounin at mdounin.ru Fri Aug 23 23:16:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Aug 2013 23:16:38 +0000 Subject: [nginx] Configure: pcre.lib dependencies fix. Message-ID: details: http://hg.nginx.org/nginx/rev/b3f6290a9401 branches: changeset: 5342:b3f6290a9401 user: Maxim Dounin date: Fri Aug 23 22:53:54 2013 +0400 description: Configure: pcre.lib dependencies fix. Previously, an attempt to build pcre.lib on win32 before anything else failed due to no pcre.h. diffstat: auto/lib/pcre/make | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diffs (13 lines): diff --git a/auto/lib/pcre/make b/auto/lib/pcre/make --- a/auto/lib/pcre/make +++ b/auto/lib/pcre/make @@ -32,7 +32,8 @@ case "$NGX_PLATFORM" in cat << END >> $NGX_MAKEFILE -`echo "$PCRE/pcre.lib: $NGX_MAKEFILE" | sed -e "s/\//$ngx_regex_dirsep/g"` +`echo "$PCRE/pcre.lib: $PCRE/pcre.h $NGX_MAKEFILE" \ + | sed -e "s/\//$ngx_regex_dirsep/g"` \$(MAKE) -f auto/lib/pcre/$ngx_makefile $ngx_pcre $ngx_opt `echo "$PCRE/pcre.h:" | sed -e "s/\//$ngx_regex_dirsep/g"` From mdounin at mdounin.ru Fri Aug 23 23:16:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Aug 2013 23:16:40 +0000 Subject: [nginx] Updated PCRE used for win32 builds. 
Message-ID: details: http://hg.nginx.org/nginx/rev/bd828a14e017 branches: changeset: 5343:bd828a14e017 user: Maxim Dounin date: Fri Aug 23 22:54:08 2013 +0400 description: Updated PCRE used for win32 builds. As of PCRE 8.33, config.h.generic no longer contains boolean macros. Two of them (SUPPORT_PCRE8 and HAVE_MEMMOVE) were added to appropriate makefiles. This allows PCRE 8.33 to compile and don't change anything for previous versions. diffstat: auto/lib/pcre/makefile.bcc | 3 ++- auto/lib/pcre/makefile.msvc | 3 ++- auto/lib/pcre/makefile.owc | 3 ++- misc/GNUmakefile | 2 +- 4 files changed, 7 insertions(+), 4 deletions(-) diffs (51 lines): diff --git a/auto/lib/pcre/makefile.bcc b/auto/lib/pcre/makefile.bcc --- a/auto/lib/pcre/makefile.bcc +++ b/auto/lib/pcre/makefile.bcc @@ -4,7 +4,8 @@ CFLAGS = -q -O2 -tWM -w-8004 $(CPU_OPT) -PCREFLAGS = -DHAVE_CONFIG_H -DPCRE_STATIC -DPOSIX_MALLOC_THRESHOLD=10 +PCREFLAGS = -DHAVE_CONFIG_H -DPCRE_STATIC -DPOSIX_MALLOC_THRESHOLD=10 \ + -DSUPPORT_PCRE8 -DHAVE_MEMMOVE pcre.lib: diff --git a/auto/lib/pcre/makefile.msvc b/auto/lib/pcre/makefile.msvc --- a/auto/lib/pcre/makefile.msvc +++ b/auto/lib/pcre/makefile.msvc @@ -4,7 +4,8 @@ CFLAGS = -O2 -Ob1 -Oi -Gs $(LIBC) $(CPU_OPT) -PCREFLAGS = -DHAVE_CONFIG_H -DPCRE_STATIC -DPOSIX_MALLOC_THRESHOLD=10 +PCREFLAGS = -DHAVE_CONFIG_H -DPCRE_STATIC -DPOSIX_MALLOC_THRESHOLD=10 \ + -DSUPPORT_PCRE8 -DHAVE_MEMMOVE pcre.lib: diff --git a/auto/lib/pcre/makefile.owc b/auto/lib/pcre/makefile.owc --- a/auto/lib/pcre/makefile.owc +++ b/auto/lib/pcre/makefile.owc @@ -4,7 +4,8 @@ CFLAGS = -c -zq -bt=nt -ot -op -oi -oe -s -bm $(CPU_OPT) -PCREFLAGS = -DHAVE_CONFIG_H -DPCRE_STATIC -DPOSIX_MALLOC_THRESHOLD=10 +PCREFLAGS = -DHAVE_CONFIG_H -DPCRE_STATIC -DPOSIX_MALLOC_THRESHOLD=10 \ + -DSUPPORT_PCRE8 -DHAVE_MEMMOVE pcre.lib: diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -7,7 +7,7 @@ TEMP = tmp OBJS = objs.msvc8 OPENSSL = openssl-1.0.1e ZLIB = zlib-1.2.8 -PCRE = 
pcre-8.32 +PCRE = pcre-8.33 release: export From alex.garzao at azion.com Mon Aug 26 21:54:16 2013 From: alex.garzao at azion.com (=?ISO-8859-1?Q?Alex_Garz=E3o?=) Date: Mon, 26 Aug 2013 18:54:16 -0300 Subject: Sharing data when download the same object from upstream Message-ID: Hello guys, This is my first post to nginx-devel. First of all, I would like to congratulate the NGINX developers. NGINX is an amazing project :-) Well, I'm using NGINX as a proxy server, with cache enabled. I noted that, when two (or more) users try to download the same object in parallel and the object isn't in the cache, NGINX downloads it from the upstream. In this case, NGINX creates one connection to the upstream (per request) and downloads it to a temp file. Ok, this works, but in some situations, on one server, we saw more than 70 parallel downloads of the same object (in this case, an object of more than 200 MB). If possible, I would like some insights about how I can avoid this situation. I looked to see if it's just a configuration, but I didn't find anything. IMHO, I think the best approach is to share the temp file. If possible, I would like to know your opinions about this approach. I looked at the code in ngx_http_upstream.c and ngx_http_proxy.c, and I'm trying to fix the code to share the temp. I think that I need to do the following tasks: 1) Register the current downloads from upstreams. Probably I can address this with an rbtree, where each node has the unique object id and a list of the downstreams (requests?) waiting for data from the temp. 2) Disassociate the read from the upstream from the write to the downstream. Today, in the ngx_event_pipe function, NGINX reads from the upstream, writes to the temp, and writes to the downstream. But, as I can have N downstreams waiting for data from the same upstream, I probably need to move the write to the downstream to another place.
The only way I can think of is implementing a polling event, but I know that this is incorrect because NGINX is event based, and polling wastes a lot of CPU. 3) When I know that there is more data in the temp to be sent, which function must I use? ngx_http_output_filter? Suggestions are welcome :-) Thanks people! -- Alex Garzão Projetista de Software Azion Technologies alex.garzao (at) azion.com From wandenberg at gmail.com Mon Aug 26 21:58:35 2013 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Mon, 26 Aug 2013 18:58:35 -0300 Subject: Sharing data when download the same object from upstream In-Reply-To: References: Message-ID: Try to use the proxy_cache_lock configuration; I think this is what you are looking for. Don't forget to configure the proxy_cache_lock_timeout for your use case. On Aug 26, 2013 6:54 PM, "Alex Garzão" wrote: > Hello guys, > > This is my first post to nginx-devel. > > First of all, I would like to congratulate NGINX developers. NGINX is > an amazing project :-) > > Well, I'm using NGINX as a proxy server, with cache enabled. I noted > that, when two (or more) users trying to download the same object, in > parallel, and the object isn't in the cache, NGINX download them from > the upstream. In this case, NGINX creates one connection to upstream > (per request) and download them to temp files. Ok, this works, but, in > some situations, in one server, we saw more than 70 parallel downloads > to the same object (in this case, an object with more than 200 MB). > > If possible, I would like some insights about how can I avoid this > situation. I looked to see if it's just a configuration, but I didn't > find nothing. > > IMHO, I think the best approach is share the temp file. If possible, I > would like to known your opinions about this approach. > > I looked at the code in ngx_http_upstream.c and ngx_http_proxy.c, and > I'm trying to fix the code to share the temp.
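A sketch of the proxy_cache_lock setup suggested in the reply above — the zone name, cache path, backend, and timeout here are illustrative assumptions, not values from the thread:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge_cache:10m;

server {
    listen 80;

    location / {
        proxy_pass http://backend;

        proxy_cache edge_cache;
        proxy_cache_lock on;           # only one request per cache key fetches from upstream
        proxy_cache_lock_timeout 5s;   # how long the other requests wait for that one
    }
}
```

With the lock enabled, concurrent misses for the same object wait for the first request to populate the cache entry instead of each opening its own upstream connection (the directive has been available since nginx 1.1.12).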
I think that I need to > do the following tasks: > > 1) Register the current downloads from upstreams. Probably I can > address this with a rbtree, where each node has the unique object id > and a list with downstreams (requests?) waiting for data from the > temp. > > 2) Disassociate the read from upstream from the write to downstream. > Today, in the ngx_event_pipe function, NGINX reads from upstream, > writes to temp, and writes to downstream. But, as I can have N > downstreams waiting data from the same upstream, probably I need to > move the write to downstream to another place. The only way I think is > implementing a polling event, but I know that this is incorrect > because NGINX is event based, and polling waste a lote of CPU. > > 3) When I know that there more data in temp to be sent, which function > I must use? ngx_http_output_filter? > > Suggestions will welcome :-) > > Thanks people! > > -- > Alex Garz?o > Projetista de Software > Azion Technologies > alex.garzao (at) azion.com > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From torshie at gmail.com Tue Aug 27 03:21:38 2013 From: torshie at gmail.com (=?UTF-8?B?6YKT5bCn?=) Date: Tue, 27 Aug 2013 11:21:38 +0800 Subject: The meaning of ngx_http_request_t.out ? Message-ID: Hi, I'm writing an nginx module, it does something similar to the sub module. After some research I succeeded in handling the ngx_chain_t pointer passed to my body filter. My module seems to work well for static files. When my module is handling PHP responses (fastcgi), sometimes the ngx_chain_t pointer is NULL. I simply call the next body filter in such situation like the sub module body filter. However function ngx_http_write_filter() will fail, because r->out isn't NULL and the buf's in r->out are of zero size. 
My questions are: what's the meaning of r->out? how should I modify it in my body filter ? Could I simply return NGX_OK in my body filter without calling the next body filter if the ngx_chain_t pointer is NULL ? Thanks Yao -------------- next part -------------- An HTML attachment was scrubbed... URL: From aviram at adallom.com Tue Aug 27 08:47:38 2013 From: aviram at adallom.com (Aviram Cohen) Date: Tue, 27 Aug 2013 11:47:38 +0300 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: References: <20130820140912.GF19334@mdounin.ru> <20130821143033.GP19334@mdounin.ru> Message-ID: Added a new version, with all the required fixes. diff -Nrpu nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c nginx-1.4.1-proxy-ssl-verification/src/http/modules/ngx_http_proxy_module.c --- nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verification/src/http/modules/ngx_http_proxy_module.c 2013-08-26 10:43:15.639557701 +0300 @@ -511,6 +511,27 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse), NULL }, + + { ngx_string("proxy_ssl_verify"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify), + NULL }, + + { ngx_string("proxy_ssl_verify_depth"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth), + NULL }, + + { ngx_string("proxy_ssl_trusted_certificate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_certificate), + NULL }, #endif ngx_null_command @@ -2419,8 +2440,11 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ conf->upstream.pass_headers = 
NGX_CONF_UNSET_PTR; conf->upstream.intercept_errors = NGX_CONF_UNSET; + #if (NGX_HTTP_SSL) conf->upstream.ssl_session_reuse = NGX_CONF_UNSET; + conf->upstream.ssl_verify = NGX_CONF_UNSET; + conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT; #endif /* "proxy_cyclic_temp_file" is disabled */ @@ -2697,6 +2721,30 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t #if (NGX_HTTP_SSL) ngx_conf_merge_value(conf->upstream.ssl_session_reuse, prev->upstream.ssl_session_reuse, 1); + ngx_conf_merge_value(conf->upstream.ssl_verify, + prev->upstream.ssl_verify, 0); + ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth, + prev->upstream.ssl_verify_depth, 1); + ngx_conf_merge_str_value(conf->upstream.ssl_certificate, + prev->upstream.ssl_certificate, ""); + + if (conf->upstream.ssl_verify) { + if (conf->upstream.ssl_certificate.len == 0) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "no \"proxy_ssl_trusted_certificate\" is defined for " + "the \"proxy_ssl_verify\" directive"); + + return NGX_CONF_ERROR; + } + } + + if (conf->upstream.ssl && + ngx_ssl_trusted_certificate(cf, conf->upstream.ssl, + &conf->upstream.ssl_certificate, conf->upstream.ssl_verify_depth) != NGX_OK) + { + return NGX_CONF_ERROR; + } + #endif ngx_conf_merge_value(conf->redirect, prev->redirect, 1); diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.c nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.c --- nginx-1.4.1/src/http/ngx_http_upstream.c 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.c 2013-08-26 10:44:35.323558884 +0300 @@ -1324,7 +1324,13 @@ ngx_http_upstream_ssl_handshake(ngx_conn u = r->upstream; if (c->ssl->handshaked) { - + if (u->conf->ssl_verify && SSL_get_verify_result(c->ssl->connection) != X509_V_OK) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, "upstream ssl certificate validation failed"); + c = r->connection; + ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); + goto fail; + } + if (u->conf->ssl_session_reuse) { 
u->peer.save_session(&u->peer, u->peer.data); } @@ -1332,13 +1338,21 @@ ngx_http_upstream_ssl_handshake(ngx_conn c->write->handler = ngx_http_upstream_handler; c->read->handler = ngx_http_upstream_handler; + c = r->connection; + ngx_http_upstream_send_request(r, u); +fail: + ngx_http_run_posted_requests(c); + return; } + c = r->connection; + ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); + ngx_http_run_posted_requests(c); } #endif diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.h nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.h --- nginx-1.4.1/src/http/ngx_http_upstream.h 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.h 2013-08-26 10:44:56.263558679 +0300 @@ -191,6 +191,9 @@ typedef struct { #if (NGX_HTTP_SSL) ngx_ssl_t *ssl; ngx_flag_t ssl_session_reuse; + ngx_flag_t ssl_verify; + ngx_uint_t ssl_verify_depth; + ngx_str_t ssl_certificate; #endif ngx_str_t module; On Thu, Aug 22, 2013 at 5:00 PM, Aviram Cohen wrote: > Hello! > > I have a couple of questions regarding the two last comments: > > On Wed, Aug 21, 2013 at 5:30 PM, Maxim Dounin wrote: >> Hello! >> > [..] >>> @@ -3748,6 +3786,13 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n >>> != NGX_OK) >>> { >>> return NGX_ERROR; >>> + } >>> + >>> + if (ngx_ssl_set_verify_options(plcf->upstream.ssl, >>> + &plcf->upstream.ssl_certificate, plcf->upstream.ssl_verify_depth) >>> + != NGX_OK) >>> + { >>> + return NGX_ERROR; >>> } >> >> This is called before options used are correctly set. > > Where do you think this call should be performed? Should we add a > postconfiguration callback for > the proxy module from which this would be called? > (BTW, I'll remove ngx_ssl_set_verify_options() and use > ngx_ssl_trusted_certificate() directly instead, as it is better) > >> >> (There is also a style problem here, but it doesn't really matter >> as you'll have to rewrite the code anyway.) >> >> [...] 
>> >>> @@ -1334,6 +1341,11 @@ ngx_http_upstream_ssl_handshake(ngx_conn
>>>
>>> ngx_http_upstream_send_request(r, u);
>>>
>>> + c = r->connection;
>>> +
>>> +fail:
>>> + ngx_http_run_posted_requests(c);
>>> +
>>> return;
>>> }
>>>
>>
>> You probably missed my previous comment. You have a use after
>> free problem here. Try triggering an error in
>> ngx_http_upstream_send_request() with NGX_DEBUG_MALLOC defined, it
>> should segfault.
>
> You're right, I've missed it... Should we check before
> ngx_http_send_request() is called whether or not the
> request has a parent request, and accordingly decide later whether to
> call ngx_http_run_posted_requests() or not?
>
>>
>> [...]
>>
>> --
>> Maxim Dounin
>> http://nginx.org/en/donation.html
>>
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
> Best regards,
> Aviram

-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx-1.4.1-proxy-ssl-verification.patch
Type: application/octet-stream
Size: 5083 bytes
Desc: not available
URL:

From lior.k at zend.com Tue Aug 27 13:16:19 2013
From: lior.k at zend.com (Lior Kaplan)
Date: Tue, 27 Aug 2013 16:16:19 +0300
Subject: Installation script from nginx Linux repositories
Message-ID:

Hi,

Continuing my tweet question [1], Zend would like to contribute this simple
script to help automate the installation from the nginx.org Linux
repositories [2].

We've built the script as part of our ZendServer on nginx installation
script. Let me know if you have any specific license requirements.

Kaplan

[1] https://twitter.com/KaplanZend/status/362497285189414913
[2] http://nginx.org/en/linux_packages.html#stable

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: install_nginx.sh
Type: application/x-sh
Size: 2371 bytes
Desc: not available
URL:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx.list
Type: application/octet-stream
Size: 50 bytes
Desc: not available
URL:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx.repo
Type: application/octet-stream
Size: 150 bytes
Desc: not available
URL:

From mdounin at mdounin.ru Tue Aug 27 14:01:08 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 27 Aug 2013 14:01:08 +0000
Subject: [nginx] nginx-1.5.4-RELEASE
Message-ID:

details: http://hg.nginx.org/nginx/rev/376a5e769400
branches:
changeset: 5344:376a5e769400
user: Maxim Dounin
date: Tue Aug 27 17:37:15 2013 +0400
description: nginx-1.5.4-RELEASE

diffstat:

docs/xml/nginx/changes.xml | 102 +++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 102 insertions(+), 0 deletions(-)

diffs (112 lines): the diff adds the 1.5.4 entries to docs/xml/nginx/changes.xml;
in English:

*) Change: the "js" extension MIME type has been changed to
   "application/javascript"; default value of the "charset_types"
   directive was changed accordingly.

*) Change: now the "image_filter" directive with the "size" parameter
   returns responses with the "application/json" MIME type.

*) Feature: the ngx_http_auth_request_module.

*) Bugfix: a segmentation fault might occur on start or during
   reconfiguration if the "try_files" directive was used with an empty
   parameter.

*) Bugfix: memory leak if relative paths were specified using variables
   in the "root" or "auth_basic_user_file" directives.

*) Bugfix: the "valid_referers" directive incorrectly executed regular
   expressions if a "Referer" header started with "https://".
   Thanks to Liangbin Li.

*) Bugfix: responses might hang if subrequests were used and an SSL
   handshake error happened during subrequest processing.
   Thanks to Aviram Cohen.

*) Bugfix: in the ngx_http_autoindex_module.

*) Bugfix: in the ngx_http_spdy_module.
+ + From mdounin at mdounin.ru Tue Aug 27 14:01:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Aug 2013 14:01:10 +0000 Subject: [nginx] release-1.5.4 tag Message-ID: details: http://hg.nginx.org/nginx/rev/d1403de41631 branches: changeset: 5345:d1403de41631 user: Maxim Dounin date: Tue Aug 27 17:37:15 2013 +0400 description: release-1.5.4 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -359,3 +359,4 @@ 48a84bc3ff074a65a63e353b9796ff2b14239699 99eed1a88fc33f32d66e2ec913874dfef3e12fcc release-1.5.1 5bdca4812974011731e5719a6c398b54f14a6d61 release-1.5.2 644a079526295aca11c52c46cb81e3754e6ad4ad release-1.5.3 +376a5e7694004048a9d073e4feb81bb54ee3ba91 release-1.5.4 From mdounin at mdounin.ru Tue Aug 27 14:21:00 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Aug 2013 18:21:00 +0400 Subject: The meaning of ngx_http_request_t.out ? In-Reply-To: References: Message-ID: <20130827142100.GA19334@mdounin.ru> Hello! On Tue, Aug 27, 2013 at 11:21:38AM +0800, ?? wrote: > Hi, > I'm writing an nginx module, it does something similar to the sub module. > After some research I succeeded in handling the ngx_chain_t pointer passed > to my body filter. My module seems to work well for static files. > When my module is handling PHP responses (fastcgi), sometimes the > ngx_chain_t pointer is NULL. I simply call the next body filter in such > situation like the sub module body filter. However function > ngx_http_write_filter() will fail, because r->out isn't NULL > and the buf's in r->out are of zero size. > My questions are: what's the meaning of r->out? how should I modify it in > my body filter ? Could I simply return NGX_OK in my body filter without > calling the next body filter if the ngx_chain_t pointer is NULL ? The r->out is write filter's private data, you shouldn't touch it in your module. 
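For context, the advice above corresponds to the usual pass-through body
filter skeleton. This is a sketch only (the "my" names are placeholders,
not code from the thread): the filter stores the next filter pointer at
registration time and forwards every chain it receives, including NULL
ones, without ever reading or writing r->out.

```c
/* Sketch of a minimal pass-through body filter (illustrative only;
 * ngx_http_my_* names are hypothetical). r->out is never touched
 * here -- it is the write filter's private data. */

static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_int_t
ngx_http_my_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    /* "in" may be NULL on some calls (e.g. flushes while proxying
     * fastcgi); forward it to the next filter anyway rather than
     * short-circuiting with NGX_OK, so buffered data keeps moving */
    return ngx_http_next_body_filter(r, in);
}

static ngx_int_t
ngx_http_my_filter_init(ngx_conf_t *cf)
{
    /* hook the filter into the body filter chain */
    ngx_http_next_body_filter = ngx_http_top_body_filter;
    ngx_http_top_body_filter = ngx_http_my_body_filter;

    return NGX_OK;
}
```

A filter that returns NGX_OK for a NULL chain without calling the next
filter can leave previously buffered output stuck, which is consistent
with the symptom described in the question above.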
And it shouldn't contain zero-size non-special buffers; if it does, there
is a bug somewhere (most likely in your filter).

--
Maxim Dounin
http://nginx.org/en/donation.html

From zls.sogou at gmail.com Tue Aug 27 16:21:27 2013
From: zls.sogou at gmail.com (lanshun zhou)
Date: Wed, 28 Aug 2013 00:21:27 +0800
Subject: [PATCH] Image filter: large image handling
Message-ID:

# HG changeset patch
# User Lanshun Zhou
# Date 1377620347 -28800
# Node ID 4fae04f332b489c85cdc116e6138a618372d3691
# Parent d1403de4163100ec0c6c015e57f22384456870e3
Image filter: large image handling.

If Content-Length header is not set, and the image size is larger than the
buffer size, client will hang until a timeout occurs.

Now NGX_HTTP_UNSUPPORTED_MEDIA_TYPE is returned immediately.

diff -r d1403de41631 -r 4fae04f332b4 src/http/modules/ngx_http_image_filter_module.c
--- a/src/http/modules/ngx_http_image_filter_module.c Tue Aug 27 17:37:15 2013 +0400
+++ b/src/http/modules/ngx_http_image_filter_module.c Wed Aug 28 00:19:07 2013 +0800
@@ -478,7 +478,14 @@
                    "image buf: %uz", size);

     rest = ctx->image + ctx->length - p;
-    size = (rest < size) ? rest : size;
+    if (rest < size) {
+        ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
+                      "image filter: too big response: >%z, "
+                      "try to increase image_filter_buffer",
+                      ctx->length);
+
+        return NGX_ERROR;
+    }

     p = ngx_cpymem(p, b->pos, size);
     b->pos += size;

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alex.garzao at azion.com Tue Aug 27 17:43:04 2013
From: alex.garzao at azion.com (=?ISO-8859-1?Q?Alex_Garz=E3o?=)
Date: Tue, 27 Aug 2013 14:43:04 -0300
Subject: Sharing data when download the same object from upstream
In-Reply-To: References: Message-ID:

Hello Wandenberg,

Thanks for your reply.

Using proxy_cache_lock, when the second request arrives, it will wait
until the object is complete in the cache (or until
proxy_cache_lock_timeout expires).
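The behaviour being discussed corresponds to a configuration along these
lines (a sketch; the cache path, zone name and upstream are illustrative
assumptions, and 5s is the documented default of proxy_cache_lock_timeout):

```nginx
# Illustrative proxy_cache_lock setup (names and paths are examples only)
proxy_cache_path /var/cache/nginx keys_zone=cache_one:10m;

server {
    location / {
        proxy_pass               http://backend;
        proxy_cache              cache_one;

        # only the first request for a new cache element goes upstream;
        # concurrent requests wait, up to proxy_cache_lock_timeout
        proxy_cache_lock         on;
        proxy_cache_lock_timeout 5s;
    }
}
```

When the timeout expires the waiting requests are passed to the upstream
themselves, which is why a very slow upstream can still end up with many
parallel downloads of the same object.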
But, in many cases, my upstream has a really slow link and NGINX needs
more than 30 minutes to download the object. In practice, I will probably
see a lot of parallel downloads of the same object.

Does someone have another idea? Or am I wrong about proxy_cache_lock?

Regards.
--
Alex Garzão
Projetista de Software
Azion Technologies
alex.garzao (at) azion.com

On Mon, Aug 26, 2013 at 6:58 PM, Wandenberg Peixoto wrote:
> Try to use the proxy_cache_lock configuration, I think this is what you are
> looking for.
> Don't forget to configure the proxy_cache_lock_timeout to your use case.
>
> On Aug 26, 2013 6:54 PM, "Alex Garzão" wrote:
>>
>> Hello guys,
>>
>> This is my first post to nginx-devel.
>>
>> First of all, I would like to congratulate NGINX developers. NGINX is
>> an amazing project :-)
>>
>> Well, I'm using NGINX as a proxy server, with cache enabled. I noted
>> that, when two (or more) users try to download the same object, in
>> parallel, and the object isn't in the cache, NGINX downloads it from
>> the upstream. In this case, NGINX creates one connection to upstream
>> (per request) and downloads it to temp files. Ok, this works, but, in
>> some situations, on one server, we saw more than 70 parallel downloads
>> of the same object (in this case, an object with more than 200 MB).
>>
>> If possible, I would like some insights about how I can avoid this
>> situation. I looked to see if it's just a configuration, but I didn't
>> find anything.
>>
>> IMHO, I think the best approach is to share the temp file. If possible, I
>> would like to know your opinions about this approach.
>>
>> I looked at the code in ngx_http_upstream.c and ngx_http_proxy.c, and
>> I'm trying to fix the code to share the temp. I think that I need to
>> do the following tasks:
>>
>> 1) Register the current downloads from upstreams. Probably I can
>> address this with an rbtree, where each node has the unique object id
>> and a list with downstreams (requests?)
>> waiting for data from the temp.
>>
>> 2) Disassociate the read from upstream from the write to downstream.
>> Today, in the ngx_event_pipe function, NGINX reads from upstream,
>> writes to temp, and writes to downstream. But, as I can have N
>> downstreams waiting for data from the same upstream, probably I need to
>> move the write to downstream to another place. The only way I can think
>> of is implementing a polling event, but I know that this is incorrect
>> because NGINX is event based, and polling wastes a lot of CPU.
>>
>> 3) When I know that there is more data in temp to be sent, which
>> function must I use? ngx_http_output_filter?
>>
>> Suggestions are welcome :-)
>>
>> Thanks people!
>>
>> --
>> Alex Garzão
>> Projetista de Software
>> Azion Technologies
>> alex.garzao (at) azion.com
>>
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

From mdounin at mdounin.ru Tue Aug 27 22:35:33 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 27 Aug 2013 22:35:33 +0000
Subject: [nginx] Version bump.
Message-ID:

details: http://hg.nginx.org/nginx/rev/293290081b12
branches:
changeset: 5346:293290081b12
user: Maxim Dounin
date: Wed Aug 28 02:34:21 2013 +0400
description: Version bump.
diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1005004 -#define NGINX_VERSION "1.5.4" +#define nginx_version 1005005 +#define NGINX_VERSION "1.5.5" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From mdounin at mdounin.ru Tue Aug 27 22:35:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Aug 2013 22:35:34 +0000 Subject: [nginx] Typo fixed. Message-ID: details: http://hg.nginx.org/nginx/rev/011d72dea802 branches: changeset: 5347:011d72dea802 user: Maxim Dounin date: Wed Aug 28 02:34:30 2013 +0400 description: Typo fixed. diffstat: src/event/ngx_event_pipe.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c --- a/src/event/ngx_event_pipe.c +++ b/src/event/ngx_event_pipe.c @@ -220,8 +220,8 @@ ngx_event_pipe_read_upstream(ngx_event_p { /* - * if it is allowed, then save some bufs from r->in - * to a temporary file, and add them to a r->out chain + * if it is allowed, then save some bufs from p->in + * to a temporary file, and add them to a p->out chain */ rc = ngx_event_pipe_write_chain_to_temp_file(p); From mdounin at mdounin.ru Tue Aug 27 22:43:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Aug 2013 02:43:10 +0400 Subject: [PATCH] Image filter: large image handling In-Reply-To: References: Message-ID: <20130827224310.GD2748@mdounin.ru> Hello! On Wed, Aug 28, 2013 at 12:21:27AM +0800, lanshun zhou wrote: > # HG changeset patch > # User Lanshun Zhou > # Date 1377620347 -28800 > # Node ID 4fae04f332b489c85cdc116e6138a618372d3691 > # Parent d1403de4163100ec0c6c015e57f22384456870e3 > Image filter: large image handling. 
> > If Content-Length header is not set, and the image size is larger than the > buffer size, client will hang until a timeout occurs. > > Now NGX_HTTP_UNSUPPORTED_MEDIA_TYPE is returned immediately. > > diff -r d1403de41631 -r 4fae04f332b4 > src/http/modules/ngx_http_image_filter_module.c > --- a/src/http/modules/ngx_http_image_filter_module.c Tue Aug 27 17:37:15 > 2013 +0400 > +++ b/src/http/modules/ngx_http_image_filter_module.c Wed Aug 28 00:19:07 > 2013 +0800 > @@ -478,7 +478,14 @@ > "image buf: %uz", size); > > rest = ctx->image + ctx->length - p; > - size = (rest < size) ? rest : size; > + if (rest < size) { > + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > + "image filter: too big response: >%z, " > + "try to increase image_filter_buffer", > + ctx->length); > + > + return NGX_ERROR; > + } Good catch, thnx. I don't think the message should be different from one emitted with Content-Length available though. What about something like this: --- a/src/http/modules/ngx_http_image_filter_module.c +++ b/src/http/modules/ngx_http_image_filter_module.c @@ -478,7 +478,12 @@ ngx_http_image_read(ngx_http_request_t "image buf: %uz", size); rest = ctx->image + ctx->length - p; - size = (rest < size) ? rest : size; + + if (size > rest) { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "image filter: too big response"); + return NGX_ERROR; + } p = ngx_cpymem(p, b->pos, size); b->pos += size; ? -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Aug 28 00:41:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Aug 2013 04:41:43 +0400 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: References: <20130820140912.GF19334@mdounin.ru> <20130821143033.GP19334@mdounin.ru> Message-ID: <20130828004143.GE2748@mdounin.ru> Hello! On Tue, Aug 27, 2013 at 11:47:38AM +0300, Aviram Cohen wrote: > Added a new version, with all the required fixes. This looks better, modulo various style problems. 
It also looks like verification code isn't complete. See below for
comments.

> > diff -Nrpu nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c
> nginx-1.4.1-proxy-ssl-verification/src/http/modules/ngx_http_proxy_module.c
> --- nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c 2013-05-06 13:26:50.000000000 +0300
> +++ nginx-1.4.1-proxy-ssl-verification/src/http/modules/ngx_http_proxy_module.c 2013-08-26 10:43:15.639557701 +0300
> @@ -511,6 +511,27 @@ static ngx_command_t ngx_http_proxy_com
>     offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse),
>     NULL },
>
> +

Style: extra empty line.

> +    { ngx_string("proxy_ssl_verify"),
> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_flag_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify),
> +      NULL },
> +
> +    { ngx_string("proxy_ssl_verify_depth"),
> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_num_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth),
> +      NULL },
> +
> +    { ngx_string("proxy_ssl_trusted_certificate"),
> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_str_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_certificate),
> +      NULL },
> #endif

Style: missing empty line before "#endif".

It also might make sense to name the field ssl_trusted_certificate to
match the directive name.

It is probably also a good idea to move both certificate and verify_depth
to ngx_http_proxy_loc_conf_t as they aren't needed in upstream
configuration. (At upstream level, only upstream.verify is needed/used.)
> > ngx_null_command > @@ -2419,8 +2440,11 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ > conf->upstream.pass_headers = NGX_CONF_UNSET_PTR; > > conf->upstream.intercept_errors = NGX_CONF_UNSET; > + > #if (NGX_HTTP_SSL) > conf->upstream.ssl_session_reuse = NGX_CONF_UNSET; > + conf->upstream.ssl_verify = NGX_CONF_UNSET; > + conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT; > #endif > > /* "proxy_cyclic_temp_file" is disabled */ Style: please add conf->upstream.ssl_certificate (or whatever) to a comment "set by ngx_pcalloc()". > @@ -2697,6 +2721,30 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t > #if (NGX_HTTP_SSL) > ngx_conf_merge_value(conf->upstream.ssl_session_reuse, > prev->upstream.ssl_session_reuse, 1); Style: if you add empty line before "#endif", please add another one after "#if". > + ngx_conf_merge_value(conf->upstream.ssl_verify, > + prev->upstream.ssl_verify, 0); > + ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth, > + prev->upstream.ssl_verify_depth, 1); > + ngx_conf_merge_str_value(conf->upstream.ssl_certificate, > + prev->upstream.ssl_certificate, ""); > + > + if (conf->upstream.ssl_verify) { > + if (conf->upstream.ssl_certificate.len == 0) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "no \"proxy_ssl_trusted_certificate\" is defined for " > + "the \"proxy_ssl_verify\" directive"); Style: no lines longer than 80 chars, please. It might also be a good idea to only complain if there is conf->upstream.ssl. > + > + return NGX_CONF_ERROR; > + } > + } > + > + if (conf->upstream.ssl && > + ngx_ssl_trusted_certificate(cf, conf->upstream.ssl, > + &conf->upstream.ssl_certificate, conf->upstream.ssl_verify_depth) != NGX_OK) > + { Style: - no lines longer than 80 chars, please. - if you wrap long conditions, please put operators a the start of a continuation line. That is, if (conf->upstream.ssl && ngx_ssl_trusted_certificate(cf, conf->upstream.ssl, &conf->upstream.ssl_certificate conf->upstream.ssl_verify_depth) != NGX_OK) { ... 
}

Additional question is what happens in a configuration like

    location / {
        proxy_pass https://example.com;
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate example.crt;

        if ($foo) {
            # do nothing
        }
    }

or the same with a nested location instead of "if". A quick look suggests
it will result in trusted certs loaded twice (and stale alerts later due
to how OpenSSL handles this).

> +        return NGX_CONF_ERROR;
> +    }
> +
> #endif
>
>     ngx_conf_merge_value(conf->redirect, prev->redirect, 1);
> diff -Nrpu nginx-1.4.1/src/http/ngx_http_upstream.c
> nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.c
> --- nginx-1.4.1/src/http/ngx_http_upstream.c 2013-05-06 13:26:50.000000000 +0300
> +++ nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.c 2013-08-26 10:44:35.323558884 +0300
> @@ -1324,7 +1324,13 @@ ngx_http_upstream_ssl_handshake(ngx_conn
>     u = r->upstream;
>
>     if (c->ssl->handshaked) {
> -
> +        if (u->conf->ssl_verify && SSL_get_verify_result(c->ssl->connection) != X509_V_OK) {
> +            ngx_log_error(NGX_LOG_ERR, c->log, 0, "upstream ssl certificate validation failed");
> +            c = r->connection;
> +            ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
> +            goto fail;
> +        }
> +

Style: 80 chars.

Please also note that SSL_get_verify_result() will return X509_V_OK if
there is no certificate at all, quote from SSL_get_verify_result manpage:

    If no peer certificate was presented, the returned result code is
    X509_V_OK. This is because no verification error occurred, it does
    however not indicate success. SSL_get_verify_result() is only useful
    in connection with SSL_get_peer_certificate(3).

Please take a look at relevant code in ngx_http_process_request(). It
also seems to have better error reporting.
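The ngx_http_process_request() pattern being referenced checks both the
verification result and the presence of a peer certificate. Adapted to
the upstream case, it would look roughly like this (a sketch, not the
final patch; the log texts and the "failed" label are illustrative):

```c
/* Sketch of peer certificate verification after the handshake
 * (illustrative; mirrors the ngx_http_process_request() pattern
 * referenced above). */
if (u->conf->ssl_verify) {
    long   rc;
    X509  *cert;

    rc = SSL_get_verify_result(c->ssl->connection);

    if (rc != X509_V_OK) {
        ngx_log_error(NGX_LOG_ERR, c->log, 0,
                      "upstream SSL certificate verify error: (%l:%s)",
                      rc, X509_verify_cert_error_string(rc));
        goto failed;
    }

    /* X509_V_OK is also returned when no certificate was presented,
     * so the certificate itself must be checked explicitly */
    cert = SSL_get_peer_certificate(c->ssl->connection);

    if (cert == NULL) {
        ngx_log_error(NGX_LOG_ERR, c->log, 0,
                      "upstream sent no required SSL certificate");
        goto failed;
    }

    X509_free(cert);
}
```

The X509_free() call matters because SSL_get_peer_certificate() returns
a reference-counted copy of the certificate.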
>         if (u->conf->ssl_session_reuse) {
>             u->peer.save_session(&u->peer, u->peer.data);
>         }
> @@ -1332,13 +1338,21 @@ ngx_http_upstream_ssl_handshake(ngx_conn
>         c->write->handler = ngx_http_upstream_handler;
>         c->read->handler = ngx_http_upstream_handler;
>
> +        c = r->connection;
> +
>         ngx_http_upstream_send_request(r, u);
>
> +fail:
> +        ngx_http_run_posted_requests(c);
> +
>         return;
>     }

Probably just adding

    ngx_http_run_posted_requests(c);
    return;

instead of "goto" above would be more readable...

>
> +    c = r->connection;
> +
>     ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
>
> +    ngx_http_run_posted_requests(c);
> }

... or, alternatively, "fail:" may be moved before these lines (and
"c = ..." and "ngx_http_upstream_next(...)" lines before "goto" removed),
resulting in less code duplication.

[...]

> On Thu, Aug 22, 2013 at 5:00 PM, Aviram Cohen wrote:
> > Hello!
> >
> > I have a couple of questions regarding the two last comments:

[...]

Sorry, I somehow missed this message. Glad to see you've successfully
found answers.

--
Maxim Dounin
http://nginx.org/en/donation.html

From zls.sogou at gmail.com Wed Aug 28 02:40:37 2013
From: zls.sogou at gmail.com (lanshun zhou)
Date: Wed, 28 Aug 2013 10:40:37 +0800
Subject: [PATCH] Image filter: large image handling
In-Reply-To: <20130827224310.GD2748@mdounin.ru>
References: <20130827224310.GD2748@mdounin.ru>
Message-ID:

It's ok for me, thanks~

On 2013-8-28, at 6:43 PM, "Maxim Dounin" wrote:

> Hello!
>
> On Wed, Aug 28, 2013 at 12:21:27AM +0800, lanshun zhou wrote:
>
> > # HG changeset patch
> > # User Lanshun Zhou
> > # Date 1377620347 -28800
> > # Node ID 4fae04f332b489c85cdc116e6138a618372d3691
> > # Parent d1403de4163100ec0c6c015e57f22384456870e3
> > Image filter: large image handling.
> > > > diff -r d1403de41631 -r 4fae04f332b4 > > src/http/modules/ngx_http_image_filter_module.c > > --- a/src/http/modules/ngx_http_image_filter_module.c Tue Aug 27 > 17:37:15 > > 2013 +0400 > > +++ b/src/http/modules/ngx_http_image_filter_module.c Wed Aug 28 > 00:19:07 > > 2013 +0800 > > @@ -478,7 +478,14 @@ > > "image buf: %uz", size); > > > > rest = ctx->image + ctx->length - p; > > - size = (rest < size) ? rest : size; > > + if (rest < size) { > > + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > > + "image filter: too big response: >%z, " > > + "try to increase image_filter_buffer", > > + ctx->length); > > + > > + return NGX_ERROR; > > + } > > Good catch, thnx. > > I don't think the message should be different from one emitted with > Content-Length available though. What about something like this: > > --- a/src/http/modules/ngx_http_image_filter_module.c > +++ b/src/http/modules/ngx_http_image_filter_module.c > @@ -478,7 +478,12 @@ ngx_http_image_read(ngx_http_request_t > "image buf: %uz", size); > > rest = ctx->image + ctx->length - p; > - size = (rest < size) ? rest : size; > + > + if (size > rest) { > + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > + "image filter: too big response"); > + return NGX_ERROR; > + } > > p = ngx_cpymem(p, b->pos, size); > b->pos += size; > > > ? > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From a.marinov at ucdn.com Wed Aug 28 06:04:16 2013
From: a.marinov at ucdn.com (Anatoli Marinov)
Date: Wed, 28 Aug 2013 09:04:16 +0300
Subject: Sharing data when download the same object from upstream
In-Reply-To: References: Message-ID:

I had the same problem, and I wrote a patch to reuse the file which I
already have in the tmp directory for the second stream (and for all
streams before the file is completely cached). Unfortunately I cannot
share it, but I can give you an idea how to do it.

On Tue, Aug 27, 2013 at 8:43 PM, Alex Garzão wrote:
> Hello Wandenberg,
>
> Thanks for your reply.
>
> Using proxy_cache_lock, when the second request arrives, it will wait
> until the object is complete in the cache (or until
> proxy_cache_lock_timeout expires). But, in many cases, my upstream has
> a really slow link and NGINX needs more than 30 minutes to download
> the object. In practice, I will probably see a lot of parallel
> downloads of the same object.
>
> Does someone have another idea? Or am I wrong about proxy_cache_lock?
>
> Regards.
> --
> Alex Garzão
> Projetista de Software
> Azion Technologies
> alex.garzao (at) azion.com
>
>
> On Mon, Aug 26, 2013 at 6:58 PM, Wandenberg Peixoto wrote:
> > Try to use the proxy_cache_lock configuration, I think this is what you
> > are looking for.
> > Don't forget to configure the proxy_cache_lock_timeout to your use case.
> >
> > On Aug 26, 2013 6:54 PM, "Alex Garzão" wrote:
> >>
> >> Hello guys,
> >>
> >> This is my first post to nginx-devel.
> >>
> >> First of all, I would like to congratulate NGINX developers. NGINX is
> >> an amazing project :-)
> >>
> >> Well, I'm using NGINX as a proxy server, with cache enabled. I noted
> >> that, when two (or more) users try to download the same object, in
> >> parallel, and the object isn't in the cache, NGINX downloads it from
> >> the upstream. In this case, NGINX creates one connection to upstream
> >> (per request) and downloads it to temp files.
Ok, this works, but, in
> >> some situations, on one server, we saw more than 70 parallel downloads
> >> of the same object (in this case, an object with more than 200 MB).
> >>
> >> If possible, I would like some insights about how I can avoid this
> >> situation. I looked to see if it's just a configuration, but I didn't
> >> find anything.
> >>
> >> IMHO, I think the best approach is to share the temp file. If possible,
> >> I would like to know your opinions about this approach.
> >>
> >> I looked at the code in ngx_http_upstream.c and ngx_http_proxy.c, and
> >> I'm trying to fix the code to share the temp. I think that I need to
> >> do the following tasks:
> >>
> >> 1) Register the current downloads from upstreams. Probably I can
> >> address this with an rbtree, where each node has the unique object id
> >> and a list with downstreams (requests?) waiting for data from the
> >> temp.
> >>
> >> 2) Disassociate the read from upstream from the write to downstream.
> >> Today, in the ngx_event_pipe function, NGINX reads from upstream,
> >> writes to temp, and writes to downstream. But, as I can have N
> >> downstreams waiting for data from the same upstream, probably I need to
> >> move the write to downstream to another place. The only way I can think
> >> of is implementing a polling event, but I know that this is incorrect
> >> because NGINX is event based, and polling wastes a lot of CPU.
> >>
> >> 3) When I know that there is more data in temp to be sent, which
> >> function must I use? ngx_http_output_filter?
> >>
> >> Suggestions are welcome :-)
> >>
> >> Thanks people!
> >>
> >> --
> >> Alex Garzão
> >> Projetista de Software
> >> Azion Technologies
> >> alex.garzao (at) azion.com
> >>
> >> _______________________________________________
> >> nginx-devel mailing list
> >> nginx-devel at nginx.org
> >> http://mailman.nginx.org/mailman/listinfo/nginx-devel
> >
> >
> > _______________________________________________
> > nginx-devel mailing list
> > nginx-devel at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From parker.p.dev at gmail.com Wed Aug 28 08:20:46 2013
From: parker.p.dev at gmail.com (Phil Parker)
Date: Wed, 28 Aug 2013 09:20:46 +0100
Subject: Verify Upstream SSL Certs
Message-ID:

This has been discussed in detail previously:

http://trac.nginx.org/nginx/ticket/13
http://mailman.nginx.org/pipermail/nginx-devel/2011-September/001182.html

I have created a patch that I'm using locally and would like to contribute
but am a first-time contributor so looking for advice.

The way I've implemented it supports two (mutually exclusive) new
directives on a location. e.g.

    location / {
        proxy_ssl_peer_certificate_path "/tmp/sslcerts";
        #proxy_ssl_peer_certificate_file "/tmp/sslcerts/cert.pem";
        proxy_pass ....
    }

These are passed through to SSL_CTX_load_verify_locations
(http://www.openssl.org/docs/ssl/SSL_CTX_load_verify_locations.html)

The main advice I'm looking for:

1) Is this implemented in a way that is useful for others?
2) Should I be writing tests/test driving? If so, how?
3) Anything in the patch (below) that needs to be changed (implementation
or style)?
4) How best to submit the patch (I've currently made it against 1.4.2 and
just created a patch file, not currently a Mercurial user but can check-out
if necessary)?

Thx, P.
diff -uNr ../nginx-1.4.2/src/event/ngx_event_openssl.c src/event/ngx_event_openssl.c --- ../nginx-1.4.2/src/event/ngx_event_openssl.c 2013-07-17 13:51:21.000000000 +0100 +++ src/event/ngx_event_openssl.c 2013-08-28 08:21:26.062300918 +0100 @@ -228,6 +228,30 @@ SSL_CTX_set_info_callback(ssl->ctx, ngx_ssl_info_callback); + if (ssl->ca_certificate_file.len > 0) { + SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, NULL); + if (SSL_CTX_load_verify_locations(ssl->ctx, (const char *) + ssl->ca_certificate_file.data, NULL + ) == 0){ + ngx_ssl_error(NGX_LOG_ALERT, ssl->log, 0, + "SSL_CTX_load_verify_locations(ctx, \"%s\", NULL) failed", + (const char *)ssl->ca_certificate_file.data); + return NGX_ERROR; + } + } + + if (ssl->ca_certificate_path.len > 0) { + SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, NULL); + if (SSL_CTX_load_verify_locations(ssl->ctx, NULL, + (const char *) + ssl->ca_certificate_path.data) == 0){ + ngx_ssl_error(NGX_LOG_ALERT, ssl->log, 0, + "SSL_CTX_load_verify_locations(ctx, NULL, \"%s\") failed", + (const char *)ssl->ca_certificate_path.data); + return NGX_ERROR; + } + } + return NGX_OK; } diff -uNr ../nginx-1.4.2/src/event/ngx_event_openssl.h src/event/ngx_event_openssl.h --- ../nginx-1.4.2/src/event/ngx_event_openssl.h 2013-07-17 13:51:21.000000000 +0100 +++ src/event/ngx_event_openssl.h 2013-08-28 08:21:26.074300918 +0100 @@ -29,6 +29,8 @@ typedef struct { SSL_CTX *ctx; ngx_log_t *log; + ngx_str_t ca_certificate_file; + ngx_str_t ca_certificate_path; } ngx_ssl_t; diff -uNr ../nginx-1.4.2/src/http/modules/ngx_http_proxy_module.c src/http/modules/ngx_http_proxy_module.c --- ../nginx-1.4.2/src/http/modules/ngx_http_proxy_module.c 2013-07-17 13:51:22.000000000 +0100 +++ src/http/modules/ngx_http_proxy_module.c 2013-08-28 08:21:26.074300918 +0100 @@ -511,6 +511,20 @@ offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse), NULL }, + { ngx_string("proxy_ssl_peer_certificate_file"), + 
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_certificate_file), + NULL }, + + { ngx_string("proxy_ssl_peer_certificate_path"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_certificate_path), + NULL }, + #endif ngx_null_command @@ -3742,6 +3756,11 @@ plcf->upstream.ssl->log = cf->log; + plcf->upstream.ssl->ca_certificate_file = + plcf->upstream.ssl_certificate_file; + plcf->upstream.ssl->ca_certificate_path = + plcf->upstream.ssl_certificate_path; + if (ngx_ssl_create(plcf->upstream.ssl, NGX_SSL_SSLv2|NGX_SSL_SSLv3|NGX_SSL_TLSv1 |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2, diff -uNr ../nginx-1.4.2/src/http/ngx_http_upstream.h src/http/ngx_http_upstream.h --- ../nginx-1.4.2/src/http/ngx_http_upstream.h 2013-07-17 13:51:22.000000000 +0100 +++ src/http/ngx_http_upstream.h 2013-08-28 08:21:26.090300917 +0100 @@ -191,6 +191,8 @@ #if (NGX_HTTP_SSL) ngx_ssl_t *ssl; ngx_flag_t ssl_session_reuse; + ngx_str_t ssl_certificate_file; + ngx_str_t ssl_certificate_path; #endif ngx_str_t module; -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Aug 28 08:33:39 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Aug 2013 08:33:39 +0000 Subject: [nginx] Image filter: large image handling. Message-ID: details: http://hg.nginx.org/nginx/rev/317e0893a1e6 branches: changeset: 5348:317e0893a1e6 user: Lanshun Zhou date: Wed Aug 28 00:19:07 2013 +0800 description: Image filter: large image handling. If Content-Length header is not set, and the image size is larger than the buffer size, client will hang until a timeout occurs. Now NGX_HTTP_UNSUPPORTED_MEDIA_TYPE is returned immediately. 
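In isolation, the behavioral change of this commit is a single bounds check: instead of silently clamping the copy to the space left in the image buffer, an oversized response now fails fast. A standalone sketch of that logic (plain C with simplified, hypothetical names — in the real code the check sits in ngx_http_image_read() and the error is mapped to NGX_HTTP_UNSUPPORTED_MEDIA_TYPE):

```c
#include <stddef.h>

/* Simplified model of the buffer accounting in the image filter:
 * `length` is the total image buffer size (from Content-Length or
 * image_filter_buffer), `used` is how much has been copied so far,
 * and `incoming` is the size of the chunk that just arrived.  The old
 * code clamped `incoming` to `rest`, so with a missing Content-Length
 * the buffer silently filled and the client hung until timeout; the
 * new code rejects the oversized response immediately. */
static int image_read_check(size_t length, size_t used, size_t incoming)
{
    size_t rest = length - used;

    if (incoming > rest) {
        return -1;          /* "image filter: too big response" */
    }

    return 0;               /* safe to copy `incoming` bytes */
}
```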
diff -r d1403de41631 -r 4fae04f332b4 src/http/modules/ngx_http_image_filter_module.c
diffstat:
 src/http/modules/ngx_http_image_filter_module.c |  7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)
diffs (17 lines):

diff --git a/src/http/modules/ngx_http_image_filter_module.c b/src/http/modules/ngx_http_image_filter_module.c
--- a/src/http/modules/ngx_http_image_filter_module.c
+++ b/src/http/modules/ngx_http_image_filter_module.c
@@ -478,7 +478,12 @@ ngx_http_image_read(ngx_http_request_t *
                    "image buf: %uz", size);

     rest = ctx->image + ctx->length - p;
-    size = (rest < size) ? rest : size;
+
+    if (size > rest) {
+        ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
+                      "image filter: too big response");
+        return NGX_ERROR;
+    }

     p = ngx_cpymem(p, b->pos, size);
     b->pos += size;

From mdounin at mdounin.ru Wed Aug 28 08:34:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Aug 2013 12:34:05 +0400 Subject: [PATCH] Image filter: large image handling In-Reply-To: References: <20130827224310.GD2748@mdounin.ru> Message-ID: <20130828083405.GB8272@mdounin.ru> Hello! On Wed, Aug 28, 2013 at 10:40:37AM +0800, lanshun zhou wrote: > It's ok for me, thanks~ Pushed, thanks. -- Maxim Dounin http://nginx.org/en/donation.html
From mdounin at mdounin.ru Wed Aug 28 08:54:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Aug 2013 12:54:03 +0400 Subject: Verify Upstream SSL Certs In-Reply-To: References: Message-ID: <20130828085403.GD8272@mdounin.ru> Hello! On Wed, Aug 28, 2013 at 09:20:46AM +0100, Phil Parker wrote: > This has been discussed in detail previously: > > http://trac.nginx.org/nginx/ticket/13 > http://mailman.nginx.org/pipermail/nginx-devel/2011-September/001182.html > > I have created a patch that I'm using locally and would like to contribute > but am a first-time contributor so looking for advice. Given the fact that Aviram Cohen's patch for the same ticket is already in the review process, I would suggest you to join review/testing instead.
See this thread for details: http://mailman.nginx.org/pipermail/nginx-devel/2013-August/004085.html > The way I've implemented it supports two (mutually exclusive) new > directives on a location. e.g. > > location / { > proxy_ssl_peer_certificate_path "/tmp/sslcerts"; > #proxy_ssl_peer_certificate_file "/tmp/sslcerts/cert.pem"; > proxy_pass .... > } > > These are passed through to SSL_CTX_load_verify_locations ( > http://www.openssl.org/docs/ssl/SSL_CTX_load_verify_locations.html) Just a side note: we don't provide "_path" variants for other certificate verification directives, so it's unlikely it will be accepted for a proxy peer verification. > The main advice I'm looking for: > > 1) Is this implemented in a way that is useful for others? > 2) Should I be writing tests/test driving? If so, how? Writing tests may make sense (though not required), test suite is available at http://hg.nginx.org/nginx-tests. > 3) Anything in the patch (below) that needs to be changed (implementation > or style)? > 4) How best to submit the patch (I've currently made it against 1.4.2 and > just created a patch file, not currently a Mercurial user but can check-out > if necessary)? Basic recommendations can be found here: http://nginx.org/en/docs/contributing_changes.html [...] -- Maxim Dounin http://nginx.org/en/donation.html From torshie at gmail.com Wed Aug 28 09:54:39 2013 From: torshie at gmail.com (=?UTF-8?B?6YKT5bCn?=) Date: Wed, 28 Aug 2013 17:54:39 +0800 Subject: The meaning of ngx_http_request_t.out ? In-Reply-To: <20130827142100.GA19334@mdounin.ru> References: <20130827142100.GA19334@mdounin.ru> Message-ID: On Tue, Aug 27, 2013 at 10:21 PM, Maxim Dounin wrote: > Hello! > > On Tue, Aug 27, 2013 at 11:21:38AM +0800, ?? wrote: > > > Hi, > > I'm writing an nginx module, it does something similar to the sub module. > > After some research I succeeded in handling the ngx_chain_t pointer > passed > > to my body filter. My module seems to work well for static files. 
> > When my module is handling PHP responses (fastcgi), sometimes the > > ngx_chain_t pointer is NULL. I simply call the next body filter in such > > situation like the sub module body filter. However function > > ngx_http_write_filter() will fail, because r->out isn't NULL > > and the buf's in r->out are of zero size. > > My questions are: what's the meaning of r->out? how should I modify it in > > my body filter ? Could I simply return NGX_OK in my body filter without > > calling the next body filter if the ngx_chain_t pointer is NULL ? > > The r->out is write filter's private data, you shouldn't touch it > in your module. And it shouldn't contain zero size non-special > buffers, if it does - there is a bug somewhere (most likely in > your filter). > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel What are the most likely reasons for a buffer in r->out to be zero sized and non-special ? I do believe the bug is somewhere in my body filter, but I checked many times, cannot find it. My body filter's work flow is like the following: 1. create a private empty chain, ctx->out. 2. search the input chain for patterns 3. if a matched pattern is found, append three ngx_buf_t structures (one for the substitution string, two for the data surrounding the matched pattern) to ctx->out, the one used for the substitution string is zerorized then only three members are set: pos, last, memory. The other two are memcpy()ed from the original buffer then: * pos, last, file_pos, file_last are modified accordingly * last_buf, last_in_chain are cleared (last_buf & last_in_chain of the final chain are still set) * shadow is set to NULL 4. if no pattern is found, append the ngx_buf_t structure to ctx->out, then clear last_buf & last_in_chain. 5. 
when a pattern is across two buffers, well, it's ignored, should be no problem. 6. feed ctx->out to the next body filter 7. return the code from the above function call after some cleaning up. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Aug 28 11:40:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Aug 2013 15:40:01 +0400 Subject: The meaning of ngx_http_request_t.out ? In-Reply-To: References: <20130827142100.GA19334@mdounin.ru> Message-ID: <20130828114001.GE8272@mdounin.ru> Hello! On Wed, Aug 28, 2013 at 05:54:39PM +0800, ?? wrote: > On Tue, Aug 27, 2013 at 10:21 PM, Maxim Dounin wrote: > > > Hello! > > > > On Tue, Aug 27, 2013 at 11:21:38AM +0800, ?? wrote: > > > > > Hi, > > > I'm writing an nginx module, it does something similar to the sub module. > > > After some research I succeeded in handling the ngx_chain_t pointer > > passed > > > to my body filter. My module seems to work well for static files. > > > When my module is handling PHP responses (fastcgi), sometimes the > > > ngx_chain_t pointer is NULL. I simply call the next body filter in such > > > situation like the sub module body filter. However function > > > ngx_http_write_filter() will fail, because r->out isn't NULL > > > and the buf's in r->out are of zero size. > > > My questions are: what's the meaning of r->out? how should I modify it in > > > my body filter ? Could I simply return NGX_OK in my body filter without > > > calling the next body filter if the ngx_chain_t pointer is NULL ? > > > > The r->out is write filter's private data, you shouldn't touch it > > in your module. And it shouldn't contain zero size non-special > > buffers, if it does - there is a bug somewhere (most likely in > > your filter). > > What are the most likely reasons for a buffer in r->out to be zero sized > and non-special ? I do believe the bug is somewhere in my body filter, but > I checked many times, cannot find it. 
My body filter's work flow is like > the following: > 1. create a private empty chain, ctx->out. > 2. search the input chain for patterns > 3. if a matched pattern is found, append three ngx_buf_t structures (one > for the substitution string, two for the data surrounding the matched > pattern) to ctx->out, the one used for the substitution string is zerorized > then only three members are set: pos, last, memory. The other two are > memcpy()ed from the original buffer then: > * pos, last, file_pos, file_last are modified accordingly > * last_buf, last_in_chain are cleared (last_buf & last_in_chain of the > final chain are still set) > * shadow is set to NULL > 4. if no pattern is found, append the ngx_buf_t structure to ctx->out, then > clear last_buf & last_in_chain. > 5. when a pattern is across two buffers, well, it's ignored, should be no > problem. > 6. feed ctx->out to the next body filter > 7. return the code from the above function call after some cleaning up. I suspect your code reuses/modifies buffers passed to next body filter before they are sent. Note you should keep track of buffers passed to next filters and don't reuse them before they are fully sent (usually ngx_chain_update_chains() function is used for dirty work). Alternatively, if you modify ngx_buf_t structures your code got from previous body filters - your modifications may confuse tracking code and buffers may be reused there (again, before they are sent). -- Maxim Dounin http://nginx.org/en/donation.html From parker.p.dev at gmail.com Wed Aug 28 15:45:38 2013 From: parker.p.dev at gmail.com (Phil Parker) Date: Wed, 28 Aug 2013 16:45:38 +0100 Subject: Verify Upstream SSL Certs In-Reply-To: <20130828085403.GD8272@mdounin.ru> References: <20130828085403.GD8272@mdounin.ru> Message-ID: On Wed, Aug 28, 2013 at 9:54 AM, Maxim Dounin wrote: > > Hello! Hi! 
> > On Wed, Aug 28, 2013 at 09:20:46AM +0100, Phil Parker wrote: > > > This has been discussed in detail previously: > > > > http://trac.nginx.org/nginx/ticket/13 > > http://mailman.nginx.org/pipermail/nginx-devel/2011-September/001182.html > > > > I have created a patch that I'm using locally and would like to contribute > > but am a first-time contributor so looking for advice. > > Given the fact that Aviram Cohen's patch for the same ticket is > already in the review process, I would suggest you to join > review/testing instead. Thanks, I missed that in all my searches. It might be worth adding a comment to the trac ticket and the previous (dead, I think) patch thread I found above so people can "follow the breadcrumbs"? > See this thread for details: > http://mailman.nginx.org/pipermail/nginx-devel/2013-August/004085.html > I've downloaded this and managed to patch/compile on: nginx version: nginx/1.4.2 Linux 3.8.0-25-generic #37-Ubuntu SMP Thu Jun 6 20:47:07 UTC 2013 x86_64 GNU/Linux I specified proxy_ssl_verify and proxy_ssl_trusted_certificate (I tried this with both specifying a single cert, which worked with my previous patch, and a combined cert via 'openssl x509 -in cert1.pem -text >> CAfile.pem') but got the following error when trying to proxy: [error] 14716#0: *1 upstream sslcertificate validation failed while SSL handshaking to upstream This message doesn't match the one in the patch (which is just "upstream sslcertificate validation failed"), but a search led me to http://serverfault.com/questions/436737/forcing-a-particular-ssl-protocol-for-an-nginx-proxying-server. In my case downgrading openssl to 1.0.0 didn't seem to change anything. I'll keep investigating, but it would be useful to see if anyone has seen this before or knows what the cause might be.
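For reference, the configuration being exercised here looks roughly like the following (directive names as in Aviram Cohen's patch under review; the paths and the upstream name are hypothetical):

```nginx
location / {
    # CA bundle used to verify the upstream's certificate chain.  A
    # bundle is normally built by plain concatenation of PEM files
    # (cat cert1.pem cert2.pem > CAfile.pem); note that
    # `openssl x509 -text` also emits a human-readable dump in front
    # of the PEM block.
    proxy_ssl_verify              on;
    proxy_ssl_trusted_certificate /etc/nginx/certs/CAfile.pem;

    # Maximum verification depth of the upstream certificate chain;
    # the patch defaults this to 1.
    proxy_ssl_verify_depth        2;

    proxy_pass https://backend.example.com;
}
```

When a verification error like the one above appears, it may be worth first checking the bundle outside nginx, e.g. with `openssl verify -CAfile CAfile.pem server.pem`, since a bundle containing only the leaf certificate fails against servers that present an intermediate chain.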
One additional point is it looks from the patch like if you don't specify 'proxy_ssl_verify_depth' it defaults to 1 but the Open SSL documentation states it defaults to 9 http://www.openssl.org/docs/ssl/SSL_CTX_set_verify.html#NOTES. I'd suggest if it's not specified in an nginx directive then the default should be that of open ssl (the Principle of Least Astonishment applies....). > > The way I've implemented it supports two (mutually exclusive) new > > directives on a location. e.g. > > > > location / { > > proxy_ssl_peer_certificate_path "/tmp/sslcerts"; > > #proxy_ssl_peer_certificate_file "/tmp/sslcerts/cert.pem"; > > proxy_pass .... > > } > > > > These are passed through to SSL_CTX_load_verify_locations ( > > http://www.openssl.org/docs/ssl/SSL_CTX_load_verify_locations.html) > > Just a side note: we don't provide "_path" variants for other > certificate verification directives, so it's unlikely it will be > accepted for a proxy peer verification. > > > The main advice I'm looking for: > > > > 1) Is this implemented in a way that is useful for others? > > 2) Should I be writing tests/test driving? If so, how? > > Writing tests may make sense (though not required), test suite is > available at http://hg.nginx.org/nginx-tests. > > > 3) Anything in the patch (below) that needs to be changed (implementation > > or style)? > > 4) How best to submit the patch (I've currently made it against 1.4.2 and > > just created a patch file, not currently a Mercurial user but can check-out > > if necessary)? > > Basic recommendations can be found here: > > http://nginx.org/en/docs/contributing_changes.html > > [...] > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel P. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed Aug 28 16:22:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Aug 2013 20:22:36 +0400 Subject: Verify Upstream SSL Certs In-Reply-To: References: <20130828085403.GD8272@mdounin.ru> Message-ID: <20130828162236.GJ8272@mdounin.ru> Hello! On Wed, Aug 28, 2013 at 04:45:38PM +0100, Phil Parker wrote: [...] > It might be worth adding a comment to the trac ticket and the previous > (dead, I think) patch thread I found above so people can "follow the > breadcrumbs"? Sure, I've added a couple of links there. > > See this thread for details: > > http://mailman.nginx.org/pipermail/nginx-devel/2013-August/004085.html > > > > I've downloaded this and managed to patch/compile on: > > nginx version: nginx/1.4.2 > Linux 3.8.0-25-generic #37-Ubuntu SMP Thu Jun 6 20:47:07 UTC 2013 x86_64 > GNU/Linux > > I specified proxy_ssl_verify and proxy_ssl_trusted_certificate (I tried > this with both specifying a single cert, which worked with my previous > patch, and a combined cert via 'openssl x509 -in cert1.pem -text >> > CAfile.pem') but got the following error when trying to proxy: > > [error] 14716#0: *1 upstream sslcertificate validation failed while SSL > handshaking to upstream > > This message doesn't match the one in the patch (which is just "upstream > sslcertificate validation failed" but a search led me to The message is different as "while action>" is added automatically by ngx_http_log_error(). One of the comments I've made during last review is that error messages should be improved. :) [...] > One additional point is it looks from the patch like if you don't specify > 'proxy_ssl_verify_depth' it defaults to 1 but the Open SSL documentation > states it defaults to 9 > http://www.openssl.org/docs/ssl/SSL_CTX_set_verify.html#NOTES. > > I'd suggest if it's not specified in an nginx directive then the default > should be that of open ssl (the Principle of Least Astonishment > applies....). 
The ssl_verify_depth defaults to 1, as well as Apache's SSLProxyVerifyDepth. So I tend to think that using a different default for proxy_ssl_verify_depth would actually break POLA. -- Maxim Dounin http://nginx.org/en/donation.html
From alex.garzao at azion.com Wed Aug 28 16:56:36 2013 From: alex.garzao at azion.com (=?ISO-8859-1?Q?Alex_Garz=E3o?=) Date: Wed, 28 Aug 2013 13:56:36 -0300 Subject: Sharing data when download the same object from upstream In-Reply-To: References: Message-ID: Hello Anatoli, Thanks for your reply. I will appreciate (a lot) your help :-) I'm trying to fix the code with the following requirements in mind: 1) We have upstreams/downstreams with good (and bad) links; in general, the upstream speed is higher than the downstream speed but, in some situations, the downstream speed is a lot higher than the upstream speed; 2) I'm trying to disassociate the upstream speed from the downstream speed. The first request (the request that actually connects to the upstream) downloads data to a temp file, but no longer sends data to the downstream. I disabled this because, in my understanding, if the first request has a slow downstream, all other downstreams would wait for data to be sent to this slow downstream. My first doubt is: need I worry about downstream/upstream speed at all? Well, I will try to explain what I did in the code: 1) I created an rbtree (current_downloads) that keeps the current downloads (one rbtree per upstream).
Each node keeps the first request (the request that actually connects to the upstream) and a list (download_info_list) that keeps two fields: (a) a request waiting for data from the temp file and (b) the file offset already sent from the temp file (last_offset); 2) In ngx_http_upstream_init_request(), when the object isn't in the cache, before connecting to the upstream, I check if the object is in the rbtree (current_downloads); 3) When the object isn't in current_downloads, I add a node that contains the first request (equal to the current request) and I add the current request into the download_info_list. Beyond that, I create a timer event (polling) that will check all requests in download_info_list and verify if there is data in the temp file that has not yet been sent to the downstream. I create one timer event per object [1]. 4) When the object is in current_downloads, I add the request into download_info_list and finalize ngx_http_upstream_init_request() (I just return without executing ngx_http_upstream_finalize_request()); 5) I have disabled (in ngx_event_pipe) the code that sends data to the downstream (requirement 2); 6) In the polling event, I get the current temp file offset (first_request->upstream->pipe->temp_file->offset) and I check in the download_info_list if this is > last_offset. If true, I send more data to the downstream with ngx_http_upstream_cache_send_partial (code below); 7) In the polling event, when pipe->upstream_done || pipe->upstream_eof || pipe->upstream_error, and all data has been sent to the downstreams, I execute ngx_http_upstream_finalize_request for all requests; 8) I added a bit flag (first_download_request) in the ngx_http_request_t struct to avoid the first request being finished before all requests have completed. In ngx_http_upstream_finalize_request() I check this flag. But, honestly, I'm not sure whether it's necessary to avoid this situation...
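A toy model of the bookkeeping in steps 1)–6) (standalone C with hypothetical names — not nginx code): each waiter tracks how much of the shared temp file it has already sent, and the polling handler pushes only the newly appeared bytes:

```c
#include <sys/types.h>   /* off_t */

/* One entry of download_info_list: a downstream waiting on the shared
 * temp file, remembering how far it has already been served. */
typedef struct {
    off_t last_offset;               /* bytes already sent downstream */
} waiter_t;

/* Called from the polling timer with the current size of the temp
 * file (first_request->upstream->pipe->temp_file->offset in the real
 * code).  Returns the number of new bytes to send to this waiter and
 * advances its bookkeeping; 0 means the upstream has produced nothing
 * new since the last tick. */
static off_t pending_window(waiter_t *w, off_t file_offset)
{
    off_t n = file_offset - w->last_offset;

    if (n <= 0) {
        return 0;
    }

    w->last_offset = file_offset;
    return n;
}
```

The same walk over download_info_list would also be the natural place to notice pipe->upstream_done and finalize every waiter, as in step 7).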
Below you can see the ngx_http_upstream_cache_send_partial code:

/////////////
static ngx_int_t
ngx_http_upstream_cache_send_partial(ngx_http_request_t *r,
    ngx_temp_file_t *file, off_t offset, off_t bytes, unsigned last_buf)
{
    ngx_buf_t         *b;
    ngx_chain_t        out;
    ngx_http_cache_t  *c;

    c = r->cache;

    /* we need to allocate all before the header would be sent */

    b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
    if (b == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    b->file = ngx_pcalloc(r->pool, sizeof(ngx_file_t));
    if (b->file == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    /* FIX: need to run ngx_http_send_header(r) once... */

    b->file_pos = offset;
    b->file_last = bytes;

    b->in_file = 1;
    b->last_buf = last_buf;
    b->last_in_chain = 1;

    b->file->fd = file->file.fd;
    b->file->name = file->file.name;
    b->file->log = r->connection->log;

    out.buf = b;
    out.next = NULL;

    return ngx_http_output_filter(r, &out);
}
////////////

My second doubt is: could I just fix ngx_event_pipe to send to all requests (instead of sending to one request)? And, if so, can ngx_http_output_filter be used to send a big chunk the first time (300 MB or more) and little chunks after that? Thanks in advance for your attention :-) [1] I know that a "polling event" is a bad approach with NGINX, but I don't know how to avoid it. For example, the upstream download can be very quick, and it is possible that I need to send data to the downstream in little chunks. The upstream (in NGINX) is socket-event based, but when the download from the upstream finishes, which event can I expect? Regards. -- Alex Garzão Projetista de Software Azion Technologies alex.garzao (at) azion.com
From sepherosa at gmail.com Thu Aug 29 13:24:00 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Thu, 29 Aug 2013 21:24:00 +0800 Subject: [PATCH] SO_REUSEPORT support for listen sockets (round 3) In-Reply-To: References: Message-ID: Hi all, Sorry for the top post. Any follow-up on this? Or should I just keep it as a local patch?
Best Regards, sephe On Fri, Aug 2, 2013 at 1:16 PM, Sepherosa Ziehau wrote: > Hi all, > > Here is another round of SO_REUSEPORT support. The plot is changed a > little bit to allow smooth configure reloading and binary upgrading. > Here is what happens when so_reuseport is enable (this does not affect > single process model): > - Master creates the listen sockets w/ SO_REUSEPORT, but does not > configure them > - The first worker process will inherit the listen sockets created by > master and configure them > - After master forked the first worker process all listen sockets are > closed > - The rest of the workers will create their own listen sockets w/ > SO_REUSEPORT > - During binary upgrade, listen sockets are no longer passed through > environment variables, since new master will create its own listen > sockets. Well, the old master actually does not have any listen > sockets opened :). > > The idea behind this plot is that at any given time, there is always > one listen socket left, which could inherit the syncaches and pending > sockets on the to-be-closed listen sockets. The inheritance itself is > handled by the kernel; I implemented this inheritance for DragonFlyBSD > recently ( > http://gitweb.dragonflybsd.org/dragonfly.git/commit/02ad2f0b874fb0a45eb69750219f79f5e8982272 > ). > I am not tracking Linux's code, but I think Linux side will > eventually get (or already got) the proper fix. > > The patch itself: > http://leaf.dragonflybsd.org/~sephe/ngx_soreuseport3.diff > > Configuration reloading and binary upgrading will not be interfered as > w/ the first 2 patches. > > Binary upgrading reverting method 1 ("Send the HUP signal to the old > master process. ...") will not be interfered as w/ the first 2 > patches. There still could be some glitch (but not that worse as w/ > the first 2 patches) if binary upgrading reverting method 2 ("Send the > TERM signal to the new master process. ...") is used. 
I think we > probably just need to mention that in the document. > > Best Regards, > sephe > > -- > Tomorrow Will Never Die > -- Tomorrow Will Never Die -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Thu Aug 29 18:37:54 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 29 Aug 2013 18:37:54 +0000 Subject: [nginx] Referer: fixed error type usage inconsistency for ngx_ht... Message-ID: details: http://hg.nginx.org/nginx/rev/9b8a634e348a branches: changeset: 5349:9b8a634e348a user: Sergey Kandaurov date: Thu Aug 29 22:35:26 2013 +0400 description: Referer: fixed error type usage inconsistency for ngx_http_add*(). diffstat: src/http/modules/ngx_http_referer_module.c | 30 +++++++++++++++--------------- 1 files changed, 15 insertions(+), 15 deletions(-) diffs (113 lines): diff -r 317e0893a1e6 -r 9b8a634e348a src/http/modules/ngx_http_referer_module.c --- a/src/http/modules/ngx_http_referer_module.c Wed Aug 28 00:19:07 2013 +0800 +++ b/src/http/modules/ngx_http_referer_module.c Thu Aug 29 22:35:26 2013 +0400 @@ -41,9 +41,9 @@ static char * ngx_http_referer_merge_con void *child); static char *ngx_http_valid_referers(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); -static char *ngx_http_add_referer(ngx_conf_t *cf, ngx_hash_keys_arrays_t *keys, - ngx_str_t *value, ngx_str_t *uri); -static char *ngx_http_add_regex_referer(ngx_conf_t *cf, +static ngx_int_t ngx_http_add_referer(ngx_conf_t *cf, + ngx_hash_keys_arrays_t *keys, ngx_str_t *value, ngx_str_t *uri); +static ngx_int_t ngx_http_add_regex_referer(ngx_conf_t *cf, ngx_http_referer_conf_t *rlcf, ngx_str_t *name, ngx_regex_t *regex); static int ngx_libc_cdecl ngx_http_cmp_referer_wildcards(const void *one, const void *two); @@ -497,7 +497,7 @@ ngx_http_valid_referers(ngx_conf_t *cf, } -static char * +static ngx_int_t ngx_http_add_referer(ngx_conf_t *cf, ngx_hash_keys_arrays_t *keys, ngx_str_t *value, ngx_str_t *uri) { @@ -510,7 +510,7 @@ 
ngx_http_add_referer(ngx_conf_t *cf, ngx } else { u = ngx_palloc(cf->pool, sizeof(ngx_str_t)); if (u == NULL) { - return NGX_CONF_ERROR; + return NGX_ERROR; } *u = *uri; @@ -519,7 +519,7 @@ ngx_http_add_referer(ngx_conf_t *cf, ngx rc = ngx_hash_add_key(keys, value, u, NGX_HASH_WILDCARD_KEY); if (rc == NGX_OK) { - return NGX_CONF_OK; + return NGX_OK; } if (rc == NGX_DECLINED) { @@ -532,11 +532,11 @@ ngx_http_add_referer(ngx_conf_t *cf, ngx "conflicting parameter \"%V\"", value); } - return NGX_CONF_ERROR; + return NGX_ERROR; } -static char * +static ngx_int_t ngx_http_add_regex_referer(ngx_conf_t *cf, ngx_http_referer_conf_t *rlcf, ngx_str_t *name, ngx_regex_t *regex) { @@ -547,26 +547,26 @@ ngx_http_add_regex_referer(ngx_conf_t *c if (name->len == 1) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "empty regex in \"%V\"", name); - return NGX_CONF_ERROR; + return NGX_ERROR; } if (rlcf->regex == NGX_CONF_UNSET_PTR) { rlcf->regex = ngx_array_create(cf->pool, 2, sizeof(ngx_regex_elt_t)); if (rlcf->regex == NULL) { - return NGX_CONF_ERROR; + return NGX_ERROR; } } re = ngx_array_push(rlcf->regex); if (re == NULL) { - return NGX_CONF_ERROR; + return NGX_ERROR; } if (regex) { re->regex = regex; re->name = name->data; - return NGX_CONF_OK; + return NGX_OK; } name->len--; @@ -582,13 +582,13 @@ ngx_http_add_regex_referer(ngx_conf_t *c if (ngx_regex_compile(&rc) != NGX_OK) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "%V", &rc.err); - return NGX_CONF_ERROR; + return NGX_ERROR; } re->regex = rc.regex; re->name = name->data; - return NGX_CONF_OK; + return NGX_OK; #else @@ -596,7 +596,7 @@ ngx_http_add_regex_referer(ngx_conf_t *c "the using of the regex \"%V\" requires PCRE library", name); - return NGX_CONF_ERROR; + return NGX_ERROR; #endif } From pluknet at nginx.com Thu Aug 29 18:37:56 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 29 Aug 2013 18:37:56 +0000 Subject: [nginx] Referer: fixed server_name regex matching. 
Message-ID: details: http://hg.nginx.org/nginx/rev/8220e393c241 branches: changeset: 5350:8220e393c241 user: Sergey Kandaurov date: Thu Aug 29 22:35:26 2013 +0400 description: Referer: fixed server_name regex matching. The server_name regexes are normally compiled for case-sensitive matching. This violates case-insensitive obligations in the referer module. To fix this, the host string is converted to lower case before matching. Previously server_name regex was executed against the whole referer string after dropping the scheme part. This could led to an improper matching, e.g.: server_name ~^localhost$; valid_referers server_names; Referer: http://localhost/index.html It was changed to look only at the hostname part. The server_name regexes are separated into another array to not clash with regular regexes. diffstat: src/http/modules/ngx_http_referer_module.c | 89 ++++++++++++++++++++++------- 1 files changed, 67 insertions(+), 22 deletions(-) diffs (190 lines): diff -r 9b8a634e348a -r 8220e393c241 src/http/modules/ngx_http_referer_module.c --- a/src/http/modules/ngx_http_referer_module.c Thu Aug 29 22:35:26 2013 +0400 +++ b/src/http/modules/ngx_http_referer_module.c Thu Aug 29 22:35:26 2013 +0400 @@ -12,18 +12,13 @@ #define NGX_HTTP_REFERER_NO_URI_PART ((void *) 4) -#if !(NGX_PCRE) - -#define ngx_regex_t void - -#endif - typedef struct { ngx_hash_combined_t hash; #if (NGX_PCRE) ngx_array_t *regex; + ngx_array_t *server_name_regex; #endif ngx_flag_t no_referer; @@ -44,7 +39,11 @@ static char *ngx_http_valid_referers(ngx static ngx_int_t ngx_http_add_referer(ngx_conf_t *cf, ngx_hash_keys_arrays_t *keys, ngx_str_t *value, ngx_str_t *uri); static ngx_int_t ngx_http_add_regex_referer(ngx_conf_t *cf, - ngx_http_referer_conf_t *rlcf, ngx_str_t *name, ngx_regex_t *regex); + ngx_http_referer_conf_t *rlcf, ngx_str_t *name); +#if (NGX_PCRE) +static ngx_int_t ngx_http_add_regex_server_name(ngx_conf_t *cf, + ngx_http_referer_conf_t *rlcf, ngx_http_regex_t *regex); +#endif 
static int ngx_libc_cdecl ngx_http_cmp_referer_wildcards(const void *one, const void *two); @@ -117,6 +116,10 @@ ngx_http_referer_variable(ngx_http_reque ngx_uint_t i, key; ngx_http_referer_conf_t *rlcf; u_char buf[256]; +#if (NGX_PCRE) + ngx_int_t rc; + ngx_str_t referer; +#endif rlcf = ngx_http_get_module_loc_conf(r, ngx_http_referer_module); @@ -125,6 +128,7 @@ ngx_http_referer_variable(ngx_http_reque && rlcf->hash.wc_tail == NULL #if (NGX_PCRE) && rlcf->regex == NULL + && rlcf->server_name_regex == NULL #endif ) { @@ -189,10 +193,25 @@ valid_scheme: #if (NGX_PCRE) + if (rlcf->server_name_regex) { + referer.len = p - ref; + referer.data = buf; + + rc = ngx_regex_exec_array(rlcf->server_name_regex, &referer, + r->connection->log); + + if (rc == NGX_OK) { + goto valid; + } + + if (rc == NGX_ERROR) { + return rc; + } + + /* NGX_DECLINED */ + } + if (rlcf->regex) { - ngx_int_t rc; - ngx_str_t referer; - referer.len = len; referer.data = ref; @@ -255,6 +274,7 @@ ngx_http_referer_create_conf(ngx_conf_t #if (NGX_PCRE) conf->regex = NGX_CONF_UNSET_PTR; + conf->server_name_regex = NGX_CONF_UNSET_PTR; #endif conf->no_referer = NGX_CONF_UNSET; @@ -279,6 +299,8 @@ ngx_http_referer_merge_conf(ngx_conf_t * #if (NGX_PCRE) ngx_conf_merge_ptr_value(conf->regex, prev->regex, NULL); + ngx_conf_merge_ptr_value(conf->server_name_regex, + prev->server_name_regex, NULL); #endif ngx_conf_merge_value(conf->no_referer, prev->no_referer, 0); ngx_conf_merge_value(conf->blocked_referer, prev->blocked_referer, 0); @@ -368,6 +390,8 @@ ngx_http_referer_merge_conf(ngx_conf_t * #if (NGX_PCRE) ngx_conf_merge_ptr_value(conf->regex, prev->regex, NULL); + ngx_conf_merge_ptr_value(conf->server_name_regex, prev->server_name_regex, + NULL); #endif if (conf->no_referer == NGX_CONF_UNSET) { @@ -450,8 +474,7 @@ ngx_http_valid_referers(ngx_conf_t *cf, #if (NGX_PCRE) if (sn[n].regex) { - if (ngx_http_add_regex_referer(cf, rlcf, &sn[n].name, - sn[n].regex->regex) + if (ngx_http_add_regex_server_name(cf, 
rlcf, sn[n].regex) != NGX_OK) { return NGX_CONF_ERROR; @@ -472,8 +495,7 @@ ngx_http_valid_referers(ngx_conf_t *cf, } if (value[i].data[0] == '~') { - if (ngx_http_add_regex_referer(cf, rlcf, &value[i], NULL) != NGX_OK) - { + if (ngx_http_add_regex_referer(cf, rlcf, &value[i]) != NGX_OK) { return NGX_CONF_ERROR; } @@ -538,7 +560,7 @@ ngx_http_add_referer(ngx_conf_t *cf, ngx static ngx_int_t ngx_http_add_regex_referer(ngx_conf_t *cf, ngx_http_referer_conf_t *rlcf, - ngx_str_t *name, ngx_regex_t *regex) + ngx_str_t *name) { #if (NGX_PCRE) ngx_regex_elt_t *re; @@ -562,13 +584,6 @@ ngx_http_add_regex_referer(ngx_conf_t *c return NGX_ERROR; } - if (regex) { - re->regex = regex; - re->name = name->data; - - return NGX_OK; - } - name->len--; name->data++; @@ -602,6 +617,36 @@ ngx_http_add_regex_referer(ngx_conf_t *c } +#if (NGX_PCRE) + +static ngx_int_t +ngx_http_add_regex_server_name(ngx_conf_t *cf, ngx_http_referer_conf_t *rlcf, + ngx_http_regex_t *regex) +{ + ngx_regex_elt_t *re; + + if (rlcf->server_name_regex == NGX_CONF_UNSET_PTR) { + rlcf->server_name_regex = ngx_array_create(cf->pool, 2, + sizeof(ngx_regex_elt_t)); + if (rlcf->server_name_regex == NULL) { + return NGX_ERROR; + } + } + + re = ngx_array_push(rlcf->server_name_regex); + if (re == NULL) { + return NGX_ERROR; + } + + re->regex = regex->regex; + re->name = regex->name.data; + + return NGX_OK; +} + +#endif + + static int ngx_libc_cdecl ngx_http_cmp_referer_wildcards(const void *one, const void *two) { From pluknet at nginx.com Thu Aug 29 18:37:57 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 29 Aug 2013 18:37:57 +0000 Subject: [nginx] Referer: "server_names" parsing deferred to merge phase. Message-ID: details: http://hg.nginx.org/nginx/rev/a2c772963b04 branches: changeset: 5351:a2c772963b04 user: Sergey Kandaurov date: Thu Aug 29 22:35:27 2013 +0400 description: Referer: "server_names" parsing deferred to merge phase. 
This makes it possible to pick up "server_name" values specified below the "valid_referers" directive when its "server_names" parameter is used, e.g.: server_name example.org; valid_referers server_names; server_name example.com; As a bonus, this fixes a bogus error with "server_names" specified several times. diffstat: src/http/modules/ngx_http_referer_module.c | 78 +++++++++++++++++------------ 1 files changed, 45 insertions(+), 33 deletions(-) diffs (139 lines): diff -r 8220e393c241 -r a2c772963b04 src/http/modules/ngx_http_referer_module.c --- a/src/http/modules/ngx_http_referer_module.c Thu Aug 29 22:35:26 2013 +0400 +++ b/src/http/modules/ngx_http_referer_module.c Thu Aug 29 22:35:27 2013 +0400 @@ -23,6 +23,7 @@ typedef struct { ngx_flag_t no_referer; ngx_flag_t blocked_referer; + ngx_flag_t server_names; ngx_hash_keys_arrays_t *keys; @@ -272,6 +273,14 @@ ngx_http_referer_create_conf(ngx_conf_t return NULL; } + /* + * set by ngx_pcalloc(): + * + * conf->hash = { NULL }; + * conf->server_names = 0; + * conf->keys = NULL; + */ + #if (NGX_PCRE) conf->regex = NGX_CONF_UNSET_PTR; conf->server_name_regex = NGX_CONF_UNSET_PTR; @@ -292,7 +301,10 @@ ngx_http_referer_merge_conf(ngx_conf_t * ngx_http_referer_conf_t *prev = parent; ngx_http_referer_conf_t *conf = child; - ngx_hash_init_t hash; + ngx_uint_t n; + ngx_hash_init_t hash; + ngx_http_server_name_t *sn; + ngx_http_core_srv_conf_t *cscf; if (conf->keys == NULL) { conf->hash = prev->hash; @@ -312,6 +324,33 @@ ngx_http_referer_merge_conf(ngx_conf_t * return NGX_CONF_OK; } + if (conf->server_names == 1) { + cscf = ngx_http_conf_get_module_srv_conf(cf, ngx_http_core_module); + + sn = cscf->server_names.elts; + for (n = 0; n < cscf->server_names.nelts; n++) { + +#if (NGX_PCRE) + if (sn[n].regex) { + + if (ngx_http_add_regex_server_name(cf, conf, sn[n].regex) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + + continue; + } +#endif + + if (ngx_http_add_referer(cf, conf->keys, &sn[n].name, NULL) + != NGX_OK) + { + return
NGX_CONF_ERROR; + } + } + } + if ((conf->no_referer == 1 || conf->blocked_referer == 1) && conf->keys->keys.nelts == 0 && conf->keys->dns_wc_head.nelts == 0 @@ -415,10 +454,8 @@ ngx_http_valid_referers(ngx_conf_t *cf, u_char *p; ngx_str_t *value, uri, name; - ngx_uint_t i, n; + ngx_uint_t i; ngx_http_variable_t *var; - ngx_http_server_name_t *sn; - ngx_http_core_srv_conf_t *cscf; ngx_str_set(&name, "invalid_referer"); @@ -462,35 +499,8 @@ ngx_http_valid_referers(ngx_conf_t *cf, continue; } - ngx_str_null(&uri); - if (ngx_strcmp(value[i].data, "server_names") == 0) { - - cscf = ngx_http_conf_get_module_srv_conf(cf, ngx_http_core_module); - - sn = cscf->server_names.elts; - for (n = 0; n < cscf->server_names.nelts; n++) { - -#if (NGX_PCRE) - if (sn[n].regex) { - - if (ngx_http_add_regex_server_name(cf, rlcf, sn[n].regex) - != NGX_OK) - { - return NGX_CONF_ERROR; - } - - continue; - } -#endif - - if (ngx_http_add_referer(cf, rlcf->keys, &sn[n].name, &uri) - != NGX_OK) - { - return NGX_CONF_ERROR; - } - } - + rlcf->server_names = 1; continue; } @@ -502,6 +512,8 @@ ngx_http_valid_referers(ngx_conf_t *cf, continue; } + ngx_str_null(&uri); + p = (u_char *) ngx_strchr(value[i].data, '/'); if (p) { @@ -526,7 +538,7 @@ ngx_http_add_referer(ngx_conf_t *cf, ngx ngx_int_t rc; ngx_str_t *u; - if (uri->len == 0) { + if (uri == NULL || uri->len == 0) { u = NGX_HTTP_REFERER_NO_URI_PART; } else { From pluknet at nginx.com Thu Aug 29 18:37:59 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 29 Aug 2013 18:37:59 +0000 Subject: [nginx] Referer: fixed hostname buffer overflow check. Message-ID: details: http://hg.nginx.org/nginx/rev/ec0be12c8e29 branches: changeset: 5352:ec0be12c8e29 user: Valentin Bartenev date: Thu Aug 29 22:35:54 2013 +0400 description: Referer: fixed hostname buffer overflow check. Because of premature check the effective buffer size was 255 symbols while the buffer is able to handle 256. 
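[Editorial note: the off-by-one described in the commit message can be reproduced in isolation. Everything below — `copy_lowered`, its signature, and its return convention — is invented for illustration and is not the module's code; the point is only that the bounds check must run before the write and compare against the current index, otherwise a name that exactly fills the buffer is wrongly rejected.]

```c
#include <ctype.h>
#include <stddef.h>

/*
 * Illustrative reduction of the hostname-lowering loop.  With the check
 * placed before the write (as in the fix), a name of exactly dst_size
 * bytes is accepted; the old order (write and increment first, then
 * check) rejected it, capping the effective buffer at dst_size - 1
 * (255 of 256).
 *
 * Returns the number of bytes copied, or -1 if the name is too long.
 */
static int
copy_lowered(unsigned char *dst, size_t dst_size,
             const unsigned char *src, size_t len)
{
    size_t  i;

    for (i = 0; i < len; i++) {
        if (i == dst_size) {
            return -1;                  /* too long: does not fit */
        }

        dst[i] = (unsigned char) tolower(src[i]);
    }

    return (int) i;
}
```

Under this ordering a 256-byte hostname fits a 256-byte buffer and only a 257-byte one is rejected, which matches the commit's intent.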
diffstat: src/http/modules/ngx_http_referer_module.c | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diffs (19 lines): diff -r a2c772963b04 -r ec0be12c8e29 src/http/modules/ngx_http_referer_module.c --- a/src/http/modules/ngx_http_referer_module.c Thu Aug 29 22:35:27 2013 +0400 +++ b/src/http/modules/ngx_http_referer_module.c Thu Aug 29 22:35:54 2013 +0400 @@ -178,12 +178,12 @@ valid_scheme: break; } - buf[i] = ngx_tolower(*p); - key = ngx_hash(key, buf[i++]); - if (i == 256) { goto invalid; } + + buf[i] = ngx_tolower(*p); + key = ngx_hash(key, buf[i++]); } uri = ngx_hash_find_combined(&rlcf->hash, key, buf, p - ref); From mdounin at mdounin.ru Thu Aug 29 19:50:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 Aug 2013 23:50:06 +0400 Subject: [PATCH] SO_REUSEPORT support for listen sockets (round 3) In-Reply-To: References: Message-ID: <20130829195006.GN22852@mdounin.ru> Hello! On Thu, Aug 29, 2013 at 09:24:00PM +0800, Sepherosa Ziehau wrote: > Sorry for the top post. Any follow-up on this? Or I should just keep it > as a local patch? Sorry, I've missed your message. I'll take a look and try to respond shortly. -- Maxim Dounin http://nginx.org/en/donation.html From sepherosa at gmail.com Fri Aug 30 08:19:09 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Fri, 30 Aug 2013 16:19:09 +0800 Subject: [PATCH] SO_REUSEPORT support for listen sockets (round 3) In-Reply-To: <20130829195006.GN22852@mdounin.ru> References: <20130829195006.GN22852@mdounin.ru> Message-ID: On Fri, Aug 30, 2013 at 3:50 AM, Maxim Dounin wrote: > Hello! > > On Thu, Aug 29, 2013 at 09:24:00PM +0800, Sepherosa Ziehau wrote: > > > Sorry for the top post. Any follow-up on this? Or I should just keep it > > as a local patch? > > Sorry, I've missed your message. I'll take a look and try to > respond shortly. > > Thank you very much! 
Best Regards,
sephe

> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>

--
Tomorrow Will Never Die
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From a.marinov at ucdn.com Fri Aug 30 08:31:26 2013
From: a.marinov at ucdn.com (Anatoli Marinov)
Date: Fri, 30 Aug 2013 11:31:26 +0300
Subject: Sharing data when download the same object from upstream
In-Reply-To:
References:
Message-ID:

Hello,

On Wed, Aug 28, 2013 at 7:56 PM, Alex Garz?o wrote:
> Hello Anatoli,
>
> Thanks for your reply. I will appreciate (a lot) your help :-)
>
> I'm trying to fix the code with the following requirements in mind:
>
> 1) We have upstreams/downstreams with good (and bad) links; in
> general, the upstream speed is higher than the downstream speed but, in
> some situations, the downstream is a lot quicker than the upstream;

I think this is asynchronous, and if the upstream is faster than the downstream it saves the data to the cached file faster, and the downstream gets the data from the file instead of the mem buffers.

> 2) I'm trying to disassociate the upstream speed from the downstream
> speed. The first request (the request that will actually connect to the
> upstream) downloads data to the temp file, but no longer sends data to
> the downstream. I disabled this because, in my understanding, if the
> first request has a slow downstream, all other downstreams will wait
> for data to be sent to this slow downstream.

I think this is not necessary.

> My first doubt is: do I need to worry about downstream/upstream speed?

No.

> Well, I will try to explain what I did in the code:
>
> 1) I created an rbtree (currrent_downloads) that keeps the current
> downloads (one rbtree per upstream).
> Each node keeps the first request
> (the request that will actually connect to the upstream) and a list
> (download_info_list) that keeps two fields: (a) the requests waiting
> for data from the temp file and (b) the file offset already sent from
> the temp file (last_offset);

I have the same, but in an ordered array (simple implementation). Anyway, the rbtree will do the same. But this structure should be in shared memory, because all workers should know which files are currently being downloaded from upstream. They should exist in the tmp directory.

> 2) In ngx_http_upstream_init_request(), when the object isn't in the
> cache, before connecting to the upstream, I check if the object is in
> the rbtree (current_downloads);
>
> 3) When the object isn't in current_downloads, I add a node that
> contains the first request (equal to the current request) and I add the
> current request to the download_info_list. Beyond that, I create a
> timer event (polling) that will check all requests in
> download_info_list and verify if there is data in the temp file that
> has not yet been sent to the downstream. I create one timer event per
> object [1].
>
> 4) When the object is in current_downloads, I add the request to
> download_info_list and finalize ngx_http_upstream_init_request() (I
> just return without executing ngx_http_upstream_finalize_request());
>
> 5) I have disabled (in ngx_event_pipe) the code that sends data to
> the downstream (requirement 2);
>
> 6) In the polling event, I get the current temp file offset
> (first_request->upstream->pipe->temp_file->offset) and I check in the
> download_info_list if this is > than last_offset.
> If true, I send more data to the downstream with
> ngx_http_upstream_cache_send_partial (code below);
>
> 7) In the polling event, when pipe->upstream_done ||
> pipe->upstream_eof || pipe->upstream_error, and all data were sent to
> the downstream, I execute ngx_http_upstream_finalize_request for all
> requests;
>
> 8) I added a bit flag (first_download_request) to the ngx_http_request_t
> struct to avoid a request being finished before all requests are
> completed. In ngx_http_upstream_finalize_request() I check this flag.
> But, really, I'm not sure if it is necessary to avoid this
> situation...
>
> Below you can see the ngx_http_upstream_cache_send_partial code:
>
> /////////////
> static ngx_int_t
> ngx_http_upstream_cache_send_partial(ngx_http_request_t *r,
>     ngx_temp_file_t *file, off_t offset, off_t bytes, unsigned last_buf)
> {
>     ngx_buf_t         *b;
>     ngx_chain_t        out;
>     ngx_http_cache_t  *c;
>
>     c = r->cache;
>
>     /* we need to allocate all before the header would be sent */
>
>     b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
>     if (b == NULL) {
>         return NGX_HTTP_INTERNAL_SERVER_ERROR;
>     }
>
>     b->file = ngx_pcalloc(r->pool, sizeof(ngx_file_t));
>     if (b->file == NULL) {
>         return NGX_HTTP_INTERNAL_SERVER_ERROR;
>     }
>
>     /* FIX: need to run ngx_http_send_header(r) once... */
>
>     b->file_pos = offset;
>     b->file_last = bytes;
>
>     b->in_file = 1;
>     b->last_buf = last_buf;
>     b->last_in_chain = 1;
>
>     b->file->fd = file->file.fd;
>     b->file->name = file->file.name;
>     b->file->log = r->connection->log;
>
>     out.buf = b;
>     out.next = NULL;
>
>     return ngx_http_output_filter(r, &out);
> }
> ////////////
>
> My second doubt is: could I just fix ngx_event_pipe to send to all
> requests (instead of sending to one request)? And, if true, can
> ngx_http_output_filter be used to send a big chunk the first time
> (300 MB or more) and little chunks after that?

Use smaller chunks.
> Thanks in advance for your attention :-)
>
> [1] I know that "polling event" is a bad approach with NGINX, but I
> don't know how to fix this. For example, the upstream download can be
> very quick, and it is possible that I need to send data to the
> downstream in little chunks. Upstream (in NGINX) is socket event
> based, but when the download from upstream finishes, which event can
> I expect?
>
> Regards.
> --
> Alex Garz?o
> Projetista de Software
> Azion Technologies
> alex.garzao (at) azion.com

You are on the right way. Just keep digging. Do not forget to turn off these features when you have flv or mp4 seek, partial requests and content-encoding different than identity, because you will send broken files to the browsers.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mat999 at gmail.com Fri Aug 30 08:42:04 2013
From: mat999 at gmail.com (SplitIce)
Date: Fri, 30 Aug 2013 18:12:04 +0930
Subject: Sharing data when download the same object from upstream
In-Reply-To:
References:
Message-ID:

This is an interesting idea; while I don't see it being all that useful for most applications, there are some that could really benefit (large-file proxying first comes to mind). If it could be achieved without introducing too much of a CPU overhead in keeping track of the requests & available parts, it would be quite interesting.

I would like to see an option to supply a minimum size to restrict this feature to (either by adding to the map/rbtree after x bytes are passed, or based off content-length).

Regards,
Mathew

On Fri, Aug 30, 2013 at 6:01 PM, Anatoli Marinov wrote:
> [...]

From a.marinov at ucdn.com Fri Aug 30 08:55:11 2013
From: a.marinov at ucdn.com (Anatoli Marinov)
Date: Fri, 30 Aug 2013 11:55:11 +0300
Subject: Sharing data when download the same object from upstream
In-Reply-To:
References:
Message-ID:

I discussed the idea years ago here in the mailing list, but nobody from the main developers liked it. However, I developed a patch and we have had it in production for more than 1 year and it works fine.

Just think of the following case: you have a new file which is 1 GB and it is located far from the cache. Even so, you can download it at 5 MBps through the cache upstream, so you need 200 seconds to get it. This file is a video file and, because it is new, it is placed on the first page. In the first 30 seconds your caching server may receive 1000 requests (or even more) for this file, and you cannot block all new requests for 170 seconds ?!?! to wait for the file to be downloaded. Also, all requests will be sent to the origin, and your proxy will generate 1 TB of traffic instead of 1 GB.

It would be amazing if this feature were implemented as a part of the common caching mechanism.
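[Editorial note: the back-of-the-envelope numbers above can be stated as a tiny model. The helper below is purely illustrative — `origin_bytes` is an invented name, not nginx code: n concurrent cache misses for the same object cost n copies of it in origin traffic unless the misses are coalesced into a single upstream download.]

```c
/*
 * Toy model of the scenario above: bytes fetched from the origin for
 * one object under n concurrent cache misses, with and without
 * coalescing the misses into a single upstream download.
 */
static long long
origin_bytes(long long object_bytes, long long concurrent_misses,
    int coalesced)
{
    /* coalesced: one fetch total; uncoalesced: one fetch per miss */
    return coalesced ? object_bytes : object_bytes * concurrent_misses;
}
```

With the figures from the mail (a 1 GB object and 1000 requests arriving before the first fetch completes), the uncoalesced case pulls 1000 GB from the origin — roughly the 1 TB mentioned — versus a single 1 GB fetch when coalesced.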
On Fri, Aug 30, 2013 at 11:42 AM, SplitIce wrote:
> [...]

_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mat999 at gmail.com Fri Aug 30 09:04:37 2013
From: mat999 at gmail.com (SplitIce)
Date: Fri, 30 Aug 2013 18:34:37 +0930
Subject: Sharing data when download the same object from upstream
In-Reply-To:
References:
Message-ID:

Is the patch on this mailing list (forgive me, I can't see it)? I'll happily test it for you, although for me to get any personal benefit there would need to be a size restriction, since 99.9% of requests are just small HTML documents and would not benefit. Also, the standard caching (headers that result in a cache miss, e.g. cookies, cache-control) would have to be correct.
At the very least Ill read over it and see if I spot anything / have recommendations. Regards, Mathew On Fri, Aug 30, 2013 at 6:25 PM, Anatoli Marinov wrote: > I discussed the idea years ago here in the mailing list but nobody from > the main developers liked it. However I developed a patch and we have this > in production more than 1 year and it works fine. > > Just think for the following case: > You have a new file which is 1 GB and it is located far from the cache. > Even so you can download it with 5 MBps through cache upstream so you need > 200 seconds to get it. This file is a video file and because it is a new is > placed on the first page. For first 30 seconds your caching server may > receive 1000 requests (or even more) for this file and you cannot block > all new requests for 170 seconds ?!?! to wait for file to be downloaded. > Also all requests will be send to the origin and your proxy will generate 1 > TB traffic instead of 1 GB. > > It will be amazing if this feature will be implemented as a part of the > common caching mechanism. > > > > On Fri, Aug 30, 2013 at 11:42 AM, SplitIce wrote: > >> This is an interesting idea, while I don't see it being all that useful >> for most applications there are some that could really benefit (large file >> proxying first comes to mind). If it could be achieved without introducing >> too much of a CPU overhead in keeping track of the requests & available >> parts it would be quite interesting. >> >> I would like to see an option to supply a minimum size to restrict this >> feature too (either by after x bytes are passed add to map/rbtree whatever >> or based off content-length). >> >> Regards, >> Mathew >> >> >> On Fri, Aug 30, 2013 at 6:01 PM, Anatoli Marinov wrote: >> >>> Hello, >>> >>> >>> On Wed, Aug 28, 2013 at 7:56 PM, Alex Garz?o wrote: >>> >>>> Hello Anatoli, >>>> >>>> Thanks for your reply. 
I will appreciate (a lot) your help :-) >>>> >>>> I'm trying to fix the code with the following requirements in mind: >>>> >>>> 1) We were upstreams/downstreams with good (and bad) links; in >>>> general, upstream speed is more than downstream speed but, in some >>>> situations, the downstream speed is a lot more quickly than the >>>> upstream speed; >>>> >>> I think this is asynchronous and if the upstream is faster than the >>> downstream it save the data to cached file faster and the downstream gets >>> the data from the file instead of the mem buffers. >>> >>> >>>> 2) I'm trying to disassociate the upstream speed from the downstream >>>> speed. The first request (request that already will connect in the >>>> upstream) download data to temp file, but no longer sends data to >>>> downstream. I disabled this because, in my understand, if the first >>>> request has a slow downstream, all others downstreams will wait data >>>> to be sent to this slow downstream. >>>> >>> I think this is not necessary. >>> >>> >>>> >>>> My first doubt is: Need I worry about downstream/upstream speed? >>>> >>>> No >>> >>> >>>> Well, I will try to explain what I did in the code: >>>> >>>> 1) I created a rbtree (currrent_downloads) that keeps the current >>>> downloads (one rbtree per upstream). Each node keeps the first request >>>> (request that already will connect into upstream) and a list >>>> (download_info_list) that will keep two fields: (a) request waiting >>>> data from the temp file and (b) file offset already sent from the temp >>>> file (last_offset); >>>> >>>> >>> I have the same but in ordered array (simple implementation). Anyway the >>> rbtree will do the same. But this structure should be in shared memory >>> because all workers should know which files are currently in downloading >>> from upstream state. The should exist in tmp directory. 
>>>
>>>> 2) In ngx_http_upstream_init_request(), when the object isn't in the
>>>> cache, before connecting to the upstream, I check if the object is in the
>>>> rbtree (current_downloads);
>>>>
>>>> 3) When the object isn't in current_downloads, I add a node that
>>>> contains the first request (equal to the current request) and I add the
>>>> current request into the download_info_list. Beyond that, I create a
>>>> timer event (polling) that will check all requests in
>>>> download_info_list and verify if there is data in the temp file that
>>>> has not yet been sent to the downstream. I create one timer event per
>>>> object [1].
>>>>
>>>> 4) When the object is in current_downloads, I add the request into
>>>> download_info_list and finalize ngx_http_upstream_init_request() (I
>>>> just return without executing ngx_http_upstream_finalize_request());
>>>>
>>>> 5) I have disabled (in ngx_event_pipe) the code that sends data to the
>>>> downstream (requirement 2);
>>>>
>>>> 6) In the polling event, I get the current temp file offset
>>>> (first_request->upstream->pipe->temp_file->offset) and I check in the
>>>> download_info_list if this is greater than last_offset. If true, I send more
>>>> data to the downstream with ngx_http_upstream_cache_send_partial (code
>>>> below);
>>>>
>>>> 7) In the polling event, when pipe->upstream_done ||
>>>> pipe->upstream_eof || pipe->upstream_error, and all data has been sent to the
>>>> downstream, I execute ngx_http_upstream_finalize_request for all
>>>> requests;
>>>>
>>>> 8) I added a bit flag (first_download_request) to the ngx_http_request_t
>>>> struct to avoid a request being finished before all requests are
>>>> completed. In ngx_http_upstream_finalize_request() I check this flag.
>>>> But, really, I'm not sure whether it is necessary to avoid this
>>>> situation...
>>>>
>>>> Below you can see the ngx_http_upstream_cache_send_partial code:
>>>>
>>>> /////////////
>>>> static ngx_int_t
>>>> ngx_http_upstream_cache_send_partial(ngx_http_request_t *r,
>>>>     ngx_temp_file_t *file, off_t offset, off_t bytes, unsigned last_buf)
>>>> {
>>>>     ngx_buf_t         *b;
>>>>     ngx_chain_t        out;
>>>>     ngx_http_cache_t  *c;
>>>>
>>>>     c = r->cache;
>>>>
>>>>     /* we need to allocate all before the header would be sent */
>>>>
>>>>     b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
>>>>     if (b == NULL) {
>>>>         return NGX_HTTP_INTERNAL_SERVER_ERROR;
>>>>     }
>>>>
>>>>     b->file = ngx_pcalloc(r->pool, sizeof(ngx_file_t));
>>>>     if (b->file == NULL) {
>>>>         return NGX_HTTP_INTERNAL_SERVER_ERROR;
>>>>     }
>>>>
>>>>     /* FIX: need to run ngx_http_send_header(r) once... */
>>>>
>>>>     b->file_pos = offset;
>>>>     b->file_last = bytes;
>>>>
>>>>     b->in_file = 1;
>>>>     b->last_buf = last_buf;
>>>>     b->last_in_chain = 1;
>>>>
>>>>     b->file->fd = file->file.fd;
>>>>     b->file->name = file->file.name;
>>>>     b->file->log = r->connection->log;
>>>>
>>>>     out.buf = b;
>>>>     out.next = NULL;
>>>>
>>>>     return ngx_http_output_filter(r, &out);
>>>> }
>>>> ////////////
>>>>
>>>> My second doubt is: Could I just fix ngx_event_pipe to send to all
>>>> requests (instead of sending to one request)? And, if so, can
>>>> ngx_http_output_filter be used to send a big chunk the first time
>>>> (300 MB or more) and little chunks after that?
>>>>
>>> Use smaller chunks.
>>>
>>>> Thanks in advance for your attention :-)
>>>>
>>>> [1] I know that "polling event" is a bad approach with NGINX, but I
>>>> don't know how to fix this. For example, the upstream download can be
>>>> very quick, and it is possible that I need to send data to the downstream in
>>>> little chunks. Upstream (in NGINX) is socket event based, but, when
>>>> the download from upstream finishes, which event can I expect?
>>>>
>>>> Regards.
>>>> --
>>>> Alex Garzão
>>>> Projetista de Software
>>>> Azion Technologies
>>>> alex.garzao (at) azion.com
>>>>
>>>> _______________________________________________
>>>> nginx-devel mailing list
>>>> nginx-devel at nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>>>
>>>
>>> You are on the right way. Just keep digging. Do not forget to turn off
>>> these features when you have flv or mp4 seek, partial requests, or
>>> content-encoding different from identity, because you will send broken files
>>> to the browsers.
>>>
>>> _______________________________________________
>>> nginx-devel mailing list
>>> nginx-devel at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>>
>>
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Fri Aug 30 17:44:57 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 30 Aug 2013 17:44:57 +0000
Subject: [nginx] Upstream: setting u->header_sent before ngx_http_upstrea...
Message-ID:

details:   http://hg.nginx.org/nginx/rev/1608b1135a1d
branches:
changeset: 5353:1608b1135a1d
user:      Maxim Dounin
date:      Fri Aug 30 21:44:16 2013 +0400
description:
Upstream: setting u->header_sent before ngx_http_upstream_upgrade().

Without u->header_sent set, a special response might be generated
following an upgraded connection. The problem appeared in 1ccdda1f37f3
(1.5.3). Caught by "header already sent" alerts in 1.5.4 after upstream
timeouts.
diffstat:

 src/http/ngx_http_upstream.c |  4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diffs (21 lines):

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -2105,6 +2105,8 @@ ngx_http_upstream_send_response(ngx_http
         return;
     }
 
+    u->header_sent = 1;
+
     if (u->upgrade) {
         ngx_http_upstream_upgrade(r, u);
         return;
@@ -2131,8 +2133,6 @@ ngx_http_upstream_send_response(ngx_http
         }
     }
 
-    u->header_sent = 1;
-
     if (r->request_body && r->request_body->temp_file) {
         ngx_pool_run_cleanup_file(r->pool, r->request_body->temp_file->file.fd);
         r->request_body->temp_file->file.fd = NGX_INVALID_FILE;

From alex.garzao at azion.com Fri Aug 30 18:53:55 2013
From: alex.garzao at azion.com (Alex Garzão)
Date: Fri, 30 Aug 2013 15:53:55 -0300
Subject: Sharing data when download the same object from upstream
In-Reply-To:
References:
Message-ID:

Hello Anatoli,

> I think this is asynchronous, and if the upstream is faster than the
> downstream it saves the data to the cached file faster, and the downstream gets
> the data from the file instead of the mem buffers.

In this case, I don't need to worry about upstream/downstream speed. Very good!

> I have the same but in an ordered array (simple implementation). Anyway, the
> rbtree will do the same. But this structure should be in shared memory,
> because all workers should know which files are currently being downloaded
> from upstream. They should exist in the tmp directory.

About shared memory, I did. Thanks.

>> My second doubt is: Could I just fix ngx_event_pipe to send to all
>> requests (instead of sending to one request)? And, if so, can
>> ngx_http_output_filter be used to send a big chunk the first time
>> (300 MB or more) and little chunks after that?
>
> Use smaller chunks.

OK. Actually, I tried to send a big chunk with ngx_http_output_filter, but, in some cases, it returns NGX_AGAIN.
I looked at all the places in NGINX where this function is called, but it seems to me that, when it returns NGX_AGAIN, ngx_http_upstream_finalize_request is called, and it deals with the buffers not yet sent.

In my approach I call ngx_http_output_filter for each chunk, but this is not working. I think that I can't call ngx_http_output_filter more than once per request. Or can I?

About using smaller chunks: I will adjust to address this. Thanks.

> You are on the right way. Just keep digging. Do not forget to turn off
> these features when you have flv or mp4 seek, partial requests, or
> content-encoding different from identity, because you will send broken files
> to the browsers.

OK. Thanks in advance for your help, Anatoli.

Regards.
--
Alex Garzão
Projetista de Software
Azion Technologies
alex.garzao (at) azion.com

From alex.garzao at azion.com Fri Aug 30 19:05:10 2013
From: alex.garzao at azion.com (Alex Garzão)
Date: Fri, 30 Aug 2013 16:05:10 -0300
Subject: Sharing data when download the same object from upstream
In-Reply-To:
References:
Message-ID:

Hello Mathew,

> This is an interesting idea; while I don't see it being all that useful for
> most applications, there are some that could really benefit (large file
> proxying first comes to mind). If it could be achieved without introducing
> too much of a CPU overhead in keeping track of the requests & available
> parts, it would be quite interesting.

I think that this idea is valid only for large files. And about CPU overhead, I have this in mind. I think that with hints from nginx-devel, I can address this.

> I would like to see an option to supply a minimum size to restrict this
> feature to (either by adding to the map/rbtree after x bytes are passed,
> or based off content-length).

I agree. But I have not tried to solve it yet.

Regards.
--
Alex Garzão
Projetista de Software
Azion Technologies
alex.garzao (at) azion.com

From juremenart at gmail.com Sat Aug 31 20:35:20 2013
From: juremenart at gmail.com (Jure Menart)
Date: Sat, 31 Aug 2013 22:35:20 +0200
Subject: Nginx modules & C includes
Message-ID:

Dear all,

I'm new to the Nginx project and I am just getting familiar with it. Let me first thank the contributors for the work they've put in to make the project so nice.

I've observed very 'strange' behaviour, and it took me quite a lot of time to find the cause for it (not to understand it yet). Let me start at the beginning:
- I've been playing with Hello world examples, of course, and then started to build a bigger 'real' module.
- Suddenly I got very unpredictable behaviour and seg. faults.
- I've stripped my module back down to the bare minimum - in the end I just included one command which sends a "Hello world" string back to the client. The thing was still acting very strange: ngx_http_request_t seemed 'unstable' - r->method with strange numbers, and if I tried to log to r->connection->log I got a seg. fault, ... I'm fairly sure my test module does not have any memory leaks, because I am using only one static string which is put to the output buffer.
- In the end I removed the system C includes (sys/types.h, sys/stat.h, unistd.h) and my simple example started to work again. I've tried a few times to add the includes, putting them before the Nginx includes or after - it was very repeatable, and the module was stable if I either did not include them or included them after the Nginx ones. For example:

<-- snip -->
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
<-- snip -->

Crashes my module, while:

<-- snip -->
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
<-- snip -->

Seems to work.

My question: Did anybody observe this behaviour? Obviously the system includes can influence/change the includes in Nginx. If this is known, are there any special limitations on including system headers?
For sure this kind of behaviour is not nice, and maybe it can be counted as a bug (or at least be documented).

Regards,
Jure Menart
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vbart at nginx.com Sat Aug 31 21:08:20 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Sun, 1 Sep 2013 01:08:20 +0400
Subject: Nginx modules & C includes
In-Reply-To:
References:
Message-ID: <201309010108.20124.vbart@nginx.com>

On Sunday 01 September 2013 00:35:20 Jure Menart wrote:
> Dear all,
>
> I'm new to the Nginx project and I am just getting familiar with it. Let me
> first thank the contributors for the work they've put in to make the
> project so nice.
>
> I've observed very 'strange' behaviour, and it took me quite a lot of time
> to find the cause for it (not to understand it yet). Let me start at the
> beginning:
> - I've been playing with Hello world examples, of course, and then started
> to build a bigger 'real' module.
> - Suddenly I got very unpredictable behaviour and seg. faults.
> - I've stripped my module back down to the bare minimum - in the end I
> just included one command which sends a "Hello world" string back to the
> client. The thing was still acting very strange: ngx_http_request_t seemed
> 'unstable' - r->method with strange numbers, and if I tried to log to
> r->connection->log I got a seg. fault, ... I'm fairly sure my test module
> does not have any memory leaks, because I am using only one static
> string which is put to the output buffer.
> - In the end I removed the system C includes (sys/types.h, sys/stat.h,
> unistd.h) and my simple example started to work again. I've tried a few
> times to add the includes, putting them before the Nginx includes or after -
> it was very repeatable, and the module was stable if I either did not
> include them or included them after the Nginx ones.
> For example:
> <-- snip -->
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <unistd.h>
>
> #include <ngx_config.h>
> #include <ngx_core.h>
> #include <ngx_http.h>
> <-- snip -->
>
> Crashes my module, while:
> <-- snip -->
> #include <ngx_config.h>
> #include <ngx_core.h>
> #include <ngx_http.h>
>
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <unistd.h>
> <-- snip -->
>
> Seems to work.
>
> My question: Did anybody observe this behaviour? Obviously the system
> includes can influence/change the includes in Nginx. If this is known,
> are there any special limitations on including system headers?
> For sure this kind of behaviour is not nice, and maybe it can be counted as
> a bug (or at least be documented).
>

There is a commentary in the C++ example module (the only example that we have):

http://trac.nginx.org/nginx/browser/nginx/src/misc/ngx_cpp_test_module.cpp#L19

// nginx header files should go before other, because they define 64-bit off_t

wbr, Valentin V. Bartenev

From juremenart at gmail.com Sat Aug 31 21:13:04 2013
From: juremenart at gmail.com (Jure Menart)
Date: Sat, 31 Aug 2013 23:13:04 +0200
Subject: Nginx modules & C includes
In-Reply-To: <201309010108.20124.vbart@nginx.com>
References: <201309010108.20124.vbart@nginx.com>
Message-ID:

Hello Valentin,

thank you very much :-) Well... this was one of the rare examples I obviously did not check today.

Regards,
Jure Menart

On Sat, Aug 31, 2013 at 11:08 PM, Valentin V. Bartenev wrote:
> On Sunday 01 September 2013 00:35:20 Jure Menart wrote:
> > Dear all,
> >
> > I'm new to the Nginx project and I am just getting familiar with it. Let me
> > first thank the contributors for the work they've put in to make the
> > project so nice.
> >
> > I've observed very 'strange' behaviour, and it took me quite a lot of time
> > to find the cause for it (not to understand it yet). Let me start at the
> > beginning:
> > - I've been playing with Hello world examples, of course, and then started
> > to build a bigger 'real' module.
> > - Suddenly I got very unpredictable behaviour and seg. faults.
> > - I've stripped my module back down to the bare minimum - in the end I
> > just included one command which sends a "Hello world" string back to the
> > client. The thing was still acting very strange: ngx_http_request_t seemed
> > 'unstable' - r->method with strange numbers, and if I tried to log to
> > r->connection->log I got a seg. fault, ... I'm fairly sure my test module
> > does not have any memory leaks, because I am using only one static
> > string which is put to the output buffer.
> > - In the end I removed the system C includes (sys/types.h, sys/stat.h,
> > unistd.h) and my simple example started to work again. I've tried a few
> > times to add the includes, putting them before the Nginx includes or after -
> > it was very repeatable, and the module was stable if I either did not
> > include them or included them after the Nginx ones.
> > For example:
> > <-- snip -->
> > #include <sys/types.h>
> > #include <sys/stat.h>
> > #include <unistd.h>
> >
> > #include <ngx_config.h>
> > #include <ngx_core.h>
> > #include <ngx_http.h>
> > <-- snip -->
> >
> > Crashes my module, while:
> > <-- snip -->
> > #include <ngx_config.h>
> > #include <ngx_core.h>
> > #include <ngx_http.h>
> >
> > #include <sys/types.h>
> > #include <sys/stat.h>
> > #include <unistd.h>
> > <-- snip -->
> >
> > Seems to work.
> >
> > My question: Did anybody observe this behaviour? Obviously the system
> > includes can influence/change the includes in Nginx. If this is known,
> > are there any special limitations on including system headers?
> > For sure this kind of behaviour is not nice, and maybe it can be counted
> > as a bug (or at least be documented).
> >
>
> There is a commentary in the C++ example module (the only example that we
> have):
>
> http://trac.nginx.org/nginx/browser/nginx/src/misc/ngx_cpp_test_module.cpp#L19
>
> // nginx header files should go before other, because they define 64-bit
> off_t
>
> wbr, Valentin V. Bartenev
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: