From nginx-forum at forum.nginx.org Tue Mar 1 03:17:51 2016
From: nginx-forum at forum.nginx.org (Russel_)
Date: Mon, 29 Feb 2016 22:17:51 -0500
Subject: HTTP/2 Reverse Proxy
Message-ID: <163d95f0c33f18b8500439ff98369e04.NginxMailingListEnglish@forum.nginx.org>

I have Nginx (1.9.12) set up and working fine with HTTP/2 on host-a. I can establish HTTP/2 connections to host-a with any web browser. I also have HTTP/2 and proxy_pass set up on host-b. I can proxy HTTP/1.1 requests from host-b to host-a with any web browser. I can also establish HTTP/2 connections with host-b using any web browser.

However, I can't get the proxy request from host-b to host-a to talk HTTP/2. The connection between host-b and host-a doesn't use the TLS ALPN extension, so the h2 protocol can't be negotiated. A packet capture shows that HTTP/1.1 is used for the proxy connection between host-b and host-a even though the virtual host on host-a is configured to accept only h2 connections. Is there a way to force the use of the TLS ALPN extension?

Thanks for your time,
Russel

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264954,264954#msg-264954

From nginx-forum at forum.nginx.org Tue Mar 1 06:22:07 2016
From: nginx-forum at forum.nginx.org (hheiko)
Date: Tue, 01 Mar 2016 01:22:07 -0500
Subject: HTTP/2 Reverse Proxy
In-Reply-To: <163d95f0c33f18b8500439ff98369e04.NginxMailingListEnglish@forum.nginx.org>
References: <163d95f0c33f18b8500439ff98369e04.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <331b6dbf0e9e881c55be62ad4197951a.NginxMailingListEnglish@forum.nginx.org>

As far as I know, HTTP/2 is only used for client-server communication in Nginx.
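That is, the client-facing listener can negotiate h2 via ALPN, while proxy_pass always speaks HTTP/1.x to the upstream. A minimal sketch of such a host-b configuration (hostnames and certificate paths are hypothetical, not taken from the original posts):

```nginx
# host-b: terminates HTTP/2 from browsers, proxies to host-a over HTTP/1.1.
server {
    listen 443 ssl http2;               # ALPN offers h2 to clients
    server_name host-b.example.com;     # hypothetical name

    ssl_certificate     /etc/nginx/ssl/host-b.crt;   # hypothetical paths
    ssl_certificate_key /etc/nginx/ssl/host-b.key;

    location / {
        proxy_http_version 1.1;         # 1.1 is the highest upstream version
        proxy_set_header Connection ""; # allows upstream keepalive
        proxy_pass https://host-a.example.com;
    }
}
```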
Server to server connections remain at HTTP 1.1 Regards, Heiko Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264954,264956#msg-264956 From nginx-forum at forum.nginx.org Tue Mar 1 12:46:41 2016 From: nginx-forum at forum.nginx.org (drhowarddrfine) Date: Tue, 01 Mar 2016 07:46:41 -0500 Subject: enable reuseport then only one worker is working? In-Reply-To: <20160229140629.GM31796@mdounin.ru> References: <20160229140629.GM31796@mdounin.ru> Message-ID: <0f407f239f906966763ed34fbc1b2939.NginxMailingListEnglish@forum.nginx.org> Do you know why FreeBSD does not do this? Is there a technical reason to not do that? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264913,264959#msg-264959 From maxim at nginx.com Tue Mar 1 12:53:49 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 1 Mar 2016 15:53:49 +0300 Subject: enable reuseport then only one worker is working? In-Reply-To: <0f407f239f906966763ed34fbc1b2939.NginxMailingListEnglish@forum.nginx.org> References: <20160229140629.GM31796@mdounin.ru> <0f407f239f906966763ed34fbc1b2939.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56D590DD.5040309@nginx.com> On 3/1/16 3:46 PM, drhowarddrfine wrote: > Do you know why FreeBSD does not do this? Is there a technical reason to not > do that? > Just because you need someone to write this code for FreeBSD. -- Maxim Konovalov From nginx-forum at forum.nginx.org Tue Mar 1 13:08:46 2016 From: nginx-forum at forum.nginx.org (drhowarddrfine) Date: Tue, 01 Mar 2016 08:08:46 -0500 Subject: enable reuseport then only one worker is working? In-Reply-To: <56D590DD.5040309@nginx.com> References: <56D590DD.5040309@nginx.com> Message-ID: <628a80795cc1db78fbbcbfc761d114c0.NginxMailingListEnglish@forum.nginx.org> Yes but is there a technical reason why it hasn't been done yet? Does FreeBSD have a reason to not do it? Just because Linux did does not mean it should be done. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264913,264961#msg-264961

From jim at ohlste.in Tue Mar 1 13:10:15 2016
From: jim at ohlste.in (Jim Ohlstein)
Date: Tue, 1 Mar 2016 08:10:15 -0500
Subject: enable reuseport then only one worker is working?
In-Reply-To: <1579942.OlGCbpSOfM@vbart-laptop>
References: <1579942.OlGCbpSOfM@vbart-laptop>
Message-ID: <56D594B7.3010804@ohlste.in>

Hello,

On 2/28/16 11:22 PM, Valentin V. Bartenev wrote:
> On Sunday 28 February 2016 08:52:12 meteor8488 wrote:
>> Hi All,
>>
>> I just upgraded Nginx from 1.8 to 1.9 on my FreeBSD box.
> [..]
>> Did I miss anything in the configuration? Or, for a busy server, is it better
>> to use accept_mutex instead of reuseport?
>>
> [..]
>
> In FreeBSD the SO_REUSEPORT option has completely different behavior
> and shouldn't be enabled in nginx.
>

Should the configuration option then be disabled or silently ignored in FreeBSD at this time?

--
Jim Ohlstein

"Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain

From maxim at nginx.com Tue Mar 1 13:12:17 2016
From: maxim at nginx.com (Maxim Konovalov)
Date: Tue, 1 Mar 2016 16:12:17 +0300
Subject: enable reuseport then only one worker is working?
In-Reply-To: <628a80795cc1db78fbbcbfc761d114c0.NginxMailingListEnglish@forum.nginx.org>
References: <56D590DD.5040309@nginx.com> <628a80795cc1db78fbbcbfc761d114c0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <56D59531.8000201@nginx.com>

On 3/1/16 4:08 PM, drhowarddrfine wrote:
> Yes, but is there a technical reason why it hasn't been done yet? Does
> FreeBSD have a reason not to do it? Just because Linux did does not mean it
> should be done.
>

Please approach the FreeBSD developers.

--
Maxim Konovalov

From nginx-forum at forum.nginx.org Tue Mar 1 13:32:07 2016
From: nginx-forum at forum.nginx.org (drhowarddrfine)
Date: Tue, 01 Mar 2016 08:32:07 -0500
Subject: enable reuseport then only one worker is working?
In-Reply-To: <56D59531.8000201@nginx.com> References: <56D59531.8000201@nginx.com> Message-ID: After a brief search, I was correct, and there are technical reasons for not doing this and Linux and Dragonfly are doing it wrong. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264913,264964#msg-264964 From ahutchings at nginx.com Tue Mar 1 13:34:30 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Tue, 1 Mar 2016 13:34:30 +0000 Subject: enable reuseport then only one worker is working? In-Reply-To: <56D594B7.3010804@ohlste.in> References: <1579942.OlGCbpSOfM@vbart-laptop> <56D594B7.3010804@ohlste.in> Message-ID: <56D59A66.7020603@nginx.com> Hi Jim, On 01/03/16 13:10, Jim Ohlstein wrote: > Hello, > > On 2/28/16 11:22 PM, ???????? ???????? wrote: >> On Sunday 28 February 2016 08:52:12 meteor8488 wrote: >>> Hi All, >>> >>> I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. >> [..] >>> Did I miss anything in the configuration? or for a busy server, it's >>> better >>> to use accept_mutex instead of reuseport? >>> >> [..] >> >> In FreeBSD the SO_REUSEPORT option has completely different behavior >> and shouldn't be enabled in nginx. >> > > Should the configruation option then be disabled or silently ignored in > FreeBSD at this time? It would be difficult to selectively ignore operating systems based on how this function is supported. Especially if that support changes over time. I believe that DragonFly BSD for example supported the method most BSDs use originally. In the FreeBSD pattern it is designed to bleed off connections to one service as another comes up. Such as a binary upgrade. The last socket listener always gets new connections. This is almost the exact opposite of what Linux does with the option (DragonFly is somewhere in the middle). I'm personally not a BSD expert so can't say which other BSD operating systems use the same method as FreeBSD and even which ones do not even have a SO_REUSEPORT option. 
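The difference lives at the socket layer rather than in nginx itself. A small Python sketch of what setting the option looks like (an illustration of the setsockopt call, not nginx code; the per-OS semantics in the comments restate what is described in this thread):

```python
import socket

def listen_with_reuseport(port=0):
    """Open a listening TCP socket with SO_REUSEPORT set.

    On Linux (>= 3.9) several processes doing this on the same port
    share the accept load; on FreeBSD, as described above, the most
    recently bound listener receives the new connections instead.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))  # port 0 lets the kernel pick a free port
    s.listen(16)
    return s
```

The constant is simply absent on platforms that lack the option, so `hasattr(socket, "SO_REUSEPORT")` is the usual availability check.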
Documentation is probably the best option for now and we have tried to make it clear which operating systems do support this feature. Kind Regards -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From vbart at nginx.com Tue Mar 1 13:35:14 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 01 Mar 2016 16:35:14 +0300 Subject: enable reuseport then only one worker is working? In-Reply-To: References: <56D59531.8000201@nginx.com> Message-ID: <4165483.Ki8ypEHHiB@vbart-workstation> On Tuesday 01 March 2016 08:32:07 drhowarddrfine wrote: > After a brief search, I was correct, and there are technical reasons for not > doing this and Linux and Dragonfly are doing it wrong. > [..] What is the reason? wbr, Valentin V. Bartenev From reallfqq-nginx at yahoo.fr Tue Mar 1 13:52:52 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 1 Mar 2016 14:52:52 +0100 Subject: How to check nginx OCSP verification Message-ID: Hello, I want to configure a server with: ssl_stapling on; ssl_stapling_verify on; What should happen if the ssl_trusted_certificate is (not|mis)configured? How to check nginx is properly configured and server-side OCSP response verification works? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lenaigst at maelenn.org Tue Mar 1 14:12:37 2016 From: lenaigst at maelenn.org (Thierry) Date: Tue, 1 Mar 2016 16:12:37 +0200 Subject: How to check nginx OCSP verification In-Reply-To: References: Message-ID: <1747629239.20160301161237@maelenn.org> An HTML attachment was scrubbed... URL: From jim at ohlste.in Tue Mar 1 14:23:43 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 1 Mar 2016 09:23:43 -0500 Subject: enable reuseport then only one worker is working? 
In-Reply-To: <56D59A66.7020603@nginx.com> References: <1579942.OlGCbpSOfM@vbart-laptop> <56D594B7.3010804@ohlste.in> <56D59A66.7020603@nginx.com> Message-ID: <56D5A5EF.4020901@ohlste.in> Hello, On 3/1/16 8:34 AM, Andrew Hutchings wrote: > Hi Jim, > > On 01/03/16 13:10, Jim Ohlstein wrote: >> Hello, >> >> On 2/28/16 11:22 PM, ???????? ???????? wrote: >>> On Sunday 28 February 2016 08:52:12 meteor8488 wrote: >>>> Hi All, >>>> >>>> I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. >>> [..] >>>> Did I miss anything in the configuration? or for a busy server, it's >>>> better >>>> to use accept_mutex instead of reuseport? >>>> >>> [..] >>> >>> In FreeBSD the SO_REUSEPORT option has completely different behavior >>> and shouldn't be enabled in nginx. >>> >> >> Should the configruation option then be disabled or silently ignored in >> FreeBSD at this time? > > It would be difficult to selectively ignore operating systems based on > how this function is supported. Especially if that support changes over > time. I don't claim to know how "difficult" that would be, but with all the extremely talented coders in the Nginx group, I would think that "difficult" would not be a barrier to "doing it right". If OS support changes, nginx can change. Something tells me that with a FreeBSD Core Team member on the Nginx payroll, if this OS feature changes, it'll filter through to the people who write the code. If I choose to try "use kqueue" on a system that does not support that event method, I am hoping nginx would either tell me by refusing to start, or ignore my stupidity. Similarly, if I attempt to enable "reuseport" on Solaris or Windows which (I guess) do not support SO_REUSEPORT, nginx will inform me. I know this happens on FreeBSD with, for instance, aio. If aio is not built in to the kernel (it is not by default), or specifically loaded, nginx gives an error and won't start. In the case of SO_REUSEPORT, the OS call has virtually the opposite of the desired effect. 
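For reference, the directive under discussion is the reuseport parameter of listen; a bare config fragment (per this thread, not advisable on FreeBSD):

```nginx
http {
    server {
        # reuseport makes nginx open one SO_REUSEPORT listening socket
        # per worker process (intended for Linux >= 3.9 / DragonFly BSD).
        listen 80 reuseport;
        server_name example.com;   # hypothetical
    }
}
```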
If a directive has the potential for significantly bad consequences, it should be spat out during config testing with at least a warning. > > I believe that DragonFly BSD for example supported the method most BSDs > use originally. > > In the FreeBSD pattern it is designed to bleed off connections to one > service as another comes up. Such as a binary upgrade. The last socket > listener always gets new connections. This is almost the exact opposite > of what Linux does with the option (DragonFly is somewhere in the > middle). I'm personally not a BSD expert so can't say which other BSD > operating systems use the same method as FreeBSD and even which ones do > not even have a SO_REUSEPORT option. > > Documentation is probably the best option for now and we have tried to > make it clear which operating systems do support this feature. I'm not trying to make an excuse for not reading the documentation, which is clear and is exactly why I have not tested this feature on FreeBSD. Rather, I'm suggesting a more rigorous approach to config parsing. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From maxim at nginx.com Tue Mar 1 14:35:40 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 1 Mar 2016 17:35:40 +0300 Subject: enable reuseport then only one worker is working? In-Reply-To: <56D5A5EF.4020901@ohlste.in> References: <1579942.OlGCbpSOfM@vbart-laptop> <56D594B7.3010804@ohlste.in> <56D59A66.7020603@nginx.com> <56D5A5EF.4020901@ohlste.in> Message-ID: <56D5A8BC.5000705@nginx.com> On 3/1/16 5:23 PM, Jim Ohlstein wrote: > Hello, > > On 3/1/16 8:34 AM, Andrew Hutchings wrote: >> Hi Jim, >> >> On 01/03/16 13:10, Jim Ohlstein wrote: >>> Hello, >>> >>> On 2/28/16 11:22 PM, ???????? ???????? wrote: >>>> On Sunday 28 February 2016 08:52:12 meteor8488 wrote: >>>>> Hi All, >>>>> >>>>> I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. >>>> [..] >>>>> Did I miss anything in the configuration? 
or for a busy server, >>>>> it's >>>>> better >>>>> to use accept_mutex instead of reuseport? >>>>> >>>> [..] >>>> >>>> In FreeBSD the SO_REUSEPORT option has completely different >>>> behavior >>>> and shouldn't be enabled in nginx. >>>> >>> >>> Should the configruation option then be disabled or silently >>> ignored in >>> FreeBSD at this time? >> >> It would be difficult to selectively ignore operating systems >> based on >> how this function is supported. Especially if that support changes >> over >> time. > > I don't claim to know how "difficult" that would be, but with all > the extremely talented coders in the Nginx group, I would think that > "difficult" would not be a barrier to "doing it right". If OS > support changes, nginx can change. Something tells me that with a > FreeBSD Core Team member on the Nginx payroll, if this OS feature > changes, it'll filter through to the people who write the code. > Jim, we don't have any FreeBSD core team members on payroll. -- Maxim Konovalov From ahutchings at nginx.com Tue Mar 1 14:43:07 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Tue, 1 Mar 2016 14:43:07 +0000 Subject: enable reuseport then only one worker is working? In-Reply-To: <56D5A5EF.4020901@ohlste.in> References: <1579942.OlGCbpSOfM@vbart-laptop> <56D594B7.3010804@ohlste.in> <56D59A66.7020603@nginx.com> <56D5A5EF.4020901@ohlste.in> Message-ID: <56D5AA7B.6080206@nginx.com> On 01/03/16 14:23, Jim Ohlstein wrote: > Hello, > > On 3/1/16 8:34 AM, Andrew Hutchings wrote: >> Hi Jim, >> >> On 01/03/16 13:10, Jim Ohlstein wrote: >>> Hello, >>> >>> On 2/28/16 11:22 PM, ???????? ???????? wrote: >>>> On Sunday 28 February 2016 08:52:12 meteor8488 wrote: >>>>> Hi All, >>>>> >>>>> I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. >>>> [..] >>>>> Did I miss anything in the configuration? or for a busy server, it's >>>>> better >>>>> to use accept_mutex instead of reuseport? >>>>> >>>> [..] 
>>>> >>>> In FreeBSD the SO_REUSEPORT option has completely different behavior >>>> and shouldn't be enabled in nginx. >>>> >>> >>> Should the configruation option then be disabled or silently ignored in >>> FreeBSD at this time? >> >> It would be difficult to selectively ignore operating systems based on >> how this function is supported. Especially if that support changes over >> time. > > I don't claim to know how "difficult" that would be, but with all the > extremely talented coders in the Nginx group, I would think that > "difficult" would not be a barrier to "doing it right". If OS support > changes, nginx can change. Something tells me that with a FreeBSD Core > Team member on the Nginx payroll, if this OS feature changes, it'll > filter through to the people who write the code. This would be more a case of effort / payoff. I'm not saying it isn't technically possible. But effort may be better spent implementing new features people have been asking for (we have some kick-ass stuff upcoming). Rather than implementing an OS kernel version detection with blacklist. > If I choose to try "use kqueue" on a system that does not support that > event method, I am hoping nginx would either tell me by refusing to > start, or ignore my stupidity. Similarly, if I attempt to enable > "reuseport" on Solaris or Windows which (I guess) do not support > SO_REUSEPORT, nginx will inform me. I know this happens on FreeBSD with, > for instance, aio. If aio is not built in to the kernel (it is not by > default), or specifically loaded, nginx gives an error and won't start. > In the case of SO_REUSEPORT, the OS call has virtually the opposite of > the desired effect. Comparing it to kqueue isn't quite the same. On operating systems that do not support SO_REUSEPORT (including older Linux kernels) you will not be able to use the option at all. The problem here is FreeBSD has the option, it just has a different behaviour. 
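The presence of the constant alone says nothing about its semantics, so any distinction has to key off the platform name, i.e. a maintained list. A Python sketch of that idea (the classification merely restates the 2016-era behaviours described in this thread):

```python
import socket
import sys

def reuseport_semantics():
    """Classify what SO_REUSEPORT means on this platform.

    The constant being defined is not enough to tell the behaviours
    apart: Linux >= 3.9 balances new connections across all listeners,
    while FreeBSD (at this time) delivers them to the most recently
    bound listener. Distinguishing them needs an explicit platform
    list like the one below.
    """
    if not hasattr(socket, "SO_REUSEPORT"):
        return "unsupported"
    if sys.platform.startswith("linux"):
        return "balanced"         # load spread across all listeners
    if sys.platform.startswith(("freebsd", "dragonfly")):
        return "last-listener"    # newest listener wins (DragonFly differs in detail)
    return "unknown"
```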
I'm not even sure it is possible to detect in a build system without maintaining a blacklist.

> If a directive has the potential for significantly bad consequences, it
> should be spat out during config testing with at least a warning.

I have mixed feelings on this. For the majority of users this isn't a problem, so they would see a warning that could concern them when in reality it doesn't affect them. Since it isn't enabled by default, a user would have to research and make a conscious choice to turn it on. I would hope at that point the user had made an informed decision when enabling tuning directives.

Kind Regards
--
Andrew Hutchings (LinuxJedi)
Technical Product Manager, NGINX Inc.

From nginx-forum at forum.nginx.org Tue Mar 1 16:33:11 2016
From: nginx-forum at forum.nginx.org (Alt)
Date: Tue, 01 Mar 2016 11:33:11 -0500
Subject: How to check nginx OCSP verification
In-Reply-To:
References:
Message-ID: <5d068938b67c28645695f0d7bade65fd.NginxMailingListEnglish@forum.nginx.org>

Hello,

You can check with this command, found on this website: https://unmitigatedrisk.com/?p=100

openssl s_client -connect login.live.com:443 -tls1 -tlsextdebug -status

If everything goes well, you should find something like:

"OCSP response:
======================================
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
..."

If there's no stapling, you'll get: "OCSP response: no response sent".

Please note: when you restart nginx, you won't get an OCSP answer immediately. You'll have to visit the URL and wait a few seconds before the stapling starts working for subsequent requests. IIRC, this behavior is because OCSP servers may be slow to answer.

Best Regards

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264967,264977#msg-264977

From reallfqq-nginx at yahoo.fr Tue Mar 1 17:12:42 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 1 Mar 2016 18:12:42 +0100
Subject: How to check nginx OCSP verification
In-Reply-To: <5d068938b67c28645695f0d7bade65fd.NginxMailingListEnglish@forum.nginx.org>
References: <5d068938b67c28645695f0d7bade65fd.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

I do not want to validate OCSP responses client-side; those are OK. I want details about the status of nginx's validation of the initial OCSP query it made to the OCSP responder of the CA, especially when it goes wrong.

I noted that even when ssl_trusted_certificate is not set, or is set with a wrong (set of) certificate(s), a cached OCSP response will be served by nginx to the client after an initial request has been made to a domain hosted by it and served through TLS.

I want to know the consequences of having such a directive badly configured:
- error.log message? Found nothing
- modified OCSP response? Nope
- ...

What am I supposed to notice, and where/when?
---
*B. R.*

On Tue, Mar 1, 2016 at 5:33 PM, Alt wrote:
> Hello,
>
> You can check with this command found on this website:
> https://unmitigatedrisk.com/?p=100
> openssl s_client -connect login.live.com:443 -tls1 -tlsextdebug -status
>
> If everything goes well, you should find something like:
> "OCSP response:
> ======================================
> OCSP Response Data:
> OCSP Response Status: successful (0x0)
> Response Type: Basic OCSP Response
> ..."
>
> If there's no stapling, you'll get:
> "OCSP response: no response sent".
>
> Please note: when you restart nginx, you won't get an OCSP answer
> immediatly. You'll have to visit the URL and wait a few seconds before
> having the stapling working for the next request. IIRC, this behavior is
> because OCSP servers may be slow to answer.
> > Best Regards > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,264967,264977#msg-264977 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Tue Mar 1 17:19:22 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 1 Mar 2016 12:19:22 -0500 Subject: enable reuseport then only one worker is working? In-Reply-To: <56D5A8BC.5000705@nginx.com> References: <1579942.OlGCbpSOfM@vbart-laptop> <56D594B7.3010804@ohlste.in> <56D59A66.7020603@nginx.com> <56D5A5EF.4020901@ohlste.in> <56D5A8BC.5000705@nginx.com> Message-ID: <73D8BC76-098C-4D3D-B324-D6E797BF7835@ohlste.in> Hello, > On Mar 1, 2016, at 9:35 AM, Maxim Konovalov wrote: > >> On 3/1/16 5:23 PM, Jim Ohlstein wrote: >> Hello, >> >>> On 3/1/16 8:34 AM, Andrew Hutchings wrote: >>> Hi Jim, >>> >>>> On 01/03/16 13:10, Jim Ohlstein wrote: >>>> Hello, >>>> >>>>> On 2/28/16 11:22 PM, ???????? ???????? wrote: >>>>>> On Sunday 28 February 2016 08:52:12 meteor8488 wrote: >>>>>> Hi All, >>>>>> >>>>>> I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. >>>>> [..] >>>>>> Did I miss anything in the configuration? or for a busy server, >>>>>> it's >>>>>> better >>>>>> to use accept_mutex instead of reuseport? >>>>> [..] >>>>> >>>>> In FreeBSD the SO_REUSEPORT option has completely different >>>>> behavior >>>>> and shouldn't be enabled in nginx. >>>> >>>> Should the configruation option then be disabled or silently >>>> ignored in >>>> FreeBSD at this time? >>> >>> It would be difficult to selectively ignore operating systems >>> based on >>> how this function is supported. Especially if that support changes >>> over >>> time. 
>> >> I don't claim to know how "difficult" that would be, but with all >> the extremely talented coders in the Nginx group, I would think that >> "difficult" would not be a barrier to "doing it right". If OS >> support changes, nginx can change. Something tells me that with a >> FreeBSD Core Team member on the Nginx payroll, if this OS feature >> changes, it'll filter through to the people who write the code. > Jim, we don't have any FreeBSD core team members on payroll. Perhaps you can understand my confusion when I see multiple references to it online, including this tweet. https://mobile.twitter.com/maximkonovalov/status/486847353484480512. That, and the recent work sponsored by Nginx on FreeBSD sendfile(2) to be included in the upcoming release (11). If he is no longer on the payroll he is still working closely with you, so this hardly invalidates my premise that you would be aware of future changes in FreeBSD behavior. ;) Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Tue Mar 1 17:27:55 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 1 Mar 2016 20:27:55 +0300 Subject: enable reuseport then only one worker is working? In-Reply-To: <73D8BC76-098C-4D3D-B324-D6E797BF7835@ohlste.in> References: <1579942.OlGCbpSOfM@vbart-laptop> <56D594B7.3010804@ohlste.in> <56D59A66.7020603@nginx.com> <56D5A5EF.4020901@ohlste.in> <56D5A8BC.5000705@nginx.com> <73D8BC76-098C-4D3D-B324-D6E797BF7835@ohlste.in> Message-ID: <56D5D11B.9050505@nginx.com> On 3/1/16 8:19 PM, Jim Ohlstein wrote: > Hello, > > On Mar 1, 2016, at 9:35 AM, Maxim Konovalov > wrote: > >> On 3/1/16 5:23 PM, Jim Ohlstein wrote: >>> Hello, >>> >>> On 3/1/16 8:34 AM, Andrew Hutchings wrote: >>>> Hi Jim, >>>> >>>> On 01/03/16 13:10, Jim Ohlstein wrote: >>>>> Hello, >>>>> >>>>> On 2/28/16 11:22 PM, ???????? ???????? 
wrote: >>>>>> On Sunday 28 February 2016 08:52:12 meteor8488 wrote: >>>>>>> Hi All, >>>>>>> >>>>>>> I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. >>>>>> [..] >>>>>>> Did I miss anything in the configuration? or for a busy server, >>>>>>> it's >>>>>>> better >>>>>>> to use accept_mutex instead of reuseport? >>>>>>> >>>>>> [..] >>>>>> >>>>>> In FreeBSD the SO_REUSEPORT option has completely different >>>>>> behavior >>>>>> and shouldn't be enabled in nginx. >>>>>> >>>>> >>>>> Should the configruation option then be disabled or silently >>>>> ignored in >>>>> FreeBSD at this time? >>>> >>>> It would be difficult to selectively ignore operating systems >>>> based on >>>> how this function is supported. Especially if that support changes >>>> over >>>> time. >>> >>> I don't claim to know how "difficult" that would be, but with all >>> the extremely talented coders in the Nginx group, I would think that >>> "difficult" would not be a barrier to "doing it right". If OS >>> support changes, nginx can change. Something tells me that with a >>> FreeBSD Core Team member on the Nginx payroll, if this OS feature >>> changes, it'll filter through to the people who write the code. >>> >> Jim, we don't have any FreeBSD core team members on payroll. > > Perhaps you can understand my confusion when I see multiple > references to it online, including this > tweet. https://mobile.twitter.com/maximkonovalov/status/486847353484480512. > > That, and the recent work sponsored by Nginx on FreeBSD sendfile(2) > to be included in the upcoming release (11). If he is no longer on > the payroll he is still working closely with you, so this hardly > invalidates my premise that you would be aware of future changes in > FreeBSD behavior. ;) > Jim, that was 1.6+ years ago, this is not true anymore. Can we return to the technical part of the discussion? 
-- Maxim Konovalov From jim at ohlste.in Tue Mar 1 17:50:57 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 1 Mar 2016 12:50:57 -0500 Subject: enable reuseport then only one worker is working? In-Reply-To: <56D5D11B.9050505@nginx.com> References: <1579942.OlGCbpSOfM@vbart-laptop> <56D594B7.3010804@ohlste.in> <56D59A66.7020603@nginx.com> <56D5A5EF.4020901@ohlste.in> <56D5A8BC.5000705@nginx.com> <73D8BC76-098C-4D3D-B324-D6E797BF7835@ohlste.in> <56D5D11B.9050505@nginx.com> Message-ID: <610B7789-B7C6-493F-890C-ED842F0144EB@ohlste.in> Hello, > On Mar 1, 2016, at 12:27 PM, Maxim Konovalov wrote: > >> On 3/1/16 8:19 PM, Jim Ohlstein wrote: >> Hello, >> >> On Mar 1, 2016, at 9:35 AM, Maxim Konovalov > > wrote: >> >>>> On 3/1/16 5:23 PM, Jim Ohlstein wrote: >>>> Hello, >>>> >>>>> On 3/1/16 8:34 AM, Andrew Hutchings wrote: >>>>> Hi Jim, >>>>> >>>>>> On 01/03/16 13:10, Jim Ohlstein wrote: >>>>>> Hello, >>>>>> >>>>>>> On 2/28/16 11:22 PM, ???????? ???????? wrote: >>>>>>>> On Sunday 28 February 2016 08:52:12 meteor8488 wrote: >>>>>>>> Hi All, >>>>>>>> >>>>>>>> I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD box. >>>>>>> [..] >>>>>>>> Did I miss anything in the configuration? or for a busy server, >>>>>>>> it's >>>>>>>> better >>>>>>>> to use accept_mutex instead of reuseport? >>>>>>> [..] >>>>>>> >>>>>>> In FreeBSD the SO_REUSEPORT option has completely different >>>>>>> behavior >>>>>>> and shouldn't be enabled in nginx. >>>>>> >>>>>> Should the configruation option then be disabled or silently >>>>>> ignored in >>>>>> FreeBSD at this time? >>>>> >>>>> It would be difficult to selectively ignore operating systems >>>>> based on >>>>> how this function is supported. Especially if that support changes >>>>> over >>>>> time. >>>> >>>> I don't claim to know how "difficult" that would be, but with all >>>> the extremely talented coders in the Nginx group, I would think that >>>> "difficult" would not be a barrier to "doing it right". 
If OS >>>> support changes, nginx can change. Something tells me that with a >>>> FreeBSD Core Team member on the Nginx payroll, if this OS feature >>>> changes, it'll filter through to the people who write the code. >>> Jim, we don't have any FreeBSD core team members on payroll. >> >> Perhaps you can understand my confusion when I see multiple >> references to it online, including this >> tweet. https://mobile.twitter.com/maximkonovalov/status/486847353484480512. >> >> That, and the recent work sponsored by Nginx on FreeBSD sendfile(2) >> to be included in the upcoming release (11). If he is no longer on >> the payroll he is still working closely with you, so this hardly >> invalidates my premise that you would be aware of future changes in >> FreeBSD behavior. ;) > Jim, that was 1.6+ years ago, this is not true anymore. > > Can we return to the technical part of the discussion? > Perhaps you missed the little wink there. My apologies. My point is that I would like to see more rigorous, bullet-proof config parsing/testing on the part of nginx. This is one example. We can agree to disagree on its importance to users. Jim From maxim at nginx.com Tue Mar 1 17:54:36 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 1 Mar 2016 20:54:36 +0300 Subject: enable reuseport then only one worker is working? In-Reply-To: <610B7789-B7C6-493F-890C-ED842F0144EB@ohlste.in> References: <1579942.OlGCbpSOfM@vbart-laptop> <56D594B7.3010804@ohlste.in> <56D59A66.7020603@nginx.com> <56D5A5EF.4020901@ohlste.in> <56D5A8BC.5000705@nginx.com> <73D8BC76-098C-4D3D-B324-D6E797BF7835@ohlste.in> <56D5D11B.9050505@nginx.com> <610B7789-B7C6-493F-890C-ED842F0144EB@ohlste.in> Message-ID: <56D5D75C.40803@nginx.com> On 3/1/16 8:50 PM, Jim Ohlstein wrote: > Hello, > [...] >>>>>>>>> I just upgrade Nginx from 1.8 o 1.9 on my FreeBSD >>>>>>>>> box. >>>>>>>> [..] >>>>>>>>> Did I miss anything in the configuration? 
or for >>>>>>>>> a busy server, it's better to use accept_mutex >>>>>>>>> instead of reuseport? >>>>>>>> [..] >>>>>>>> >>>>>>>> In FreeBSD the SO_REUSEPORT option has completely >>>>>>>> different behavior and shouldn't be enabled in >>>>>>>> nginx. >>>>>>> >>>>>>> Should the configruation option then be disabled or >>>>>>> silently ignored in FreeBSD at this time? >>>>>> [...] > My point is that I would like to see more rigorous, bullet-proof > config parsing/testing on the part of nginx. This is one example. > We can agree to disagree on its importance to users. > This is definitely something that makes perfect sense. We will try to figure out how to fix it when time permits. -- Maxim Konovalov From sca at andreasschulze.de Tue Mar 1 20:01:15 2016 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 01 Mar 2016 21:01:15 +0100 Subject: How to check nginx OCSP verification In-Reply-To: References: <5d068938b67c28645695f0d7bade65fd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160301210115.Horde.iTKIHsSKlTPKbF2jRuy1hNg@andreasschulze.de> B.R.: > I want to have details about the status nginx' validation of the initial > OCSP query it did to the OCSP responder of the CA, especially when it goes > wrong. we do not let nginx fetch the ocsp data itself but use ssl_stapling_file. a cronjob call openssl and VERIFY the ocsp resonse. OCSP_RESPONSE='/path/to/ocsp_response_file' # ssl_stapling_file in nginx.conf # all intermediate and root certificates exept the certificate itself CA_CHAIN='/tmp/ca_chain.pem' cat intermediate.pem root.pem > $CA_CHAIN DIRECT_ISSUER='root.pem' # or intermediate.pem, exact one certificate CERT='cert.pem' # for this certificate we need the OCSP response... OCSP_URI=`openssl x509 -noout -text -in ${CERT} | grep 'OCSP - URI:' | cut -d: -f2,3` openssl ocsp -no_nonce \ -respout ${OCSP_RESPONSE}.tmp \ -CAfile ${CA_CHAIN} \ -issuer ${DIRECT_ISSUER} \ -cert ${CERT} \ -url ${OCSP_URI} ${EXTRA_ARGS} if [ $? 
-ne 0 ]; then
    # handle error
    exit 1
fi

# success
mv ${OCSP_RESPONSE}.tmp ${OCSP_RESPONSE}
killall -HUP nginx

EXTRA_ARGS handles some special tweaks:

- Startcom: https://forum.startcom.org/viewtopic.php?f=15&t=2661
  EXTRA_ARGS='-header HOST ocsp.startssl.com'
- Let's Encrypt: https://community.letsencrypt.org/t/unable-to-verify-ocsp-response/7264/3
  EXTRA_ARGS='-header HOST ocsp.int-x1.letsencrypt.org -verify_other ${DIRECT_ISSUER}'

You may want to adjust it to your needs. Andreas From nginx-forum at forum.nginx.org Tue Mar 1 22:39:38 2016 From: nginx-forum at forum.nginx.org (Russel_) Date: Tue, 01 Mar 2016 17:39:38 -0500 Subject: HTTP/2 Reverse Proxy In-Reply-To: <331b6dbf0e9e881c55be62ad4197951a.NginxMailingListEnglish@forum.nginx.org> References: <163d95f0c33f18b8500439ff98369e04.NginxMailingListEnglish@forum.nginx.org> <331b6dbf0e9e881c55be62ad4197951a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Ok, thanks for the heads up. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264954,264989#msg-264989 From nginx-forum at forum.nginx.org Wed Mar 2 00:13:21 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Tue, 01 Mar 2016 19:13:21 -0500 Subject: enable reuseport then only one worker is working? In-Reply-To: <4165483.Ki8ypEHHiB@vbart-workstation> References: <4165483.Ki8ypEHHiB@vbart-workstation> Message-ID: Hi Guys, Thanks for all this information. But is there any way for FreeBSD to enable it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264913,264990#msg-264990 From nginx-forum at forum.nginx.org Wed Mar 2 03:19:41 2016 From: nginx-forum at forum.nginx.org (AmyAmy) Date: Tue, 01 Mar 2016 22:19:41 -0500 Subject: openstack swift as a cache proxy for nginx, swift proxy report 401 error when authenticate Message-ID: <33813afd69a267ce7d2e677268a63f49.NginxMailingListEnglish@forum.nginx.org> Hello, everybody. I am trying to find a way to use OpenStack swift to cache static files for a web server such as nginx. The request steps are: 1.
nginx is configured as a load-balancing proxy server and web server.
2. There are several swift nodes; suppose there are 2, swift-A and swift-B, where swift-A is the control node and swift-B is the storage node.
3. The client sends a request to nginx for the URL http://domain.com/filename.txt.
4. nginx receives the request; it is a cache miss, so it needs to fetch the content from the swift proxy server.
5. nginx sends a request to the swift proxy server for authentication; the URL looks like http://swift-proxy/auth-account, with the account information set in the headers. The response from the swift proxy server contains an auth-token for that account if authentication succeeds.
6. nginx then puts this auth-token in a new request header and sends the new request to the swift proxy server for the originally requested content. There could be a mapping between the client request URL and the swift proxy URL, for example /filename.txt --> /account/container/filename.txt, so the new request URL could be http://swift-proxy/account/container/filename.txt, plus the auth-token.
7. The swift proxy server responds with the content to nginx; nginx then caches the content and passes the response to the client.

I have searched for an answer on the internet and referenced this solution: https://forum.nginx.org/read.php?2,250458,250463#msg-250463

Then I changed my nginx configuration like this:

server {
    listen 80;
    server_name localhost;
    location / {
        root html;
        index index.html index.htm;
        auth_request /auth/v1.0;
    }
    location /auth/v1.0 {
        proxy_pass http://192.168.1.1:8080;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}

Port 80 is for nginx, port 8080 is for swift; both work independently. But after I changed the configuration, I used the Chrome browser to enter 10.67.
247.21, and it just did not work as I expected: the swift proxy returned a 401 error. The swift proxy logs report:

Mar 1 20:43:48 localhost journal: proxy-logging 192.168.1.1 192.168.1.1 01/Mar/2016/20/43/48 GET /auth/v1.0 HTTP/1.0 401 - Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36 - - 131 - txbfc24355780143568445c4ddf5d774e3 - 0.0003 -
Mar 1 20:43:48 localhost journal: tempauth - 192.168.1.1 01/Mar/2016/20/43/48 GET /auth/v1.0 HTTP/1.0 401 - Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36 - - - - txbfc24355780143568445c4ddf5d774e3 - 0.0007

I don't know whether it matters that I use the Chrome browser to send the request to swift; it looks like some unrecognized characters are included in the request header. When I use a shell command to send the request, it works fine, like this:

[root at localhost ~]# curl -v -H 'X-Storage-User: service:swift' -H 'X-Storage-Pass:swift ' http://192.168.1.1:8080/auth/v1.0
* Trying 192.168.1.1...
* Connected to 192.168.1.1 (192.168.1.1) port 8080 (#0)
> GET /auth/v1.0 HTTP/1.1
> Host: 192.168.1.1:8080
> User-Agent: curl/7.47.1
> Accept: */*
> X-Storage-User: service:swift
> X-Storage-Pass:swift
>
< HTTP/1.1 200 OK
< X-Storage-Url: http://192.168.1.1:8080/v1/AUTH_service
< X-Auth-Token: AUTH_tk4f2eaa45b35c47b4ab0b955710cce6da
< Content-Type: text/html; charset=UTF-8
< X-Storage-Token: AUTH_tk4f2eaa45b35c47b4ab0b955710cce6da
< Content-Length: 0
< X-Trans-Id: tx3b90f2a8a3284f52951cc80ca41f104a
< Date: Tue, 01 Mar 2016 21:10:50 GMT
<
* Connection #0 to host 192.168.1.1 left intact

Below is my swift proxy-server.conf:

[DEFAULT]
bind_port = 8080
bind_ip = 192.168.1.1
workers = 1
user = swift
log_facility = LOG_LOCAL1
eventlet_debug = true

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache tempurl ratelimit tempauth staticweb proxy-logging proxy-server

[filter:catch_errors]
use = egg:swift#catch_errors
set log_name = cache_errors

[filter:healthcheck]
use = egg:swift#healthcheck
set log_name = healthcheck

[filter:proxy-logging]
use = egg:swift#proxy_logging
set log_name = proxy-logging

[filter:ratelimit]
use = egg:swift#ratelimit
set log_name = ratelimit

[filter:crossdomain]
use = egg:swift#crossdomain
set log_name = crossdomain

[filter:tempurl]
use = egg:swift#tempurl
set log_name = tempurl

[filter:tempauth]
use = egg:swift#tempauth
set log_name = tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_service_swift = swift .admin

[filter:staticweb]
use = egg:swift#staticweb
set log_name = staticweb

[filter:cache]
use = egg:swift#memcache
set log_name = memcache

[app:proxy-server]
use = egg:swift#proxy
set log_name = proxy
allow_account_management = true
account_autocreate = true

I have no idea why the 401 error occurs or how to solve it. Is there a configuration error in my swift or nginx configuration files?
Thanks for your time, Amy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264991,264991#msg-264991 From ahutchings at nginx.com Wed Mar 2 08:39:33 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Wed, 2 Mar 2016 08:39:33 +0000 Subject: enable reuseport then only one worker is working? In-Reply-To: References: <4165483.Ki8ypEHHiB@vbart-workstation> Message-ID: <56D6A6C5.1010306@nginx.com> Hi, Unfortunately no, the FreeBSD kernel doesn't have the feature implemented in the way you would want to use it. Kind Regards Andrew On 02/03/16 00:13, meteor8488 wrote: > Hi Guys, > > Thanks for all these information. > But, is there any way for FreeBSD to enable it? > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264913,264990#msg-264990 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From nginx-forum at forum.nginx.org Wed Mar 2 09:10:27 2016 From: nginx-forum at forum.nginx.org (AmyAmy) Date: Wed, 02 Mar 2016 04:10:27 -0500 Subject: Does nginx support openstack swift API? In-Reply-To: References: Message-ID: <33284c4e3d8a0351007525a09ba49d58.NginxMailingListEnglish@forum.nginx.org> Hi, hexiay, I'm trying to find a way to use OpenStack SWIFT with nginx too. After reading your post, I found a possible solution and tried to configure Nginx with ngx_http_auth_request_module, but what I got is a 401 error. Based on your answer, it seems you have found a solution. Do you mind sharing it? Or does anyone have suggestions on how to configure the Nginx HTTP Auth Request module to accomplish it?
Thanks, Amy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,250458,264995#msg-264995 From ahutchings at nginx.com Wed Mar 2 09:12:58 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Wed, 2 Mar 2016 09:12:58 +0000 Subject: openstack swift as a cache proxy for nginx, swift proxy report 401 error when authenticate In-Reply-To: <33813afd69a267ce7d2e677268a63f49.NginxMailingListEnglish@forum.nginx.org> References: <33813afd69a267ce7d2e677268a63f49.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56D6AE9A.5030903@nginx.com> Hi Amy, On 02/03/16 03:19, AmyAmy wrote: > hello, everybody. > > I have search for the answer on the internet, and referent this solution: > https://forum.nginx.org/read.php?2,250458,250463#msg-250463 > > Then ,I change my nginx configuration like this: > server { > listen 80; > server_name localhost; > location / { > root html; > index index.html index.htm; > auth_request /auth/v1.0; > } > location /auth/v1.0 { > proxy_pass http://192.168.1.1:8080; > proxy_pass_request_body off; > proxy_set_header Content-Length ""; > proxy_set_header X-Original-URI $request_uri; > } > } > [trimmed much of the email] Have you tried looking at your swift logs when NGINX passes on the request? I suspect this will give a good indication as to what is wrong. In addition you probably need to set the following, although I'm uncertain as to whether it will fix your problem: proxy_http_version 1.1 Kind Regards -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From nginx-forum at forum.nginx.org Wed Mar 2 09:44:08 2016 From: nginx-forum at forum.nginx.org (AmyAmy) Date: Wed, 02 Mar 2016 04:44:08 -0500 Subject: openstack swift as a cache proxy for nginx, swift proxy report 401 error when authenticate In-Reply-To: <56D6AE9A.5030903@nginx.com> References: <56D6AE9A.5030903@nginx.com> Message-ID: <654f4a481467e5f4b396c5710dcd769d.NginxMailingListEnglish@forum.nginx.org> Thanks for answer. 
As my swift server is serving as a proxy server, I can look at the swift proxy-server's log, which I mentioned in my post. It reports:

Mar 1 20:43:48 localhost journal: proxy-logging 192.168.1.1 192.168.1.1 01/Mar/2016/20/43/48 GET /auth/v1.0 HTTP/1.0 401 - Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36 - - 131 - txbfc24355780143568445c4ddf5d774e3 - 0.0003 -
Mar 1 20:43:48 localhost journal: tempauth - 192.168.1.1 01/Mar/2016/20/43/48 GET /auth/v1.0 HTTP/1.0 401 - Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36 - - - - txbfc24355780143568445c4ddf5d774e3 - 0.0007

It seems some unrecognized characters are contained in the request which nginx sends to the swift server. When I use a curl command to send the request, it works fine; the swift server log then reports:

Mar 1 18:38:44 localhost journal: proxy-server 192.168.1.1 192.168.1.1 01/Mar/2016/18/38/44 GET /auth/v1.0 HTTP/1.0 200 - curl/7.47.1 - - - - txc35cdcf0cc6f4d938e57772da694352a - 0.0015 -
Mar 1 18:38:44 localhost journal: proxy-server - 192.168.1.1 01/Mar/2016/18/38/44 GET /auth/v1.0 HTTP/1.0 200 - curl/7.47.1 - - - - txc35cdcf0cc6f4d938e57772da694352a - 0.0020

It seems swift cannot recognize the request from my nginx, which is configured with an additional module named ngx_http_auth_request_module. Maybe nginx is not passing the right user and password to swift, but I have no idea how to figure it out.

Best, Amy

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264996,264997#msg-264997 From guillaume at databerries.com Wed Mar 2 10:24:54 2016 From: guillaume at databerries.com (Guillaume Charhon) Date: Wed, 2 Mar 2016 11:24:54 +0100 Subject: Request processing rate and reverse proxy In-Reply-To: References: Message-ID: Hello guys, Is it a bug, something not implemented, or a mistake on my side?
Best, On Mon, Feb 29, 2016 at 5:05 PM, Guillaume Charhon < guillaume at databerries.com> wrote: > Hello, > > I have setup nginx 1.9.3 as a reverse proxy [1] with a rate limitation per > server [2]. The rate limitation does not work on this scenario. The rate > request limitation works well if I use nginx as a normal webserver (for > example to serve the default welcome page). > > I have attached my configuration files (listen on 80 and redirect to > another webserver running lighttpd). > > > [1] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass > [2] http://nginx.org/en/docs/http/ngx_http_limit_req_module.html > > Best Regards, > poiuytrez > > PS : You can run it in a docker using the following commands if you need: > > docker run --rm --name nginx -p 8082:80 -v > /yourdir/nginx.conf:/etc/nginx/nginx.conf:ro -v > /yourdir/default.conf:/etc/nginx/conf.d/default.conf --link backend:backend > nginx > > docker run --rm --name backend -p 8081:80 jprjr/lighttpd > > Go to http://localhost:8082/ in your web browser. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahutchings at nginx.com Wed Mar 2 10:41:49 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Wed, 2 Mar 2016 10:41:49 +0000 Subject: openstack swift as a cache proxy for nginx, swift proxy report 401 error when authenticate In-Reply-To: <654f4a481467e5f4b396c5710dcd769d.NginxMailingListEnglish@forum.nginx.org> References: <56D6AE9A.5030903@nginx.com> <654f4a481467e5f4b396c5710dcd769d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56D6C36D.4010405@nginx.com> Hi Amy, I suggest trying to talk to the Swift community. If you can't get any more information than that out of the Swift logs it is going to be difficult for you to determine what it is actually looking for. In general though you probably shouldn't be using a web browser to talk to an OpenStack API. Kind Regards Andrew On 02/03/16 09:44, AmyAmy wrote: > Thanks for answer. 
> As my swift server is serverd as a proxy server, I can look at swift > proxy-server's log which I have mention on my post, it report like this : > > Mar 1 20:43:48 localhost journal: proxy-logging 192.168.1.1 192.168.1.1 > 01/Mar/2016/20/43/48 GET /auth/v1.0 HTTP/1.0 401 - > Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36 > - - 131 - txbfc24355780143568445c4ddf5d774e3 - 0.0003 - > Mar 1 20:43:48 localhost journal: tempauth - 192.168.1.1 > 01/Mar/2016/20/43/48 GET /auth/v1.0 HTTP/1.0 401 - > Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36 > - - - - txbfc24355780143568445c4ddf5d774e3 - 0.0007 > > > > It seems there are some unrecognized char are contained in the request > which nginx send to swift server .while I use curl command to send request, > it works fine, swift server log report like this : > > Mar 1 18:38:44 localhost journal: proxy-server 192.168.1.1 192.168.1.1 > 01/Mar/2016/18/38/44 GET /auth/v1.0 HTTP/1.0 200 - curl/7.47.1 - - - - > txc35cdcf0cc6f4d938e57772da694352a - 0.0015 - > Mar 1 18:38:44 localhost journal: proxy-server - 192.168.1.1 > 01/Mar/2016/18/38/44 GET /auth/v1.0 HTTP/1.0 200 - curl/7.47.1 - - - - > txc35cdcf0cc6f4d938e57772da694352a - 0.0020 > > > It seems swift cannot recognize the request from my nginx which has configed > with an addictional module named ngx_http_auth_request_module. Maybe nginx > was not passes right user and password to swift, but i have no idea which > way to figure it out. > > Best, > Amy > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264996,264997#msg-264997 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. 
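An observation on the 401 thread above, offered only as a sketch and not a confirmed fix: the curl test succeeded because it supplied X-Storage-User/X-Storage-Pass by hand, while a browser never sends those headers, so tempauth answers 401. With ngx_http_auth_request_module, nginx itself could inject the credentials into the auth subrequest and capture the returned token with auth_request_set. The IPs and credentials below come from the thread; the account/container path in proxy_pass is an assumption:

```nginx
location / {
    auth_request /auth/v1.0;
    # capture the token swift returns in the subrequest response
    auth_request_set $swift_token $upstream_http_x_auth_token;
    # forward the token with the real request (target path is assumed)
    proxy_set_header X-Auth-Token $swift_token;
    proxy_pass http://192.168.1.1:8080/v1/AUTH_service/container/;
}

location = /auth/v1.0 {
    internal;
    proxy_pass http://192.168.1.1:8080;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    # tempauth expects these; a browser will not send them by itself
    proxy_set_header X-Storage-User "service:swift";
    proxy_set_header X-Storage-Pass "swift";
}
```

Note this hard-codes the swift credentials in nginx, which may or may not be acceptable; it is meant only to show why the browser path gets a 401 while the hand-crafted curl request succeeds.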
From mdounin at mdounin.ru Wed Mar 2 12:28:35 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Mar 2016 15:28:35 +0300 Subject: Request processing rate and reverse proxy In-Reply-To: References: Message-ID: <20160302122835.GJ31796@mdounin.ru> Hello! On Mon, Feb 29, 2016 at 05:05:16PM +0100, Guillaume Charhon wrote: > I have setup nginx 1.9.3 as a reverse proxy [1] with a rate limitation per > server [2]. The rate limitation does not work on this scenario. The rate > request limitation works well if I use nginx as a normal webserver (for > example to serve the default welcome page). > > I have attached my configuration files (listen on 80 and redirect to > another webserver running lighttpd). There is no "limit_req" directive anywhere in your config, so it's not a surprise rate limiting does not work. You have to configure both "limit_req_zone" (to configure shared memory zone to store states) and "limit_req" (to configure particular limits in particular locations), see details here: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html -- Maxim Dounin http://nginx.org/ From para.vikas at gmail.com Wed Mar 2 16:28:26 2016 From: para.vikas at gmail.com (Vikas Parashar) Date: Wed, 2 Mar 2016 21:58:26 +0530 Subject: Nginx caching no header content-length Message-ID: Hi, I am using nginx(caching server) in front my tomcat. My tomcat is serving header(content-length). But my nginx is not serving header content-length. However, i can put that header in my config with some static value then it will serve. But, in my case, the content length is coming from tomcat server. Could anybody please help me here. 
my config file:

location /comet {
    chunked_transfer_encoding off;
    proxy_http_version 1.1;
    proxy_buffering on;
    proxy_pass http://tomcat:8080/xxxx;
    proxy_cache my_cache;
    # Adding a header to see the cache status in the browser
    add_header X-Cache-Status $upstream_cache_status;
    proxy_cache_key "$scheme$request_uri $http_x_seneca_uid";
    proxy_cache_min_uses 1;
    proxy_cache_valid 200 302 120m;
    proxy_cache_valid 404 1m;
    proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_503 http_504;
}

My problem: I have to pass the Content-Length header to my client. Could anybody please help me here? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Mar 2 16:48:23 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Mar 2016 19:48:23 +0300 Subject: Nginx caching no header content-length In-Reply-To: References: Message-ID: <20160302164823.GM31796@mdounin.ru> Hello! On Wed, Mar 02, 2016 at 09:58:26PM +0530, Vikas Parashar wrote: > I am using nginx(caching server) in front my tomcat. My tomcat is serving > header(content-length). But my nginx is not serving header content-length. > > However, i can put that header in my config with some static value then it > will serve. But, in my case, the content length is coming from tomcat > server. The "Content-Length" header is expected to be automatically removed by nginx if it does some processing on the response body and the length can be changed during processing. In particular, this includes the following cases:

- the response is gzipped ("gzip on");
- SSI is enabled;
- sub_filter is enabled.

Various 3rd party modules can also remove Content-Length. If you want to preserve Content-Length, you should identify which modules remove Content-Length in your particular configuration and switch them off. [...] > chunked_transfer_encoding off; Note: this directive in your config prevents nginx from using "Transfer-Encoding: chunked" to indicate response length.
As a result nginx won't be able to maintain persistent connections with clients when Content-Length isn't known in advance. Unless you've added this directive to resolve specific problems with broken clients and you know for sure you need it, you may want to remove this directive from your configuration. -- Maxim Dounin http://nginx.org/ From para.vikas at gmail.com Wed Mar 2 17:03:34 2016 From: para.vikas at gmail.com (Vikas Parashar) Date: Wed, 2 Mar 2016 22:33:34 +0530 Subject: Nginx caching no header content-length In-Reply-To: <20160302164823.GM31796@mdounin.ru> References: <20160302164823.GM31796@mdounin.ru> Message-ID: Thanks for your prompt response.

location /xxxxx {
    error_log /var/log/nginx/contenttroubleshooterror.log debug;
    #chunked_transfer_encoding off;
    #proxy_http_version 1.1;
    #proxy_buffering on;
    #proxy_pass_header Content-Length;
    proxy_pass http://tomcat:8080/xxxxx;
    proxy_cache my_cache;
    # Adding a header to see the cache status in the browser
    add_header X-Cache-Status $upstream_cache_status;
    # sub_filter_last_modified on;
    #add_header 'Content-Length' $upstream_http_content_length;
    #add_header 'Content-Length' 12312;
    proxy_cache_key "$scheme$request_uri $http_x_seneca_uid";
    proxy_cache_min_uses 1;
    proxy_cache_valid 200 302 120m;
    proxy_cache_valid 404 1m;
    proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_503 http_504;
}

I don't know how to do it. Could you please let me know how to add the header forcefully, like add_header 'Content-Length' $upstream_http_content_length;? A static value such as add_header 'Content-Length' 12312312; works fine, but the moment I replace it with $upstream_http_content_length it does not work. Any luck? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Wed Mar 2 17:53:19 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Mar 2016 20:53:19 +0300 Subject: Nginx caching no header content-length In-Reply-To: References: <20160302164823.GM31796@mdounin.ru> Message-ID: <20160302175319.GN31796@mdounin.ru> Hello! On Wed, Mar 02, 2016 at 10:33:34PM +0530, Vikas Parashar wrote: [...] > don't know how to do it. Check your configuration for any modules which are expected to change content. Disable them (comment out the relevant directives or switch them off in a particular location). In particular, check for the following directives:

- ssi;
- sub_filter;
- gzip, gunzip;
- add_before_body, add_after_body.

More details can be found here:

http://nginx.org/en/docs/http/ngx_http_ssi_module.html
http://nginx.org/en/docs/http/ngx_http_sub_module.html
http://nginx.org/en/docs/http/ngx_http_gzip_module.html
http://nginx.org/en/docs/http/ngx_http_gunzip_module.html
http://nginx.org/en/docs/http/ngx_http_addition_module.html

> Could you please let me know, how to add header > forcefully. > like add_header 'Content-Length' $upstream_http_content_length; > > somehow, > > add_header 'Content-Length' 12312312 > > Static value is working fine. But at the moment, i replaced it from > upstream_http_content_length You shouldn't try to add Content-Length manually. This is very likely to break things completely. -- Maxim Dounin http://nginx.org/ From gfrankliu at gmail.com Thu Mar 3 03:55:21 2016 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 2 Mar 2016 19:55:21 -0800 Subject: proxy_next_upstream Message-ID: Does the proxy_next_upstream "timeout" apply to both the connect timeout and the read timeout? Is it possible to configure proxy_next_upstream to use the connect timeout only, not the read timeout? In case a connection is made and the request is sent, I don't want to retry the next upstream even when the read timeout happens. A retry in this case could cause damage to the upstream (duplicate payment, etc).
Can someone also give some example of "error" or "invalid_header"? Are they 100% safe, meaning not causing upstream to process the request twice? Am I right that the only real safe is the connect timeout? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at databerries.com Thu Mar 3 09:11:15 2016 From: guillaume at databerries.com (Guillaume Charhon) Date: Thu, 3 Mar 2016 10:11:15 +0100 Subject: Request processing rate and reverse proxy In-Reply-To: <20160302122835.GJ31796@mdounin.ru> References: <20160302122835.GJ31796@mdounin.ru> Message-ID: Hello Maxim, You are completely right. I must have been completely tired to miss it. Thank you, Guillaume On Wed, Mar 2, 2016 at 1:28 PM, Maxim Dounin wrote: > Hello! > > On Mon, Feb 29, 2016 at 05:05:16PM +0100, Guillaume Charhon wrote: > > > I have setup nginx 1.9.3 as a reverse proxy [1] with a rate limitation > per > > server [2]. The rate limitation does not work on this scenario. The rate > > request limitation works well if I use nginx as a normal webserver (for > > example to serve the default welcome page). > > > > I have attached my configuration files (listen on 80 and redirect to > > another webserver running lighttpd). > > There is no "limit_req" directive anywhere in your config, so it's > not a surprise rate limiting does not work. > > You have to configure both "limit_req_zone" (to configure shared > memory zone to store states) and "limit_req" (to configure > particular limits in particular locations), see > details here: > > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx.org at siimnet.dk Thu Mar 3 10:08:52 2016 From: nginx.org at siimnet.dk (Nginx User) Date: Thu, 3 Mar 2016 11:08:52 +0100 Subject: upstream prematurely closes cnx => 502 Bad Gateway to client Message-ID: <7B1985F8-6978-40DC-A70C-F44B78DC4C11@siimnet.dk> Nginx'ers, I'm trying to figure out why I'm randomly seeing requests fail with nginx 1.7.4 when proxying to an upstream pool, like:

2016/03/03 10:24:21 [error] 15905#0: *3252814 upstream prematurely closed connection while reading response header from upstream, client: 10.45.69.25, server: , request: "POST / HTTP/1.1", upstream: "http://10.45.69.28:8081/", host: "mxosCl:8081"
2016/03/03 10:30:02 [error] 15905#0: *3237632 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.45.69.99, server: , request: "POST / HTTP/1.1", upstream: "http://10.45.69.25:8081/", host: "mxosCl:8081"
2016/03/03 10:30:02 [error] 15905#0: *3237670 upstream prematurely closed connection while reading response header from upstream, client: 10.45.69.99, server: , request: "POST / HTTP/1.1", upstream: "http://10.45.69.28:8081/", host: "mxosCl:8081"
2016/03/03 10:30:03 [error] 15905#0: *3237670 upstream prematurely closed connection while reading response header from upstream, client: 10.45.69.99, server: , request: "DELETE / HTTP/1.1", upstream: "http://10.45.69.26:8081/", host: "mxosCl:8081"

The upstream servers don't log any issues, it seems. Clients get a 502 Bad Gateway for such requests.
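A side note on narrowing such 502s down: nginx can log which upstream peer handled each request and what status that peer returned, which makes it easier to correlate with the backends' own logs. A sketch only; the format name and log path are illustrative, not from the thread:

```nginx
# Illustrative log_format: records the upstream address, the status the
# upstream returned, and both timings, so each 502 can be matched to a peer.
log_format upstream_debug '$remote_addr [$time_local] "$request" $status '
                          'up=$upstream_addr up_status=$upstream_status '
                          'rt=$request_time urt=$upstream_response_time';
access_log logs/upstream_debug.log upstream_debug;
```

Combined with a packet capture of port 8081 on the proxy host, this usually shows whether a backend closed the connection before sending response headers.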
TIA /Steffen From nginx-forum at forum.nginx.org Thu Mar 3 10:57:36 2016 From: nginx-forum at forum.nginx.org (stefws) Date: Thu, 03 Mar 2016 05:57:36 -0500 Subject: upstream prematurely closes cnx => 502 Bad Gateway to client In-Reply-To: <7B1985F8-6978-40DC-A70C-F44B78DC4C11@siimnet.dk> References: <7B1985F8-6978-40DC-A70C-F44B78DC4C11@siimnet.dk> Message-ID: My config btw:

user imail;
worker_processes auto;
daemon on;
master_process on;
error_log logs/mos_error.tcp debug_tcp;
error_log logs/mos_error.log;
pid /opt/imail/nginx/logs/mos_nginx.pid;
worker_rlimit_nofile 200000;
worker_rlimit_core 500M;
working_directory /opt/imail/nginx;

events {
    worker_connections 25000;
    use epoll;
    multi_accept off;
}

http {
    log_format testlogs '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$upstream_addr" '
                        '$request_time $upstream_response_time';
    access_log logs/mos_access.log;
    sendfile on;
    keepalive_requests 500000;
    keepalive_timeout 20s;
    tcp_nopush on;
    tcp_nodelay on;
    client_body_buffer_size 128k;
    client_body_temp_path /dev/shm/mos_nginx/client_temp 1 2;
    open_file_cache max=25000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    upstream mosgenericbackend {
        server mss1:8081;
        server mss2:8081;
        server mss3:8081;
        server mss4:8081;
        healthcheck_enabled;
        healthcheck_delay 1000;
        healthcheck_timeout 1000;
        healthcheck_failcount 2;
        healthcheck_send 'GET /mxos/monitor HTTP/1.0';
        keepalive 300;
    }

    server {
        listen 8081;
        keepalive_timeout 600s;
        client_max_body_size 0;
        location /mxos/ {
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://mosgenericbackend;
            proxy_connect_timeout 2;
            more_set_headers "Content-Type: application/json";
        }
        location /mxos/health_status {
            healthcheck_status;
        }
        proxy_connect_timeout 60;
        proxy_read_timeout 30;
        proxy_send_timeout 60;
    }
}

My [vendor patched/supplied due to application specific load balancing+health check] nginx version:

#
/opt/imail/nginx/sbin/nginx -V
nginx version: nginx/1.7.4
built by gcc 3.4.6 20060404 (Red Hat 3.4.6-3)
TLS SNI support disabled
configure arguments: --with-debug --add-module=/bld/current/emailmx90_git_blds/emailmx90_contrib_shared/contrib/nginx_1.7.4/source/addons/healthcheck_nginx_upstreams-master --add-module=/bld/current/emailmx90_git_blds/emailmx90_contrib_shared/contrib/nginx_1.7.4/source/addons/headers-more-nginx-module-0.23 --add-module=/bld/current/emailmx90_git_blds/emailmx90_contrib_shared/contrib/nginx_1.7.4/source/addons/yaoweibin-nginx_tcp_proxy_module-f2156ef

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265031,265032#msg-265032 From nginx-forum at forum.nginx.org Thu Mar 3 11:04:25 2016 From: nginx-forum at forum.nginx.org (AmyAmy) Date: Thu, 03 Mar 2016 06:04:25 -0500 Subject: How can nginx login and access swift storage node automatically? Message-ID: Hi, everybody. I am looking for a way to make my nginx server log in to the swift proxy server automatically; nginx is a web server, and swift is a cache proxy for nginx which can store static files. The flow I hope to achieve is:

1. The client sends a request (10.67.247.21/test) via a browser, for example the Chrome browser, asking to access a folder named test which exists on a swift storage node, a disk that can be SSD or SATA.
2. nginx, configured with proxy_pass pointing to the swift IP (10.67.247.21:8080), accepts the request and tries to log in to swift using the password I have set (user: swift, password: swift), automatically passing X-Storage-User and X-Storage-Pass to swift.
3. swift generates an X-Auth-Token and returns it to nginx.
4. nginx logs in to the swift storage node using the X-Auth-Token.
5. Finally, the client can see the test folder on the web page, and files in the test folder can be downloaded or uploaded.

Is there any function in nginx or swift that can help? Or some additional module or code that can complete the operation? Any advice is welcome.
Thanks for your time Amy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265033,265033#msg-265033 From reallfqq-nginx at yahoo.fr Thu Mar 3 11:42:55 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 3 Mar 2016 12:42:55 +0100 Subject: TLS session resumption (identifier) Message-ID: Hello, Based on the default value of ssl_session_cache, nginx does not store any session parameters, but allows clients with the right Master Key to reuse their ID (and the parameters they got). Since nginx does not cache anything and is thus unable to revalidate anything but the Master Key, isn't it a violation of the RFC not to validate all the parameters? What happens in the following scenario?

1) Client negotiates a new TLS session and stores the session ID locally
2) Server admin changes the configuration of his/her server to completely alter cipher suites, etc. and reloads the configuration (without restarting the server, so the Master Key is left untouched)
3) Client tries to reuse its previously saved session ID with the right Master Key

I guess the server will most probably reject the session and initiate a new one with the same Master Key (please confirm)? Is it 'legal'? I admit that, in a way, the same happens when, say, on a high-traffic server, the cache rotation eliminates old entries which a client then tries to resume a session with... Is it allowed to reduce the session ID mechanism to the check of the Master Key per RFC? Shouldn't you either fully support the mechanism (with a cache of parameters server-side) or not at all? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Mar 3 11:52:15 2016 From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Thu, 3 Mar 2016 12:52:15 +0100 Subject: upstream prematurely closes cnx => 502 Bad Gateway to client In-Reply-To: References: <7B1985F8-6978-40DC-A70C-F44B78DC4C11@siimnet.dk> Message-ID: The HTTP specification states every request shall receive a response. Your backend closes the connection while nginx is awaiting/reading the headers. The problem definitely comes from your backend. You could use tcpdump between nginx and your backend to record what they say to each other. Try to correlate logs from nginx with logs from your backend. --- *B. R.* On Thu, Mar 3, 2016 at 11:57 AM, stefws wrote: > My config btw: > > user imail; > worker_processes auto; > daemon on; > master_process on; > error_log logs/mos_error.tcp debug_tcp; > error_log logs/mos_error.log; > pid /opt/imail/nginx/logs/mos_nginx.pid; > worker_rlimit_nofile 200000; > worker_rlimit_core 500M; > working_directory /opt/imail/nginx; > events { > worker_connections 25000; > use epoll; > multi_accept off; > } > http { > log_format testlogs '$remote_addr - $remote_user [$time_local] ' > '"$request" $status $body_bytes_sent ' > '"$http_referer" "$upstream_addr" ' > '$request_time $upstream_response_time'; > access_log logs/mos_access.log; > sendfile on; > keepalive_requests 500000; > keepalive_timeout 20s; > tcp_nopush on; > tcp_nodelay on; > client_body_buffer_size 128k; > client_body_temp_path /dev/shm/mos_nginx/client_temp 1 2; > open_file_cache max=25000 inactive=30s; > open_file_cache_valid 60s; > open_file_cache_min_uses 2; > open_file_cache_errors on; > upstream mosgenericbackend { > server mss1:8081; > server mss2:8081; > server mss3:8081; > server mss4:8081; > healthcheck_enabled; > healthcheck_delay 1000; > healthcheck_timeout 1000; > healthcheck_failcount 2; > healthcheck_send 'GET /mxos/monitor HTTP/1.0'; > keepalive 300; > } > server { > listen 8081; > keepalive_timeout 600s; > client_max_body_size 0; > location /mxos/ { > proxy_http_version 1.1; > proxy_set_header Connection ""; > 
proxy_pass http://mosgenericbackend; > proxy_connect_timeout 2; > more_set_headers "Content-Type: application/json"; > } > location /mxos/health_status { > healthcheck_status; > } > proxy_connect_timeout 60; > proxy_read_timeout 30; > proxy_send_timeout 60; > } > } > > My [vendor patched/supplied due to application specific load > balancing+health check] nginx version: > > # /opt/imail/nginx/sbin/nginx -V > nginx version: nginx/1.7.4 > built by gcc 3.4.6 20060404 (Red Hat 3.4.6-3) > TLS SNI support disabled > configure arguments: --with-debug > > --add-module=/bld/current/emailmx90_git_blds/emailmx90_contrib_shared/contrib/nginx_1.7.4/source/addons/healthcheck_nginx_upstreams-master > > --add-module=/bld/current/emailmx90_git_blds/emailmx90_contrib_shared/contrib/nginx_1.7.4/source/addons/headers-more-nginx-module-0.23 > > --add-module=/bld/current/emailmx90_git_blds/emailmx90_contrib_shared/contrib/nginx_1.7.4/source/addons/yaoweibin-nginx_tcp_proxy_module-f2156ef > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265031,265032#msg-265032 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 3 12:32:10 2016 From: nginx-forum at forum.nginx.org (dannydekr) Date: Thu, 03 Mar 2016 07:32:10 -0500 Subject: Nginx 1.9.11 and OpenSSL 1.0.2G - HTTP2, but no ALPN negotiated. Message-ID: <99c8cf4303efc35c229da6f8bed4ed7c.NginxMailingListEnglish@forum.nginx.org> I have Ubuntu 14.04 with OpenSSL 1.0.2G, Upgraded to Nginx 1.9.11 mainline (PPA) from 1.8.1 stable, because Chrome will drop SPDY in a few months. Better be prepared. Everything went fine, but when I test HTTP2 I notice that ALPN doesn't work: No ALPN negotiated Since I have the latest version of OpenSSL, I have no idea why this is the case. 
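Whether ALPN is available depends on the OpenSSL that nginx was built against, not the one merely installed on the system. A sketch of checking this from `nginx -V` output (the sample string below is illustrative; run `nginx -V 2>&1` on your own binary):

```shell
# Illustrative "nginx -V" output; the "built with" line decides ALPN support.
V='nginx version: nginx/1.9.11
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled'

# Extract the compiled-in OpenSSL version: a 1.0.1x build offers NPN only,
# 1.0.2 or newer is required for ALPN.
BUILT=$(printf '%s\n' "$V" | sed -n 's/^built with OpenSSL \([0-9][0-9.]*[a-z]*\).*/\1/p')
echo "$BUILT"   # 1.0.1f

# A live check against a server (hypothetical host; also needs a 1.0.2+
# OpenSSL on the client side):
#   openssl s_client -connect example.com:443 -alpn h2 < /dev/null | grep -i alpn
```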
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265036,265036#msg-265036 From ahutchings at nginx.com Thu Mar 3 12:44:40 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Thu, 3 Mar 2016 12:44:40 +0000 Subject: Nginx 1.9.11 and OpenSSL 1.0.2G - HTTP2, but no ALPN negotiated. In-Reply-To: <99c8cf4303efc35c229da6f8bed4ed7c.NginxMailingListEnglish@forum.nginx.org> References: <99c8cf4303efc35c229da6f8bed4ed7c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56D831B8.2000602@nginx.com> Hi, On Ubuntu 14.04 NGINX is built with OpenSSL 1.0.1 so is not built with ALPN support. If you have installed OpenSSL 1.0.2 you can recompile NGINX to use this and gain the ALPN support. In most cases HTTP/2 with NPN in OpenSSL 1.0.1 will work for now. Kind Regards Andrew On 03/03/16 12:32, dannydekr wrote: > I have Ubuntu 14.04 with OpenSSL 1.0.2G, > > Upgraded to Nginx 1.9.11 mainline (PPA) from 1.8.1 stable, because Chrome > will drop SPDY in a few months. Better be prepared. > > Everything went fine, but when I test HTTP2 I notice that ALPN doesn't > work: > > No ALPN negotiated > > Since I have the latest version of OpenSSL, I have no idea why this is the > case. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265036,265036#msg-265036 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From mdounin at mdounin.ru Thu Mar 3 13:29:34 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Mar 2016 16:29:34 +0300 Subject: TLS session resumption (identifier) In-Reply-To: References: Message-ID: <20160303132934.GQ31796@mdounin.ru> Hello! On Thu, Mar 03, 2016 at 12:42:55PM +0100, B.R. wrote: > Based on the default value of ssl_session_cache > , > nginx does not store any session parameter, but allows client with the > right Master Key to reuse their ID (and the parameters they got). 
> > Since nginx, does not cache anything and is thus unable to revalidate > anything but the Master Key, isn't it a violation of the RFC not to > validate all the parameters? You are misunderstanding what "ssl_session_cache none" does. It doesn't allow anything to be reused, just says so to clients. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Thu Mar 3 15:42:19 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 3 Mar 2016 16:42:19 +0100 Subject: TLS session resumption (identifier) In-Reply-To: <20160303132934.GQ31796@mdounin.ru> References: <20160303132934.GQ31796@mdounin.ru> Message-ID: Thanks, Maxim. You were right: I did my tests improperly... What is the use of the 'none' value then? Should not there be only the 'off' one? There must be some benefit to it, but I fail to catch it. --- *B. R.* On Thu, Mar 3, 2016 at 2:29 PM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 03, 2016 at 12:42:55PM +0100, B.R. wrote: > > > Based on the default value of ssl_session_cache > > < > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache>, > > nginx does not store any session parameter, but allows client with the > > right Master Key to reuse their ID (and the parameters they got). > > > > Since nginx, does not cache anything and is thus unable to revalidate > > anything but the Master Key, isn't it a violation of the RFC not to > > validate all the parameters? > > You are misunderstanding what "ssl_session_cache none" does. It > doesn't allow anything to be reused, just says so to clients. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igor at sysoev.ru Thu Mar 3 15:48:57 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 3 Mar 2016 18:48:57 +0300 Subject: TLS session resumption (identifier) In-Reply-To: References: <20160303132934.GQ31796@mdounin.ru> Message-ID: On 03 Mar 2016, at 18:42, B.R. wrote: > Thanks, Maxim. > > You were right: I did my tests improperly... > > What is the use of the 'none' value then? Should not there be only the 'off' one? > There must be some benefit to it, but I fail to catch it. Initially it has been implemented for the mail proxy module, but it seems that "none" is more graceful than "off" in general: /* * If the server explicitly says that it does not support * session reuse (see SSL_SESS_CACHE_OFF above), then * Outlook Express fails to upload a sent email to * the Sent Items folder on the IMAP server via a separate IMAP * connection in the background. Therefore we have a special * mode (SSL_SESS_CACHE_SERVER|SSL_SESS_CACHE_NO_INTERNAL_STORE) * where the server pretends that it supports session reuse, * but it does not actually store any session. */ -- Igor Sysoev http://nginx.com > On Thu, Mar 3, 2016 at 2:29 PM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 03, 2016 at 12:42:55PM +0100, B.R. wrote: > > > Based on the default value of ssl_session_cache > > , > > nginx does not store any session parameter, but allows client with the > > right Master Key to reuse their ID (and the parameters they got). > > > > Since nginx, does not cache anything and is thus unable to revalidate > > anything but the Master Key, isn't it a violation of the RFC not to > > validate all the parameters? > > You are misunderstanding what "ssl_session_cache none" does. It > doesn't allow anything to be reused, just says so to clients. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 3 15:55:14 2016 From: nginx-forum at forum.nginx.org (huakaibird) Date: Thu, 03 Mar 2016 10:55:14 -0500 Subject: nginx ssl performance Message-ID: <1977464a78dd3adee49efb806db96da7.NginxMailingListEnglish@forum.nginx.org> Hi, I want to test the nginx server performance with different server configurations (CPU, RAM, etc.). I first used Apache ab as the testing tool on an nginx server with 2 CPUs and 4G RAM: an http test could handle 7000 requests/s with CPU usage at 30%-40%, but https performance dropped dramatically to only 300-400 requests/s with CPU usage reaching 100%. Then I also used JMeter and wrk to test, and found https performance is not much less than http, only a little. For example, with wrk, http can reach 5300 requests/s while https reaches 5100 requests/s with almost the same CPU usage, which does not reach 100%. So I'm confused: is this a problem with the test tool ab? If it's ab's problem, why is its http performance good? And if ab's result is correct, is https really that much slower than http? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265046,265046#msg-265046 From nginx-forum at forum.nginx.org Thu Mar 3 16:00:03 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Thu, 03 Mar 2016 11:00:03 -0500 Subject: Nginx 1.9.11 and OpenSSL 1.0.2G - HTTP2, but no ALPN negotiated. 
In-Reply-To: <56D831B8.2000602@nginx.com> References: <56D831B8.2000602@nginx.com> Message-ID: <273831314516c72c97222e44fc51b66f.NginxMailingListEnglish@forum.nginx.org> Hello, "In most cases HTTP/2 with NPN in OpenSSL 1.0.1 will work for now.", yes, for now, sadly Google will remove the NPN support in Chrome "soon": "We plan to remove support for SPDY in early 2016, and to also remove support for the TLS extension named NPN in favor of ALPN in Chrome at the same time. Server developers are strongly encouraged to move to HTTP/2 and ALPN.". Source: http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy.html That's why we all have to hurry the migration to ALPN by compiling nginx with OpenSSL 1.0.2 or LibreSSL. PS: I can't find a good reason for Google to drop support for NPN right now... it feels like last year, when they wanted to drop support of SPDY in Chrome when HTTP/2 was barely standardized and no major web server was HTTP/2 ready. Best Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265036,265048#msg-265048 From rpaprocki at fearnothingproductions.net Thu Mar 3 16:02:25 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Thu, 3 Mar 2016 08:02:25 -0800 Subject: nginx ssl performance In-Reply-To: <1977464a78dd3adee49efb806db96da7.NginxMailingListEnglish@forum.nginx.org> References: <1977464a78dd3adee49efb806db96da7.NginxMailingListEnglish@forum.nginx.org> Message-ID: ApacheBench doesn't do TLS resumption, so you're forcing a new TLS handshake with each request. This will kill your performance. ab is a pretty weak tool ;) On Thu, Mar 3, 2016 at 7:55 AM, huakaibird wrote: > Hi, > > I want to test the nginx server performance with different server > configuration (CPU and RAM etc) > > I first use apache ab as testing tool, nginx server with 2 CPU and 4G RAM, > http test could handle 7000 requests/s, cpu usage reach to 30%-40%. 
But > https' performace drop dramatically to only 300-400 requests/s and cpu > usage > reach to 100%. > > Then I also use jemeter and wrk to test, find https' performance is not so > much less than http, it's only a little than http. For example use wrk test > http, can reach to 5300 requests/s, while https could reach to 5100 > requests/s and CPU almost same, will not reach to 100%. > > So I'm confused, is that because the test tool ab's problem, if it's ab's > problem, why it's http performance is good? > If ab's test result is correct, is https' performance is so bad than http? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265046,265046#msg-265046 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Thu Mar 3 16:30:51 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Thu, 3 Mar 2016 11:30:51 -0500 Subject: Nginx 1.9.11 and OpenSSL 1.0.2G - HTTP2, but no ALPN negotiated. In-Reply-To: <273831314516c72c97222e44fc51b66f.NginxMailingListEnglish@forum.nginx.org> References: <56D831B8.2000602@nginx.com> <273831314516c72c97222e44fc51b66f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <46123AE9-8146-4906-B32E-C3CE600BDD73@ohlste.in> Hello, > On Mar 3, 2016, at 11:00 AM, Alt wrote: > > Hello, > > "In most cases HTTP/2 with NPN in OpenSSL 1.0.1 will work for now.", yes, > for now, sadly Google will remove the NPN support in Chrome "soon": "We plan > to remove support for SPDY in early 2016, and to also remove support for the > TLS extension named NPN in favor of ALPN in Chrome at the same time. Server > developers are strongly encouraged to move to HTTP/2 and ALPN.". > Source: http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy.html > > Thats why we all have to hurry the migration to ALPN by compiling nginx with > OpenSSL 1.0.2 or LibreSSL. 
> > PS : I can't find a good reason for Google to drop support for NPN right > now... it feels like last year, when they wanted to drop support of SPDY in > Chrome when HTTP/2 was barely standardized and no major web server was > HTTP/2 ready. If you need http2 there is always the option to compile your own nginx binary against a more modern version of OpenSSL than what your operating system provides, or to change operating systems to one which provides such an OpenSSL version. Jim From nginx-forum at forum.nginx.org Thu Mar 3 16:46:54 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Thu, 03 Mar 2016 11:46:54 -0500 Subject: Nginx 1.9.11 and OpenSSL 1.0.2G - HTTP2, but no ALPN negotiated. In-Reply-To: <46123AE9-8146-4906-B32E-C3CE600BDD73@ohlste.in> References: <46123AE9-8146-4906-B32E-C3CE600BDD73@ohlste.in> Message-ID: <8a439cd25d3ede57be9643d198e372e8.NginxMailingListEnglish@forum.nginx.org> Hello, Jim Ohlstein Wrote: ------------------------------------------------------- > If you need http2 there is always the option to compile your own nginx > binary against a more modern version of OpenSSL than what your > operating system provides, or to change operating systems to one which > provides such an OpenSSL version. Yes, that's what I'm doing with LibreSSL :-) But HTTP/2 also works very well with NPN and OpenSSL 1.0.1... sadly it'll be less useful once Google has dropped support for NPN. Best Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265036,265054#msg-265054 From todd.wilson at twcable.com Thu Mar 3 18:54:51 2016 From: todd.wilson at twcable.com (Wilson, Todd) Date: Thu, 3 Mar 2016 18:54:51 +0000 Subject: Entire content is cached but when client pulls a byte range the entire file is sent Message-ID: <75ef189dff1348cc82e68f1dd2043961@exchpapp19.corp.twcable.com> Can the client pull a subset of the content from the cached content? 
If it can pull a subset of the content via byte range requests can you point me to how to configure nginx to allow this...? This is what is stored in cache on the nginx server. [root at gemini-sled1 ac]# strings 81f71da53616b454815b570216041ac7 | more KEY: GET/atis/vod/0x1051e69a101e9/d4_HDFD0075260002446163-8320285.mpg HTTP/1.1 206 Partial Content Server: Cisco/CDS Gateway/3.0 Connection: close Content-Type: video/mpeg Content-Length: 597600300 Content-Range: bytes 0-597600299/597600300 Cache-Control: public But when the client attempt to pull a byte range the entire file is pulled. curl -v -o /dev/null -r 1000-1001 http://10.157.66.194/atis/vod/0x1051e69a101e9/d4_HDFD0075260002446163-8320285.mpg * About to connect() to 10.157.66.194 port 80 * Trying 10.157.66.194... connected * Connected to 10.157.66.194 (10.157.66.194) port 80 > GET /atis/vod/0x1051e69a101e9/d4_HDFD0075260002446163-8320285.mpg HTTP/1.1 > Range: bytes=1000-1001 > User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > Host: 10.157.66.194 > Accept: */* > < HTTP/1.1 206 Partial Content < Server: nginx/1.9.9 < Date: Thu, 03 Mar 2016 18:36:05 GMT < Content-Type: video/mpeg < Content-Length: 597600300 < Connection: keep-alive < Content-Range: bytes 0-597600299/597600300 < Cache-Control: public < Link: ; rel="http://www.iif.atis.com/c2-media-resource-metadata" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 569M 100 569M 0 0 3758k 0 0:02:35 0:02:35 --:--:-- 4054k We have nginx version 1.9.9 installed. 
Nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" "$uri" "$request_uri" "$request_completion" "$cookie_session" "$http_cookie" "$http_host" "$host" "$server_port" "$proxy_add_x_forwarded_for" '; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; #keepalive_timeout 65; proxy_http_version 1.1; proxy_cache_methods GET HEAD POST; proxy_pass_request_headers on; fastcgi_buffers 8 160k; fastcgi_buffer_size 320k; fastcgi_connect_timeout 300; fastcgi_send_timeout 300; fastcgi_read_timeout 300; #proxy_cache_path /arroyo/nginxtest/sample/atis/vod levels=1:2 keys_zone=my-cache:80m proxy_cache_path /cache1 levels=1:2 keys_zone=my-cache:80m max_size=2g inactive=6000m use_temp_path=off; #proxy_temp_path /var/cache/tmp; proxy_force_ranges on; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_cache_key "$request_method$request_uri"; proxy_cache my-cache; proxy_cache_valid 200 206 1d; proxy_cache_valid 404 1m; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_headers_hash_max_size 51200; proxy_headers_hash_bucket_size 6400; include /etc/nginx/conf.d/*.conf; } Default.conf #resolve these upstream servers in /etc/hosts upstream server_array { server 10.155.202.125:80; } server { listen 10.157.66.194:80; #local ip:port for nginx to listen server_name atisdemo.atge.twcable.com; max_ranges 4; proxy_force_ranges on; location / { limit_rate_after 10k; set $limit_rate 3750k; proxy_intercept_errors on; proxy_pass http://server_array; error_page 302 = @locationupstream; } location @locationupstream{ 
rewrite_log on; set $limit_rate 3750k; set $upstreamlocation '$upstream_http_location'; proxy_set_header Range $http_range; proxy_set_header "User-Agent" "Cisco/CDS Gateway/3.0"; proxy_pass_header Server; proxy_pass $upstreamlocation; proxy_force_ranges on; } } Todd Wilson ________________________________ This E-mail and any of its attachments may contain Time Warner Cable proprietary information, which is privileged, confidential, or subject to copyright belonging to Time Warner Cable. This E-mail is intended solely for the use of the individual or entity to which it is addressed. If you are not the intended recipient of this E-mail, you are hereby notified that any dissemination, distribution, copying, or action taken in relation to the contents of and attachments to this E-mail is strictly prohibited and may be unlawful. If you have received this E-mail in error, please notify the sender immediately and permanently delete the original and any copy of this E-mail and any printout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahutchings at nginx.com Thu Mar 3 18:58:57 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Thu, 3 Mar 2016 18:58:57 +0000 Subject: Nginx 1.9.11 and OpenSSL 1.0.2G - HTTP2, but no ALPN negotiated. In-Reply-To: <273831314516c72c97222e44fc51b66f.NginxMailingListEnglish@forum.nginx.org> References: <56D831B8.2000602@nginx.com> <273831314516c72c97222e44fc51b66f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56D88971.2050506@nginx.com> Hi, This link was also shown to me today. I have contacted Google to ask them to reverse the decision to drop NPN HTTP/2. 
Kind Regards Andrew On 03/03/16 16:00, Alt wrote: > Hello, > > "In most cases HTTP/2 with NPN in OpenSSL 1.0.1 will work for now.", yes, > for now, sadly Google will remove the NPN support in Chrome "soon": "We plan > to remove support for SPDY in early 2016, and to also remove support for the > TLS extension named NPN in favor of ALPN in Chrome at the same time. Server > developers are strongly encouraged to move to HTTP/2 and ALPN.". > Source: http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy.html > > Thats why we all have to hurry the migration to ALPN by compiling nginx with > OpenSSL 1.0.2 or LibreSSL. > > PS : I can't find a good reason for Google to drop support for NPN right > now... it feels like last year, when they wanted to drop support of SPDY in > Chrome when HTTP/2 was barely standardized and no major web server was > HTTP/2 ready. > > Best Regards > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265036,265048#msg-265048 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From nginx-forum at forum.nginx.org Fri Mar 4 07:12:51 2016 From: nginx-forum at forum.nginx.org (stefws) Date: Fri, 04 Mar 2016 02:12:51 -0500 Subject: upstream prematurely closes cnx => 502 Bad Gateway to client In-Reply-To: References: Message-ID: @B.R. You're right: it seems my upstream tomcat instances were RESETing connections while replying. So far I have improved it a lot by altering an http connector keepAliveTimeout value that was mistakenly expressed in seconds when in fact it should be in msec ;) Under heavy load it still occurs, but far less frequently; I will dig deeper into tomcat trimming/tuning Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265031,265070#msg-265070 From reallfqq-nginx at yahoo.fr Fri Mar 4 09:08:18 2016 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Fri, 4 Mar 2016 10:08:18 +0100 Subject: TLS session resumption (identifier) In-Reply-To: References: <20160303132934.GQ31796@mdounin.ru> Message-ID: Thanks Igor, that makes the whole thing crystal clear! What saves us there is the fact that, if I understand it well, RFC 5077 states the server decides by itself on the use of tickets, and those have precedence over identifiers. But still, advertising something without actually supporting it must lead to cases where session reuse is believed to take place without ever happening, harming performance... that was probably happening in versions < 1.5.9. Giving the possibility to accommodate Outlook's (and Microsoft products') numerous quirks is fine, but making it the default is debatable... Maybe the docs should be more explicit about the reason for the existence of 'none'? Code comments are clearer than the docs on this matter. --- *B. R.* On Thu, Mar 3, 2016 at 4:48 PM, Igor Sysoev wrote: > > On 03 Mar 2016, at 18:42, B.R. wrote: > > Thanks, Maxim. > > You were right: I did my tests improperly... > > What is the use of the 'none' value then? Should not there be only the > 'off' one? > There must be some benefit to it, but I fail to catch it. > > > Initially it has been implemented for the mail proxy module, but it seems that > "none" > is more graceful than "off" in general: > > /* > * If the server explicitly says that it does not support > * session reuse (see SSL_SESS_CACHE_OFF above), then > * Outlook Express fails to upload a sent email to > * the Sent Items folder on the IMAP server via a separate IMAP > * connection in the background. Therefore we have a special > * mode (SSL_SESS_CACHE_SERVER|SSL_SESS_CACHE_NO_INTERNAL_STORE) > * where the server pretends that it supports session reuse, > * but it does not actually store any session. > */ > > -- > Igor Sysoev > http://nginx.com > > On Thu, Mar 3, 2016 at 2:29 PM, Maxim Dounin wrote: > >> Hello!
>> >> On Thu, Mar 03, 2016 at 12:42:55PM +0100, B.R. wrote: >> >> > Based on the default value of ssl_session_cache >> > < >> http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache >> >, >> > nginx does not store any session parameter, but allows client with the >> > right Master Key to reuse their ID (and the parameters they got). >> > >> > Since nginx, does not cache anything and is thus unable to revalidate >> > anything but the Master Key, isn't it a violation of the RFC not to >> > validate all the parameters? >> >> You are misunderstanding what "ssl_session_cache none" does. It >> doesn't allow anything to be reused, just says so to clients. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Fri Mar 4 09:33:07 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 4 Mar 2016 12:33:07 +0300 Subject: TLS session resumption (identifier) In-Reply-To: References: <20160303132934.GQ31796@mdounin.ru> Message-ID: <5E35CF52-1C9C-4916-8E2F-CBF01644C8BC@sysoev.ru> On 04 Mar 2016, at 12:08, B.R. wrote: > Thanks Igor, that makes the whole thing crystal clear! > > What saves us there is the fact that, if I understand it well, the RFC 5077? states the server decides by itself on the use of tickets and those have precedence over identifiers. Yes. > But still, advertising something without actually supporting it must lead to cases where sessions reuse is believed to take place without ever happening, harming performance... that was probably happening in versions < 1.5.9. I do not think that it should harm performance. 
> Giving the possibility to accommodate Outlook's (and Microsoft products') numerous quirks is fine, but making it the default is debatable... I believe this is a safe default, and clients should not rely on resumed sessions because 1) sessions have a timeout defined by server security policy, 2) and the server has limited session storage, so old sessions are removed. > Maybe the docs should be more explicit about the reason for the existence of 'none'? Code comments are clearer than the docs on this matter. Yes, probably. -- Igor Sysoev http://nginx.com > On Thu, Mar 3, 2016 at 4:48 PM, Igor Sysoev wrote: > > On 03 Mar 2016, at 18:42, B.R. wrote: > >> Thanks, Maxim. >> >> You were right: I did my tests improperly... >> >> What is the use of the 'none' value then? Should not there be only the 'off' one? >> There must be some benefit to it, but I fail to catch it. > > Initially it has been implemented for the mail proxy module, but it seems that "none" > is more graceful than "off" in general: > > /* > * If the server explicitly says that it does not support > * session reuse (see SSL_SESS_CACHE_OFF above), then > * Outlook Express fails to upload a sent email to > * the Sent Items folder on the IMAP server via a separate IMAP > * connection in the background. Therefore we have a special > * mode (SSL_SESS_CACHE_SERVER|SSL_SESS_CACHE_NO_INTERNAL_STORE) > * where the server pretends that it supports session reuse, > * but it does not actually store any session. > */ > > -- > Igor Sysoev > http://nginx.com > >> On Thu, Mar 3, 2016 at 2:29 PM, Maxim Dounin wrote: >> Hello! >> >> On Thu, Mar 03, 2016 at 12:42:55PM +0100, B.R. wrote: >> >> > Based on the default value of ssl_session_cache
>> > > >> > Since nginx, does not cache anything and is thus unable to revalidate >> > anything but the Master Key, isn't it a violation of the RFC not to >> > validate all the parameters? >> >> You are misunderstanding what "ssl_session_cache none" does. It >> doesn't allow anything to be reused, just says so to clients. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Mar 4 09:55:56 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 4 Mar 2016 10:55:56 +0100 Subject: TLS session resumption (identifier) In-Reply-To: <5E35CF52-1C9C-4916-8E2F-CBF01644C8BC@sysoev.ru> References: <20160303132934.GQ31796@mdounin.ru> <5E35CF52-1C9C-4916-8E2F-CBF01644C8BC@sysoev.ru> Message-ID: On Fri, Mar 4, 2016 at 10:33 AM, Igor Sysoev wrote: > But still, advertising something without actually supporting it must lead > to cases where session reuse is believed to take place without ever > happening, harming performance... that was probably happening in versions < > 1.5.9. > > > I do not think that it should harm performance. > "Oh yes it does"... I am surprised by your stance and I beg to differ. > Having quite some load from many clients on a web server delivering content over HTTPS, it relieves a lot of pain to save CPU cycles by avoiding a full handshake. > When a client browses a website, (s)he will initiate many connections. Beyond the first one (ones with multiplexing?), session reuse kicks in. Repeat that for each client and sum all the saved CPU cycles. Even an improper (non-scientific) benchmark will show you improvement. > > Session reuse is part of the effort of optimizing TLS traffic to minimize its overhead. Have a talk about it with the W3C webperf group if you wish, to which Ilya Grigorik (who participated at nginxconf 2014) belongs. Have a look at his checklist too. > Giving the possibility to accommodate Outlook's (and Microsoft > products') numerous quirks is fine, but making it the default is debatable... > > > I believe this is a safe default and clients should not rely on resumed > sessions because > 1) sessions have a timeout defined by server security policy, > 2) and the server has limited session storage, so old sessions are removed. > "Well, the mechanism behind TLS sessions is basically a trial-and-error one." Even for tickets, I would add, if the server changed its Master Key between ticket creation and reuse. There is little-to-no overhead in trying an expired session ID/ticket, which the client communicates in its initial message to the server. ID search in the cache or ticket invalidation requires few resources, and in case of invalidation the normal protocol to initiate a new session resumes. There is no guarantee a session exists, but there is everything to gain from it if it does. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Fri Mar 4 10:00:01 2016 From: nginx-forum at forum.nginx.org (stefws) Date: Fri, 04 Mar 2016 05:00:01 -0500 Subject: upstream prematurely closes cnx => 502 Bad Gateway to client In-Reply-To: References: Message-ID: <3eb17ba595dd27318a1998c6f3618361.NginxMailingListEnglish@forum.nginx.org> Seems I'm not alone w/TC issues ;) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265031,265078#msg-265078 From nginx-forum at forum.nginx.org Fri Mar 4 10:00:50 2016 From: nginx-forum at forum.nginx.org (stefws) Date: Fri, 04 Mar 2016 05:00:50 -0500 Subject: upstream prematurely closes cnx => 502 Bad Gateway to client In-Reply-To: <3eb17ba595dd27318a1998c6f3618361.NginxMailingListEnglish@forum.nginx.org> References: <3eb17ba595dd27318a1998c6f3618361.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7677b72308cbea379cb12d1aa16fc802.NginxMailingListEnglish@forum.nginx.org> stefws Wrote: ------------------------------------------------------- > Seems I'm not alone w/TC issues ;) missed the link: http://permalink.gmane.org/gmane.comp.web.haproxy/26860 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265031,265079#msg-265079 From igor at sysoev.ru Fri Mar 4 10:19:21 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 4 Mar 2016 13:19:21 +0300 Subject: TLS session resumption (identifier) In-Reply-To: References: <20160303132934.GQ31796@mdounin.ru> <5E35CF52-1C9C-4916-8E2F-CBF01644C8BC@sysoev.ru> Message-ID: <2E9E7199-6B4F-4AD0-B7BC-7D59AC5EB11B@sysoev.ru> On 04 Mar 2016, at 12:55, B.R. wrote: > On Fri, Mar 4, 2016 at 10:33 AM, Igor Sysoev wrote: >> But still, advertising something without actually supporting it must lead to cases where sessions reuse is believed to take place without ever happening, harming performance... that was probably happening in versions < 1.5.9. > > I do not think that it should harm performance. > > ?Oh yes it does?... I am surprised by your stance and I beg to differ. 
> Having quite some load from many clients on a web server delivering content over HTTPS, it relieves a lot of pain to save CPU cycles by avoiding a full handshake. > When a client browses a website, (s)he will initiate many connections. Beyond the first one (ones with multiplexing?), session reuse kicks in. Repeat that for each client and sum all the saved CPU cycles. Even an improper (non-scientific) benchmark will show you improvement. > > Session reuse is part of the effort of optimizing TLS traffic to minimize its overhead. Have a talk about it with the W3C webperf group if you wish, to which Ilya Grigorik (who participated at nginxconf 2014) belongs. Have a look at his checklist too. Sorry, I meant there is no performance difference between the "none" and "off" settings. As to the default value, the builtin session cache was the default initially, but it turned out that it leads to memory fragmentation. So the default value has been changed to "off" and later to "none". Of course a shared cache is certainly better as a default value, but there is no good understanding of what default cache size should be used. And now it becomes less important with the introduction of tickets. -- Igor Sysoev http://nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Mar 4 10:30:32 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 4 Mar 2016 11:30:32 +0100 Subject: TLS session resumption (identifier) In-Reply-To: <2E9E7199-6B4F-4AD0-B7BC-7D59AC5EB11B@sysoev.ru> References: <20160303132934.GQ31796@mdounin.ru> <5E35CF52-1C9C-4916-8E2F-CBF01644C8BC@sysoev.ru> <2E9E7199-6B4F-4AD0-B7BC-7D59AC5EB11B@sysoev.ru> Message-ID: On Fri, Mar 4, 2016 at 11:19 AM, Igor Sysoev wrote: > Sorry, I meant there is no performance difference between the "none" and "off" > settings. > Well, the client believes it should remember every session ID and store it somewhere for nothing, reading/resending/writing it on every connection.
Small enough network traffic difference, though (the extra, useless ID in the ClientHello message could be considered harmless, even though those extra bytes appear on each TLS session establishment). > As to the default value, the builtin session cache was the default initially, but it > turned out that > it leads to memory fragmentation. So the default value has been changed to > "off" and > later to "none". > > Of course a shared cache is certainly better as a default value, but there is > no good understanding > of what default cache size should be used. And now it becomes less important > with the introduction of tickets. > Total agreement there: I was not pushing for a default activating a cache, but rather for the clean 'off' setting. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Fri Mar 4 10:38:53 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 4 Mar 2016 13:38:53 +0300 Subject: TLS session resumption (identifier) In-Reply-To: References: <20160303132934.GQ31796@mdounin.ru> <5E35CF52-1C9C-4916-8E2F-CBF01644C8BC@sysoev.ru> <2E9E7199-6B4F-4AD0-B7BC-7D59AC5EB11B@sysoev.ru> Message-ID: <2C888581-B281-4984-8F99-CA51F5CE6B4C@sysoev.ru> On 04 Mar 2016, at 13:30, B.R. wrote: > On Fri, Mar 4, 2016 at 11:19 AM, Igor Sysoev wrote: > Sorry, I meant there is no performance difference between the "none" and "off" settings. > > Well, the client believes it should remember every session ID and store it somewhere for nothing, reading/resending/writing it on every connection. > Small enough network traffic difference, though (the extra, useless ID in the ClientHello message could be considered harmless, even though those extra bytes appear on each TLS session establishment). I believe this is a negligible degradation for a client. These operations can only be noticeable on a server which serves a lot of simultaneous clients.
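[Editor's note] For readers following this thread, the cache behaviors being debated correspond to the ssl_session_cache directive. The sketch below lists the alternatives side by side for illustration only; they are mutually exclusive in a real configuration, and the 10m shared-cache size is an arbitrary example, not a recommended value:

```nginx
# "off": no cache, and nginx explicitly tells clients
# that sessions may not be reused.
ssl_session_cache off;

# "none": reuse is advertised to clients, but nothing is stored
# server-side, so resumption by session ID never actually happens
# (this is what Maxim's explanation above describes).
ssl_session_cache none;

# Shared cache across worker processes; the size must be
# chosen per deployment.
ssl_session_cache shared:SSL:10m;
```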
> As to the default value, the builtin session cache was the default initially, but it turned out that > it leads to memory fragmentation. So the default value has been changed to > "off" and > later to "none". > > Of course a shared cache is certainly better as a default value, but there is no good understanding > of what default cache size should be used. And now it becomes less important > with the introduction of tickets. > > Total agreement there: I was not pushing for a default activating a cache, but rather for the clean 'off' setting. > --- > B. R. -- Igor Sysoev http://nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Fri Mar 4 10:48:52 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 04 Mar 2016 11:48:52 +0100 Subject: upstream prematurely closes cnx => 502 Bad Gateway to client In-Reply-To: <7677b72308cbea379cb12d1aa16fc802.NginxMailingListEnglish@forum.nginx.org> References: <3eb17ba595dd27318a1998c6f3618361.NginxMailingListEnglish@forum.nginx.org> <7677b72308cbea379cb12d1aa16fc802.NginxMailingListEnglish@forum.nginx.org> Message-ID: <47623fad811442e88f52da4461668742@none.at> Hi. On 04-03-2016 11:00, stefws wrote: > stefws Wrote: > ------------------------------------------------------- >> Seems I'm not alone w/TC issues ;) > missed the link: > http://permalink.gmane.org/gmane.comp.web.haproxy/26860 Well, maybe you have a completely different situation. Is it possible to build a more recent nginx version without the additional modules? Please can you set up Tomcat as described in my answer? http://permalink.gmane.org/gmane.comp.web.haproxy/26862 #### Please can you try to run the connector with debug on? http://tomcat.apache.org/tomcat-8.0-doc/logging.html#Using_java.util.logging_%28default%29 I would try to use this.
org.apache.catalina.session.level=ALL org.apache.coyote.http11.Http11Protocol.level=ALL Pay attention: this will produce a lot of entries in the logs and could have some impact on performance. The standard setup also has some low limits; maybe you must increase these limits. http://tomcat.apache.org/tomcat-8.0-doc/config/http.html#Standard_Implementation #### Maybe I have missed it, but which Tomcat version do you use? Please also set up nginx with the debug option as described here http://nginx.org/en/docs/debugging_log.html What's the error line(s) on the Tomcat side? Please can you also post the current 'conf/server.xml' with all 'Connector*', thanks. BR Aleks From nginx-forum at forum.nginx.org Fri Mar 4 12:12:09 2016 From: nginx-forum at forum.nginx.org (vizl) Date: Fri, 04 Mar 2016 07:12:09 -0500 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <3276415.0yY8t7h8rS@vbart-workstation> References: <3276415.0yY8t7h8rS@vbart-workstation> Message-ID: Sorry for the late answer, but we have been doing some tests, and noticed that the problem appears when thread_pool is enabled. thread_pool default threads=128 max_queue=1024 We need to use thread_pool, and unfortunately can't permanently disable it Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,265085#msg-265085 From vbart at nginx.com Fri Mar 4 12:20:48 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 04 Mar 2016 15:20:48 +0300 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: References: <3276415.0yY8t7h8rS@vbart-workstation> Message-ID: <13012861.meDOcPVDIf@vbart-workstation> On Friday 04 March 2016 07:12:09 vizl wrote: > Sorry for the late answer, but we have been doing some tests, and noticed that the problem > appears when thread_pool is enabled. > > thread_pool default threads=128 max_queue=1024 > > We need to use thread_pool, and unfortunately can't permanently disable it > [..] Do you modify files that are served by nginx? Do you have open_file_cache enabled? wbr, Valentin V.
Bartenev From maxim at nginx.com Fri Mar 4 12:24:47 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 4 Mar 2016 15:24:47 +0300 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <13012861.meDOcPVDIf@vbart-workstation> References: <3276415.0yY8t7h8rS@vbart-workstation> <13012861.meDOcPVDIf@vbart-workstation> Message-ID: <56D97E8F.3030508@nginx.com> On 3/4/16 3:20 PM, Valentin V. Bartenev wrote: > On Friday 04 March 2016 07:12:09 vizl wrote: >> Sorry for long answer, but we have doing some tests, and notice that probles >> is appear when thread_pool enabled. >> >> thread_pool default threads=128 max_queue=1024 >> >> We need to use thread_pool, and can't permenent disable it unfortunately >> > [..] > > Do you modify files that are served by nginx? > Do you have open_file_cache enabled? > Can we ask you again about the nginx configuration? -- Maxim Konovalov From nginx-forum at forum.nginx.org Fri Mar 4 13:25:43 2016 From: nginx-forum at forum.nginx.org (vizl) Date: Fri, 04 Mar 2016 08:25:43 -0500 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <13012861.meDOcPVDIf@vbart-workstation> References: <13012861.meDOcPVDIf@vbart-workstation> Message-ID: <7b3bb8c49ff87e300e3963c31d7ed11b.NginxMailingListEnglish@forum.nginx.org> user www; worker_processes 16; thread_pool default threads=128 max_queue=1024; worker_rlimit_nofile 65536; ###timer_resolution 100ms; #error_log /home/logs/error_log.nginx error; error_log /home/logs/error_log.nginx.debug debug; events { worker_connections 30000; use epoll; } http { include mime.types; default_type application/octet-stream; index index.html index.htm; output_buffers 2 256k; read_ahead 256k; # was 1m; aio threads=default; aio on; sendfile on; sendfile_max_chunk 256k; server { listen *:80 default rcvbuf=32768 backlog=2048 reuseport deferred; listen *:443 ssl default rcvbuf=32768 backlog=2048 reuseport deferred; server_name localhost; access_log /home/logs/access.log; error_log /home/logs/error.log warn; 
root /mnt; expires 20m; location ~ ^/crossdomain.xml { } location ~ \.[Ff][Ll][Vv]$ { flv; } location ~ \.[Mm][Pp]4$ { mp4; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,265089#msg-265089 From nginx-forum at forum.nginx.org Fri Mar 4 13:38:17 2016 From: nginx-forum at forum.nginx.org (stefws) Date: Fri, 04 Mar 2016 08:38:17 -0500 Subject: upstream prematurely closes cnx => 502 Bad Gateway to client In-Reply-To: <47623fad811442e88f52da4461668742@none.at> References: <47623fad811442e88f52da4461668742@none.at> Message-ID: <533f73754c8a1ef923db14c4b43ea75a.NginxMailingListEnglish@forum.nginx.org> I've also seen the issue when running plain nginx 1.9.11, sort of like building a new nginx; only the issue is that the upstream closes the connections, not nginx. I've since discovered that our TCs had the Connector's keepAliveTimeout set way low (10 msec); I mistakenly thought the units were seconds and not, as they actually are, msec. Since increasing this 1000-fold everything seems much better now :) Thanks, will look into your suggestions next time I need to debug TC; right now they are running at high throttle ;) BTW I'm using Apache Tomcat/7.0.27 tcpdump snippet between nginx(10.45.69.14) and TC(10.45.69.25), where TC RESETs the cnx instead of replying: 01:15:49.744283 IP 10.45.69.14.58702 > 10.45.69.25.8081: Flags [P.], seq 31688:39753, ack 191, win 219, options [nop,nop,TS val 38708511 ecr 32578863], length 8065 /PUT / HTTP/1.1^M Host: mosgenericbackend^M Content-Length: 7811^M Content-Type: application/x-www-form-urlencoded^M User-Agent: Java/1.8.0_66^M Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2^M ^M 01:15:49.744311 IP 10.45.69.25.8081 > 10.45.69.14.58702: Flags [R], seq 1376851868, win 0, length 0 01:15:49.744467 IP 10.45.69.14.58702 > 10.45.69.25.8081: Flags [.], ack 192, win 219, options [nop,nop,TS val 38708512 ecr 32578874], length 0 01:15:49.744480 IP 10.45.69.25.8081 > 10.45.69.14.58702: Flags [R], seq 1376851869, win 0, length 0 Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,265031,265090#msg-265090 From vbart at nginx.com Fri Mar 4 13:44:48 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 04 Mar 2016 16:44:48 +0300 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <7b3bb8c49ff87e300e3963c31d7ed11b.NginxMailingListEnglish@forum.nginx.org> References: <13012861.meDOcPVDIf@vbart-workstation> <7b3bb8c49ff87e300e3963c31d7ed11b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1835922.5eEzMA9GLl@vbart-workstation> On Friday 04 March 2016 08:25:43 vizl wrote: > user www; > worker_processes 16; > thread_pool default threads=128 max_queue=1024; > worker_rlimit_nofile 65536; > ###timer_resolution 100ms; > > #error_log /home/logs/error_log.nginx error; > error_log /home/logs/error_log.nginx.debug debug; > > events { > worker_connections 30000; > use epoll; > } > > http { > include mime.types; > default_type application/octet-stream; > index index.html index.htm; > > output_buffers 2 256k; > read_ahead 256k; # was 1m; > aio threads=default; > aio on; This is invalid configuration due to duplicated "aio" directive. I assume it's not the configuration with which your nginx is currently running. > sendfile on; > sendfile_max_chunk 256k; > > server { > listen *:80 default rcvbuf=32768 backlog=2048 reuseport deferred; > listen *:443 ssl default rcvbuf=32768 backlog=2048 reuseport deferred; > server_name localhost; > access_log /home/logs/access.log; > error_log /home/logs/error.log warn; > root /mnt; > expires 20m; > > location ~ ^/crossdomain.xml { } > location ~ \.[Ff][Ll][Vv]$ { > flv; > } > location ~ \.[Mm][Pp]4$ { > mp4; > } > } > } > The question remains the same: do you or some tool periodically change the files? wbr, Valentin V. 
Bartenev From nginx-forum at forum.nginx.org Fri Mar 4 14:42:45 2016 From: nginx-forum at forum.nginx.org (vizl) Date: Fri, 04 Mar 2016 09:42:45 -0500 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <1835922.5eEzMA9GLl@vbart-workstation> References: <1835922.5eEzMA9GLl@vbart-workstation> Message-ID: <12bcc21db12e7065dbbe4addd0fe4816.NginxMailingListEnglish@forum.nginx.org> Sorry, my misprint. Config without aio on; only aio threads=default; > do you or some tool periodically change the files ? no, files are unchanged, just periodically some new ones are added and some expired ones are deleted Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,265095#msg-265095 From vbart at nginx.com Fri Mar 4 14:45:38 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 04 Mar 2016 17:45:38 +0300 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <12bcc21db12e7065dbbe4addd0fe4816.NginxMailingListEnglish@forum.nginx.org> References: <1835922.5eEzMA9GLl@vbart-workstation> <12bcc21db12e7065dbbe4addd0fe4816.NginxMailingListEnglish@forum.nginx.org> Message-ID: <10876994.dJh9ZuiH4b@vbart-workstation> On Friday 04 March 2016 09:42:45 vizl wrote: > Sorry, my misprint. > > Config without aio on; > > only aio threads=default; > > > do you or some tool periodically change the files ? > no, files are unchanged, just periodically some new ones are added and some > expired ones are deleted > Could you provide the debug log? wbr, Valentin V. Bartenev From giulio at loffreda.com.br Fri Mar 4 16:20:18 2016 From: giulio at loffreda.com.br (Giulio Loffreda) Date: Fri, 4 Mar 2016 17:20:18 +0100 Subject: TCP Connection In-Reply-To: References: Message-ID: <0F7728BB-8F31-46D5-AF9B-FC04F80612AA@loffreda.com.br> Hi All, If I'm sending this email to the wrong list, apologies; please point me to the right one. I have one embedded application which needs to connect via TCP and send an HTTP request. We run Nginx on Ubuntu 14.04.
I can connect and persist, but once the packet is sent, I get HTTP 400 without any log and without the web server taking the request. I think it is at a deeper level in Nginx, but I'm having a hard time finding a good answer. Hope somebody can help. Thanks Giulio From vbart at nginx.com Fri Mar 4 16:24:59 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 04 Mar 2016 19:24:59 +0300 Subject: TCP Connection In-Reply-To: <0F7728BB-8F31-46D5-AF9B-FC04F80612AA@loffreda.com.br> References: <0F7728BB-8F31-46D5-AF9B-FC04F80612AA@loffreda.com.br> Message-ID: <1494504.esqr1lxAn2@vbart-workstation> On Friday 04 March 2016 17:20:18 Giulio Loffreda wrote: > Hi All, > > If I'm sending this email to the wrong list, apologies; please point me to the right one. > > I have one embedded application which needs to connect via TCP and send an HTTP request. > We run Nginx on Ubuntu 14.04. > > I can connect and persist, but once the packet is sent, I get HTTP 400 without any log and without the web server taking the request. > > I think it is at a deeper level in Nginx, but I'm having a hard time finding a good answer. > > Hope somebody can help. > [..] The reason for a 400 is usually logged at the "info" level in error_log. See: http://nginx.org/r/error_log wbr, Valentin V. Bartenev From giulio at loffreda.com.br Fri Mar 4 16:41:42 2016 From: giulio at loffreda.com.br (Giulio Loffreda) Date: Fri, 4 Mar 2016 17:41:42 +0100 Subject: TCP Connection In-Reply-To: <1494504.esqr1lxAn2@vbart-workstation> References: <0F7728BB-8F31-46D5-AF9B-FC04F80612AA@loffreda.com.br> <1494504.esqr1lxAn2@vbart-workstation> Message-ID: Thank you for your answer.
Here is my log after the request, my request, my response and my nginx.conf root at vps190138:~# cat /var/log/nginx-error.log 2016/03/04 15:49:02 [notice] 8990#0: using the "epoll" event method 2016/03/04 15:49:02 [notice] 8990#0: nginx/1.8.1 2016/03/04 15:49:02 [notice] 8990#0: OS: Linux 3.13.0-66-generic 2016/03/04 15:49:02 [notice] 8990#0: getrlimit(RLIMIT_NOFILE): 1024:4096 2016/03/04 15:49:02 [notice] 8991#0: start worker processes 2016/03/04 15:49:02 [notice] 8991#0: start worker process 8993 #REQUEST AT+CIPSTART="TCP", "myhost.com","80" AT+CIPSEND=150 POST /devices/deviceRegister HTTP/1.1 Host: myhost.com Cache-Control: no-cache Content-Type: application/json Content-Length:14 {"IMEI":"aa"} #RESPONSE SEND OK HTTP/1.1 400 Bad Request Server: nginx/1.8.1 Date: Fri, 04 Mar 2016 15:52:21 GMT Content-Type: text/html Content-Length: 172 Connection: close 400 Bad Request

<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.8.1</center>
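[Editor's note] One thing worth double-checking in the request quoted above: the declared Content-Length: 14 does not match the 13-byte body {"IMEI":"aa"} (unless an extra byte such as a trailing newline was actually sent), and a framing mismatch like that is a common reason for embedded clients to get a 400 from a web server. A minimal sketch (the helper name is illustrative, not from the thread) of building a raw HTTP/1.1 POST with a Content-Length computed from the actual body:

```python
def build_post(host: str, path: str, body: bytes) -> bytes:
    # Compute Content-Length from the real body instead of hard-coding it,
    # so the declared length always matches the bytes actually sent.
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: application/json\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + body

body = b'{"IMEI":"aa"}'
request = build_post("myhost.com", "/devices/deviceRegister", body)
print(len(body))  # prints 13, not the 14 declared in the request above
```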
-------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 1625 bytes Desc: not available URL: -------------- next part -------------- > Le 4 mars 2016 ? 17:24, Valentin V. Bartenev a ?crit : > > On Friday 04 March 2016 17:20:18 Giulio Loffreda wrote: >> Hi All, >> >> If I?m sending this email to wrong list, apologies and give me the good one. >> >> I have one embedded application which needs to connect via TCP and send an HTTP request. >> We run Nginx on Ubuntu 14.04. >> >> I can connect and persist, but once the packet is sent, I get HTTP 400 without any log and without web server taking the request. >> >> I think it is a deeper level on Nginx but having hard to find a good answer. >> >> Hope somebody can help. >> > [..] > > The reason of 400 is usually logged on "info" level in error_log. > > See: http://nginx.org/r/error_log > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From vbart at nginx.com Fri Mar 4 17:56:07 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 04 Mar 2016 20:56:07 +0300 Subject: TCP Connection In-Reply-To: References: <1494504.esqr1lxAn2@vbart-workstation> Message-ID: <3329837.uuY5S0hQFr@vbart-workstation> On Friday 04 March 2016 17:41:42 Giulio Loffreda wrote: > Thank you for your answer. > > Here is my log after the request, my request, my response and my nginx.conf > > root at vps190138:~# cat /var/log/nginx-error.log > 2016/03/04 15:49:02 [notice] 8990#0: using the "epoll" event method > 2016/03/04 15:49:02 [notice] 8990#0: nginx/1.8.1 > 2016/03/04 15:49:02 [notice] 8990#0: OS: Linux 3.13.0-66-generic > 2016/03/04 15:49:02 [notice] 8990#0: getrlimit(RLIMIT_NOFILE): 1024:4096 > 2016/03/04 15:49:02 [notice] 8991#0: start worker processes > 2016/03/04 15:49:02 [notice] 8991#0: start worker process 8993 > [..] 
You have another error_log directive on the "http" level of your configuration, and it overrides log settings set on the main config level. wbr, Valentin V. Bartenev From giulio at loffreda.com.br Fri Mar 4 18:12:56 2016 From: giulio at loffreda.com.br (Giulio Loffreda) Date: Fri, 4 Mar 2016 19:12:56 +0100 Subject: TCP Connection In-Reply-To: <3329837.uuY5S0hQFr@vbart-workstation> References: <1494504.esqr1lxAn2@vbart-workstation> <3329837.uuY5S0hQFr@vbart-workstation> Message-ID: Still having no luck. No log is generated. TCP connection has any relation to web socket support configuration ? this is my site config. server { listen 80; server_name myhost.com; access_log /var/log/nginx/myhost.com.log combined; error_log /var/log/nginx/myhost.com.error.log; root /home/spark/myhost.com/web; try_files $uri /index.php; set $cache_uri $request_uri; if ($request_method = POST) { set $cache_uri 'null cache'; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; # With php5-fpm: fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # WebSocket support ? not working #proxy_pass http://127.0.0.1:8765; #proxy_redirect off; #proxy_http_version 1.1; #proxy_set_header Upgrade $http_upgrade; #proxy_set_header Connection "upgrade"; #proxy_buffering off; #debug requests #echo_duplicate 1 $echo_client_request_headers; #echo "\r"; #echo_read_request_body; #echo $request_body; } } > Le 4 mars 2016 ? 18:56, Valentin V. Bartenev a ?crit : > > On Friday 04 March 2016 17:41:42 Giulio Loffreda wrote: >> Thank you for your answer. 
>> >> Here is my log after the request, my request, my response and my nginx.conf >> >> root at vps190138:~# cat /var/log/nginx-error.log >> 2016/03/04 15:49:02 [notice] 8990#0: using the "epoll" event method >> 2016/03/04 15:49:02 [notice] 8990#0: nginx/1.8.1 >> 2016/03/04 15:49:02 [notice] 8990#0: OS: Linux 3.13.0-66-generic >> 2016/03/04 15:49:02 [notice] 8990#0: getrlimit(RLIMIT_NOFILE): 1024:4096 >> 2016/03/04 15:49:02 [notice] 8991#0: start worker processes >> 2016/03/04 15:49:02 [notice] 8991#0: start worker process 8993 >> > [..] > > You have another error_log directive on the "http" level of your configuration, > and it overrides log settings set on the main config level. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From reallfqq-nginx at yahoo.fr Fri Mar 4 20:46:30 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 4 Mar 2016 21:46:30 +0100 Subject: TCP Connection In-Reply-To: References: <1494504.esqr1lxAn2@vbart-workstation> <3329837.uuY5S0hQFr@vbart-workstation> Message-ID: You have been told to use the info error_log level, which is not set (defaults to error). ?Make sure no other error_log directive might overwrite this one.? --- *B. R.* On Fri, Mar 4, 2016 at 7:12 PM, Giulio Loffreda wrote: > Still having no luck. > No log is generated. > > TCP connection has any relation to web socket support configuration ? > > this is my site config. 
> > server { > listen 80; > server_name myhost.com; > access_log /var/log/nginx/myhost.com.log combined; > error_log /var/log/nginx/myhost.com.error.log; > > root /home/spark/myhost.com/web; > try_files $uri /index.php; > > set $cache_uri $request_uri; > if ($request_method = POST) { > set $cache_uri 'null cache'; > } > location ~ \.php$ { > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > # With php5-fpm: > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > > # WebSocket support ? not working > #proxy_pass http://127.0.0.1:8765; > #proxy_redirect off; > #proxy_http_version 1.1; > #proxy_set_header Upgrade $http_upgrade; > #proxy_set_header Connection "upgrade"; > #proxy_buffering off; > > #debug requests > #echo_duplicate 1 $echo_client_request_headers; > #echo "\r"; > #echo_read_request_body; > #echo $request_body; > } > } > > > > Le 4 mars 2016 ? 18:56, Valentin V. Bartenev a ?crit : > > > > On Friday 04 March 2016 17:41:42 Giulio Loffreda wrote: > >> Thank you for your answer. > >> > >> Here is my log after the request, my request, my response and my > nginx.conf > >> > >> root at vps190138:~# cat /var/log/nginx-error.log > >> 2016/03/04 15:49:02 [notice] 8990#0: using the "epoll" event method > >> 2016/03/04 15:49:02 [notice] 8990#0: nginx/1.8.1 > >> 2016/03/04 15:49:02 [notice] 8990#0: OS: Linux 3.13.0-66-generic > >> 2016/03/04 15:49:02 [notice] 8990#0: getrlimit(RLIMIT_NOFILE): 1024:4096 > >> 2016/03/04 15:49:02 [notice] 8991#0: start worker processes > >> 2016/03/04 15:49:02 [notice] 8991#0: start worker process 8993 > >> > > [..] > > > > You have another error_log directive on the "http" level of your > configuration, > > and it overrides log settings set on the main config level. > > > > wbr, Valentin V. 
Bartenev > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Fri Mar 4 21:48:35 2016 From: steve at greengecko.co.nz (steve) Date: Sat, 5 Mar 2016 10:48:35 +1300 Subject: TCP Connection In-Reply-To: References: <1494504.esqr1lxAn2@vbart-workstation> <3329837.uuY5S0hQFr@vbart-workstation> Message-ID: <56DA02B3.4080403@greengecko.co.nz> does /var/log/nginx exist, and it is writeable by the web server user? On 03/05/2016 07:12 AM, Giulio Loffreda wrote: > Still having no luck. > No log is generated. > > TCP connection has any relation to web socket support configuration ? > > this is my site config. > > server { > listen 80; > server_name myhost.com; > access_log /var/log/nginx/myhost.com.log combined; > error_log /var/log/nginx/myhost.com.error.log; > > root /home/spark/myhost.com/web; > try_files $uri /index.php; > > set $cache_uri $request_uri; > if ($request_method = POST) { > set $cache_uri 'null cache'; > } > location ~ \.php$ { > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > # With php5-fpm: > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > # WebSocket support ? not working > #proxy_pass http://127.0.0.1:8765; > #proxy_redirect off; > #proxy_http_version 1.1; > #proxy_set_header Upgrade $http_upgrade; > #proxy_set_header Connection "upgrade"; > #proxy_buffering off; > > #debug requests > #echo_duplicate 1 $echo_client_request_headers; > #echo "\r"; > #echo_read_request_body; > #echo $request_body; > } > } > > >> Le 4 mars 2016 ? 18:56, Valentin V. 
Bartenev a ?crit : >> >> On Friday 04 March 2016 17:41:42 Giulio Loffreda wrote: >>> Thank you for your answer. >>> >>> Here is my log after the request, my request, my response and my nginx.conf >>> >>> root at vps190138:~# cat /var/log/nginx-error.log >>> 2016/03/04 15:49:02 [notice] 8990#0: using the "epoll" event method >>> 2016/03/04 15:49:02 [notice] 8990#0: nginx/1.8.1 >>> 2016/03/04 15:49:02 [notice] 8990#0: OS: Linux 3.13.0-66-generic >>> 2016/03/04 15:49:02 [notice] 8990#0: getrlimit(RLIMIT_NOFILE): 1024:4096 >>> 2016/03/04 15:49:02 [notice] 8991#0: start worker processes >>> 2016/03/04 15:49:02 [notice] 8991#0: start worker process 8993 >>> >> [..] >> >> You have another error_log directive on the "http" level of your configuration, >> and it overrides log settings set on the main config level. >> >> wbr, Valentin V. Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From vbart at nginx.com Fri Mar 4 22:25:16 2016 From: vbart at nginx.com (=?utf-8?B?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Sat, 05 Mar 2016 01:25:16 +0300 Subject: TCP Connection In-Reply-To: References: <3329837.uuY5S0hQFr@vbart-workstation> Message-ID: <1612169.XtTRzRDQuF@vbart-laptop> On Friday 04 March 2016 19:12:56 Giulio Loffreda wrote: > Still having no luck. > No log is generated. > > TCP connection has any relation to web socket support configuration ? > > this is my site config. > > server { > listen 80; > server_name myhost.com; > access_log /var/log/nginx/myhost.com.log combined; > error_log /var/log/nginx/myhost.com.error.log; [..] 
You have another error_log directive on the "server" level that overrides the directives from previous levels. wbr, Valentin V. Bartenev From giulio at loffreda.com.br Fri Mar 4 22:25:27 2016 From: giulio at loffreda.com.br (Giulio Loffreda) Date: Fri, 4 Mar 2016 23:25:27 +0100 Subject: TCP Connection In-Reply-To: <56DA02B3.4080403@greengecko.co.nz> References: <1494504.esqr1lxAn2@vbart-workstation> <3329837.uuY5S0hQFr@vbart-workstation> <56DA02B3.4080403@greengecko.co.nz> Message-ID: Here is my nginx.conf and site config. After calling tcp connection and getting http 400, no log is generated. I?m following Thanks server { listen 80; server_name apidvc.sparkgo.cc; access_log /var/log/nginx/myhost.com.access.log combined; root /home/spark/myhost.com/web; try_files $uri /index.php; set $cache_uri $request_uri; if ($request_method = POST) { set $cache_uri 'null cache'; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-fpm: fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; #debug request #echo_duplicate 1 $echo_client_request_headers; #echo "\r"; #echo_read_request_body; #echo $request_body; } } The file exists and it is fed as you can see below when I restart nginx: root at vps190138:~# cat /var/log/nginx-error.log 2016/03/04 21:14:55 [notice] 13900#0: signal 3 (SIGQUIT) received, shutting down 2016/03/04 21:14:55 [notice] 13902#0: gracefully shutting down 2016/03/04 21:14:55 [notice] 13902#0: exiting 2016/03/04 21:14:55 [notice] 13902#0: exit 2016/03/04 21:14:55 [notice] 13900#0: signal 15 (SIGTERM) received, exiting 2016/03/04 21:14:55 [notice] 13902#0: signal 15 (SIGTERM) received, exiting 2016/03/04 21:14:55 [notice] 13900#0: signal 17 (SIGCHLD) received 2016/03/04 21:14:55 [notice] 13900#0: worker process 13902 exited with code 0 2016/03/04 21:14:55 
[notice] 13900#0: exit 2016/03/04 21:14:55 [notice] 13949#0: using the "epoll" event method 2016/03/04 21:14:55 [notice] 13949#0: nginx/1.8.1 2016/03/04 21:14:55 [notice] 13949#0: OS: Linux 3.13.0-66-generic 2016/03/04 21:14:55 [notice] 13949#0: getrlimit(RLIMIT_NOFILE): 1024:4096 2016/03/04 21:14:55 [notice] 13950#0: start worker processes 2016/03/04 21:14:55 [notice] 13950#0: start worker process 13952 root at vps190138:~# -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 1628 bytes Desc: not available URL: -------------- next part -------------- > Le 4 mars 2016 à 22:48, steve a écrit : > > does /var/log/nginx exist, and is it writeable by the web server user? > > On 03/05/2016 07:12 AM, Giulio Loffreda wrote: >> Still having no luck. >> No log is generated. >> >> TCP connection has any relation to web socket support configuration ? >> >> this is my site config. >> >> server { >> listen 80; >> server_name myhost.com; >> access_log /var/log/nginx/myhost.com.log combined; >> error_log /var/log/nginx/myhost.com.error.log; >> >> root /home/spark/myhost.com/web; >> try_files $uri /index.php; >> >> set $cache_uri $request_uri; >> if ($request_method = POST) { >> set $cache_uri 'null cache'; >> } >> location ~ \.php$ { >> try_files $uri =404; >> fastcgi_split_path_info ^(.+\.php)(/.+)$; >> >> # With php5-fpm: >> fastcgi_pass unix:/var/run/php5-fpm.sock; >> fastcgi_index index.php; >> include fastcgi_params; >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> >> # WebSocket support - not working >> #proxy_pass http://127.0.0.1:8765; >> #proxy_redirect off; >> #proxy_http_version 1.1; >> #proxy_set_header Upgrade $http_upgrade; >> #proxy_set_header Connection "upgrade"; >> #proxy_buffering off; >> >> #debug requests >> #echo_duplicate 1 $echo_client_request_headers; >> #echo "\r"; >> #echo_read_request_body; >> #echo $request_body; >> } >> } >> >> >>> Le 4 mars 2016 à
18:56, Valentin V. Bartenev a écrit : >>> >>> On Friday 04 March 2016 17:41:42 Giulio Loffreda wrote: >>>> Thank you for your answer. >>>> >>>> Here is my log after the request, my request, my response and my nginx.conf >>>> >>>> root at vps190138:~# cat /var/log/nginx-error.log >>>> 2016/03/04 15:49:02 [notice] 8990#0: using the "epoll" event method >>>> 2016/03/04 15:49:02 [notice] 8990#0: nginx/1.8.1 >>>> 2016/03/04 15:49:02 [notice] 8990#0: OS: Linux 3.13.0-66-generic >>>> 2016/03/04 15:49:02 [notice] 8990#0: getrlimit(RLIMIT_NOFILE): 1024:4096 >>>> 2016/03/04 15:49:02 [notice] 8991#0: start worker processes >>>> 2016/03/04 15:49:02 [notice] 8991#0: start worker process 8993 >>>> >>> [..] >>> >>> You have another error_log directive on the "http" level of your configuration, >>> and it overrides log settings set on the main config level. >>> >>> wbr, Valentin V. Bartenev >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Mar 4 22:55:51 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Fri, 04 Mar 2016 17:55:51 -0500 Subject: Nginx 1.9.11 and OpenSSL 1.0.2G - HTTP2, but no ALPN negotiated. In-Reply-To: <56D88971.2050506@nginx.com> References: <56D88971.2050506@nginx.com> Message-ID: Hello, Great, thanks Andrew!
Best Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265036,265115#msg-265115 From vbart at nginx.com Fri Mar 4 23:20:41 2016 From: vbart at nginx.com (=?utf-8?B?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Sat, 05 Mar 2016 02:20:41 +0300 Subject: TCP Connection In-Reply-To: References: <56DA02B3.4080403@greengecko.co.nz> Message-ID: <1995423.qkbOhDgudR@vbart-laptop> On Friday 04 March 2016 23:25:27 Giulio Loffreda wrote: > Here is my nginx.conf and site config. > After calling tcp connection and getting http 400, no log is generated. > > I?m following > > Thanks > > server { > listen 80; > server_name apidvc.sparkgo.cc; > access_log /var/log/nginx/myhost.com.access.log combined; > > root /home/spark/myhost.com/web; > try_files $uri /index.php; > > set $cache_uri $request_uri; > if ($request_method = POST) { > set $cache_uri 'null cache'; > } > location ~ \.php$ { > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini > > # With php5-fpm: > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > #debug request > #echo_duplicate 1 $echo_client_request_headers; > #echo "\r"; > #echo_read_request_body; > #echo $request_body; > } > } > > > The file exists and it is fed as you can see below when I restart nginx: > > root at vps190138:~# cat /var/log/nginx-error.log > 2016/03/04 21:14:55 [notice] 13900#0: signal 3 (SIGQUIT) received, shutting down > 2016/03/04 21:14:55 [notice] 13902#0: gracefully shutting down > 2016/03/04 21:14:55 [notice] 13902#0: exiting > 2016/03/04 21:14:55 [notice] 13902#0: exit > 2016/03/04 21:14:55 [notice] 13900#0: signal 15 (SIGTERM) received, exiting > 2016/03/04 21:14:55 [notice] 13902#0: signal 15 (SIGTERM) received, exiting > 2016/03/04 21:14:55 [notice] 13900#0: signal 17 (SIGCHLD) received > 2016/03/04 21:14:55 [notice] 13900#0: 
worker process 13902 exited with code 0 > 2016/03/04 21:14:55 [notice] 13900#0: exit > 2016/03/04 21:14:55 [notice] 13949#0: using the "epoll" event method > 2016/03/04 21:14:55 [notice] 13949#0: nginx/1.8.1 > 2016/03/04 21:14:55 [notice] 13949#0: OS: Linux 3.13.0-66-generic > 2016/03/04 21:14:55 [notice] 13949#0: getrlimit(RLIMIT_NOFILE): 1024:4096 > 2016/03/04 21:14:55 [notice] 13950#0: start worker processes > 2016/03/04 21:14:55 [notice] 13950#0: start worker process 13952 > root at vps190138:~# Do you have other configuration files in your /etc/nginx/conf.d/ and /etc/nginx/sites-enabled/ directories, which are included in your nginx.conf? wbr, Valentin V. Bartenev From maxim at nginx.com Sat Mar 5 06:01:59 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Sat, 5 Mar 2016 09:01:59 +0300 Subject: TCP Connection In-Reply-To: References: <1494504.esqr1lxAn2@vbart-workstation> <3329837.uuY5S0hQFr@vbart-workstation> <56DA02B3.4080403@greengecko.co.nz> Message-ID: <56DA7657.5010905@nginx.com> Giulio, On 3/5/16 1:25 AM, Giulio Loffreda wrote: > Here is my nginx.conf and site config. After calling tcp > connection and getting http 400, no log is generated. > > I'm following > Let me re-phrase: this is not your _full_ nginx config. There are probably other files in /etc/nginx or somewhere in your system that include the snippet below and/or other files. We know that because nginx just won't start with the snippet below for a number of reasons. To diagnose the issue and provide advice we need the full nginx configuration. If your nginx version (see nginx -V) is more or less recent, i.e. 1.9.2+, please run the "nginx -T" command to collect the configuration. Otherwise just find the folder with the nginx configs and check for other error_log directives (e.g. grep -r error_log /etc/nginx/).
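The override behavior discussed in this thread can be illustrated with a minimal sketch (all paths and names below are placeholders, not the poster's actual files): nginx uses the most specific error_log directive for each context, so a server-level directive silently diverts request-time errors away from the main log, which then shows only start/stop notices.

```nginx
# Minimal sketch of error_log inheritance -- paths are hypothetical.
error_log /var/log/nginx-error.log notice;   # main level

http {
    server {
        listen 80;
        server_name example.com;
        # This directive wins for everything handled by this server:
        # request-time errors land here, while the main-level log
        # only ever sees startup and shutdown notices.
        error_log /var/log/nginx/example.com.error.log;
    }
}
```

As Maxim notes, grep -r error_log /etc/nginx/ (or nginx -T on 1.9.2+) is the quickest way to find every such directive across included files.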
Hope this helps, Maxim -- Maxim Konovalov From nginx-forum at forum.nginx.org Sat Mar 5 18:02:01 2016 From: nginx-forum at forum.nginx.org (Guest84) Date: Sat, 05 Mar 2016 13:02:01 -0500 Subject: XSLT and autoindex XML output: conditionally transform XML only when it's autoindex output? In-Reply-To: <20160226192728.GI1402@barfooze.de> References: <20160226192728.GI1402@barfooze.de> Message-ID: 16c999e8c71134401a78d4d46435517b2271d6ac mojombo at 16c999e8c71134401a78d4d46435517b2271d6ac mojombo/github-flavored-markdown at 16c999e8c71134401a78d4d46435517b2271d6ac Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264890,265120#msg-265120 From chino.aureus at gmail.com Mon Mar 7 14:10:26 2016 From: chino.aureus at gmail.com (Chino Aureus) Date: Mon, 7 Mar 2016 22:10:26 +0800 Subject: TCP Load Balancing with Proxy Protocol Message-ID: Hi, Newbie here. Would like to ask if NGINX supports Proxy Protocol. This to expose client IP addresses to the backend nodes. Regards, Chino -------------- next part -------------- An HTML attachment was scrubbed... URL: From pavlo at lotusflare.com Mon Mar 7 14:33:48 2016 From: pavlo at lotusflare.com (Pavlo Zhuk) Date: Mon, 7 Mar 2016 16:33:48 +0200 Subject: TCP Load Balancing with Proxy Protocol In-Reply-To: References: Message-ID: Yes, it is https://www.nginx.com/resources/admin-guide/proxy-protocol/ On Mon, Mar 7, 2016 at 4:10 PM, Chino Aureus wrote: > > Hi, > > Newbie here. Would like to ask if NGINX supports Proxy Protocol. This to > expose client IP addresses to the backend nodes. > > > Regards, > Chino > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- BR, Pavlo Zhuk +38093 2412222 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From al-nginx at none.at Mon Mar 7 14:57:38 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 07 Mar 2016 15:57:38 +0100 Subject: TCP Load Balancing with Proxy Protocol In-Reply-To: References: Message-ID: <2006fdd55b5338218c373e1c9e50d5fe@none.at> Hi. Am 07-03-2016 15:33, schrieb Pavlo Zhuk: > Yes, it is https://www.nginx.com/resources/admin-guide/proxy-protocol/ Please pay attention that only V1 is supported! http://hg.nginx.org/nginx/file/tip/src/core/ngx_proxy_protocol.c http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#send-proxy If you want to use the check option in haproxy then please also take a look into this. http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#check-send-proxy BR Aleks > On Mon, Mar 7, 2016 at 4:10 PM, Chino Aureus > wrote: > >> Hi, >> >> Newbie here. Would like to ask if NGINX supports Proxy Protocol. >> This to expose client IP addresses to the backend nodes. >> >> Regards, >> Chino >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -- > > BR, > Pavlo Zhuk > +38093 2412222 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From krishna at Brocade.com Mon Mar 7 19:08:04 2016 From: krishna at Brocade.com (Krishna Kumar K K) Date: Mon, 7 Mar 2016 19:08:04 +0000 Subject: This webpage has a redirect loop (ERR_TOO_MANY_REDIRECTS) Message-ID: Hi, My set up is as below: NGINX (reverse proxy) -->IBM WebSeal (redirects to a common login page, after authentication forwards to internal proxy along with the redirected url) --> Internal Proxy (IBM http Server) --> WebSphere Portal. 
I am trying to access https:///wps/seedlist/myserver?Source=com.ibm.lotus.search.plugins.seedlist.retriever.portal.PortalRetrieverFactory&Action=GetDocuments&Range=100&locale=en Host_name is the server_name on NGINX and the url is in the Portal server. When I am accessing it directly, replacing the host_name with Portal server IP/port, it works. With the host_name, I am getting the message as in the subject line, on the browser. My nginx config is below:- #Security server_tokens off; #Turn off version number add_header X-Frame-Options "SAMEORIGIN"; #Turn off click jacking; so no frames add_header X-XSS-Protection "1; mode=block"; add_header X-Content-Type-Options nosniff; # Redirect all insecure requests to the secure port server { listen :80 ; server_name ; return 301 https://$request_uri; } # Serve SSL encrypted data server { listen :443 default_server ssl; add_header Strict-Transport-Security max-age=15768000; server_name ; access_log /web/nginx/servers/name/logs/access.log; error_log /web/nginx/servers/name/logs/ error.log; # Security ssl on; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_dhparam /etc/ssl/certs/dhparam.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4'; # Specify the certificate and key ssl_certificate /etc/nginx/ssl/name/server.name.com.crt; ssl_certificate_key /etc/nginx/ssl/name/server.name.com.key; location /download/ { rewrite ^/download/vadxeval$ "https:///mybrocade/secure/navigate?nid=n32&prodCode=VIRTUAL_ADX&pname=VADX_DOWNLOAD&completePath=downloads/Virtual ADX/Virtual ADX_Eval" break; rewrite ^/download/apitoolkit$ "https:// /mybrocade/secure/navigate?nid=n30&prodCode=BRD_API_SUPPORT&prodCatCode=API&pname=VYATTA_DOWNLOAD&completePath=Brocade API Toolkit" break; } location / { rewrite ^/$ https:// /wps/myportal/ break; rewrite ^/wps/portal$ http:// /wps/myportal/ break; index 
index.html; root /web/nginx/servers/name/conf; proxy_set_header Host $server_name; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http:///; proxy_read_timeout 90; } } Please help. Thanks, Krishna -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Mar 7 19:23:34 2016 From: nginx-forum at forum.nginx.org (krishna@brocade.com) Date: Mon, 07 Mar 2016 14:23:34 -0500 Subject: This webpage has a redirect loop (ERR_TOO_MANY_REDIRECTS) Message-ID: <42bb0ece4a81095a0e57a5d8e6b57ad8.NginxMailingListEnglish@forum.nginx.org> Hi, My set up is as below: NGINX (reverse proxy) --> IBM WebSeal (redirects to a common login page, after authentication forwards to internal proxy along with the redirected url) --> Internal Proxy (IBM http Server) --> WebSphere Portal. I am trying to access https:///wps/seedlist/myserver?Source=com.ibm.lotus.search.plugins.seedlist.retriever.portal.PortalRetrieverFactory&Action=GetDocuments&Range=100&locale=en Host_name is the server_name on NGINX and the url is in the Portal server. When I am accessing it directly, replacing the host_name with Portal server IP/port, it works. With the host_name, I am getting the message as in the subject line, on the browser.
My nginx config is below:- #Security server_tokens off; #Turn off version number add_header X-Frame-Options "SAMEORIGIN"; #Turn off click jacking; so no frames add_header X-XSS-Protection "1; mode=block"; add_header X-Content-Type-Options nosniff; # Redirect all insecure requests to the secure port server { listen :80 ; server_name ; return 301 https://$request_uri; } # Serve SSL encrypted data server { listen :443 default_server ssl; add_header Strict-Transport-Security max-age=15768000; server_name ; access_log /web/nginx/servers/name/logs/access.log; error_log /web/nginx/servers/name/logs/ error.log; # Security ssl on; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_dhparam /etc/ssl/certs/dhparam.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4'; # Specify the certificate and key ssl_certificate /etc/nginx/ssl/name/server.name.com.crt; ssl_certificate_key /etc/nginx/ssl/name/server.name.com.key; location /download/ { rewrite ^/download/vadxeval$ "https:///mybrocade/secure/navigate?nid=n32&prodCode=VIRTUAL_ADX&pname=VADX_DOWNLOAD&completePath=downloads/Virtual ADX/Virtual ADX_Eval" break; rewrite ^/download/apitoolkit$ "https:// /mybrocade/secure/navigate?nid=n30&prodCode=BRD_API_SUPPORT&prodCatCode=API&pname=VYATTA_DOWNLOAD&completePath=Brocade API Toolkit" break; } location / { rewrite ^/$ https:// /wps/myportal/ break; rewrite ^/wps/portal$ http:// /wps/myportal/ break; index index.html; root /web/nginx/servers/name/conf; proxy_set_header Host $server_name; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http:///; proxy_read_timeout 90; } } Please help. 
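Two details in a configuration like the one above are common causes of ERR_TOO_MANY_REDIRECTS behind a chain of proxies. The sketch below is illustrative only (example.com and the upstream name are placeholders, not the elided values from the post):

```nginx
# Illustrative sketch only -- host names and upstream are hypothetical.
server {
    listen 80;
    server_name example.com;
    # "return 301 https://$request_uri;" produces "https:///..." with
    # an empty host; include $host so the redirect target is valid.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    location / {
        # Forward the original scheme; if the backend chain (WebSeal,
        # IHS, Portal) ignores this header and keeps redirecting plain
        # HTTP to HTTPS, the loop persists on its side.
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass http://portal_backend;   # placeholder upstream
    }
}
```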
Thanks, Krishna Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265136,265136#msg-265136 From nginx-forum at forum.nginx.org Mon Mar 7 19:38:01 2016 From: nginx-forum at forum.nginx.org (krishna@brocade.com) Date: Mon, 07 Mar 2016 14:38:01 -0500 Subject: secure and httponly cookies Message-ID: Hi, How to mark all the cookies from the backend servers as secure and httponly? Is there some config in NGINX available for this? Thanks, Krishna Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265137,265137#msg-265137 From lucas at slcoding.com Mon Mar 7 19:47:16 2016 From: lucas at slcoding.com (Lucas Rolff) Date: Mon, 07 Mar 2016 20:47:16 +0100 Subject: secure and httponly cookies In-Reply-To: References: Message-ID: <56DDDAC4.30109@slcoding.com> This isn't really something you do on your web server but rather in your backend configuration (such as php.ini), etc. > krishna at brocade.com > 7 March 2016 at 20:38 > Hi, > > How to mark all the cookies from the backend servers as secure and > httponly? > > Is there some config in NGINX available for this? > > Thanks, > Krishna > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265137,265137#msg-265137 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Mar 7 19:54:37 2016 From: nginx-forum at forum.nginx.org (krishna@brocade.com) Date: Mon, 07 Mar 2016 14:54:37 -0500 Subject: secure and httponly cookies In-Reply-To: <56DDDAC4.30109@slcoding.com> References: <56DDDAC4.30109@slcoding.com> Message-ID: <1bbdfa1ebe9dd42c03852bc23443bb50.NginxMailingListEnglish@forum.nginx.org> Thanks for the response. Yes, I understand that. But here they don't create a secure or httponly cookie in the backend (webseal/ibm portal).
Earlier we were using ibm http server (IHS) and were adding these flags in the web server itself. Now we are trying to replace IHS with nginx but not able to accomplish the same here. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265137,265140#msg-265140 From lucas at slcoding.com Mon Mar 7 20:00:48 2016 From: lucas at slcoding.com (Lucas Rolff) Date: Mon, 07 Mar 2016 21:00:48 +0100 Subject: secure and httponly cookies In-Reply-To: <1bbdfa1ebe9dd42c03852bc23443bb50.NginxMailingListEnglish@forum.nginx.org> References: <56DDDAC4.30109@slcoding.com> <1bbdfa1ebe9dd42c03852bc23443bb50.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56DDDDF0.7080802@slcoding.com> Without knowing much about webseal (only simple googling), webseal really seems to be a very custom IBM product that does one thing: Integrate into Tivoli Access Manager - meaning it has very specific features (such as single sign-on) etc. nginx is a general webserver, it doesn't hook into your backend system, usually you proxy some requests to it, or serve some files. The only way I can think of is by using Lua to rewrite the Set-Cookie headers, but it's not really a nice solution. krishna at brocade.com wrote: > Thanks for the response. > > Yes, I understand that. But here they don't create a secure or httponly > cookie in the backend (webseal/ibm portal). > > Earlier we were using ibm http server (IHS) and were adding these flags in > the web server itself. > > Now we are trying to replace IHS with nginx but not able to accomplish the > same here.
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265137,265140#msg-265140 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Mar 7 20:15:20 2016 From: nginx-forum at forum.nginx.org (krishna@brocade.com) Date: Mon, 07 Mar 2016 15:15:20 -0500 Subject: secure and httponly cookies In-Reply-To: <56DDDDF0.7080802@slcoding.com> References: <56DDDDF0.7080802@slcoding.com> Message-ID: <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> Here, nginx is proxy passing the requests to webseal and webseal sends the response with cookies. We are trying to rewrite these cookie headers. Could you tell me more about Lua or some links where I can read about it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265137,265142#msg-265142 From aapo.talvensaari at gmail.com Mon Mar 7 20:31:09 2016 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Mon, 7 Mar 2016 22:31:09 +0200 Subject: secure and httponly cookies In-Reply-To: <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 7 March 2016 at 22:15, krishna at brocade.com wrote: > > Could you tell me more about LUA or some links where i can read about it? > Here you go: https://github.com/openresty/lua-nginx-module#header_filter_by_lua There you can replace the Set-Cookie headers, and append HttpOnly and Secure flags. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rpaprocki at fearnothingproductions.net Mon Mar 7 20:36:59 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 7 Mar 2016 12:36:59 -0800 Subject: secure and httponly cookies In-Reply-To: References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> Message-ID: There's a relevant resty library as well - https://github.com/cloudflare/lua-resty-cookie > On Mar 7, 2016, at 12:31, Aapo Talvensaari wrote: > >> On 7 March 2016 at 22:15, krishna at brocade.com wrote: >> Could you tell me more about LUA or some links where i can read about it? > > Here you go: > https://github.com/openresty/lua-nginx-module#header_filter_by_lua > > There you can replace the Set-Cookie-headers, and append HttpOnly and Secure flags. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Mon Mar 7 21:16:03 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 07 Mar 2016 22:16:03 +0100 Subject: secure and httponly cookies In-Reply-To: <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <69c7b4b776c1b56f734511e3ba8b0d83@none.at> Hi. Am 07-03-2016 21:15, schrieb krishna at brocade.com: > Here, nginx is proxy passing the requests to webseal and webseal sends > the > response with cookies. > We are trying to rewrite this cookie headers. Please can you show us how you have tried to do this. 
As you can see on these pages, there should be an option with 'plain' nginx ;-) http://serverfault.com/questions/268633/controlling-nginx-proxy-target-using-a-cookie https://maximilian-boehm.com/hp2134/NGINX-as-Proxy-Rewrite-Set-Cookie-to-Secure-and-HttpOnly.htm Please can you also post the output of nginx -V and the config. Cheers Aleks > Could you tell me more about LUA or some links where i can read about > it? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265137,265142#msg-265142 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From krishna at Brocade.com Mon Mar 7 21:50:00 2016 From: krishna at Brocade.com (Krishna Kumar K K) Date: Mon, 7 Mar 2016 21:50:00 +0000 Subject: secure and httponly cookies In-Reply-To: <69c7b4b776c1b56f734511e3ba8b0d83@none.at> References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> <69c7b4b776c1b56f734511e3ba8b0d83@none.at> Message-ID: <512543939b854d0abe9e6cedcf874c7c@BRMWP-EXMB11.corp.brocade.com> I have tried exactly the same as on this page: proxy_cookie_path / "/; secure; HttpOnly"; it sets the flags on the cookie in the response header, but when I refresh the page, it is sending the cookies in the request headers without these flags, it just resets it. Thanks, Krishna -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Aleksandar Lazic Sent: Monday, March 07, 2016 1:16 PM To: nginx at nginx.org Subject: Re: secure and httponly cookies Hi. Am 07-03-2016 21:15, schrieb krishna at brocade.com: > Here, nginx is proxy passing the requests to webseal and webseal sends > the response with cookies. > We are trying to rewrite this cookie headers. Please can you show us how you have tried to do this.
As you can see on this pages there should be a option with 'plain' nginx ;-) https://urldefense.proofpoint.com/v2/url?u=http-3A__serverfault.com_questions_268633_controlling-2Dnginx-2Dproxy-2Dtarget-2Dusing-2Da-2Dcookie&d=CwICAg&c=IL_XqQWOjubgfqINi2jTzg&r=PZ7-DbptEeW_9SeYl3U87b-UoRqXIcJD3kzHs3AtV7E&m=6gm5ZW2zS0OsqHDgC0ZQdRy2r648aRPQq1pCVy1H4sA&s=RUz0YUGoSUkE6lu5tJ39Q6wGT4OOTv5_pHDdBeUYXs8&e= https://urldefense.proofpoint.com/v2/url?u=https-3A__maximilian-2Dboehm.com_hp2134_NGINX-2Das-2DProxy-2DRewrite-2DSet-2DCookie-2Dto-2DSecure-2Dand-2DHttpOnly.htm&d=CwICAg&c=IL_XqQWOjubgfqINi2jTzg&r=PZ7-DbptEeW_9SeYl3U87b-UoRqXIcJD3kzHs3AtV7E&m=6gm5ZW2zS0OsqHDgC0ZQdRy2r648aRPQq1pCVy1H4sA&s=yaYJMYFzaQG_Jx8xt2eDryBca7PrrSJCMoxoMwcR5xQ&e= Please can you also post the output of nginx -V and the config. Cheers Aleks > Could you tell me more about LUA or some links where i can read about > it? > > Posted at Nginx Forum: > https://urldefense.proofpoint.com/v2/url?u=https-3A__forum.nginx.org_r > ead.php-3F2-2C265137-2C265142-23msg-2D265142&d=CwICAg&c=IL_XqQWOjubgfq > INi2jTzg&r=PZ7-DbptEeW_9SeYl3U87b-UoRqXIcJD3kzHs3AtV7E&m=6gm5ZW2zS0Osq > HDgC0ZQdRy2r648aRPQq1pCVy1H4sA&s=Mv5hguz8jSa78zlUxgzcU4OCcKCRtqjhKZ_xl > wesMOA&e= > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_ > mailman_listinfo_nginx&d=CwICAg&c=IL_XqQWOjubgfqINi2jTzg&r=PZ7-DbptEeW > _9SeYl3U87b-UoRqXIcJD3kzHs3AtV7E&m=6gm5ZW2zS0OsqHDgC0ZQdRy2r648aRPQq1p > CVy1H4sA&s=AFoUlENMfmYahoSjjMns5RW3FemZeDlb6xodRGyXtmA&e= _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=CwICAg&c=IL_XqQWOjubgfqINi2jTzg&r=PZ7-DbptEeW_9SeYl3U87b-UoRqXIcJD3kzHs3AtV7E&m=6gm5ZW2zS0OsqHDgC0ZQdRy2r648aRPQq1pCVy1H4sA&s=AFoUlENMfmYahoSjjMns5RW3FemZeDlb6xodRGyXtmA&e= From krishna at Brocade.com Mon Mar 7 21:52:39 2016 
From: krishna at Brocade.com (Krishna Kumar K K) Date: Mon, 7 Mar 2016 21:52:39 +0000 Subject: secure and httponly cookies References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> <69c7b4b776c1b56f734511e3ba8b0d83@none.at> Message-ID: Nginx -V nginx version: nginx/1.8.0 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' Config:- #Security server_tokens off; #Turn off version number add_header X-Frame-Options "SAMEORIGIN"; #Turn off click jacking; so no frames add_header X-XSS-Protection "1; mode=block"; add_header X-Content-Type-Options nosniff; # Redirect all insecure requests to the secure port server { listen :80 ; server_name ; return 301 https://$request_uri; } # Serve SSL encrypted data server { listen :443 default_server ssl; add_header 
Strict-Transport-Security max-age=15768000; server_name ; access_log /web/nginx/servers/name/logs/access.log; error_log /web/nginx/servers/name/logs/ error.log; # Security ssl on; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_dhparam /etc/ssl/certs/dhparam.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4'; # Specify the certificate and key ssl_certificate /etc/nginx/ssl/name/server.name.com.crt; ssl_certificate_key /etc/nginx/ssl/name/server.name.com.key; location /download/ { rewrite ^/download/vadxeval$ "https:///mybrocade/secure/navigate?nid=n32&prodCode=VIRTUAL_ADX&pname=VADX_DOWNLOAD&completePath=downloads/Virtual ADX/Virtual ADX_Eval" break; rewrite ^/download/apitoolkit$ "https:// /mybrocade/secure/navigate?nid=n30&prodCode=BRD_API_SUPPORT&prodCatCode=API&pname=VYATTA_DOWNLOAD&completePath=Brocade API Toolkit" break; } location / { rewrite ^/$ https:// /wps/myportal/ break; rewrite ^/wps/portal$ http:// /wps/myportal/ break; index index.html; root /web/nginx/servers/name/conf; proxy_set_header Host $server_name; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http:///; proxy_read_timeout 90; } } -----Original Message----- From: Krishna Kumar K K Sent: Monday, March 07, 2016 1:50 PM To: nginx at nginx.org Subject: RE: secure and httponly cookies I have tried exactly the same as in this page:- proxy_cookie_path / "/; secure; HttpOnly"; it sets the flags on the cookie in the response header, but when I refresh the page, it is sending the cookies in the requests header without these flags, it just resets it. 
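For context on the directive Krishna is using: proxy_cookie_path rewrites only the Set-Cookie response header coming from the upstream. A minimal sketch (the upstream name is a placeholder):

```nginx
location / {
    proxy_pass http://backend;   # placeholder upstream
    # Rewrites "Path=/" in Set-Cookie responses from the backend to
    # "Path=/; secure; HttpOnly"; it does not (and cannot) alter the
    # Cookie header that browsers send on subsequent requests.
    proxy_cookie_path / "/; secure; HttpOnly";
}
```

Note that the flags "disappearing" on refresh is expected: a Cookie request header carries only name=value pairs, never attributes such as Secure or HttpOnly, so their absence on later requests does not mean the rewrite was reset.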
Thanks, Krishna -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Aleksandar Lazic Sent: Monday, March 07, 2016 1:16 PM To: nginx at nginx.org Subject: Re: secure and httponly cookies Hi. Am 07-03-2016 21:15, schrieb krishna at brocade.com: > Here, nginx is proxy passing the requests to webseal and webseal sends > the response with cookies. > We are trying to rewrite this cookie headers. Please can you show us how you have tried to do this. As you can see on this pages there should be a option with 'plain' nginx ;-) https://urldefense.proofpoint.com/v2/url?u=http-3A__serverfault.com_questions_268633_controlling-2Dnginx-2Dproxy-2Dtarget-2Dusing-2Da-2Dcookie&d=CwICAg&c=IL_XqQWOjubgfqINi2jTzg&r=PZ7-DbptEeW_9SeYl3U87b-UoRqXIcJD3kzHs3AtV7E&m=6gm5ZW2zS0OsqHDgC0ZQdRy2r648aRPQq1pCVy1H4sA&s=RUz0YUGoSUkE6lu5tJ39Q6wGT4OOTv5_pHDdBeUYXs8&e= https://urldefense.proofpoint.com/v2/url?u=https-3A__maximilian-2Dboehm.com_hp2134_NGINX-2Das-2DProxy-2DRewrite-2DSet-2DCookie-2Dto-2DSecure-2Dand-2DHttpOnly.htm&d=CwICAg&c=IL_XqQWOjubgfqINi2jTzg&r=PZ7-DbptEeW_9SeYl3U87b-UoRqXIcJD3kzHs3AtV7E&m=6gm5ZW2zS0OsqHDgC0ZQdRy2r648aRPQq1pCVy1H4sA&s=yaYJMYFzaQG_Jx8xt2eDryBca7PrrSJCMoxoMwcR5xQ&e= Please can you also post the output of nginx -V and the config. Cheers Aleks > Could you tell me more about LUA or some links where i can read about > it? 
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,265137,265142#msg-265142
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Mon Mar 7 22:04:00 2016
From: nginx-forum at forum.nginx.org (j.o.l)
Date: Mon, 07 Mar 2016 17:04:00 -0500
Subject: Is there a length limitation on file extensions?
Message-ID: <1feb27cc68566275c9dbcf383e84c2f3.NginxMailingListEnglish@forum.nginx.org>

I am using Nginx to serve a website that hosts a .Net application. The file a user needs to download and that triggers installation is a *.application file, and an MS Internet Information Server associates that with the mime type application/x-ms-application. However that file never gets any Content-Type header. I edited the mime.types configuration file to include that, but Nginx ignores that. When I rename the file to .app, and use a mime type definition for that file it works.
I also tried various other file extensions of varying length, and it looks like there is a limit of 10 characters for a file type extension in Nginx; if that is exceeded, there will be no Content-Type header.

Is my assumption correct? Where is this in the source files? (I know how to compile Nginx.) Or any other idea what causes my problem?

Thanks, Joachim

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265148,265148#msg-265148

From sca at andreasschulze.de Mon Mar 7 22:33:53 2016
From: sca at andreasschulze.de (A. Schulze)
Date: Mon, 07 Mar 2016 23:33:53 +0100
Subject: Is there a length limitation on file extensions?
In-Reply-To: <1feb27cc68566275c9dbcf383e84c2f3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20160307233353.Horde.7-6OFy3hVgO6PRgEEsZdVub@andreasschulze.de>

j.o.l:
> I am using Nginx to serve a website that hosts a .Net application. The file
> a user needs to download and that triggers installation is a *.application
> file, and an MS Internet Information Server associates that with the mime
> type application/x-ms-application. However that file never gets any
> Content-Type header. I edited the mime.types configuration file to include
> that, but Nginx ignores that. When I rename the file to .app, and use a mime
> type definition for that file it works. I also tried various other file
> extensions of varying length, and it looks like there is a limit of 10
> characters in a file type extension of Nginx, and if that is exceeded there
> will be no Content-Type header.
> Is my assumption correct? Where is this in the source files? (I know how to
> compile Nginx). Or any other idea what causes my problem?
> Thanks, Joachim

works here, so your assumption seems wrong.
server {
    listen *:80;
    server_name localhost;

    location / {
        autoindex on;
        alias /tmp/;
    }

    types {
        application/x-ms-executable application;
    }
}

$ touch /tmp/foo.application
$ curl -I localhost/foo.application
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 07 Mar 2016 22:29:42 GMT
Content-Type: application/x-ms-executable
Content-Length: 0
Last-Modified: Mon, 07 Mar 2016 22:26:52 GMT
Connection: keep-alive
Accept-Ranges: bytes
$

From francis at daoine.org Mon Mar 7 22:57:05 2016
From: francis at daoine.org (Francis Daly)
Date: Mon, 7 Mar 2016 22:57:05 +0000
Subject: secure and httponly cookies
In-Reply-To: <512543939b854d0abe9e6cedcf874c7c@BRMWP-EXMB11.corp.brocade.com>
References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> <69c7b4b776c1b56f734511e3ba8b0d83@none.at> <512543939b854d0abe9e6cedcf874c7c@BRMWP-EXMB11.corp.brocade.com>
Message-ID: <20160307225705.GA29861@daoine.org>

On Mon, Mar 07, 2016 at 09:50:00PM +0000, Krishna Kumar K K wrote:

Hi there,

> I have tried exactly the same as in this page:-
>
> proxy_cookie_path / "/; secure; HttpOnly";
>
> it sets the flags on the cookie in the response header, but when I refresh the page, it is sending the cookies in the requests header without these flags, it just resets it.

That sounds like it is doing exactly what it should, no?

Flags are sent by the server in Set-Cookie response headers.

Cookies are sent by the client (or not) in Cookie request headers.

What behaviour do you want that you are not seeing?
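[The asymmetry described above can be sketched with Python's standard http.cookies module; this illustration is not part of the original thread, and the cookie name and value are made up. The Secure and HttpOnly attributes only ever appear in the server's Set-Cookie line; what a client later sends back in the Cookie header is just the bare name=value pair.]

```python
from http.cookies import SimpleCookie

# Server side: the Set-Cookie response header carries the attributes.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True
print("Set-Cookie:", cookie["session"].OutputString())
# -> Set-Cookie: session=abc123; HttpOnly; Secure

# Client side: a later request carries only the bare pair; the
# attributes are instructions to the client, not data it echoes back.
print("Cookie:", "%s=%s" % (cookie["session"].key, cookie["session"].value))
# -> Cookie: session=abc123
```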
f
--
Francis Daly        francis at daoine.org

From krishna at Brocade.com Tue Mar 8 00:38:48 2016
From: krishna at Brocade.com (Krishna Kumar K K)
Date: Tue, 8 Mar 2016 00:38:48 +0000
Subject: secure and httponly cookies
In-Reply-To: <20160307225705.GA29861@daoine.org>
References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> <69c7b4b776c1b56f734511e3ba8b0d83@none.at> <512543939b854d0abe9e6cedcf874c7c@BRMWP-EXMB11.corp.brocade.com> <20160307225705.GA29861@daoine.org>
Message-ID: <96f6d2a2264d4901b2cdf389e386fac8@BRMWP-EXMB11.corp.brocade.com>

I am able to modify the set-cookie header from the server to flag it secure. I am trying to do the same in the request header as well.

-----Original Message-----
From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Francis Daly
Sent: Monday, March 07, 2016 2:57 PM
To: nginx at nginx.org
Subject: Re: secure and httponly cookies

On Mon, Mar 07, 2016 at 09:50:00PM +0000, Krishna Kumar K K wrote:

Hi there,

> I have tried exactly the same as in this page:-
>
> proxy_cookie_path / "/; secure; HttpOnly";
>
> it sets the flags on the cookie in the response header, but when I refresh the page, it is sending the cookies in the requests header without these flags, it just resets it.

That sounds like it is doing exactly what it should, no?

Flags are sent by the server in Set-Cookie response headers.

Cookies are sent by the client (or not) in Cookie request headers.

What behaviour do you want that you are not seeing?
f
--
Francis Daly        francis at daoine.org

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From aapo.talvensaari at gmail.com Tue Mar 8 07:34:06 2016
From: aapo.talvensaari at gmail.com (Aapo Talvensaari)
Date: Tue, 8 Mar 2016 09:34:06 +0200
Subject: secure and httponly cookies
In-Reply-To: <96f6d2a2264d4901b2cdf389e386fac8@BRMWP-EXMB11.corp.brocade.com>
References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> <69c7b4b776c1b56f734511e3ba8b0d83@none.at> <512543939b854d0abe9e6cedcf874c7c@BRMWP-EXMB11.corp.brocade.com> <20160307225705.GA29861@daoine.org> <96f6d2a2264d4901b2cdf389e386fac8@BRMWP-EXMB11.corp.brocade.com>
Message-ID:

On Tuesday, 8 March 2016, Krishna Kumar K K wrote:
> I am able to modify the set-cookie header from the server to flag it
> secure. I am trying to do the same in the request header as well.

Those flags are instructions to the client. They have no meaning in request headers, only in response headers.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From krishna at Brocade.com Tue Mar 8 07:44:59 2016
From: krishna at Brocade.com (Krishna Kumar K K)
Date: Tue, 8 Mar 2016 07:44:59 +0000
Subject: secure and httponly cookies
In-Reply-To:
References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> <69c7b4b776c1b56f734511e3ba8b0d83@none.at> <512543939b854d0abe9e6cedcf874c7c@BRMWP-EXMB11.corp.brocade.com> <20160307225705.GA29861@daoine.org> <96f6d2a2264d4901b2cdf389e386fac8@BRMWP-EXMB11.corp.brocade.com>
Message-ID: <2c0494e709824c2b9e7980a93ec9307d@BRMWP-EXMB11.corp.brocade.com>

The thing is, it's failing the vulnerability scan (the Nexpose tool is used), which says the cookie is not secure or httponly.

From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Aapo Talvensaari
Sent: Monday, March 07, 2016 11:34 PM
To: nginx at nginx.org
Subject: Re: secure and httponly cookies

On Tuesday, 8 March 2016, Krishna Kumar K K wrote:

I am able to modify the set-cookie header from the server to flag it secure. I am trying to do the same in the request header as well.

Those flags are instructions to the client. They have no meaning in request headers, only in response headers.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From al-nginx at none.at Tue Mar 8 07:59:09 2016
From: al-nginx at none.at (Aleksandar Lazic)
Date: Tue, 08 Mar 2016 08:59:09 +0100
Subject: secure and httponly cookies
In-Reply-To: <2c0494e709824c2b9e7980a93ec9307d@BRMWP-EXMB11.corp.brocade.com>
References: <56DDDDF0.7080802@slcoding.com> <30a5f4a5612d929ddbec32bea384889a.NginxMailingListEnglish@forum.nginx.org> <69c7b4b776c1b56f734511e3ba8b0d83@none.at> <512543939b854d0abe9e6cedcf874c7c@BRMWP-EXMB11.corp.brocade.com> <20160307225705.GA29861@daoine.org> <96f6d2a2264d4901b2cdf389e386fac8@BRMWP-EXMB11.corp.brocade.com> <2c0494e709824c2b9e7980a93ec9307d@BRMWP-EXMB11.corp.brocade.com>
Message-ID: <0bb5455c974d83b987be0cb473d2591a@none.at>

Hi.
Am 08-03-2016 08:44, schrieb Krishna Kumar K K:
> Thing is its failing in the vulnerability scan (nexpose tool is used)
> saying cookie is not secure or httponly.

As Aapo said, the request header is a client header. This is only changeable at the client side with some javascript code. If you want to use such a solution you can try this module.

http://nginx.org/en/docs/http/ngx_http_addition_module.html

But to be more precise: which request header do you want to change?

client request --> nginx request --> IBM WebSeal request --> Other backend
                ???                ???

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header

You can also try to use 'add_header ...' so that the client receives the additional header and sends it back with the following requests.

http://nginx.org/en/docs/http/ngx_http_headers_module.html

As for the scanner, it gets the cookie from the response, not from the request, afaik.

Maybe you can turn on the debug logging and see what the scanner gets as a response.

http://nginx.org/en/docs/debugging_log.html

Maybe you will need the nginx-debug package.

What's the system on which you run nginx?

Aleks

> FROM: nginx [mailto:nginx-bounces at nginx.org] ON BEHALF OF Aapo
> Talvensaari
> SENT: Monday, March 07, 2016 11:34 PM
> TO: nginx at nginx.org
> SUBJECT: Re: secure and httponly cookies
>
> On Tuesday, 8 March 2016, Krishna Kumar K K wrote:
>
>> I am able to modify the set-cookie header from the server to flag it
>> secure. I am trying to do the same in the request header as well.
>
> Those flags are instructions to client. They don't have meaning on
> request headers. Only on response headers.
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From elias.abacioglu at deltaprojects.com Tue Mar 8 12:22:42 2016
From: elias.abacioglu at deltaprojects.com (Elias Abacioglu)
Date: Tue, 8 Mar 2016 13:22:42 +0100
Subject: proxy_cache_path max_size doesn't work
Message-ID:

Hi,

I'm using nginx as a reverse proxy with caching on a ramdisk. /var/cache/nginx is a tmpfs of the size 1800m. And this is my proxy_cache_path line in nginx:

proxy_cache_path /var/cache/nginx levels=1:2 use_temp_path=off keys_zone=default:50m inactive=120m max_size=1500m;

This is the error message when the ramdisk is filled:

[crit] 1460#0: *14844194 pwritev() "/var/cache/nginx/temp/3/12/0001578123" failed (28: No space left on device) while reading upstream, client: XXX, server: , request: "GET /xxx.jpg HTTP/1.1", upstream: "http://127.0.0.1:81/xxx.jpg", host: "xxx.com", referrer: "http://xxx.com/"

I see two problems here.
1. Why didn't max_size work and clear the space in the tmpfs mount when it got past 1500m?
2. Why doesn't nginx just bypass the cache and serve from upstream when the mount is full instead of hard failing?

And what solutions do I have? Do I increase the ramdisk/tmpfs size? Do I decrease the max_size? Should I run an external cronjob that restarts nginx when the ramdisk is more than 1500MB? Or is nginx proxy_cache_path broken and do I need to change to another solution like squid or varnish?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue Mar 8 13:01:08 2016
From: nginx-forum at forum.nginx.org (ben5192)
Date: Tue, 08 Mar 2016 08:01:08 -0500
Subject: NGINX reload memory leak
Message-ID: <0b2a9111413c29c31d7d9a3e1243a319.NginxMailingListEnglish@forum.nginx.org>

Hi,
I am working on a module for NGINX and am having a problem with memory leaking when using "./nginx -s reload".
Everything that is allocated is done so through ngx_palloc or ngx_pcalloc, so NGINX should know to clean it up. I have also added an exit-process function which uses ngx_pfree on everything and then destroys the pool I created with ngx_destroy_pool, so there should be nothing left. Even so, each reload increases the memory slightly till NGINX crashes.

Any ideas on why this is happening?

Thanks,
Ben.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265157,265157#msg-265157

From al-nginx at none.at Tue Mar 8 16:19:08 2016
From: al-nginx at none.at (Aleksandar Lazic)
Date: Tue, 08 Mar 2016 17:19:08 +0100
Subject: NGINX reload memory leak
In-Reply-To: <0b2a9111413c29c31d7d9a3e1243a319.NginxMailingListEnglish@forum.nginx.org>
References: <0b2a9111413c29c31d7d9a3e1243a319.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <753a7d0abd9bda61d8d2eace16e89f7a@none.at>

Hi.

Am 08-03-2016 14:01, schrieb ben5192:
> Hi,
> I am working on a module for NGINX and am having a problem with memory
> leaking when using "./nginx -s reload". Everything that is allocated is done
> so through ngx_palloc or ngx_pcalloc so NGINX should know to clean it up. I
> have also added a function to exit process which uses ngx_pfree on
> everything and then destroys the pool I created with ngx_destroy_pool, so
> there should be nothing left. Even so, each reload increases the memory
> slightly till NGINX crashes.
> Any ideas on why this is happening?

Please can you post:

.) nginx -V
.) the config
.) lsb_release -a or something similar for your system
.) HW/VM/Image setup

> Thanks,
> Ben.
> > Posted at Nginx Forum:
> > https://forum.nginx.org/read.php?2,265157,265157#msg-265157
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx

From rpaprocki at fearnothingproductions.net Tue Mar 8 16:45:44 2016
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Tue, 8 Mar 2016 08:45:44 -0800
Subject: NGINX reload memory leak
In-Reply-To: <0b2a9111413c29c31d7d9a3e1243a319.NginxMailingListEnglish@forum.nginx.org>
References: <0b2a9111413c29c31d7d9a3e1243a319.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

This may be more appropriate for the nginx-devel list. Additionally, when you post there you'll probably want to include your module's source so people can actually assist in debugging.

> On Mar 8, 2016, at 05:01, ben5192 wrote:
>
> Hi,
> I am working on a module for NGINX and am having a problem with memory
> leaking when using "./nginx -s reload". Everything that is allocated is done
> so through ngx_palloc or ngx_pcalloc so NGINX should know to clean it up. I
> have also added a function to exit process which uses ngx_pfree on
> everything and then destroys the pool I created with ngx_destroy_pool, so
> there should be nothing left. Even so, each reload increases the memory
> slightly till NGINX crashes.
> Any ideas on why this is happening?
>
> Thanks,
> Ben.
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265157,265157#msg-265157
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Tue Mar 8 17:03:28 2016
From: nginx-forum at forum.nginx.org (ben5192)
Date: Tue, 08 Mar 2016 12:03:28 -0500
Subject: NGINX reload memory leak
In-Reply-To: <753a7d0abd9bda61d8d2eace16e89f7a@none.at>
References: <753a7d0abd9bda61d8d2eace16e89f7a@none.at>
Message-ID: <4b71e782f51a200e6a1e5424d42993a9.NginxMailingListEnglish@forum.nginx.org>

Yeah, here are the outputs:

nginx -V
nginx version: nginx/1.9.9
configure arguments: --prefix=/path/to/my/module --with-debug --with-ld-opt=-lm

config (with some names changed):

events {
    worker_connections 1046;
}

http {
    my_set_int 10000;
    my_set_string /path/to/a/data/file;
    my_set_another_int 20;

    server {
        listen 8888;
        location / {
            my_location_function string;
            proxy_pass http://localhost/;
        }
    }
}

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.3 LTS
Release:        14.04
Codename:       trusty

This is running in a Hyper-V virtual machine with an i7 processor.

Thanks

Aleksandar Lazic Wrote:
-------------------------------------------------------
> Hi.
>
> Am 08-03-2016 14:01, schrieb ben5192:
> > Hi,
> > I am working on a module for NGINX and am having a problem with memory
> > leaking when using "./nginx -s reload". Everything that is allocated is done
> > so through ngx_palloc or ngx_pcalloc so NGINX should know to clean it up. I
> > have also added a function to exit process which uses ngx_pfree on
> > everything and then destroys the pool I created with ngx_destroy_pool, so
> > there should be nothing left. Even so, each reload increases the memory
> > slightly till NGINX crashes.
> > Any ideas on why this is happening?
>
> Please can you post:
>
> .) nginx -V
> .) the config
> .)
lsb_release -a or something similar for your system
> .) HW/VM/Image setup
>
> > Thanks,
> > Ben.
> >
> > Posted at Nginx Forum:
> > https://forum.nginx.org/read.php?2,265157,265157#msg-265157
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > https://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265157,265160#msg-265160

From nginx-forum at forum.nginx.org Tue Mar 8 17:04:15 2016
From: nginx-forum at forum.nginx.org (ben5192)
Date: Tue, 08 Mar 2016 12:04:15 -0500
Subject: NGINX reload memory leak
In-Reply-To:
References:
Message-ID: <3aaa6a60ae4b935c4a3811d054bb3412.NginxMailingListEnglish@forum.nginx.org>

Ok, thanks. I will post this over there.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265157,265161#msg-265161

From nginx-forum at forum.nginx.org Tue Mar 8 20:34:23 2016
From: nginx-forum at forum.nginx.org (j.o.l)
Date: Tue, 08 Mar 2016 15:34:23 -0500
Subject: Is there a length limitation on file extensions?
In-Reply-To: <20160307233353.Horde.7-6OFy3hVgO6PRgEEsZdVub@andreasschulze.de>
References: <20160307233353.Horde.7-6OFy3hVgO6PRgEEsZdVub@andreasschulze.de>
Message-ID: <9d3368925756b8c2d80945470a97f70e.NginxMailingListEnglish@forum.nginx.org>

Thanks a lot for providing a working example. I reproduced it, and yes, it works with that server block. Then I made changes until I discovered the following.
Here is my server block:

server {
    listen 443 ssl;
    listen 8080;
    server_name xxx;

    ssl_certificate /etc/letsencrypt/live/xxx/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxx/privkey.pem;

    root /usr/local/nginx/conf/websites/xxx;
    autoindex on;

    types {
        application/x-ms-application application;
    }
}

When I access that server using http, Content-Type is returned as defined; when accessing it via https, Content-Type is missing. Strange, isn't it?

Btw. I am using nginx 1.8.0.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265148,265162#msg-265162

From agentzh at gmail.com Tue Mar 8 20:53:47 2016
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Tue, 8 Mar 2016 12:53:47 -0800
Subject: Reminder: The 1st Bay Area OpenResty Meetup Tomorrow Evening
Message-ID:

Hi folks

We're going to have the first Bay Area OpenResty meetup tomorrow evening, in CloudFlare's office, as originally planned about one month ago.

When: 5:30pm ~ 6:30pm, 9 March 2016
Where: 101 Townsend St San Francisco CA
Fee: Free.
Meetup website: http://www.meetup.com/Bay-Area-OpenResty-Meetup/events/228411221/

If you'd like to come, please do RSVP on the web page above so that we can do proper preparation.

Anyone who has good stories, tips, and case studies around OpenResty, NGINX, and/or Lua/LuaJIT in a Web environment is welcome to present a lightning talk (3 ~ 5 min). Presentation proposals are still open. Just drop me a line privately today.

We hope that this is a place for OpenResty users and developers to find each other in person. Several OpenResty/NGINX developers of the CloudFlare Edge team will join us as well. We'll have a screen, a microphone, and some food and beverage. Thanks CloudFlare for kindly sponsoring this event.

Until now, we have the following presentations scheduled:

1. Marco Palladino, the CTO of Mashape, will talk about how Kong leverages OpenResty to provide API and Microservice gateway functionalities to upstream services.

2.
Dragos Dascalita Haut of Adobe will present their work around OpenResty at Adobe.

3. As the creator of OpenResty, I will talk briefly about all the new features that have landed or are landing in OpenResty, like the semaphore API, new downstream SSL features, generic downstream TCP support, balancer_by_lua*, a package management toolchain, and more.

And finally, we'll have a Q&A session in the end.

OpenResty is a full-fledged web platform by integrating the standard Nginx core, LuaJIT, many carefully written Lua libraries, lots of high quality 3rd-party Nginx modules, and most of their external dependencies. It is designed to help developers easily build scalable web applications, web services, and dynamic web gateways. You can find more about OpenResty on its official web site: https://openresty.org/

See you tomorrow!

Best regards,
-agentzh

From nginx-forum at forum.nginx.org Wed Mar 9 06:39:24 2016
From: nginx-forum at forum.nginx.org (codedmart)
Date: Wed, 09 Mar 2016 01:39:24 -0500
Subject: Nginx proxy_pass ssl nodejs socketio error
Message-ID: <4722d8381a314ee16d25bdc83378fea1.NginxMailingListEnglish@forum.nginx.org>

I am trying to track down what is going on but am not having any success. I am using nginx as a proxy for ssl in front of my nodejs app. The http api works fine. I have the websocket on a separate subdomain. It connects sometimes, but I have a ton of these messages in the error_log:

upstream prematurely closed connection while reading response header from upstream

Here is my current config: https://gist.github.com/codedmart/7182a10da31fcaa5143c

Any help or ideas on what to look for would be great. Thanks!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265176,265176#msg-265176

From nginx-forum at forum.nginx.org Wed Mar 9 07:15:33 2016
From: nginx-forum at forum.nginx.org (RemcoJanssen)
Date: Wed, 09 Mar 2016 02:15:33 -0500
Subject: How do I set Keep Alive
Message-ID: <2100410942778f2b4ce626d7edbf71bd.NginxMailingListEnglish@forum.nginx.org>

Hello, I am new to nginx, and Speedgrade suggests I enable keepalive to enhance the performance of my website(s).

I've read*1 that enabling this is also a risk. What is the best practice?

*1 https://www.nginx.com/blog/http-keepalives-and-web-performance/

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265177,265177#msg-265177

From hello at petermolnar.eu Wed Mar 9 11:31:08 2016
From: hello at petermolnar.eu (Peter Molnar)
Date: Wed, 9 Mar 2016 11:31:08 +0000
Subject: unexpected location regex behaviour
Message-ID: <56E0097C.8060107@petermolnar.eu>

Dear nginx.org,

I'm facing some strange, unexpected regex behaviour in my setup. Nginx is 1.9.12, self compiled, with openssl-1.0.2g and with the following modules:

- echo-nginx-module (https://github.com/agentzh/echo-nginx-module.git)
- headers-more-nginx-module (https://github.com/agentzh/headers-more-nginx-module.git)
- ngx_upstream_status (https://github.com/petermolnar/ngx_upstream_status.git)
- ngx-fancyindex (https://github.com/aperezdc/ngx-fancyindex.git)
- ngx_devel_kit (https://github.com/simpl/ngx_devel_kit.git)
- set-misc-nginx-module (https://github.com/openresty/set-misc-nginx-module)

Apart from this issue, everything is fine and working as expected.
The regexes
-----------

```
location ~ "^(?:(?!.*/files/.*-[0-9]{2,4}x[0-9]{2,4}).)*\.jpe?g$" {
    rewrite ^/files(.*) /wp-content/files$1 last;
    allow 127.0.0.1;
    deny all;
}

location ~ "^/files/(.*)$" {
    try_files /wp-content/cache/$1 /wp-content/files/$1 @filesmagic;
}

location @filesmagic {
    rewrite "^/files/(.*?)-[0-9]{2,4}x[0-9]{2,4}\.jpg$"
    /wp-content/cache/$1-180x180.jpg last;
}
```

Expected behaviour
------------------

The goal of the first rule is to block access to original, unresized files in a WordPress setup. Resized files all match the filename-[width]x[height].jpe?g pattern, therefore if a query is a jpg but doesn't match this, it should be blocked outside of localhost.

The second is to have shorter urls; it checks for file existence in the cache and files folders, and in case that fails, it should go to the @filesmagic location.

In @filesmagic, in case the pattern matches the aforementioned resized jpg format but the file doesn't exist (that is how we should have gotten into this location block), show a smaller version which should always exist.

This is what should happen:

http://domain.com/files/large_original_image.jpg
- full size image, should be blocked

http://domain.com/files/large_original_image-1280x1280.jpg
- resized image, file exists, should be served

http://domain.com/files/large_original_image-800x800.jpg
- resized image, file does not exist, smaller file should be served

The actual behaviour
--------------------

When a nonexistent size is queried, the first rule is hit, thus it gets blocked and returns a 403.
This is what happens:

http://domain.com/files/large_original_image.jpg
- full size image, blocked

http://domain.com/files/large_original_image-1280x1280.jpg
- resized image, file exists, served

http://domain.com/files/large_original_image-800x800.jpg
- resized image, file does not exist, blocked by first rule

It is for sure blocked by that rule; in case the block is removed, the serving of the smaller image instead of the nonexistent one works as expected.

Any help would be appreciated.

Thank you in advance,
--
Peter Molnar
hello at petermolnar.eu
https://petermolnar.eu

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0x6C1F051F.asc
Type: application/pgp-keys
Size: 22235 bytes
Desc: not available
URL:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL:

From francis at daoine.org Wed Mar 9 13:24:37 2016
From: francis at daoine.org (Francis Daly)
Date: Wed, 9 Mar 2016 13:24:37 +0000
Subject: unexpected location regex behaviour
In-Reply-To: <56E0097C.8060107@petermolnar.eu>
References: <56E0097C.8060107@petermolnar.eu>
Message-ID: <20160309132437.GB3340@daoine.org>

On Wed, Mar 09, 2016 at 11:31:08AM +0000, Peter Molnar wrote:

Hi there,

> I'm facing some strange, unexpected regex behaviour in my setup.

I think the answer is at http://nginx.org/r/rewrite and the "last" flag.

What do you want to happen if the incoming request is for /wp-content/cache/file-180x180.jpg?

What have you configured nginx to do if the incoming request is for /wp-content/cache/file-180x180.jpg?
> location ~ "^(?:(?!.*/files/.*-[0-9]{2,4}x[0-9]{2,4}).)*\.jpe?g$" {
>     rewrite ^/files(.*) /wp-content/files$1 last;
>     allow 127.0.0.1;
>     deny all;
> }
>
> location ~ "^/files/(.*)$" {
>     try_files /wp-content/cache/$1 /wp-content/files/$1 @filesmagic;
> }
>
> location @filesmagic {
>     rewrite "^/files/(.*?)-[0-9]{2,4}x[0-9]{2,4}\.jpg$"
>     /wp-content/cache/$1-180x180.jpg last;
> }

> The goal of the first rule is to block access to original, unresized
> files in a WordPress setup.

The first rule matches a lot more requests than just those ones.

> This is what should happen:
> http://domain.com/files/large_original_image.jpg
> - full size image, should be blocked

That is, request should match the first location and be processed there. "Processed" is "rewrite and start again"; or "do not rewrite and allow or deny". "Start again" will get it back to this location, but the rewrite will not happen the second time around because the new request does not match the rewrite.

> http://domain.com/files/large_original_image-1280x1280.jpg
> - resized image, file exists, should be served

That is, request should match the second location, and one of the first two try_files arguments should cause it to be served from the filesystem.

> http://domain.com/files/large_original_image-800x800.jpg
> - resized image, file does not exist, smaller file should be served

That is, request should match the second location, and the final try_files argument should cause it to be handled in the third location.

But in that location, the rewrite will happen and the whole thing starts again. Now the request is for /wp-content/cache/large_original_image-180x180.jpg, which matches the first location. In the first location, the rewrite does not match and so the request is allowed or denied.

As I see it, you could either add a location{} to match your /wp-content/cache/ requests; or your @filesmagic rewrite could be to /files/$1-180x180.jpg, so that it will not match the first location.
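[A minimal sketch of the second option Francis mentions, reusing the location names from the thread; this is an untested illustration, and it assumes the -180x180 cache file always exists, as stated earlier in the thread:]

```nginx
location @filesmagic {
    # Rewrite back into /files/ space instead of /wp-content/cache/,
    # so the restarted request matches the "^/files/(.*)$" location
    # again and is resolved by its try_files, never reaching the
    # blocking jpeg rule.
    rewrite "^/files/(.*?)-[0-9]{2,4}x[0-9]{2,4}\.jpg$"
            /files/$1-180x180.jpg last;
}
```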
There probably are other ways too.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From zxcvbn4038 at gmail.com Wed Mar 9 13:48:48 2016
From: zxcvbn4038 at gmail.com (CJ Ess)
Date: Wed, 9 Mar 2016 08:48:48 -0500
Subject: How do I set Keep Alive
In-Reply-To: <2100410942778f2b4ce626d7edbf71bd.NginxMailingListEnglish@forum.nginx.org>
References: <2100410942778f2b4ce626d7edbf71bd.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

If your backend is sensitive to keepalive traffic (mine are), then my advice is to enable keepalives as far into your stack as you can. I.e. I have nginx fronting haproxy and varnish; I enable keepalives to both haproxy and varnish and have them add a "connection: close" header to their backend requests. That keeps the backends happy but still lets the frontends take advantage of connection reuse.

On Wed, Mar 9, 2016 at 2:15 AM, RemcoJanssen wrote:

> Hello, I am new to nginx and Speedgrade suggest I enable keepalive to
> enhance performance of my website(s).
>
> I've read*1 that enabling this is also a risk. What is the best practise?
>
> *1 https://www.nginx.com/blog/http-keepalives-and-web-performance/
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,265177,265177#msg-265177
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Wed Mar 9 14:28:20 2016
From: nginx-forum at forum.nginx.org (vizl)
Date: Wed, 09 Mar 2016 09:28:20 -0500
Subject: Workers CPU leak [epoll_wait,epoll_ctl]
In-Reply-To: <10876994.dJh9ZuiH4b@vbart-workstation>
References: <10876994.dJh9ZuiH4b@vbart-workstation>
Message-ID: <5910987d4d13e23a341902aa1d41b515.NginxMailingListEnglish@forum.nginx.org>

Debug log for the hung PID 7479: http://dev.vizl.org/debug.log.txt

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,265188#msg-265188

From hello at petermolnar.eu Wed Mar 9 14:31:11 2016
From: hello at petermolnar.eu (Peter Molnar)
Date: Wed, 9 Mar 2016 14:31:11 +0000
Subject: unexpected location regex behaviour
In-Reply-To: <20160309132437.GB3340@daoine.org>
References: <56E0097C.8060107@petermolnar.eu> <20160309132437.GB3340@daoine.org>
Message-ID: <56E033AF.2020003@petermolnar.eu>

On 03/09/2016 01:24 PM, Francis Daly wrote:
> On Wed, Mar 09, 2016 at 11:31:08AM +0000, Peter Molnar wrote:
>
> Hi there,
>
>> I'm facing some strange, unexpected regex behaviour in my setup.
>
> I think the answer is at http://nginx.org/r/rewrite and the "last" flag.

You were right about this. If I replace the 'last' with 'break' in the @filesmagic block, it starts working as expected. I somehow mistook the behaviour of last for the behaviour of break in this case.

Thank you!

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From mdounin at mdounin.ru Wed Mar 9 15:12:51 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Mar 2016 18:12:51 +0300 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <5910987d4d13e23a341902aa1d41b515.NginxMailingListEnglish@forum.nginx.org> References: <10876994.dJh9ZuiH4b@vbart-workstation> <5910987d4d13e23a341902aa1d41b515.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160309151251.GB31796@mdounin.ru> Hello! On Wed, Mar 09, 2016 at 09:28:20AM -0500, vizl wrote: > Debug log regarding to hanged PID 7479 http://dev.vizl.org/debug.log.txt This looks like a threads + sendfile() loop due to non-atomic updates of underlying file, similar to one recently reported on the Russian mailing list. Correct solution would be to fix your system to update files atomically instead of overwriting them in place. The patch below will resolve CPU hog and will log an alert instead: # HG changeset patch # User Maxim Dounin # Date 1457536139 -10800 # Wed Mar 09 18:08:59 2016 +0300 # Node ID e96e5dfe4ff8ffe301264c3eb2771596fae24d38 # Parent 93049710cb7f6ea91fa9bd707e88fbe79d82d0ef Truncation detection in sendfile() on Linux. This addresses connection hangs as observed in ticket #504, and CPU hogs with "aio threads; sendfile on" as reported in the mailing list, see http://mailman.nginx.org/pipermail/nginx-ru/2016-March/057638.html. The alert is identical to one used on FreeBSD. 
diff --git a/src/os/unix/ngx_linux_sendfile_chain.c b/src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c +++ b/src/os/unix/ngx_linux_sendfile_chain.c @@ -292,6 +292,19 @@ eintr: } } + if (n == 0) { + /* + * if sendfile returns zero, then someone has truncated the file, + * so the offset became beyond the end of the file + */ + + ngx_log_error(NGX_LOG_ALERT, c->log, 0, + "sendfile() reported that \"%s\" was truncated at %O", + file->file->name.data, file->file_pos); + + return NGX_ERROR; + } + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, "sendfile: %z of %uz @%O", n, size, file->file_pos); @@ -349,6 +362,19 @@ ngx_linux_sendfile_thread(ngx_connection return NGX_ERROR; } + if (ctx->err != NGX_AGAIN && ctx->sent == 0) { + /* + * if sendfile returns zero, then someone has truncated the file, + * so the offset became beyond the end of the file + */ + + ngx_log_error(NGX_LOG_ALERT, c->log, 0, + "sendfile() reported that \"%s\" was truncated at %O", + file->file->name.data, file->file_pos); + + return NGX_ERROR; + } + *sent = ctx->sent; return (ctx->sent == ctx->size) ? NGX_DONE : NGX_AGAIN; -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Mar 9 15:56:40 2016 From: nginx-forum at forum.nginx.org (vizl) Date: Wed, 09 Mar 2016 10:56:40 -0500 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <20160309151251.GB31796@mdounin.ru> References: <20160309151251.GB31796@mdounin.ru> Message-ID: <1e00febf73866843d3bd77d2b0cc968f.NginxMailingListEnglish@forum.nginx.org> Thank you. We don't make any changes with files or overwrite them while sendfile processing it. Only create temp file and then mv it. Maybe Is it the same bug concerned with treads aio like in this messeges, https://forum.nginx.org/read.php?21,264701,265016#msg-265016 and it would be fixed in future ? 
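A minimal illustration of the atomic update Maxim recommends: write the new version to a temporary file on the same filesystem, then mv it over the target. rename(2) swaps the file in atomically, so a concurrent sendfile() always sees either the complete old file or the complete new one, never a truncated one. Paths here are illustrative:

```shell
# Publish a new version of a file without ever truncating it in place.
printf 'old content' > /tmp/published.html        # the currently served file
printf 'new content' > /tmp/published.html.tmp    # build the new version aside
mv /tmp/published.html.tmp /tmp/published.html    # atomic rename(2) swap
cat /tmp/published.html                           # prints: new content
```

The key constraint is that the temporary file and the target live on the same filesystem; a cross-filesystem mv degrades to copy-and-delete, which is not atomic.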
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,265193#msg-265193 From reallfqq-nginx at yahoo.fr Wed Mar 9 16:15:58 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 9 Mar 2016 17:15:58 +0100 Subject: Is there a length limitation on file extensions? In-Reply-To: <9d3368925756b8c2d80945470a97f70e.NginxMailingListEnglish@forum.nginx.org> References: <20160307233353.Horde.7-6OFy3hVgO6PRgEEsZdVub@andreasschulze.de> <9d3368925756b8c2d80945470a97f70e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Are you sure this configuration gets loaded (nginx -t OK, no error on configuration -re-loading)? You could try to replace your types block with an empty one (to override defaults if they are defined at upper level, or simply remove them) and add the following directive: default_type application/x-ms-application You should see the Content-Type set to this special value by requesting with HTTP on port 8080 and HTTPS on port 443. If you get anything different, your request is probably served elsewhere. --- *B. R.* On Tue, Mar 8, 2016 at 9:34 PM, j.o.l wrote: > Thanks a lot for providing a working example. I reproduced it and yes, it works > with that server block. Then I tried changes until I discovered the > following. Here is my server block: > > server { > listen 443 ssl; > listen 8080; > server_name xxx; > ssl_certificate /etc/letsencrypt/live/xxx/fullchain.pem; > ssl_certificate_key /etc/letsencrypt/live/xxx/privkey.pem; > root /usr/local/nginx/conf/websites/xxx; > autoindex on; > types { > application/x-ms-application application; > } > } > > When I access that server using http, Content-Type is returned as defined; > when accessing it via https, Content-Type is missing. Strange, isn't it? > Btw. I am using nginx 1.8.0. 
> > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265148,265162#msg-265162 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Mar 9 16:19:12 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Mar 2016 19:19:12 +0300 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <1e00febf73866843d3bd77d2b0cc968f.NginxMailingListEnglish@forum.nginx.org> References: <20160309151251.GB31796@mdounin.ru> <1e00febf73866843d3bd77d2b0cc968f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160309161912.GG31796@mdounin.ru> Hello! On Wed, Mar 09, 2016 at 10:56:40AM -0500, vizl wrote: > Thank you. > We don't make any changes with files or overwrite them while sendfile > processing it. > Only create temp file and then mv it. The debug log suggests this is not true. > Maybe Is it the same bug concerned with treads aio like in this messeges, > https://forum.nginx.org/read.php?21,264701,265016#msg-265016 > and it would be fixed in future ? Yes, and the patch provided links to the very same thread and resolves the problem in nginx, i.e., CPU hog. With the patch an alert will be correctly logged. Note well that the root cause of CPU hog observed are non-atomic file updates. You will still see other problems till this is resolved (including data corruption), even with the patch. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Mar 9 18:06:53 2016 From: nginx-forum at forum.nginx.org (j.o.l) Date: Wed, 09 Mar 2016 13:06:53 -0500 Subject: Is there a length limitation on file extensions? In-Reply-To: References: Message-ID: <4b20983e2753407467c9a89e745f757c.NginxMailingListEnglish@forum.nginx.org> yes, nginx -t reports OK. I tried the empty types block with default_type as suggested - no change to the behavior. 
I verified that there is no other block processing the request. There is no other block that uses that certificate on port 443, and port 8080 is used for that server block only. I also added an access_log to that server block redirecting output to a different file, and yes, the output goes to that different file. I even defined my own log_format that logs $sent_http_content_type, and it reports application/x-ms-application for http, and - when using https... Any other idea what to try or check? Thanks a lot! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265148,265197#msg-265197 From nginx-forum at forum.nginx.org Wed Mar 9 20:25:09 2016 From: nginx-forum at forum.nginx.org (j.o.l) Date: Wed, 09 Mar 2016 15:25:09 -0500 Subject: Is there a length limitation on file extensions? In-Reply-To: <4b20983e2753407467c9a89e745f757c.NginxMailingListEnglish@forum.nginx.org> References: <4b20983e2753407467c9a89e745f757c.NginxMailingListEnglish@forum.nginx.org> Message-ID: after some more experiments it looks like I was caught out by looking at cached data. I was reloading the file for https as it was displayed as text (happens to be readable xml). However, this reload was answered with a 304 Not Modified without a Content-Type header (which is afaik standards conformant). Only on a hard reload did Chrome get a new copy with the Content-Type header. Goofish or not, maybe others stumble across the same problem and this will help them to resolve the issue. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265148,265200#msg-265200 From sca at andreasschulze.de Wed Mar 9 20:25:08 2016 From: sca at andreasschulze.de (A. Schulze) Date: Wed, 09 Mar 2016 21:25:08 +0100 Subject: Is there a length limitation on file extensions? 
In-Reply-To: <4b20983e2753407467c9a89e745f757c.NginxMailingListEnglish@forum.nginx.org> References: <4b20983e2753407467c9a89e745f757c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160309212508.Horde.CjsxZ6_0tVqc6pUXSj9bzbO@andreasschulze.de> j.o.l: > Any other idea what to try or check? I would write two separate server blocks, server { listen 8080; server_name xxx; root /usr/local/nginx/conf/websites/xxx; autoindex on; types { application/x-ms-application application; } } server { listen 443 ssl; ssl_certificate /etc/letsencrypt/live/xxx/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/xxx/privkey.pem; server_name xxx; root /usr/local/nginx/conf/websites/xxx; autoindex on; types { application/x-ms-application application; } } From zxcvbn4038 at gmail.com Thu Mar 10 03:12:09 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Wed, 9 Mar 2016 22:12:09 -0500 Subject: Does stub_status itself cause performance issues? In-Reply-To: References: Message-ID: I did some performance tests and it seemed to me as if the status stub caused a bit of a performance hit but nothing really concerning. However, the status stub doesn't really give a lot of useful information IMO because it's just supposed to be a placeholder for an nginx+ status page - I'm going the route of having a log tailer extract metrics. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 10 14:08:37 2016 From: nginx-forum at forum.nginx.org (vizl) Date: Thu, 10 Mar 2016 09:08:37 -0500 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <20160309161912.GG31796@mdounin.ru> References: <20160309161912.GG31796@mdounin.ru> Message-ID: <2b9a636aa67a55241dc77e6f920a1e1b.NginxMailingListEnglish@forum.nginx.org> One thing: when we disable threads in config and leave only "sendfile on;", the problem was gone. So if the problem is caused by sendfile and non-atomic file updates, why did it not appear with threads off? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,265214#msg-265214 From nginx-forum at forum.nginx.org Thu Mar 10 16:19:03 2016 From: nginx-forum at forum.nginx.org (mex) Date: Thu, 10 Mar 2016 11:19:03 -0500 Subject: how to forward basic-auth from upstream Message-ID: hi list, i have an nginx in front of apaches, and the apaches hold a list of locations with basic-auth. i cannot pass the auth-request from the upstream through nginx to the user; when i access the urls through nginx i get 403 Forbidden, while direct access sends the correct 401 Authorization Required back. is there a simple way to pass through the auth-request without doing nginx basic-auth itself? what i've done so far (in any variation): location /authme { ... proxy_pass_header Authorization; more_set_headers -s 401 'WWW-Authenticate: Basic realm="$host"'; more_set_input_headers 'Authorization: $http_authorization'; ... } cheers, mex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265220,265220#msg-265220 From zxcvbn4038 at gmail.com Thu Mar 10 17:12:44 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 10 Mar 2016 12:12:44 -0500 Subject: nginx cache - could not allocate node in cache keys zone Message-ID: I have four servers in a pool running nginx with proxy_cache. One of the nodes started spewing "could not allocate node in cache keys zone" errors for every request (which gave 500 status). I did a restart and it started working again. What conditions cause that error in general? If it happens again is there anything I can do to try and determine the root cause? -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Thu Mar 10 19:06:21 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 10 Mar 2016 14:06:21 -0500 Subject: nginx cache - could not allocate node in cache keys zone In-Reply-To: References: Message-ID: Same condition on two more of the servers in the same pool. 
Reload doesn't resolve the issue, but restart does. No limit being hit on disk space, inodes, open files, or memory. On Thu, Mar 10, 2016 at 12:12 PM, CJ Ess wrote: > I have four servers in a pool running nginx with proxy_cache. One of the > nodes started spewing "could not allocate node in cache keys zone" errors > for every request (which gave 500 status). I did a restart and it started > working again. > > What conditions cause that error in general? > > If it happens again is there anything I can do to try and determine the > root cause? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Thu Mar 10 19:07:35 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 10 Mar 2016 14:07:35 -0500 Subject: nginx cache - could not allocate node in cache keys zone In-Reply-To: References: Message-ID: This is nginx/1.9.0 BTW On Thu, Mar 10, 2016 at 2:06 PM, CJ Ess wrote: > Same condition on two more of the servers in the same pool. Reload doesn't > resolve the issue, but restart does. No limit being hit on disk space, > inodes, open files, or memory. > > > On Thu, Mar 10, 2016 at 12:12 PM, CJ Ess wrote: > >> I have four servers in a pool running nginx with proxy_cache. One of the >> nodes started spewing "could not allocate node in cache keys zone" errors >> for every request (which gave 500 status). I did a restart and it started >> working again. >> >> What conditions cause that error in general? >> >> If it happens again is there anything I can do to try and determine the >> root cause? >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Thu Mar 10 19:38:51 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Thu, 10 Mar 2016 20:38:51 +0100 Subject: nginx cache - could not allocate node in cache keys zone In-Reply-To: References: Message-ID: At a guess I would say your key zone is full. Try increasing the size of it. 
On Thu, Mar 10, 2016 at 8:07 PM, CJ Ess wrote: > This is nginx/1.9.0 BTW > > > On Thu, Mar 10, 2016 at 2:06 PM, CJ Ess wrote: > >> Same condition on two more of the servers in the same pool. Reload >> doesn't resolve the issue, but restart does. No limit being hit on disk >> space, inodes, open files, or memory. >> >> >> On Thu, Mar 10, 2016 at 12:12 PM, CJ Ess wrote: >> >>> I have four servers in a pool running nginx with proxy_cache. One of the >>> nodes started spewing "could not allocate node in cache keys zone" errors >>> for every request (which gave 500 status). I did a restart and it started >>> working again. >>> >>> What conditions cause that error in general? >>> >>> If it happens again is there anything I can do to try and determine the >>> root cause? >>> >>> >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Thu Mar 10 21:18:43 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 10 Mar 2016 16:18:43 -0500 Subject: nginx cache - could not allocate node in cache keys zone In-Reply-To: References: Message-ID: I will try that now - but shouldn't it be evicting a key if it can't fit a new one? On Thu, Mar 10, 2016 at 2:38 PM, Richard Stanway wrote: > At a guess I would say your key zone is full. Try increasing the size of > it. > > On Thu, Mar 10, 2016 at 8:07 PM, CJ Ess wrote: > >> This is nginx/1.9.0 BTW >> >> >> On Thu, Mar 10, 2016 at 2:06 PM, CJ Ess wrote: >> >>> Same condition on two more of the servers in the same pool. Reload >>> doesn't resolve the issue, but restart does. No limit being hit on disk >>> space, inodes, open files, or memory. >>> >>> >>> On Thu, Mar 10, 2016 at 12:12 PM, CJ Ess wrote: >>> >>>> I have four servers in a pool running nginx with proxy_cache. 
One of >>>> the nodes started spewing "could not allocate node in cache keys zone" >>>> errors for every request (which gave 500 status). I did a restart and it >>>> started working again. >>>> >>>> What conditions cause that error in general? >>>> >>>> If it happens again is there anything I can do to try and determine the >>>> root cause? >>>> >>>> >>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Thu Mar 10 22:07:05 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 10 Mar 2016 17:07:05 -0500 Subject: nginx cache - could not allocate node in cache keys zone In-Reply-To: References: Message-ID: One other question - is the key zone shared between the worker processes? Or will each worker allocate its own copy? On Thu, Mar 10, 2016 at 4:18 PM, CJ Ess wrote: > I will try that now - but shouldn't it be evicting a key if it can't fit a > new one? > > > On Thu, Mar 10, 2016 at 2:38 PM, Richard Stanway < > r1ch+nginx at teamliquid.net> wrote: > >> At a guess I would say your key zone is full. Try increasing the size of >> it. >> >> On Thu, Mar 10, 2016 at 8:07 PM, CJ Ess wrote: >> >>> This is nginx/1.9.0 BTW >>> >>> >>> On Thu, Mar 10, 2016 at 2:06 PM, CJ Ess wrote: >>> >>>> Same condition on two more of the servers in the same pool. Reload >>>> doesn't resolve the issue, but restart does. No limit being hit on disk >>>> space, inodes, open files, or memory. >>>> >>>> >>>> On Thu, Mar 10, 2016 at 12:12 PM, CJ Ess wrote: >>>> >>>>> I have four servers in a pool running nginx with proxy_cache. 
One of >>>>> the nodes started spewing "could not allocate node in cache keys zone" >>>>> errors for every request (which gave 500 status). I did a restart and it >>>>> started working again. >>>>> >>>>> What conditions cause that error in general? >>>>> >>>>> If it happens again is there anything I can do to try and determine >>>>> the root cause? >>>>> >>>>> >>>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Thu Mar 10 22:55:06 2016 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 10 Mar 2016 14:55:06 -0800 Subject: dns name for upstream Message-ID: Hi, I saw this example at serverfault.com: server { ... resolver 127.0.0.1; set $backend "http://dynamic.example.com:80"; proxy_pass $backend; ... } I have a few questions: 1) If the resolver DNS becomes unavailable (say connection timeout), what will nginx do? Will it keep using the old IPs or will it flush the DNS since TTL expires? If later, the proxy will stop working. 2) In the upstream block, I could define "keepalive #", but with this example, how can I do that? 3) This page http://nginx.org/en/docs/stream/ngx_stream_core_module.html#resolver says "This directive is available as part of our commercial subscription.". Is that still up to date? Can "resolver", "resolver_timeout" be used in free edition now? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From 18620708940 at 163.com Fri Mar 11 03:32:13 2016 From: 18620708940 at 163.com (18620708940 at 163.com) Date: Fri, 11 Mar 2016 11:32:13 +0800 Subject: nginx1.94 ssl use TLS1.0. 
References: Message-ID: <201603111132120998451@163.com> nginx1.94 ssl use TLS1.0. server { listen 443; server_name a.com; ssi on; ssi_silent_errors on; ssi_types text/shtml; ssl on; ssl_certificate ssl_certificate_key ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; ....} but nginx1.6.2 ssl use TLS1.2 18620708940 at 163.com From: Frank Liu Date: 2016-03-11 06:55 To: nginx Subject: dns name for upstream Hi, I saw this example at serverfault.com: server { ... resolver 127.0.0.1; set $backend "http://dynamic.example.com:80"; proxy_pass $backend; ... } I have a few questions: 1) If the resolver DNS becomes unavailable (say connection timeout), what will nginx do? Will it keep using the old IPs or will it flush the DNS since TTL expires? If later, the proxy will stop working. 2) In the upstream block, I could define "keepalive #", but with this example, how can I do that? 3) This page http://nginx.org/en/docs/stream/ngx_stream_core_module.html#resolver says "This directive is available as part of our commercial subscription.". Is that still up to date? Can "resolver", "resolver_timeout" be used in free edition now? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Mar 11 09:47:00 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 11 Mar 2016 10:47:00 +0100 Subject: dns name for upstream In-Reply-To: References: Message-ID: On Thu, Mar 10, 2016 at 11:55 PM, Frank Liu wrote: > server { > ... > resolver 127.0.0.1; > set $backend "http://dynamic.example.com:80"; > proxy_pass $backend; > ... > } > > I have a few questions: > 1) If the resolver DNS becomes unavailable (say connection timeout), what > will nginx do? Will it keep using the old IPs or will it flush the DNS > since TTL expires? If later, the proxy will stop working. > I suppose you will get a 504 'Gateway Timeout'? 
> 2) In the upstream block, I could define "keepalive #", but with this > example, how can I do that? > The keepalive directive is only valid in the upstream block and there does not seem to be any equivalent. You could use an upstream name in your variable to dynamically choose an upstream group in which everything is configured as you wish. 3) This page > http://nginx.org/en/docs/stream/ngx_stream_core_module.html#resolver says > "This directive is available as part of our commercial subscription.". Is > that still up to date? Can "resolver", "resolver_timeout" be used in free > edition now? > nginx Inc. seems to be very conservative about keeping the incentives to their product as part of their business model. No movement on that side since the dawn of time. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Fri Mar 11 09:58:29 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 11 Mar 2016 12:58:29 +0300 Subject: dns name for upstream In-Reply-To: References: Message-ID: <56E296C5.1050305@nginx.com> On 3/11/16 12:47 PM, B.R. wrote: [...] > 3) This > page http://nginx.org/en/docs/stream/ngx_stream_core_module.html#resolver > says "This directive is available as part of our commercial > subscription.". Is that still up to date? Can "resolver", > "resolver_timeout" be used in free edition now? > > nginx Inc. seems to be very conservative about keeping the > incentives to their product as part of their business model. No > movement on that side since the dawn of time. Can you elaborate your point please? -- Maxim Konovalov From vbart at nginx.com Fri Mar 11 11:44:53 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 11 Mar 2016 14:44:53 +0300 Subject: dns name for upstream In-Reply-To: References: Message-ID: <5929338.pLYABk6r8D@vbart-workstation> On Thursday 10 March 2016 14:55:06 Frank Liu wrote: > Hi, > > I saw this example at serverfault.com: > > server { > ... 
> resolver 127.0.0.1; > set $backend "http://dynamic.example.com:80"; > proxy_pass $backend; > ... > } > > I have a few questions: > 1) If the resolver DNS becomes unavailable (say connection timeout), what > will nginx do? Will it keep using the old IPs or will it flush the DNS > since TTL expires? If later, the proxy will stop working. > 2) In the upstream block, I could define "keepalive #", but with this > example, how can I do that? > 3) This page > http://nginx.org/en/docs/stream/ngx_stream_core_module.html#resolver says > "This directive is available as part of our commercial subscription.". Is > that still up to date? Can "resolver", "resolver_timeout" be used in free > edition now? > It's unclear about what module you are asking. Note that the "set" and "keepalive" directives are part of the "http" modules, while in 3-rd question you're asking about the "resolver" directive in the "stream" module. There are also "resolver" and "resolver_timeout" directives in the "http" module, and they are available in free edition: http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Fri Mar 11 12:33:02 2016 From: nginx-forum at forum.nginx.org (djeyewater) Date: Fri, 11 Mar 2016 07:33:02 -0500 Subject: limit_req is limiting requests outside of location block it is applied to Message-ID: <852296bd3c8a98e97206294881d7863b.NginxMailingListEnglish@forum.nginx.org> I have a location ~* \.php$ with limit_req set inside it. But requests outside of this location block, e.g. for .js and .css files are also being limited. I only want to limit the number of requests to .php files. 
This is my config: worker_processes 2; pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 128; sendfile on; keepalive_timeout 10 10; port_in_redirect off; #Fix IP address set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent"'; server { listen 7776; server_name www.test.com; access_log logs/test.log main; error_log logs/test-error.log warn; root /path/to/test.com; location / { index index.php; try_files $uri $uri/ /index.php?$args; } # Pass PHP scripts on to PHP-FPM location ~* \.php$ { limit_req zone=one burst=5; try_files $uri =404; include fastcgi_params; fastcgi_pass unix:/path/to/php.sock; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265244,265244#msg-265244 From mdounin at mdounin.ru Fri Mar 11 13:10:07 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Mar 2016 16:10:07 +0300 Subject: nginx1.94 ssl use TLS1.0. In-Reply-To: <201603111132120998451@163.com> References: <201603111132120998451@163.com> Message-ID: <20160311131007.GG12808@mdounin.ru> Hello! On Fri, Mar 11, 2016 at 11:32:13AM +0800, 18620708940 at 163.com wrote: > nginx1.94 ssl use TLS1.0. > server { > listen 443; > server_name a.com; > > ssi on; > ssi_silent_errors on; > ssi_types text/shtml; > > ssl on; > ssl_certificate > ssl_certificate_key > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers HIGH:!aNULL:!MD5; > ssl_prefer_server_ciphers on; > ....} > but nginx1.6.2 ssl use TLS1.2 Please do not hijack unrelated threads. Thank you. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Mar 11 13:17:03 2016 From: nginx-forum at forum.nginx.org (djeyewater) Date: Fri, 11 Mar 2016 08:17:03 -0500 Subject: limit_req is limiting requests outside of location block it is applied to In-Reply-To: <852296bd3c8a98e97206294881d7863b.NginxMailingListEnglish@forum.nginx.org> References: <852296bd3c8a98e97206294881d7863b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <65e7a67aa2c27f8f5e9cb77467a90842.NginxMailingListEnglish@forum.nginx.org> Never mind, Nginx was actually limiting requests correctly. What I was seeing in my logs was requests for non-existent .js and .css files, which were then being passed to index.php as per my try_files in the / location block. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265244,265248#msg-265248 From phil at dunlop-lello.uk Fri Mar 11 14:40:34 2016 From: phil at dunlop-lello.uk (Phil Lello) Date: Fri, 11 Mar 2016 14:40:34 +0000 Subject: HTTP/2 roadmap Message-ID: Hi, What's the best place to find details on planned features for HTTP/2 support? I've only been looking at HTTP/2 for a few days, so forgive me if this is already covered. It seems pretty obvious to me that it provides an opportunity for potentially significant performance gains if changes are made to the xCGI model, and potentially web applications. Specifically, since there is a quasi-persistent [1] connection between a browser and a server, serialisation of a session object between page requests is no longer necessary, and it can become bound to the transport layer - whilst this may seem to introduce possible race conditions between pages, this is no different from concurrent requests on the same session under HTTP/1.x. A secondary requirement is a mechanism to implement server-push, so that applications can specify page dependencies, rather than requiring inspection of content within the server. Is any work currently being done in this direction? 
Phil [1] Since there needs to be periodic SSL renegotiation which would cause reconnects, as well as sub-optimal network conditions on wireless and mobile networks (and indeed, devices flip-flopping between the two). draft-bishop-support-reneg-00 if adopted should address this in optimal network conditions. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Fri Mar 11 15:01:49 2016 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 11 Mar 2016 07:01:49 -0800 Subject: dns name for upstream In-Reply-To: <5929338.pLYABk6r8D@vbart-workstation> References: <5929338.pLYABk6r8D@vbart-workstation> Message-ID: Hi Valentin, Thanks for clearing up . I was looking at the wrong module. Do you have any comments to the other two questions? Frank On Friday, March 11, 2016, Valentin V. Bartenev wrote: > On Thursday 10 March 2016 14:55:06 Frank Liu wrote: > > Hi, > > > > I saw this example at serverfault.com: > > > > server { > > ... > > resolver 127.0.0.1; > > set $backend "http://dynamic.example.com:80"; > > proxy_pass $backend; > > ... > > } > > > > I have a few questions: > > 1) If the resolver DNS becomes unavailable (say connection timeout), what > > will nginx do? Will it keep using the old IPs or will it flush the DNS > > since TTL expires? If later, the proxy will stop working. > > 2) In the upstream block, I could define "keepalive #", but with this > > example, how can I do that? > > 3) This page > > http://nginx.org/en/docs/stream/ngx_stream_core_module.html#resolver > says > > "This directive is available as part of our commercial subscription.". Is > > that still up to date? Can "resolver", "resolver_timeout" be used in free > > edition now? > > > > It's unclear about what module you are asking. Note that the "set" and > "keepalive" directives are part of the "http" modules, while in 3-rd > question you're asking about the "resolver" directive in the "stream" > module. 
> > There are also "resolver" and "resolver_timeout" directives in the > "http" module, and they are available in free edition: > http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Mar 11 15:58:25 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Mar 2016 18:58:25 +0300 Subject: HTTP/2 roadmap In-Reply-To: References: Message-ID: <20160311155825.GJ12808@mdounin.ru> Hello! On Fri, Mar 11, 2016 at 02:40:34PM +0000, Phil Lello wrote: > Hi, > > What's the best place to find details on planned features for HTTP/2 > support? > > I've only been looking at HTTP/2 for a few days, so forgive me if this is > already covered. > > It seems pretty obvious to me that it provides an opportunity for > potentially significant performance gains if changes are made to the xCGI > model, and potentially web applications. > > Specifically, since there is a quasi-persistent [1] connection between a > browser and a server, serialisation of a session object between page > requests is no longer necessary, and it can become bound to the transport > layer - whilst this may seem to introduce possible race conditions between > pages, this is no different from concurrent requests on the same session > under HTTP/1.x. 
This is not going to work for multiple reasons, at least: - connections can be broken for unrelated reasons (network changes, server reloads, whatever); - transport layer is not guaranteed to be bound to a particular client, and can be used by many different clients instead (e.g., when used by proxy servers); - there may be intermediate servers and different protocols involved, so from backend point of view there will be multiple different connections; We've already seen how connection-oriented model [does not] work for Microsoft with their NTLM authentication scheme. Don't try to repeat their mistakes. > A secondary requirement is a mechanism to implement server-push, so that > can specify page dependencies, rather than requiring > inspection of content within the server. > > Is any work currently being done in this direction? No. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Mar 11 20:12:26 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Fri, 11 Mar 2016 15:12:26 -0500 Subject: Correct Rewrite? Message-ID: We currently use the following method to perform an http to https rewrite. rewrite ^ https://$server_name$request_uri permanent; I am planning to change it to the preferred method of: return 301 https://$server_name$request_uri; However, we'd like to also make sure any requests for domain.com are sent to www.domain.com, whether someone tries to access domain.com via http or https. How would I write the redirect statement to rewrite http:// to https:// but also rewrite domain.com to www.domain.com? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265257,265257#msg-265257 From r1ch+nginx at teamliquid.net Fri Mar 11 20:29:24 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 11 Mar 2016 21:29:24 +0100 Subject: Correct Rewrite? 
In-Reply-To: References: Message-ID: The way I do this is to use multiple server {} blocks, and put all the non-canonical hostnames / port 80 requests in a server block with a return 301 to the canonical (and HTTPS) host, which listens only on 443 with the canonical hostname. On Fri, Mar 11, 2016 at 9:12 PM, mevans336 wrote: > We currently use the following method to perform an http to https rewrite. > > rewrite ^ https://$server_name$request_uri permanent; > > I am planning to change it to the preferred method of: > > return 301 https://$server_name$request_uri; > > However, we'd like to also make sure any requests for domain.com are sent > to > www.domain.com, whether someone tries to access domain.com via http or > https. > > How would I write the redirect statement to rewrite http:// to https:// > but > also rewrite domain.com to www.domain.com? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265257,265257#msg-265257 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Mar 12 05:40:53 2016 From: nginx-forum at forum.nginx.org (zzyber) Date: Sat, 12 Mar 2016 00:40:53 -0500 Subject: Name based virtual hosts not working Message-ID: Hi! I just did a LEMP setup but I can't get my virtual hosts to work properly. I'm totally lost and out of ideas. Of the two domains, the one starting with "e" always responds, showing its root, and the other, starting with "s", is just dead. I understand that nginx handles this like Apache: it analyzes the headers for a match, and if there is no match it goes alphabetically, taking the first as the default domain. I have my configuration files in /etc/nginx/conf.d, domain1.conf and domain2.conf, with the roots pointing to /var/www/domain1/html and /var/www/domain2/html. The domains' A records point to my public IP. What can be wrong here?
So many people have this problem, but I can't find a solution. I hope this forum is home to cutting-edge professionals who can help me out. config file server { listen 80; root /var/www/domain1/html; index index.php index.html index.htm; server_name domain1.com www.domain1.com; location / { try_files $uri $uri/ /index.php; } location ~ \.php$ { try_files $uri =404; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; } } /zzyber Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265259,265259#msg-265259 From nginx-forum at forum.nginx.org Sat Mar 12 07:31:18 2016 From: nginx-forum at forum.nginx.org (reaman) Date: Sat, 12 Mar 2016 02:31:18 -0500 Subject: Problem for play my custom pages error 401 Message-ID: I come to you because I am facing a problem with the customization of the 401 error page. I use Nginx and rutorrent, and I proceeded as follows: I created a folder named "error_pages" and placed it in /var/www/, then I created my pages 400.html 401.html 403.html 404.html 503.html 50x.html. I created the file "error_pages.conf" (/etc/nginx/conf.d/error_pages.conf) and placed the code below in it: error_page 400 /error_pages/400.html; location = /400.html { root /var/www/; } error_page 401 /error_pages/401.html; location = /401.html { root /var/www/; auth_basic off; allow all; } error_page 403 /error_pages/403.html; location = /403.html { root /var/www/; allow all; } error_page 404 /error_pages/404.html; location = /404.html { root /var/www/; auth_basic off; internal; } error_page 503 /error_pages/503.html; error_page 500 501 502 504 /error_pages/50x.html; location ^~ /error_pages { root /var/www/; internal; } Then in the file "rutorrent.conf" located in "/etc/nginx/sites-enabled" I added the following code: server { listen 443 default_server ssl http2; ... include /etc/nginx/conf.d/error_pages.conf; ... I restarted the nginx service.
But obviously my custom error page is not displayed in place of the original one and I do not see why. Can you help me? Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265260,265260#msg-265260 From nginx-forum at forum.nginx.org Sat Mar 12 07:35:09 2016 From: nginx-forum at forum.nginx.org (reaman) Date: Sat, 12 Mar 2016 02:35:09 -0500 Subject: Problem for play my custom pages error 401 In-Reply-To: References: Message-ID: reaman Wrote: ------------------------------------------------------- > I come to you because I am facing a problem with the customization of > the 401 error page. I use Nginx and rutorrent, and I proceeded as > follows: > > I created a folder named "error_pages" and placed it in /var/www/, > then I created my pages 400.html 401.html 403.html 404.html 503.html > 50x.html. > > I created the file "error_pages.conf" > (/etc/nginx/conf.d/error_pages.conf) and placed the code below in it: > > error_page 400 /error_pages/400.html; > location = /400.html { > root /var/www/; > } > > error_page 401 /error_pages/401.html; > location = /401.html { > root /var/www/; > auth_basic off; > allow all; > } > > error_page 403 /error_pages/403.html; > location = /403.html { > root /var/www/; > allow all; > } > > error_page 404 /error_pages/404.html; > location = /404.html { > root /var/www/; > auth_basic off; > internal; > } > > error_page 503 /error_pages/503.html; > error_page 500 501 502 504 /error_pages/50x.html; > location ^~ /error_pages { > root /var/www/; > internal; > } > > Then in the file "rutorrent.conf" located in > "/etc/nginx/sites-enabled" I added the following code: > > server { > listen 443 default_server ssl http2; > ... > include /etc/nginx/conf.d/error_pages.conf; > ... > > I restarted the nginx service. > > But obviously my custom error page is not displayed in place of the > original one and I do not see why. > > Can you help me?
> > Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265260,265261#msg-265261 From anoopalias01 at gmail.com Sat Mar 12 07:36:10 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 12 Mar 2016 13:06:10 +0530 Subject: Name based virtual hosts not working In-Reply-To: References: Message-ID: Can you try listen IP:80; instead of listen 80; where IP is the IP address both domains resolve to (and I assume it's the same IP)? -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Mar 12 07:45:21 2016 From: nginx-forum at forum.nginx.org (reaman) Date: Sat, 12 Mar 2016 02:45:21 -0500 Subject: Problem for play my custom pages error 401 In-Reply-To: References: Message-ID: <5f0af69db641d4704e8de6cb6912272a.NginxMailingListEnglish@forum.nginx.org> reaman Wrote: ------------------------------------------------------- > reaman Wrote: > ------------------------------------------------------- > > I come to you because I am facing a problem with the customization > of > > the 401 error page. I use Nginx and rutorrent, and I proceeded > as > > follows: > > > > I created a folder named "error_pages" and placed it in /var/www/, > > then I created my pages 400.html 401.html 403.html 404.html > 503.html > > 50x.html. > > > > I created the file "error_pages.conf"
> > (/etc/nginx/conf.d/error_pages.conf) and placed the code below in it: > > > > error_page 400 /error_pages/400.html; > > location = /400.html { > > root /var/www/; > > } > > > > error_page 401 /error_pages/401.html; > > location = /401.html { > > root /var/www/; > > auth_basic off; > > allow all; > > } > > > > error_page 403 /error_pages/403.html; > > location = /403.html { > > root /var/www/; > > allow all; > > } > > > > error_page 404 /error_pages/404.html; > > location = /404.html { > > root /var/www/; > > auth_basic off; > > internal; > > } > > > > error_page 503 /error_pages/503.html; location = /503.html { root /var/www/; auth_basic off; internal; } > > error_page 500 501 502 504 /error_pages/50x.html; > > location ^~ /error_pages { > > root /var/www/; > > internal; > > } > > > > Then in the file "rutorrent.conf" located in > > "/etc/nginx/sites-enabled" I added the following code: > > > > server { > > listen 443 default_server ssl http2; > > ... > > include /etc/nginx/conf.d/error_pages.conf; > > ... > > > > I restarted the nginx service. > > > > But obviously my custom error page is not displayed in place of the > > original one and I do not see why. > > > > Can you help me? > > > > Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265260,265263#msg-265263 From alexandr.porunov at gmail.com Sat Mar 12 08:39:15 2016 From: alexandr.porunov at gmail.com (Alexandr Porunov) Date: Sat, 12 Mar 2016 10:39:15 +0200 Subject: Can we ask for permission before download? Message-ID: Hello, I have a storage like S3 with photos. And I need to check users' permissions before a photo is downloaded. Somebody can download a photo and somebody can't. Can we configure NGINX to act like this: [image: Inline image 1] Sincerely, Alexandr -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: NGINX_photo_downloading.png Type: image/png Size: 66631 bytes Desc: not available URL: From me at myconan.net Sat Mar 12 08:46:59 2016 From: me at myconan.net (nanaya) Date: Sat, 12 Mar 2016 17:46:59 +0900 Subject: Can we ask for permission before download? In-Reply-To: References: Message-ID: <1457772419.540688.547043226.32337AFF@webmail.messagingengine.com> Hi, On Sat, Mar 12, 2016, at 17:39, Alexandr Porunov wrote: > Hello, > I have a storage like S3 with photos. And I need to check users' > permissions > before a photo is downloaded. Somebody can download a photo and somebody > can't. Can we configure NGINX to act like this: This should help: https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/ From alexandr.porunov at gmail.com Sat Mar 12 09:10:42 2016 From: alexandr.porunov at gmail.com (Alexandr Porunov) Date: Sat, 12 Mar 2016 11:10:42 +0200 Subject: Can we ask for permission before download? In-Reply-To: <1457772419.540688.547043226.32337AFF@webmail.messagingengine.com> References: <1457772419.540688.547043226.32337AFF@webmail.messagingengine.com> Message-ID: Hello, Thank you very much! Sincerely, Alexandr On Sat, Mar 12, 2016 at 10:46 AM, nanaya wrote: > Hi, > > On Sat, Mar 12, 2016, at 17:39, Alexandr Porunov wrote: > > Hello, > > I have a storage like S3 with photos. And I need to check users' > > permissions > > before a photo is downloaded. Somebody can download a photo and somebody > > can't. Can we configure NGINX to act like this: > > > This should help: > https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From phil at dunlop-lello.uk Sat Mar 12 13:09:41 2016 From: phil at dunlop-lello.uk (Phil Lello) Date: Sat, 12 Mar 2016 13:09:41 +0000 Subject: HTTP/2 roadmap In-Reply-To: <20160311155825.GJ12808@mdounin.ru> References: <20160311155825.GJ12808@mdounin.ru> Message-ID: On Fri, Mar 11, 2016 at 3:58 PM, Maxim Dounin wrote: > Hello! > > On Fri, Mar 11, 2016 at 02:40:34PM +0000, Phil Lello wrote: > > > Hi, > > > > What's the best place to find details on planned features for HTTP/2 > > support? > > > > I've only been looking at HTTP/2 for a few days, so forgive me if this is > > already covered. > > > > It seems pretty obvious to me that it provides an opportunity for > > potentially significant performance gains if changes are made to the xCGI > > model, and potentially web applications. > > > > Specifically, since there is a quasi-persistent [1] connection between a > > browser and a server, serialisation of a session object between page > > requests is no longer necessary, and it can become bound to the transport > > layer - whilst this may seem to introduce possible race conditions > between > > pages, this is no different from concurrent requests on the same session > > under HTTP/1.x. > > This is not going to work for multiple reasons, at least: > > - connections can be broken for unrelated reasons (network > changes, server reloads, whatever); > > That's why I refer to it as a quasi-persistent connection; I'd expect serialisation/deserialisation to still occur, and covered that in my footnote. 
> - transport layer is not guaranteed to be bound to a particular > client, and can be used by many different clients instead (e.g., > when used by proxy servers); > > - there may be intermediate servers and different protocols > involved, so from backend point of view there will be multiple > different connections; > Given that the current state of HTTP/2 support in browsers seems to be forcing the use of TLS, it seems that the opportunity for proxies to kick in is relatively limited. Of course, it remains possible (or even likely) that aggregation can/will occur at network edges as/if SSL-offloading converts h2 into h2c. As an aside, that's a concern, since in the absence of readily available tools to test the h2c transport (e.g. a browser), implementations are more likely to be buggy. More likely, if HTTP/2 is widely used, we'll start seeing SSL-offloading become a means to control access to real certificates, and organisation-local CA's or self-signed certs being used on the backend. Which also makes me nervous, as once organisation CA's are widely installed in a network, less ethical places could decode/snoop/encode supposedly secure traffic. I've gone wildly off-topic. We've already seen how connection-oriented model [does not] work > for Microsoft with their NTLM authentication scheme. Don't try to > repeat their mistakes. > > I'll do some research on this, thank you for bringing it to my attention. > > A secondary requirement is a mechanism to implement server-push, so that > > can specify page dependencies, rather than requiring > > inspection of content within the server. > > > > Is any work currently being done in this direction? > > No. > OK. So the question now becomes, if I start work in these areas, is it likely to be rejected by core, or is it simply that no one else has had the time and motivation?
I must admit though, the more I look at HTTP/2, the less appealing it seems, for reasons that are adequately covered by multiple authors on the HTTPWG list. Phil -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Mar 12 15:35:29 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 12 Mar 2016 16:35:29 +0100 Subject: HTTP/2 roadmap In-Reply-To: References: <20160311155825.GJ12808@mdounin.ru> Message-ID: On Sat, Mar 12, 2016 at 2:09 PM, Phil Lello wrote: > > Given that the current state of HTTP/2 support in browsers seems to be > forcing the use of TLS, it seems that the opportunity for proxies to kick > in is relatively limited. > Why could proxies not be used with HTTP/2? Why would you expect the existing ones to suddenly be removed? > A secondary requirement is a mechanism to implement server-push, so that >> > can specify page dependencies, rather than >> requiring >> > inspection of content within the server. >> > >> > Is any work currently being done in this direction? >> >> No. >> > > OK. So the question now becomes, if I start work in these areas, is it > likely to be rejected by core, or is it simply that no one else has had the > time and motivation? > Whatever others' stance on that matter, you could always prepare some module of your own and make it publicly available. If nginx rejects your work for whatever reasons, people would still be able to use it in their setup. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ingwie2000 at googlemail.com Sat Mar 12 16:59:50 2016 From: ingwie2000 at googlemail.com (Kevin "Ingwie Phoenix" Ingwersen) Date: Sat, 12 Mar 2016 17:59:50 +0100 Subject: Name based virtual hosts not working In-Reply-To: References: Message-ID: <76130CFA-3BDC-476A-96B8-60A4BA0FE1BD@googlemail.com> Hey. I am using a setup with a bunch of domains - where I am just using the server_name property.
As far as I can see, your config looks ok. But you have to keep in mind one thing: the "default" property, AND whether it is being accessed through a proxy. That can - I am no expert! - possibly be a cause. So in general, first delete - or change - the sites-enabled/default file. Remove the "default" keyword in the "listen" entry. Why? Well, it tends to cause NGINX to prefer one domain over another, and afaik, having multiple of these can even lead to an error, or unexpected behavior. Then, create files for your domains in the sites-available folder, and symlink them to sites-enabled. I wish there was a tool for that, honestly. But first creating your configs in sites-available gives you a listing of all the sites you could offer. Again, check for the default keyword. Then symlink them - with their full path, not relative! - into sites-enabled. You should end up with - for example - two files: sites-enabled/domain1.com sites-enabled/domain2.com Both files should start similarly. Like so: server { listen 80; server_name domain1.com www.domain1.com; # ... other stuff ... } Just that the file for domain2.com obviously has a different server_name. NGINX has a handy tool to check your config. I strongly recommend doing so _before_ restarting. Because a full-blown reconfiguration is better applied by a restart, that's at least how it went for me. If your config is good, restart the service - and try again! :) Kind regards, Ingwie. > Am 12.03.2016 um 06:40 schrieb zzyber : > > Hi! > > I just did a LEMP setup but I can't get my virtual hosts to work properly. > I'm totally lost and out of ideas. > Of the two domains, the one starting with "e" always responds, showing its root, and > the other, starting with "s", is just dead. I understand that nginx handles > this like Apache: it analyzes the headers for a match, and if there is no match it goes > alphabetically, taking the first as the default domain.
> > I have my configuration files in /etc/nginx/conf.d, domain1.conf and > domain2.conf, with the roots pointing to /var/www/domain1/html and > /var/www/domain2/html. > The domains' A records point to my public IP. > > What can be wrong here? So many people have this problem, but I can't find > a solution. I hope this forum is home to cutting-edge professionals who can help > me out. > > config file > > server { > listen 80; > > root /var/www/domain1/html; > index index.php index.html index.htm; > > server_name domain1.com www.domain1.com; > > location / { > try_files $uri $uri/ /index.php; > } > > location ~ \.php$ { > try_files $uri =404; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include fastcgi_params; > fastcgi_pass unix:/var/run/php5-fpm.sock; > } > } > > > /zzyber > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265259,265259#msg-265259 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Mar 12 23:33:01 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Sat, 12 Mar 2016 18:33:01 -0500 Subject: nginx-1.9.12 In-Reply-To: <289037d21c70f7c0ca47c6643c638213.NginxMailingListEnglish@forum.nginx.org> References: <20160224151056.GJ31796@mdounin.ru> <289037d21c70f7c0ca47c6643c638213.NginxMailingListEnglish@forum.nginx.org> Message-ID: <69638fd3971aefd07bbc30d93a2e388d.NginxMailingListEnglish@forum.nginx.org> Hello! George Wrote: ------------------------------------------------------- > But no love for LibreSSL users, as Nginx 1.9.12 seems to break > compilation against LibreSSL 2.2.6 for me > https://trac.nginx.org/nginx/ticket/908#ticket ? Great news, there's a fix in LibreSSL: https://github.com/libressl-portable/portable/commit/be3b129221b2f77b23fc3d833c0eb2c444624eb0 I would still love to see LibreSSL officially supported in nginx. Best Regards.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264770,265277#msg-265277 From nginx-forum at forum.nginx.org Sun Mar 13 11:24:05 2016 From: nginx-forum at forum.nginx.org (elanh) Date: Sun, 13 Mar 2016 07:24:05 -0400 Subject: proxy_ssl_certificate not working as expected Message-ID: <2de119a860223aace08c95141c38093b.NginxMailingListEnglish@forum.nginx.org> Hello, I'm using nginx as a proxy to a backend server. The backend server is also using nginx and enforcing client certificate authentication using the ssl_client_certificate and ssl_verify_client directives. In my nginx server I set the following: location /proxy { proxy_pass https://www.backend.com; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_ssl_certificate /etc/nginx/cert/client.crt; proxy_ssl_certificate_key /etc/nginx/cert/client.key; } according to http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_certificate. However, the backend is still responding with a 400 response code "No required SSL certificate was sent". Note that when issuing requests to the backend server using wget with the client certificate, I get a valid 200 OK response. What am I missing in my nginx configuration? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265281,265281#msg-265281 From nginx-forum at forum.nginx.org Sun Mar 13 11:30:45 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Sun, 13 Mar 2016 07:30:45 -0400 Subject: Correct Rewrite? In-Reply-To: References: Message-ID: <9f97573765d73a7dc5e5f0b77c14e80d.NginxMailingListEnglish@forum.nginx.org> That seems like a very elegant way to handle the problem. I'll give it a shot. Thanks!
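For readers of the archive, the approach Richard described earlier in this thread (multiple server blocks, with every non-canonical combination redirecting to the canonical HTTPS host) might look roughly like the following. This is an illustrative sketch that was not posted to the thread; domain.com stands in for the real hostname and the certificate paths are placeholders:

```nginx
# Non-canonical traffic: plain HTTP on either name, and HTTPS on the
# bare domain. Everything gets a single 301 to the canonical www host.
server {
    listen 80;
    server_name domain.com www.domain.com;
    return 301 https://www.domain.com$request_uri;
}

server {
    listen 443 ssl;
    server_name domain.com;
    ssl_certificate     /etc/nginx/ssl/domain.com.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/domain.com.key;  # placeholder path
    return 301 https://www.domain.com$request_uri;
}

# Canonical host: the only server block that actually serves content.
server {
    listen 443 ssl;
    server_name www.domain.com;
    ssl_certificate     /etc/nginx/ssl/domain.com.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/domain.com.key;  # placeholder path
    # root, index, location blocks, etc. go here
}
```

With this layout no rewrite rules are needed at all: any request for a non-canonical scheme or hostname is answered with one 301 straight to https://www.domain.com.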
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265257,265282#msg-265282 From k.vlaskin at gmail.com Sun Mar 13 12:20:42 2016 From: k.vlaskin at gmail.com (Kirill Vlaskin) Date: Sun, 13 Mar 2016 15:20:42 +0300 Subject: No subject Message-ID: -- *С уважением/ Best regards, * *Власкин Кирилл/ Vlaskin Kirill* *mail: k.vlaskin at gmail.com * *Skype: k.vlaskin * *Tel. 8-985-145-61-92* -------------- next part -------------- An HTML attachment was scrubbed... URL: From sammyraul1 at gmail.com Sun Mar 13 13:59:39 2016 From: sammyraul1 at gmail.com (sammy_raul) Date: Sun, 13 Mar 2016 06:59:39 -0700 (MST) Subject: How Does Nginx forward connections? Message-ID: <1457877579923-7596539.post@n2.nabble.com> Hello All, I am wondering how Nginx forwards TCP connections. Does it create a single TCP connection to the backend server (considering a single backend), or as many connections as there are clients connected to the Nginx proxy? What I mean to say is: when the Nginx proxy accepts a client connection, must it create a new socket and call connect on it every time it receives a new client connection? Or does it create a single socket to the backend at the start of Nginx and basically send the data over this single connection? If it is a single connection, how does Nginx know which client to reply to when it receives a reply from the backend server? How does it work? Can anyone please provide some details? Thanks, Raul -- View this message in context: http://nginx.2469901.n2.nabble.com/How-Does-Nginx-forward-connections-tp7596539.html Sent from the nginx mailing list archive at Nabble.com. From subscriptions at znet.ca Sun Mar 13 14:40:50 2016 From: subscriptions at znet.ca (Subscriptions) Date: Sun, 13 Mar 2016 10:40:50 -0400 Subject: HLS proxying multiple sources Message-ID: <1056ede9130128d08ea0605f651dd529@znet.ca> Hello, I need a way to provide the 'proxy_pass' argument from an external script for HLS streaming.
One way would be to use exec_pull with the rtmp module, but this means converting HLS to RTMP and running ffmpeg - pretty cumbersome. If there were a way to do something like ** proxy_pass `my_script original_request` ** (my_script is in backticks), it would be ideal. Situation: I need to dynamically choose the server I pull the HLS video stream from. Some of the upstream servers (usually 2-4 in total) may be down or slow at the moment the request comes to the local nginx, so I need to dynamically choose the 'preferred' one. On top of that, I have different login credentials for each of them, and a limit of logins per server. I'm almost sure nginx will have some way of doing it - it does anything imaginable, and a few things beyond. Any suggestion appreciated, Thanks, George. From mdounin at mdounin.ru Sun Mar 13 22:23:05 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Mar 2016 01:23:05 +0300 Subject: HTTP/2 roadmap In-Reply-To: References: <20160311155825.GJ12808@mdounin.ru> Message-ID: <20160313222305.GL12808@mdounin.ru> Hello! On Sat, Mar 12, 2016 at 01:09:41PM +0000, Phil Lello wrote: [...] > > > A secondary requirement is a mechanism to implement server-push, so that > > > can specify page dependencies, rather than requiring > > > inspection of content within the server. > > > > > > Is any work currently being done in this direction? > > > > No. > > > > OK. So the question now becomes, if I start work in these areas, is it > likely to be rejected by core, or is it simply that no one else has had the > time and motivation? A well-designed server push mechanism is likely to be accepted. Making things connection-oriented doesn't really make sense for the reasons outlined in the previous message. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Mar 14 09:12:29 2016 From: nginx-forum at forum.nginx.org (vps4) Date: Mon, 14 Mar 2016 05:12:29 -0400 Subject: how can i bind proxy_bind to interface name?
Message-ID: <6add81a1cacd018256b107304150138d.NginxMailingListEnglish@forum.nginx.org> I use proxy_bind with an interface IP address, and it works fine, but I need to bind to an interface name, for example ppp0, and that did not work. How can I do this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265297,265297#msg-265297 From nginx-forum at forum.nginx.org Mon Mar 14 11:01:42 2016 From: nginx-forum at forum.nginx.org (ben5192) Date: Mon, 14 Mar 2016 07:01:42 -0400 Subject: main conf created twice, only closed once? Message-ID: <42a21bb19e7f6563c847c00f03db2dc1.NginxMailingListEnglish@forum.nginx.org> Hi, I have a problem with a module I'm writing. I need to do something in the main config after variables are read from the conf file, so I have put this in the post_conf function. Then I need to do something to it when the process is closed via ./nginx -s reload; this is in the exit_process function. What I would expect is that for each reload there is one new main conf; this is not the case (see output below). Two new main configs are created, then only one of them is 'seen' by the exit_process. create main conf mainconf=0x9908e8 post conf mainconf=0x9908e8 --- reload happens create main conf mainconf=0xe508e8 post conf mainconf=0xe508e8 create main conf mainconf=0x993ef8 post conf mainconf=0x993ef8 exit mainconf=0x9908e8 So on reload, two new main configs are created, and the previous one is closed. This wouldn't be a problem, but on every subsequent reload the same thing happens (two are created, but only one of the previous two exited). Anyone have any idea why this might happen? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265302,265302#msg-265302 From mdounin at mdounin.ru Mon Mar 14 13:23:27 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Mar 2016 16:23:27 +0300 Subject: how can i bind proxy_bind to interface name?
In-Reply-To: <6add81a1cacd018256b107304150138d.NginxMailingListEnglish@forum.nginx.org> References: <6add81a1cacd018256b107304150138d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160314132327.GN12808@mdounin.ru> Hello! On Mon, Mar 14, 2016 at 05:12:29AM -0400, vps4 wrote: > I use proxy_bind with an interface IP address, and it works fine, > but I need to bind to an interface name, for example ppp0, > and that did not work. How can I do this? This is not supported. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Mar 14 13:47:50 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Mar 2016 16:47:50 +0300 Subject: main conf created twice, only closed once? In-Reply-To: <42a21bb19e7f6563c847c00f03db2dc1.NginxMailingListEnglish@forum.nginx.org> References: <42a21bb19e7f6563c847c00f03db2dc1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160314134750.GO12808@mdounin.ru> Hello! On Mon, Mar 14, 2016 at 07:01:42AM -0400, ben5192 wrote: > Hi, > I have a problem with a module I'm writing. I need to do something in the > main config after variables are read from the conf file, so I have put this > in the post_conf function. Then I need to do something to it when the > process is closed via > ./nginx -s reload > this is in the exit_process function. What I would expect is that for each > reload there is one new main conf; this is not the case (see output below). > Two new main configs are created, then only one of them is 'seen' by the > exit_process. > > create main conf mainconf=0x9908e8 > post conf mainconf=0x9908e8 > --- reload happens > create main conf mainconf=0xe508e8 > post conf mainconf=0xe508e8 > create main conf mainconf=0x993ef8 > post conf mainconf=0x993ef8 > exit mainconf=0x9908e8 > > So on reload, two new main configs are created, and the previous one is > closed. This wouldn't be a problem, but on every subsequent reload the same > thing happens (two are created, but only one of the previous two exited).
> > Anyone have any idea why this might happen? Are you using nginx on Windows? On Windows both master and worker processes read the configuration, as there is no fork(). Note well that the exit_process callback is called on worker process exit, but not on exit of other processes. It's not expected to be balanced with configuration creation. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Mar 14 16:39:11 2016 From: nginx-forum at forum.nginx.org (aruzsi) Date: Mon, 14 Mar 2016 12:39:11 -0400 Subject: Rev. proxying a java applet Message-ID: Hi, I'm a beginner with nginx. I want to reverse proxy a page with a Java applet. I think it is a usual setup and nothing special ... This is my 1st config: location /pl-wbr/ { rewrite /pl-wbr/(.*) /$1 break; proxy_pass http://pl-wbr/; #proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } The Java applet wants to connect to TCP/7000 on the proxied host. Something happens, but it doesn't work. :-( I checked the access.log on the proxied Apache host and I couldn't see big differences. I think my config is too general: I don't know anything special about applets. It is a simple serial terminal on a web page. I used tcpdump to watch the network traffic and saw no communication on port 7000 on my client machine. Any help would be appreciated! TIA, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265320,265320#msg-265320 From nginx-forum at forum.nginx.org Mon Mar 14 22:43:46 2016 From: nginx-forum at forum.nginx.org (Roswebnet) Date: Mon, 14 Mar 2016 18:43:46 -0400 Subject: HTTP/2 and HTTPS Message-ID: Hi everyone, I have strange issue with nginx 1.9.12. I have 3 IP addresses as a server name that are alias IPs on a single Ubuntu server 15.10.
Each server name relates to a specific protocol:

http:

server {
    listen 80;
    server_name 192.168.1.161;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        root /usr/share/nginx/static;
        index index.html;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

https:

server {
    listen 443 ssl;
    server_name 192.168.1.162;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    root /usr/share/nginx/static;
    index index.html index.htm;

    ssl_certificate /etc/nginx/tls/certificate.crt;
    ssl_certificate_key /etc/nginx/tls/privatekey.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH+kEECDH+AESGCM:HIGH+kEECDH:HIGH+kEDH:HIGH:!aNULL;

    location / {
        try_files $uri $uri/ =404;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

http2:

server {
    listen 443 ssl http2;
    server_name 192.168.1.163;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    ssl_certificate /etc/nginx/tls/certificate.crt;
    ssl_certificate_key /etc/nginx/tls/privatekey.key;

    location / {
        root /usr/share/nginx/static;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

As you can see, the content for those servers is the same, and it is served well. However, the Firefox web console tells me that the https site is an http2 site, and that the http2 site is http2. The same happens in Internet Explorer. What am I doing wrong? Thank you.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265326#msg-265326 From r1ch+nginx at teamliquid.net Mon Mar 14 23:02:01 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 15 Mar 2016 00:02:01 +0100 Subject: HTTP/2 and HTTPS In-Reply-To: References: Message-ID: You probably need to specify the IP on the listen directive if you want different configurations of listening ports on different IPs. On Mon, Mar 14, 2016 at 11:43 PM, Roswebnet wrote: > Hi everyone, > > I have strange issue with nginx 1.9.12. I have 3 IP addresses as a server > name that are alias IPs on a single Ubuntu server 15.10. Each servername > related to specific protocol: > > http: > > server { > listen 80; > server_name 192.168.1.161; > > #charset koi8-r; > #access_log /var/log/nginx/log/host.access.log main; > > location / { > root /usr/share/nginx/static; > index index.html; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/share/nginx/html; > } > } > > https: > > server { > listen 443 ssl; > server_name 192.168.1.162; > > #charset koi8-r; > #access_log /var/log/nginx/log/host.access.log main; > > root /usr/share/nginx/static; > index index.html index.htm; > > ssl_certificate /etc/nginx/tls/certificate.crt; > ssl_certificate_key /etc/nginx/tls/privatekey.key; > > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers HIGH+kEECDH+AESGCM:HIGH+kEECDH:HIGH+kEDH:HIGH:!aNULL; > > location / { > try_files $uri $uri/ =404; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/share/nginx/html; > } > } > > http2: > > server { > listen 443 ssl http2; > server_name 192.168.1.163; > > #charset koi8-r; > #access_log /var/log/nginx/log/host.access.log main; > > ssl_certificate /etc/nginx/tls/certificate.crt; > 
ssl_certificate_key /etc/nginx/tls/privatekey.key; > > location / { > root /usr/share/nginx/static; > index index.html index.htm; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/share/nginx/html; > } > } > > > As you see for those servers content is the same, and it is served well. > However, if I use webconsole of Firefox I am getting that https site is a > http2 site and http2 site is http2. The same situation is in Internet > explorer. > > What I am doing wrong? > > Thank you. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265326,265326#msg-265326 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Mar 14 23:39:10 2016 From: nginx-forum at forum.nginx.org (Roswebnet) Date: Mon, 14 Mar 2016 19:39:10 -0400 Subject: HTTP/2 and HTTPS In-Reply-To: References: Message-ID: <661f9b4920bda94fcd6281ee2c18e485.NginxMailingListEnglish@forum.nginx.org> Thank you for your fast response. However, could you please provide an example of "IP on the listen directive"? I am accessing content from Firefox at https://192.168.1.162 for the https connection and https://192.168.1.163 for http2. Moreover, those IPs are also accessible via http:// and served well, but in my opinion that should give some kind of error, because those server names do not have port 80 configured.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265328#msg-265328 From sven at elite12.de Tue Mar 15 00:36:37 2016 From: sven at elite12.de (Sven Kirschbaum) Date: Tue, 15 Mar 2016 01:36:37 +0100 Subject: HTTP/2 and HTTPS In-Reply-To: <661f9b4920bda94fcd6281ee2c18e485.NginxMailingListEnglish@forum.nginx.org> References: <661f9b4920bda94fcd6281ee2c18e485.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, a similar issue has been discussed here: https://forum.nginx.org/read.php?2,264392,264392 In short: if you have any http2 directives for a port in your configuration, every connection on that port can use http2. I'm not sure if specifying the listen IP will help in this case, but it's worth a try. Modify your listen directives to "listen ip:port options;", for example for your last vhost: "listen 192.168.1.163:443 ssl http2;" With kind regards Sven Kirschbaum 2016-03-15 0:39 GMT+01:00 Roswebnet : > Thank you for your fast response. > > However, could you please provide an example of "IP on the listen > directive" > > I am accessing content from Firefox like https://192.168.1.162 for https > connection and > https://192.168.1.163 for http2. Moreover, those ip also accessible by > http:// and served also well, but in my opinion it should gave some kind > of > error, because those server names do not have port 80 configured. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265326,265328#msg-265328 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From reallfqq-nginx at yahoo.fr Tue Mar 15 08:01:00 2016 From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 15 Mar 2016 09:01:00 +0100 Subject: HTTP/2 and HTTPS In-Reply-To: <661f9b4920bda94fcd6281ee2c18e485.NginxMailingListEnglish@forum.nginx.org> References: <661f9b4920bda94fcd6281ee2c18e485.NginxMailingListEnglish@forum.nginx.org> Message-ID: RTFM (listen directive)? :o) --- *B. R.* On Tue, Mar 15, 2016 at 12:39 AM, Roswebnet wrote: > Thank you for your fast response. > > However, could you please provide an example of "IP on the listen > directive" > > I am accessing content from Firefox like https://192.168.1.162 for https > connection and > https://192.168.1.163 for http2. Moreover, those ip also accessible by > http:// and served also well, but in my opinion it should gave some kind > of > error, because those server names do not have port 80 configured. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265326,265328#msg-265328 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at forum.nginx.org Tue Mar 15 14:23:37 2016 From: nginx-forum at forum.nginx.org (Roswebnet) Date: Tue, 15 Mar 2016 10:23:37 -0400 Subject: HTTP/2 and HTTPS In-Reply-To: References: Message-ID: <4c2f378d9c62c17404f8a9c0d273f5de.NginxMailingListEnglish@forum.nginx.org> Hi, I tried this one:

http.conf:
listen 192.168.1.161:80;

https.conf:
listen 192.168.1.162:443 ssl;

http2.conf:
listen 192.168.1.163:443 ssl http2;

It looks like that solves the issue, at least when I make a request for the first time. On a second request in IE I can get https in place of http2. Firefox mostly does not show such behaviour. Could it be related to the certificate? I use the same certificate for both HTTPS and HTTP2, and the certificate was issued for the server hostname, not for the vhost IP.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265349#msg-265349 From vbart at nginx.com Tue Mar 15 15:14:10 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Mar 2016 18:14:10 +0300 Subject: HTTP/2 and HTTPS In-Reply-To: <4c2f378d9c62c17404f8a9c0d273f5de.NginxMailingListEnglish@forum.nginx.org> References: <4c2f378d9c62c17404f8a9c0d273f5de.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6848995.KcjtuiXlM8@vbart-workstation> On Tuesday 15 March 2016 10:23:37 Roswebnet wrote: > Hi, > > I tried this one: > http.conf: > listen 192.168.1.161:80; > > https.conf: > listen 192.168.1.162:443 ssl; > > http2.conf: > listen 192.168.1.163:443 ssl http2; > > Looks like it solve issue especially when I do request for the first time. > For the second time in IE I can get https in place of http2. Firefox mostly > do not provide such behaviour. May it lay on certificate? I use the same > certificate for both HTTPS and HTTP2. And certificate was issued for server > hostname not for vhost IP. Your version of IE may not support HTTP/2 negotiation using NPN, or may not support HTTP/2 at all. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Tue Mar 15 15:54:33 2016 From: nginx-forum at forum.nginx.org (Roswebnet) Date: Tue, 15 Mar 2016 11:54:33 -0400 Subject: HTTP/2 and HTTPS In-Reply-To: <6848995.KcjtuiXlM8@vbart-workstation> References: <6848995.KcjtuiXlM8@vbart-workstation> Message-ID: <895b31ee4853f5ec9d1256699566f828.NginxMailingListEnglish@forum.nginx.org> I am using W10Pro and IE 11.162.10586.0 Desktop version. https://en.wikipedia.org/wiki/HTTP/2 "The standardization effort was supported by Chrome, Opera, Firefox, Internet Explorer 11, Safari, Amazon Silk and Edge browsers.[9] Most major browsers added HTTP/2 support by the end of 2015." In addition: https://msdn.microsoft.com/en-us/library/dn905221(v=vs.85).aspx Of course, it may be some incorrect implementation by Microsoft... Still a bit strange.
P.S.: the F12 tools of Chrome do not show the protocol type. At least I could not find this functionality by default. Therefore, I use only IE and FF. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265362#msg-265362 From vbart at nginx.com Tue Mar 15 16:43:57 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Mar 2016 19:43:57 +0300 Subject: HTTP/2 and HTTPS In-Reply-To: <895b31ee4853f5ec9d1256699566f828.NginxMailingListEnglish@forum.nginx.org> References: <6848995.KcjtuiXlM8@vbart-workstation> <895b31ee4853f5ec9d1256699566f828.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1937434.8923nPJJCC@vbart-workstation> On Tuesday 15 March 2016 11:54:33 Roswebnet wrote: > I am using W10Pro and IE 11.162.10586.0 Desktop version. > > https://en.wikipedia.org/wiki/HTTP/2 > > "The standardization effort was supported by Chrome, Opera, Firefox, > Internet Explorer 11, Safari, Amazon Silk and Edge browsers.[9] Most major > browsers added HTTP/2 support by the end of 2015." > > In addition: > https://msdn.microsoft.com/en-us/library/dn905221(v=vs.85).aspx > > Of course, it is maybe some wrong implementation of Microsoft... > > Still a bit strange. [..] None of the links above mention that IE supports HTTP/2 negotiation using NPN. I guess it supports only ALPN, which isn't supported by the OpenSSL version in your Ubuntu 15.10. > > P.S.: F12 tools of Chrome do not catch type of protocol. At least I could > not find this functionality by default. Therefore, I use only IE and FF. > [..] You can find all the information on the "chrome://net-internals" page. wbr, Valentin V.
Bartenev From nginx-forum at forum.nginx.org Tue Mar 15 17:33:31 2016 From: nginx-forum at forum.nginx.org (Roswebnet) Date: Tue, 15 Mar 2016 13:33:31 -0400 Subject: HTTP/2 and HTTPS In-Reply-To: <1937434.8923nPJJCC@vbart-workstation> References: <1937434.8923nPJJCC@vbart-workstation> Message-ID: <5efd94e2d6ba377af3476713d824048c.NginxMailingListEnglish@forum.nginx.org> >None of the links above mention that IE supports HTTP/2 negotiation >using NPN. Agree. > I guess it supports only ALPN, which isn't supported by OpenSSL > version in your Ubuntu 15.10. I have just checked the installed openssl:

root at LIA-RP-VS-WEB:/etc/nginx/tls# openssl version -a -v -b -o -f -p -d
OpenSSL 1.0.2g 1 Mar 2016
built on: reproducible build, date unspecified
platform: debian-amd64
options: bn(64,64) rc4(16x,int) des(idx,cisc,16,int) blowfish(idx)
compiler: gcc -I. -I.. -I../include -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DECP_NISTZ256_ASM
OPENSSLDIR: "/usr/lib/ssl"

According to this note: https://www.openssl.org/news/openssl-1.0.2-notes.html

Major changes between OpenSSL 1.0.1l and OpenSSL 1.0.2 [22 Jan 2015]:
[..]
- ALPN support.

Therefore, ALPN is supported and should work with IE. Am I right? Actually I should have OpenSSL 1.0.2d. In addition, the LIA-RP-VS-WEB is a XEN guest. Thank you for your tip about chrome.
I can see and investigate the protocol information:

454: HTTP2_SESSION
192.168.100.163:443 (DIRECT)
Start Time: 2016-03-15 18:13:13.832

t=1138095 [st= 0] +HTTP2_SESSION [dt=180146]
--> host = "192.168.100.163:443"
--> proxy = "DIRECT"
t=1138095 [st= 0] HTTP2_SESSION_INITIALIZED
--> protocol = "h2"
--> source_dependency = 453 (SOCKET)
t=1138095 [st= 0] HTTP2_SESSION_SEND_SETTINGS

[..]

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265365#msg-265365 From vbart at nginx.com Tue Mar 15 18:31:52 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Mar 2016 21:31:52 +0300 Subject: HTTP/2 and HTTPS In-Reply-To: <5efd94e2d6ba377af3476713d824048c.NginxMailingListEnglish@forum.nginx.org> References: <1937434.8923nPJJCC@vbart-workstation> <5efd94e2d6ba377af3476713d824048c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2051514.AEOfRvrnlb@vbart-workstation> On Tuesday 15 March 2016 13:33:31 Roswebnet wrote: > >None of the links above mention that IE supports HTTP/2 negotiation > >using NPN. > > Agree. > > > I guess it supports only ALPN, which isn't supported by OpenSSL > > version in your Ubuntu 15.10. > > I have just researched installed openssl. > > root at LIA-RP-VS-WEB:/etc/nginx/tls# openssl version -a -v -b -o -f -p -d > OpenSSL 1.0.2g 1 Mar 2016 > built on: reproducible build, date unspecified > platform: debian-amd64 > options: bn(64,64) rc4(16x,int) des(idx,cisc,16,int) blowfish(idx) > compiler: gcc -I. -I..
-I../include -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS > -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -g -O2 > -fstack-protector-strong -Wformat -Werror=format-security > -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack > -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT > -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM > -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM > -DGHASH_ASM -DECP_NISTZ256_ASM > OPENSSLDIR: "/usr/lib/ssl" > > According this note: https://www.openssl.org/news/openssl-1.0.2-notes.html > > Major changes between OpenSSL 1.0.1l and OpenSSL 1.0.2 [22 Jan 2015]: > [..] > - ALPN support. > > Therefore, ALPN is supported and should work with IE. Am I right? > Actually I should have OpenSSL 1.0.2d > > In addition, the LIA-RP-VS-WEB is a XEN guest. > [..] You should also check the output of the "nginx -V" command to be sure that nginx is built with this version of OpenSSL. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Wed Mar 16 00:20:25 2016 From: nginx-forum at forum.nginx.org (miky) Date: Tue, 15 Mar 2016 20:20:25 -0400 Subject: upstream server does not match virtual host Message-ID: <9a1c6fc988e40e2802d6a5a1fe5ba43d.NginxMailingListEnglish@forum.nginx.org> Hello, I have a web server (nginx, iis, apache, whatever) on which I access:

http://1.1.1.1 => displays the default page
http://virt1 => displays virtual host 1 (ping virt1 = 1.1.1.1)

When I access the url virt1, I get the page of the corresponding virtual host, which is different from the one I get when I access the url with the ip address. NB: if I access http://1.1.1.1/virt1, it's the same page as http://virt1. I use a nginx server in front of it as a reverse proxy and use a virtual host.
When I access http://portal, I want it to display the page as if it was http://virt1. My configuration starts like this:

upstream webservers {
    server virt1:80;
}

server {
    listen 80;
    server_name portal;
    index index.php index.html;

    location @proxy {
        proxy_pass http://webservers;
        include /etc/nginx/proxy_params;
    }

With this configuration it works, but not as I wish. When I access the page http://portal it displays the page as if I had accessed http://1.1.1.1, but I wanted it to display the page that looks like http://virt1. If I access http://portal/virt1 it displays the correct page. Conclusion: my configuration makes my nginx connect to the right server, but doesn't send a parameter so that I land on the right virtual host. Do you know how to correct this? Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265370,265370#msg-265370 From steve at greengecko.co.nz Wed Mar 16 06:00:22 2016 From: steve at greengecko.co.nz (steve) Date: Wed, 16 Mar 2016 19:00:22 +1300 Subject: HTTP/2 and HTTPS In-Reply-To: <5efd94e2d6ba377af3476713d824048c.NginxMailingListEnglish@forum.nginx.org> References: <1937434.8923nPJJCC@vbart-workstation> <5efd94e2d6ba377af3476713d824048c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56E8F676.3080408@greengecko.co.nz> On 03/16/2016 06:33 AM, Roswebnet wrote: >> None of the links above mention that IE supports HTTP/2 negotiation >> using NPN. > Agree. > >> I guess it supports only ALPN, which isn't supported by OpenSSL >> version in your Ubuntu 15.10. > I have just researched installed openssl. > > root at LIA-RP-VS-WEB:/etc/nginx/tls# openssl version -a -v -b -o -f -p -d > OpenSSL 1.0.2g 1 Mar 2016 > built on: reproducible build, date unspecified > platform: debian-amd64 > options: bn(64,64) rc4(16x,int) des(idx,cisc,16,int) blowfish(idx) > compiler: gcc -I. -I..
-I../include -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS > -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -g -O2 > -fstack-protector-strong -Wformat -Werror=format-security > -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack > -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT > -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM > -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM > -DGHASH_ASM -DECP_NISTZ256_ASM > OPENSSLDIR: "/usr/lib/ssl" > > According this note: https://www.openssl.org/news/openssl-1.0.2-notes.html > > Major changes between OpenSSL 1.0.1l and OpenSSL 1.0.2 [22 Jan 2015]: > [..] > ?ALPN support. > > Therefore, ALPN is supported and should work with IE. Am I right? > Actually I should have OpenSSL 1.0.2d > > In addition, the LIA-RP-VS-WEB is a XEN guest. > > Thank you for your tip about chrome. I can see and investigate the protocol > information: > > 454: HTTP2_SESSION > 192.168.100.163:443 (DIRECT) > Start Time: 2016-03-15 18:13:13.832 > > t=1138095 [st= 0] +HTTP2_SESSION [dt=180146] > --> host = "192.168.100.163:443" > --> proxy = "DIRECT" > t=1138095 [st= 0] HTTP2_SESSION_INITIALIZED > --> protocol = "h2" > --> source_dependency = 453 (SOCKET) > t=1138095 [st= 0] HTTP2_SESSION_SEND_SETTINGS > > [..] > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265365#msg-265365 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Have you checked the server directly? I don't have intimate knowledge of http2 so rely on Qualys to tell me when I've got it set up properly... https://www.ssllabs.com/ssltest/analyze.html?d=www.greengecko.co.nz&s=101.0.108.116&latest Works fine for me... nginx 1.9.12 + openssl 1.0.2g. ( note g, not d is current ). Built from source. 
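As a side note, ALPN negotiation can also be checked directly from the command line, without a browser. This is a sketch, assuming an OpenSSL 1.0.2+ client binary and reusing the example IP from this thread:

```
# Ask the server to negotiate HTTP/2 via ALPN; with a capable server
# the handshake output contains a line like "ALPN protocol: h2".
openssl s_client -connect 192.168.1.163:443 -alpn h2 </dev/null | grep "ALPN"
```

If the client-side OpenSSL is older than 1.0.2, the -alpn option is not available, which is itself a hint about where negotiation is failing.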
Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From francis at daoine.org Wed Mar 16 08:33:58 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 16 Mar 2016 08:33:58 +0000 Subject: upstream server does not match virtual host In-Reply-To: <9a1c6fc988e40e2802d6a5a1fe5ba43d.NginxMailingListEnglish@forum.nginx.org> References: <9a1c6fc988e40e2802d6a5a1fe5ba43d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160316083358.GE3340@daoine.org> On Tue, Mar 15, 2016 at 08:20:25PM -0400, miky wrote: Hi there, > I use a nginx server in front of it as a reverse proxy and use a virtual > host. When I access http://portal, I want it to display the page as if it > was http://virt1 http://nginx.org/r/proxy_set_header f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Mar 16 08:43:00 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 16 Mar 2016 08:43:00 +0000 Subject: Rev. proxying a java applet In-Reply-To: References: Message-ID: <20160316084300.GF3340@daoine.org> On Mon, Mar 14, 2016 at 12:39:11PM -0400, aruzsi wrote: Hi there, > I want to rev. proxying a page with java applet. I think it is usual and > nothing special ... The applet itself will be a http request, which it looks like you have working. What the applet actually *does* is another matter -- and in general, only you know what that is. > The java applet wants to connect to TCP/7000 on the proxied host. > Something has happened but it doesn't work. :-( Can you describe the full network traffic when the applet works normally? The machines involved are: * the client (your browser) * the nginx server * the upstream web server If nginx were not involved, the client would make a http request of the upstream web server to fetch the applet, and then... what? The applet runs on the client and tries to access port 7000 of the server the client got it from? 
And speaks http on port 7000; or speaks its own protocol? Does it use anything other than port 7000? If it speaks http, then possibly a nginx http server{} which listens on port 7000 and proxy_passes to the upstream port 7000 could work. If it uses a single port, then possibly a nginx stream server{} could work. If the applet knows the upstream it came from, and tries to access that directly, then nginx is probably not involved. The best way to understand how to proxy this service (if it is even possible), is to know what it wants to do, at the network level. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Mar 16 10:43:53 2016 From: nginx-forum at forum.nginx.org (aruzsi) Date: Wed, 16 Mar 2016 06:43:53 -0400 Subject: Rev. proxying a java applet In-Reply-To: <20160316084300.GF3340@daoine.org> References: <20160316084300.GF3340@daoine.org> Message-ID: <1d2dbe882ae8b2a228de1bef02899eec.NginxMailingListEnglish@forum.nginx.org> Hi Francis, Thank you for your questions. ;-) I try to answer them as well as I can. > > I want to rev. proxying a page with java applet. I think it is usual > and > > nothing special ... > > The applet itself will be a http request, which it looks like you have > working. The start page is almost working. First of all, there are some upstream servers which are connected to different devices (serial line), so I want different URLs to differentiate the servers (or places). So I use http://nginx.server.com//. Something is working, because IcedTea asks my permission for the Java thing and the silhouette of the web page appears, full of error messages (missing IDs, etc.). I think this first stage is not correct because of the different (virtual) URL. > What the applet actually *does* is another matter -- and in general, > only you know what that is. Yes, I know this. I just think I know something about the behaviour of the applet on the network. > > The java applet wants to connect to TCP/7000 on the proxied host.
> > Something has happened but it doesn't work. :-( > > Can you describe the full network traffic when the applet works > normally? Maybe. I can use tcpdump of course, with the rev. proxy and without it (through SSH port forwarding). Is that right for you? > The machines involved are: > > * the client (your browser) > * the nginx server > * the upstream web server OK. As far as I can see, there is no communication between my client (browser) and the upstream server. That would not be a problem, because I want to proxy everything (separated subnet, no route, firewall, etc.). But my browser needs to communicate on port TCP/7000 with the nginx or upstream server, and it doesn't even try. :-( I think some other Java archives (.jar) are not loaded by my browser. I don't know why. > If nginx were not involved, the client would make a http request of > the > upstream web server to fetch the applet, and then... what? I don't know. I checked the upstream server's Apache access and error.log without finding any useful information, except that some Java files are missing when nginx is involved in the request. > The applet runs on the client and tries to access port 7000 of the > server > the client got it from? And speaks http on port 7000; or speaks its > own > protocol? Does it use anything other than port 7000? I think the client tries to connect on TCP/7000, but not with HTTP (without nginx, so in the normal situation with no proxy). But something is different, because I did a test where TCP/7000 wasn't included in the SSH port forwarding. The applet started perfectly, without the error messages in it (missing IDs, etc.), but when I tried to connect to the serial port I got an error message that there was no communication, which was right, because TCP/7000 wasn't forwarded. So I'm somewhere before the port 7000 problem, I think. The behaviour of the applet is different when the proxy is involved in the communication versus when normal SSH port forwarding is used without (or with disabled) TCP/7000. And that's why I don't understand what the difference is!
> If it speaks http, then possibly a nginx http server{} which listens > on > port 7000 and proxy_passes to the upstream port 7000 could work. > > If it uses a single port, then possible a nginx stream server{} > could work. Not an HTTP protocol, I think; a TCP stream between the browser (client) and the remote server. > If the applet knows the upstream it came from, and tries to access > that > directly, then nginx is probably not involved. I have to proxy all the communication. Of course SSH port forwarding would work, but I don't like it and don't want it. > The best way to understand how to proxy this service (if it is even > possible), is to know what it wants to do, at the network level. I will try to get some information about this applet. > Good luck with it, Thank you. Will you help me if I get more information? TIA, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265320,265387#msg-265387 From nginx-forum at forum.nginx.org Wed Mar 16 11:10:24 2016 From: nginx-forum at forum.nginx.org (Roswebnet) Date: Wed, 16 Mar 2016 07:10:24 -0400 Subject: HTTP/2 and HTTPS In-Reply-To: <56E8F676.3080408@greengecko.co.nz> References: <56E8F676.3080408@greengecko.co.nz> Message-ID: On Ubuntu Server 15.10 I have nginx:

root at LIA-RP-VS-WEB:/etc/nginx/tls# nginx -V
nginx version: nginx/1.9.12
built by gcc 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2)
built with OpenSSL 1.0.2d 9 Jul 2015
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module
--with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_v2_module --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,--as-needed' --with-ipv6

OpenSSL:

root at LIA-RP-VS-WEB:/etc/nginx/tls# openssl version -a -v -b -o -f -p -d
OpenSSL 1.0.2g 1 Mar 2016
built on: reproducible build, date unspecified
platform: debian-amd64
options: bn(64,64) rc4(16x,int) des(idx,cisc,16,int) blowfish(idx)
compiler: gcc -I. -I.. -I../include -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DECP_NISTZ256_ASM
OPENSSLDIR: "/usr/lib/ssl"

The problem with IE 11 still exists. The first connection to a static page is HTTP/2, and if I refresh I get HTTPS in the developer tools. FF and Chrome do not have this problem. I think (I feel, my intuition tells me :) ) it may be due to the self-signed certificate. I have created it following the https://tools.ietf.org/html/rfc7540#section-9.2 requirements.
root at LIA-RP-VS-WEB:/etc/nginx/tls# openssl ecparam -out privatekey.key -name prime256v1 -genkey
root at LIA-RP-VS-WEB:/etc/nginx/tls# openssl req -new -key privatekey.key -out csr.pem
root at LIA-RP-VS-WEB:/etc/nginx/tls# openssl req -x509 -days 365 -key privatekey.key -in csr.pem -out certificate.crt
root at LIA-RP-VS-WEB:/etc/nginx/tls# ll
total 20
drwxr-xr-x 2 root root 4096 Mar 13 20:48 ./
drwxr-xr-x 4 root root 4096 Mar 13 20:45 ../
-rw-r--r-- 1 root root  899 Mar 13 20:48 certificate.crt
-rw-r--r-- 1 root root  530 Mar 13 20:48 csr.pem
-rw-r--r-- 1 root root  302 Mar 13 20:45 privatekey.key

This certificate is used for both HTTPS and HTTP2. P.S.: I saw multiple tutorials where nginx plays a role as a simple forward proxy for HTTP and HTTPS; will it work for HTTP/2? Any idea? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265389#msg-265389 From nginx-forum at forum.nginx.org Wed Mar 16 11:12:20 2016 From: nginx-forum at forum.nginx.org (Roswebnet) Date: Wed, 16 Mar 2016 07:12:20 -0400 Subject: HTTP/2 and HTTPS In-Reply-To: References: <56E8F676.3080408@greengecko.co.nz> Message-ID: Oh yeah, a small addition: because I use only IPs, I cannot test the self-signed certificate with most of the online SSL checking tools. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265390#msg-265390 From nginx-forum at forum.nginx.org Wed Mar 16 11:33:45 2016 From: nginx-forum at forum.nginx.org (miky) Date: Wed, 16 Mar 2016 07:33:45 -0400 Subject: upstream server does not match virtual host In-Reply-To: <20160316083358.GE3340@daoine.org> References: <20160316083358.GE3340@daoine.org> Message-ID: <86f77dbc6a3f3b26f2fc1c741c267e46.NginxMailingListEnglish@forum.nginx.org> Thank you Francis for pointing me in the right direction.
If I understand the documentation correctly, I should add proxy_set_header Host $host; given that I have the upstream server defined by: upstream webservers { server virt1:80; } I tried it but had no luck with it; I also tried $http_host and $proxy_host. Also, should I place this proxy_set_header before the proxy_pass? I think yes; could you confirm this? Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265370,265391#msg-265391 From nginx-forum at forum.nginx.org Wed Mar 16 12:24:33 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 16 Mar 2016 08:24:33 -0400 Subject: HTTP/2 and HTTPS In-Reply-To: References: <56E8F676.3080408@greengecko.co.nz> Message-ID: <23b8da0256081bcdc9d418f97f7541b3.NginxMailingListEnglish@forum.nginx.org> Roswebnet Wrote: ------------------------------------------------------- > On US15.10 I have nginx: > > root at LIA-RP-VS-WEB:/etc/nginx/tls# nginx -V > nginx version: nginx/1.9.12 > built by gcc 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2) > built with OpenSSL 1.0.2d 9 Jul 2015 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ You need g, not d. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265392#msg-265392 From nginx-forum at forum.nginx.org Wed Mar 16 12:30:08 2016 From: nginx-forum at forum.nginx.org (Roswebnet) Date: Wed, 16 Mar 2016 08:30:08 -0400 Subject: HTTP/2 and HTTPS In-Reply-To: <23b8da0256081bcdc9d418f97f7541b3.NginxMailingListEnglish@forum.nginx.org> References: <56E8F676.3080408@greengecko.co.nz> <23b8da0256081bcdc9d418f97f7541b3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4fc44945ada67c9c2269aa382d411642.NginxMailingListEnglish@forum.nginx.org> OK, thank you for pointing that out. This version of nginx I got from the NGINX repository. http://nginx.org/en/linux_packages.html#mainline deb http://nginx.org/packages/mainline/ubuntu/ wily nginx deb-src http://nginx.org/packages/mainline/ubuntu/ wily nginx It means that it was built with an older version of OpenSSL. Am I right?
Therefore, I need to compile nginx by myself... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265326,265393#msg-265393 From sz159357 at yahoo.com Wed Mar 16 16:01:01 2016 From: sz159357 at yahoo.com (SZ) Date: Wed, 16 Mar 2016 16:01:01 +0000 (UTC) Subject: Reverse Proxy without backend private key References: <747150900.1189010.1458144061350.JavaMail.yahoo.ref@mail.yahoo.com> Message-ID: <747150900.1189010.1458144061350.JavaMail.yahoo@mail.yahoo.com> Dear all, I want to set up this: client ----------------- https-nginx (443) ----------------- https-backend (443) but I don't have the private key and I want to cache data... what can I do? -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Mar 16 16:34:12 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 16 Mar 2016 19:34:12 +0300 Subject: dns name for upstream In-Reply-To: References: <5929338.pLYABk6r8D@vbart-workstation> Message-ID: <3139486.3MGE5UZGf5@vbart-workstation> On Friday 11 March 2016 07:01:49 Frank Liu wrote: > Hi Valentin, > Thanks for clearing up . I was looking at the wrong module. > Do you have any comments to the other two questions? 1. That will result in a "502 Bad Gateway" response, and a corresponding message will be written to the error_log. 2. There's the "resolve" parameter of the "server" directive in upstream, but it's available in the commercial version only. See the docs: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server Back to your first question, this parameter has different behavior: it will preserve the old IPs in case of a resolving error. wbr, Valentin V.
Bartenev From wandenberg at gmail.com Wed Mar 16 16:41:07 2016 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Wed, 16 Mar 2016 13:41:07 -0300 Subject: dns name for upstream In-Reply-To: <3139486.3MGE5UZGf5@vbart-workstation> References: <5929338.pLYABk6r8D@vbart-workstation> <3139486.3MGE5UZGf5@vbart-workstation> Message-ID: You can try to use this module to resolve the DNS ;) https://github.com/GUI/nginx-upstream-dynamic-servers On Wed, Mar 16, 2016 at 1:34 PM, Valentin V. Bartenev wrote: > On Friday 11 March 2016 07:01:49 Frank Liu wrote: > > Hi Valentin, > > Thanks for clearing up . I was looking at the wrong module. > > Do you have any comments to the other two questions? > > 1. That will result in "502 Bad Gateway" response, and corresponding > message will be written to error_log. > > 2. There's the "resolve" parameter of the "server" directive in upstream, > but it's available in commercial version only. > > See the docs: > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server > > Back to your first question, this parameter has different behavior, > it will preserve the old IPs in case of resolving error. > > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Wed Mar 16 16:52:40 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 16 Mar 2016 16:52:40 +0000 Subject: upstream server does not match virtual host In-Reply-To: <86f77dbc6a3f3b26f2fc1c741c267e46.NginxMailingListEnglish@forum.nginx.org> References: <20160316083358.GE3340@daoine.org> <86f77dbc6a3f3b26f2fc1c741c267e46.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160316165240.GG3340@daoine.org> On Wed, Mar 16, 2016 at 07:33:45AM -0400, miky wrote: Hi there, > If I understand correctly the documentation I should add > proxy_set_header Host $host; What Host: header is sent by the browser to the upstream web server when things work? From the original mail, that seems to be "virt1". So you want proxy_set_header Host virt1; > Also should I place this proxy_set_header before the proxy_pass ? I think > yes, could you confirm this ? Easiest is to say "in the same {}-block as the proxy_pass". The order of the two within that block does not matter. Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed Mar 16 20:12:47 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Mar 2016 23:12:47 +0300 Subject: proxy_ssl_certificate not working as expected In-Reply-To: <2de119a860223aace08c95141c38093b.NginxMailingListEnglish@forum.nginx.org> References: <2de119a860223aace08c95141c38093b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160316201247.GM12808@mdounin.ru> Hello! On Sun, Mar 13, 2016 at 07:24:05AM -0400, elanh wrote: > Hello, > > I'm using nginx as a proxy to a backend server. > The backend server is also using nginx and enforcing client certificate > authentication using the ssl_client_certificate and ssl_verify_client > directives.
> > In my nginx server I set the following: > > location /proxy { > proxy_pass https://www.backend.com; > > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Server $host; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_ssl_certificate /etc/nginx/cert/client.crt; > proxy_ssl_certificate_key /etc/nginx/cert/client.key; > } > > according to > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_certificate. > > However, the backend is still responding with a 400 response code "No > required SSL certificate was sent". > > Note that when issuing requests to the backend server using wget with the > client certificate, I get a valid 200 OK response. > > What am I missing in my nginx configuration? Configuration looks fine, but likely it's not the configuration which is used to handle the requests. Some basic hints: - make sure to test with something low-level like telnet/curl/wget; browsers often return cached results; - check if the configuration is actually loaded (you can use "nginx -t" to check for syntax errors; look into the error log after a configuration reload to make sure the reload went fine; just stop and then start nginx to make sure); - make sure the location you are configuring is the one used for the requests (a simple test would be to write something like "return 200 ok;" in it and check if "ok" is actually returned). Note well that proxy_ssl_certificate is only available in nginx 1.7.8 and newer. Configuration testing as done by "nginx -t" should complain about unknown directives if you are using an older version. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Wed Mar 16 22:09:21 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 16 Mar 2016 22:09:21 +0000 Subject: Rev.
proxying a java applet In-Reply-To: <1d2dbe882ae8b2a228de1bef02899eec.NginxMailingListEnglish@forum.nginx.org> References: <20160316084300.GF3340@daoine.org> <1d2dbe882ae8b2a228de1bef02899eec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160316220921.GH3340@daoine.org> On Wed, Mar 16, 2016 at 06:43:53AM -0400, aruzsi wrote: Hi there, > > The applet itself will be a http request, which it looks like you have > > working. > > The start page is almost working. > First of all there are some upstream servers which are connected to > different devices (serial line), so I want some URLs to differentiate the > servers or (places). > > So I use http://nginx.server.com//. Start simple. Get just one working first. > Something is working, because IcedTea asks my permission for the > Java thing and the silhouette of the WEB page appears full of error > messages (missing IDs, etc.) > I think this first stage is not correct because of the different (virtual) > URL. Check the apache logs for when you connect to apache directly; you will want to know what the full set of requests and responses is. Then when you go through nginx, do you see the same requests and responses? If not, find out why and fix it. > > What the applet actually *does* is another matter -- and in general, > > only you know what that is. > > Yes, I know this. I just think I know something about the behaviour > of the applet on the network. You will need to know that, if you want to configure nginx to support it. For example: how does the applet know to use port 7000? How does the applet know which server to connect to on port 7000? > > Can you describe the full network traffic when the applet works > > normally? > > Maybe. I can use tcpdump of course. > With rev. proxy and without it (through SSH port forwarding). Is that > right for you? SSH port forwarding is a new thing. Keep it simple. Until you know what is supposed to happen, you will not be able to know whether everything is working the way you want.
> > The machines involved are: > > > > * the client (your browser) > > * the nginx server > > * the upstream web server > > OK. > As far as I can see there is no communication between my client (browser) and > upstream server. Should there be any communication there? > It would not be a problem, because I want to > proxy everything (separated subnet, no route, firewall, etc.), > But my browser needs to communicate on TCP port 7000 to the > nginx or upstream server and it doesn't try. :-( Which server should your browser choose? Why should it choose that server? Learn that first, and then it might be clear what you need to configure. > I think some other Java archives (.jar) are not loaded by my browser. > I don't know why. Which other jars? How does your browser know to load them? Which urls does your browser use for them? Check logs for when things work. > > If nginx were not involved, the client would make a http request of > > the > > upstream web server to fetch the applet, and then... what? > > I don't know. > I tried to check the upstream server's Apache access and error.log > without any information for me except missing some Java files > when nginx is involved in the request. > > The applet runs on the client and tries to access port 7000 of the > > server > > the client got it from? And speaks http on port 7000; or speaks its > > own > > protocol? Does it use anything other than port 7000? > > I think the client tries to connect on TCP/7000, but not HTTP. (without nginx, > so in the normal situation when there is no proxy) > But something is different, because I did a test where TCP/7000 > wasn't included in the SSH port forwarding. The applet started perfectly > without the error messages in it (missing IDs, etc.) but when I tried to > connect to the serial port I got an error message that there was no communication, > which was right because TCP/7000 wasn't forwarded. I do not understand what network traffic you are describing. That's ok - I don't need to understand it.
But you should understand it, and be able to draw a picture of what talks to what. > > If it uses a single port, then possible a nginx stream server{} > > could work. > > Not HTTP protocol, I think. TCP stream between the browser (client) and > remote server. In that case, nginx is not involved, no? How does the browser know to talk to the remote server? > Thank you. Will you help me if I got more information? If you have enough information, it may be clear what is needed. So long as it remains nginx-relevant, keep updating this thread. Someone will probably be able to offer advice. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Mar 16 22:14:21 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 16 Mar 2016 22:14:21 +0000 Subject: Reverse Proxy without backend private key In-Reply-To: <747150900.1189010.1458144061350.JavaMail.yahoo@mail.yahoo.com> References: <747150900.1189010.1458144061350.JavaMail.yahoo.ref@mail.yahoo.com> <747150900.1189010.1458144061350.JavaMail.yahoo@mail.yahoo.com> Message-ID: <20160316221421.GI3340@daoine.org> On Wed, Mar 16, 2016 at 04:01:01PM +0000, SZ wrote: Hi there, > dear allI want to setup this: > client ----------------- https-nginx (443)?----------------- https-backend (443) > but i don't have private key and i want to cache data...what can i do ? You get your own certificate and key in the name of your backend, and use that on nginx. (nginx is a reverse proxy. You reverse-proxy services you control.) f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Mar 16 23:12:34 2016 From: nginx-forum at forum.nginx.org (miky) Date: Wed, 16 Mar 2016 19:12:34 -0400 Subject: upstream server does not match virtual host - SOLVED In-Reply-To: <20160316165240.GG3340@daoine.org> References: <20160316165240.GG3340@daoine.org> Message-ID: Francis, Thank you very much, your help is always appreciated. 
It didn't work in the first place because I also had an include proxy_params statement which overrode the setting you indicated. Retesting from a very basic configuration proved the solution you offered worked. In the end I kept the include proxy_params and modified what I wanted there. Have a nice day Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265370,265417#msg-265417 From nginx-forum at forum.nginx.org Wed Mar 16 23:56:14 2016 From: nginx-forum at forum.nginx.org (ajrpayne) Date: Wed, 16 Mar 2016 19:56:14 -0400 Subject: nginx + openssl 1.0.2 increased memory usage Message-ID: <38bb4102b7238fbaa8d6f583d5ed8139.NginxMailingListEnglish@forum.nginx.org> Hello, We attempted to upgrade from nginx + openssl 1.0.1 to nginx + openssl 1.0.2, but unfortunately we ran into some memory-related issues when running nginx + openssl 1.0.2. When we are running nginx + openssl 1.0.1 as a reverse proxy, our nginx instance uses about 10 gigs of memory, and around 20 gigs during a reload. That makes sense to me, as during the reload we'd have double the number of worker processes. When we are running nginx + openssl 1.0.2 as a reverse proxy, our nginx instance uses about 25 gigs or more of memory, and around 50 gigs or more during a reload, and sometimes that memory is not recovered. Has anyone else noticed similar increases in memory use when combining nginx + openssl 1.0.2? Any ideas what could be the cause of the increase? Just looking for a general direction I should be exploring.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265419,265419#msg-265419 From agentzh at gmail.com Thu Mar 17 05:26:06 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 16 Mar 2016 22:26:06 -0700 Subject: [ANN] OpenResty 1.9.7.4 released Message-ID: Hi folks I am happy to announce the new formal release, 1.9.7.4, of the OpenResty web platform based on NGINX and LuaJIT: https://openresty.org/#Download Both the (portable) source code distribution and the Win32 binary distribution are provided on this Download page. The highlights of this version are 1. New Lua API functions for reverse DNS queries in the lua-resty-dns library. 2. New options to insert custom nginx.conf configuration snippets in the "resty" command-line utility. 3. Various important bug fixes in recently-added new features like balancer_by_lua* and ssl_certificate_by_lua*. Special thanks go to all our developers and contributors! Complete change log since the last (formal) release, 1.9.7.3: * bugfix: "./configure": use of relative paths like "./nginx" in "--prefix=PATH" led to compilation errors. thanks Tao Huang for the report. * upgraded the ngx_lua module to 0.10.2. * feature: the C implementation for setting SSL private keys now supports non-RSA private keys as well. thanks Alessandro Ghedini for the patch. * feature: ngx.log() and print() now accept Lua tables with the "__tostring" metamethod. * feature: added new API, ngx.config.subsystem, which always takes the Lua string value "http" in this module. * feature: added new API function ngx.socket.stream() which is an alias to ngx.socket.tcp(). * feature: added HTTP 2.0 support to ngx.req.http_version(). * feature: this module can now be built as a "dynamic module" with NGINX 1.9.11+ via the "--add-dynamic-module=PATH" option of "./configure". * bugfix: balancer_by_lua* did not respect "lua_code_cache off". thanks XI WANG for the report and Dejiang Zhu for the patch.
* bugfix: hot loop might happen when balancer_by_lua* was used with the keepalive directive. thanks GhostZch for the report. * bugfix: balancer_by_lua* might crash the nginx worker when SSL (https) is used for upstream connections. thanks Alistair Wooldrige for the report. * bugfix: stream-typed cosockets: we did not set the "error" field of the "ngx_connection_t" object which MIGHT lead to socket leaks. * bugfix: avoided a potential memory issue when the request handler is aborted prematurely (via ngx.exit, for example) while a light thread is still waiting on ngx.flush(true). * bugfix: we might not respond to client abort events when lua_check_client_abort is on. * bugfix: fixed the compiler warning "unused variable" when compiling with nginx cores older than 1.7.5 (exclusive). thanks Marc Neudert for the patch. * bugfix: fixed compilation errors with LibreSSL by disabling ssl_certificate_by_lua* and some ngx.ssl API functions that are not supported by LibreSSL. thanks George Liu and Bret for the reports. * bugfix: fixed compilation errors with nginx 1.9.11+. thanks Charles R. Portwood II and Tomas Kvasnicka for the reports. * bugfix: fixed compatibility issues with other nginx modules loaded as "dynamic modules" in NGINX 1.9.11+. * bugfix: SSL: set error message on "i2d_X509()" failure as well. thanks Alessandro Ghedini for the patch. * bugfix: SSL: remove leading white space from error messages. thanks Alessandro Ghedini for the patch. * optimize: use "lua_pushliteral()" for string literals. thanks Tatsuhiko Kubo for the patch. * change: unmatched submatch captures are now set to "false" instead of "nil" in the captures table (named captures are not affected). thanks Julien Desgats for the patch. * change: ngx.req.get_post_args: returns error message instead of raising an exception when request body is in temp file. thanks yejingx for the patch. * change: ngx.shared.DICT: throws a Lua error when the "exptime" argument is invalid.
* doc: documented that ngx.req.get_body_data() is available in the context of log_by_lua*. thanks YuanSheng Wang for the patch. * doc: added balancer_by_lua* and ssl_certificate_by_lua* to the context of some Lua API functions. thanks Dejiang Zhu for the patch. * doc: fixed the documentation of log_by_lua* which actually runs after nginx's access log handler. * doc: updated the documentation for ngx.req.discard_body() to reflect recent changes. now ignoring request bodies indeed trigger discarding the body upon request finalization. * doc: updated the docs of ngx.get_phase() for new Lua execution contexts. thanks Thibault Charbonnier for the patches. * doc: typo fix in sample configurations from othree. * doc: typo fix in sample configurations from Adam Malone. * doc: typo fix from Prayag Verma. * doc: typo fix from leemingtian. * upgraded the lua-resty-core library to 0.1.5. * optimize: ngx.ssl: removed unnecessary request checks from the priv_key_pem_to_der and cert_pem_to_der functions to allow them to be used in more contexts. thanks Tom Thorogood for the patch. * bugfix: resty.core.regex: non-string values passed as string arguments might throw out Lua errors. thanks Robert Paprocki for the patch. * change: resty.core.shdict: throws out a Lua error when the exptime arg is invalid. * change: resty.core.regex: unmatched submatch captures are set to "false" instead of "nil" in captures table. thanks Julien Desgats for the patch. * doc: typo fix from thefosk. * doc: typo fix from Anton Ovchinnikov. * upgraded the ngx_lua_upstream module to 0.05. * feature: expose peer connection count as the "conns" Lua table field. thanks Justin Li for the patch. * feature: this module can now be built as a "dynamic module" with NGINX 1.9.11+ via the "--add-dynamic-module=PATH" option of "./configure". thanks Hiroaki Nakamura for the original patch. * doc: fixes from Justin Li. * upgraded the lua-resty-upstream-healthcheck library to 0.04. 
* feature: added IPv6 address support in upstream peer names. thanks szelcsanyi for the patch. * feature: status_page(): now we mark those upstream blocks without any (live) health checkers so as to avoid potential confusions when the checker light threads were aborted due to some fatal errors. * refactor: various coding refactoring to improve code readability. thanks Thijs Schreijer and Dejiang Zhu for the patches. * optimize: minor Lua code improvements from Aapo Talvensaari. * doc: link fixes from Thijs Schreijer. * doc: fixed escaping issues in the configuration samples in the Synopsis section by migrating to the "*_by_lua_block {}" directives. thanks whatacold for the report. * upgraded the lua-resty-dns library to 0.15. * feature: added reverse DNS utilities: reverse_query, arpa_str, and expand_ipv6_addr. thanks bjoe2k4 for the patch. * upgraded resty-cli to 0.06. * feature: resty: added new options "--http-include=PATH" and "--main-include=PATH" to include user files in the auto-generated "nginx.conf" file. thanks Nils Nordman for the patch. * upgraded the ngx_set_misc module to 0.30. * feature: this module can now be compiled as a dynamic module with NGINX 1.9.11+ via the "--with-dynamic-module=PATH" option of "./configure". * bugfix: fixed errors and warnings with C compilers without variadic macro support. * upgraded the ngx_array_var module to 0.05. * feature: this module can now be compiled as a dynamic module with NGINX 1.9.11+ via the "--with-dynamic-module=PATH" option of "./configure". * bugfix: fixed errors and warnings with C compilers without variadic macro support. The HTML version of the change log with lots of helpful hyper-links can be browsed here: https://openresty.org/#ChangeLog1009007 OpenResty (aka. ngx_openresty) is a full-fledged web platform by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. 
See OpenResty's homepage for details: https://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: https://qa.openresty.org Have fun! -agentzh From nginx-forum at forum.nginx.org Thu Mar 17 11:04:11 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Thu, 17 Mar 2016 07:04:11 -0400 Subject: nginx http2 pictures timeout Message-ID: <3ab1fdc5024d6f1a7df64785f5d8dcb6.NginxMailingListEnglish@forum.nginx.org> Hi All, After I upgraded nginx to 1.9.12 and enabled http2 for my website, I found a weird issue related to downloading pictures. My website is a photo-sharing website. So on each page there are about 100-200 pictures, and the size of each of them may range from 10K to 500K. In the past (http and https with spdy), I was using the settings below: client_body_timeout 10; client_header_timeout 10; keepalive_timeout 30; send_timeout 30; keepalive_requests 200; keepalive_disable none; reset_timedout_connection on; And everything was good. Users from different countries could view the pictures without any error. But after I upgraded to http2, with the same settings, it seems that for users who have good bandwidth, everything is fine. But some users from different countries experience an issue where some pictures on the webpage can't display properly. And if they refresh the webpage once or more, then the whole webpage displays as normal: all pictures are downloaded and displayed. I tried to change the send_timeout value from 30s to 300s, and it seems that fixes the problem. I used firefox to monitor the webpage load speed; I can see that for the pictures on the webpage the waiting time is pretty long, sometimes more than 60s - 120s, and then firefox starts receiving the pictures.
My understanding is that after enabling http2, nginx is using "Multiplexing and concurrency" to send out pictures, which means that for a webpage with 200 pictures, the web browser is ready to receive all pictures as soon as the connection is established. But clients who have limited network bandwidth to the server can only download the pictures slowly, and then some of the pictures will time out and can't display. So, is there any better solution to fix this kind of issue? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265430,265430#msg-265430 From phil at dunlop-lello.uk Thu Mar 17 11:41:28 2016 From: phil at dunlop-lello.uk (Phil Lello) Date: Thu, 17 Mar 2016 11:41:28 +0000 Subject: nginx http2 pictures timeout In-Reply-To: <3ab1fdc5024d6f1a7df64785f5d8dcb6.NginxMailingListEnglish@forum.nginx.org> References: <3ab1fdc5024d6f1a7df64785f5d8dcb6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Are you able to test with an alternate brand of browser to isolate whether this is a client or server issue? Does turning off http/2 fix the issue? It's possible but unlikely that network conditions have changed at the same time as your move to http/2. Phil Hi All, After I upgraded nginx to 1.9.12 and enabled http2 for my website, I found a weird issue related to downloading pictures. My website is a photo-sharing website. So on each page there are about 100-200 pictures, and the size of each of them may range from 10K to 500K. In the past (http and https with spdy), I was using the settings below: client_body_timeout 10; client_header_timeout 10; keepalive_timeout 30; send_timeout 30; keepalive_requests 200; keepalive_disable none; reset_timedout_connection on; And everything was good. Users from different countries could view the pictures without any error. But after I upgraded to http2, with the same settings, it seems that for users who have good bandwidth, everything is fine.
But some users from different countries experience an issue where some pictures on the webpage can't display properly. And if they refresh the webpage once or more, then the whole webpage displays as normal: all pictures are downloaded and displayed. I tried to change the send_timeout value from 30s to 300s, and it seems that fixes the problem. I used firefox to monitor the webpage load speed; I can see that for the pictures on the webpage the waiting time is pretty long, sometimes more than 60s - 120s, and then firefox starts receiving the pictures. My understanding is that after enabling http2, nginx is using "Multiplexing and concurrency" to send out pictures, which means that for a webpage with 200 pictures, the web browser is ready to receive all pictures as soon as the connection is established. But clients who have limited network bandwidth to the server can only download the pictures slowly, and then some of the pictures will time out and can't display. So, is there any better solution to fix this kind of issue? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265430,265430#msg-265430 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Mar 17 13:23:17 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 17 Mar 2016 16:23:17 +0300 Subject: nginx http2 pictures timeout In-Reply-To: <3ab1fdc5024d6f1a7df64785f5d8dcb6.NginxMailingListEnglish@forum.nginx.org> References: <3ab1fdc5024d6f1a7df64785f5d8dcb6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2575407.3zCVz9Pomt@vbart-workstation> On Thursday 17 March 2016 07:04:11 meteor8488 wrote: > Hi All, > > After I upgrade nginx to 1.9.12 and enabled http2 for my website. I found a > wired issue related with download pictures. > > My website is a photo-sharing websites.
So on each page there are about > 100-200 pictures, the size of each of them may from 10K to 500K. > > In the past (http and https with spdy), I'm using below settings: > > client_body_timeout 10; > client_header_timeout 10; > keepalive_timeout 30; > send_timeout 30; > > keepalive_requests 200; > keepalive_disable none; > reset_timedout_connection on; > > And everything is good. For users from different countries, they can view > the pictures without any error. > > But after I upgrade to http2, with the same settings, it seems for users who > have a good bandwidth, everything is fine. But for users who are from > different countries, some of them experience issue that on the webpage some > pictures can't display properly. And if they refresh the webpage for 1 times > or more, then the whole webpage can display as normal, all pictures are > downloaded and display. > > > I tried to change send_timeout value from 30s to 300s, then it seems it can > fix the problem. I used firefox to monitor the webpage load speed, I can see > for the pictures of the webpage, the waiting time is pretty long, sometimes > it's more than 60s - 120s, and then firefox start to receiving the > pictures. > > My understanding is that after enable http2, nginx are using "Multiplexing > and concurrency" to send out pictures, which mean for the webpage with 200 > pictures, the web browser is ready to receive all pictures when the > connections is established. But for clients who has limited network > bandwidth to the server, they can only download the pictures slowly. And > then some of the pictures will be timeout and can't display. > > So, is there any better solution to fix this kind of issue? > [..] You can try to lower concurrency in HTTP/2 connection. See the "http2_max_concurrent_streams" directive: http://nginx.org/r/http2_max_concurrent_streams wbr, Valentin V. 
Bartenev From nginx-forum at forum.nginx.org Thu Mar 17 13:52:02 2016 From: nginx-forum at forum.nginx.org (George) Date: Thu, 17 Mar 2016 09:52:02 -0400 Subject: [ANN] OpenResty 1.9.7.4 released In-Reply-To: References: Message-ID: <349925563d650b4a25c7cc94e58424f1.NginxMailingListEnglish@forum.nginx.org> cheers agentzh, thanks for that workaround for the LibreSSL and ssl_certificate_by_lua* incompatibility :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265422,265450#msg-265450 From nginx-forum at forum.nginx.org Thu Mar 17 23:32:40 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Thu, 17 Mar 2016 19:32:40 -0400 Subject: nginx http2 pictures timeout In-Reply-To: References: Message-ID: Hi, Thanks for your reply. I tried disabling http/2, and the issue was fixed. So I'm pretty sure this issue is caused by http2. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265430,265453#msg-265453 From nginx-forum at forum.nginx.org Thu Mar 17 23:34:07 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Thu, 17 Mar 2016 19:34:07 -0400 Subject: nginx http2 pictures timeout In-Reply-To: <2575407.3zCVz9Pomt@vbart-workstation> References: <2575407.3zCVz9Pomt@vbart-workstation> Message-ID: <1581cf4b18b8766531d09e9a86eeee65.NginxMailingListEnglish@forum.nginx.org> Thanks for your reply. The default value for http2_max_concurrent_streams is 128. I tried changing it to 64; no big difference.
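For reference, here is a minimal sketch of where this directive sits (it is valid at http or server level); the listen line and server_name are illustrative assumptions, while the values are the ones discussed in this thread:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;  # illustrative

    # cap parallel streams per HTTP/2 connection (nginx default: 128)
    http2_max_concurrent_streams 64;

    # the longer send_timeout reported earlier to work around the picture timeouts
    send_timeout 300;
}
```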
And I also checked the HTTP/2 documentation; it suggests that this value should not be less than 100 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265430,265454#msg-265454 From nginx-forum at forum.nginx.org Fri Mar 18 13:45:58 2016 From: nginx-forum at forum.nginx.org (malish8632) Date: Fri, 18 Mar 2016 09:45:58 -0400 Subject: limit_req is not working with dynamically extracted user address Message-ID: <2cf59f93e5e235abb819b366fc5a1e01.NginxMailingListEnglish@forum.nginx.org> Hi, our HTTP block looks like the one below, where we extract the IP from X-Forwarded-For using the perl module. It looks like the zone and limit_req are not using the correct variable $user_real_ip, or it is reset right after logging. The nginx_access_log shows the first logged field correctly as $user_real_ip - which is the first element of the comma-separated IP addresses in the X-Forwarded-For header from the different hops. User requests come through several different CDN and DDoS services. But when limit_req kicks in, it for some reason takes the last element of the same X-Forwarded-For header, which is of course not our intention. Can you help us understand why and what we are doing wrong? Also, note that we are using version 1.6.2. We tried upgrading to version 1.8 but it didn't help.
Thanks, Sergey Example of nginx_access_log and following nginx_error_log ------------------------------------------------------------------------------------- 555.182.61.171 - - - www.my.com - [17/Mar/2016:17:44:15 -0400] "GET /our/api/here HTTP/1.1" 503 270 0.000 "-" "Java/1.8.0_51" "555.182.61.171, 333.101.98.188" - 2016/03/17 17:44:15 [error] 19382#0: *8 limiting requests, excess: 5.613 by zone "one", client: 333.101.98.188, server: www.my.com, request: "GET /our/api/here HTTP/1.1", host: "www.my.com" ------------------------------------------------------------------------------------- HTTP block --------------------------------------------- http { include /etc/nginx/mime.types; include /etc/nginx/proxy.conf; index index.html index.htm index.php; .... perl_set $user_real_ip ' sub { my $r = shift; my $str = $r->header_in("X-Forwarded-For"); my @fields = split /,/, $str; my $real_ip = $fields[0]; return $real_ip; } '; log_format main '$user_real_ip - $remote_addr - $remote_user - $host - [$time_local] "$request" ' '$status $body_bytes_sent $request_time "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" $http_cf_ray'; sendfile on; keepalive_timeout 20; limit_req_zone $user_real_ip zone=one:50m rate=1r/s; include /etc/nginx/conf.d/*.conf; } www.conf location ------------------------------- location /xxxx/ { limit_req zone=one burst=5 nodelay; ..... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265461,265461#msg-265461 From mdounin at mdounin.ru Fri Mar 18 14:21:32 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 Mar 2016 17:21:32 +0300 Subject: limit_req is not working with dynamically extracted user address In-Reply-To: <2cf59f93e5e235abb819b366fc5a1e01.NginxMailingListEnglish@forum.nginx.org> References: <2cf59f93e5e235abb819b366fc5a1e01.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160318142132.GH12808@mdounin.ru> Hello! 
On Fri, Mar 18, 2016 at 09:45:58AM -0400, malish8632 wrote: > Hi, if our HTTP block looks like below where we find IP from X-Forwarded-For > using perl module it looks like zone and limit_req are not using correct > variable $user_real_ip or it is reset right after logging. > > The nginx_access_log shows the first logged field correctly as $user_real_ip > - which is first element of comma separated IP addresses in X-Forwarded-For > header from different hopes. User request comes thorough several different > CDN and DDOS services. > > But when limit_req kicks in it would take for some reason last element in > same header X-Forwarded-For which of course not our intention. > > Can you help understand why and what we are doing wrong? How did you find out that limit_req uses a wrong element? Note: > 2016/03/17 17:44:15 [error] 19382#0: *8 limiting requests, excess: 5.613 by > zone "one", client: 333.101.98.188, server: www.my.com, request: "GET > /our/api/here HTTP/1.1", host: "www.my.com" The "client: 333.101.98.188" is the client address as automatically logged by nginx for all error messages (also known as $remote_addr). It's not related to the limit string used by limit_req. [...] > perl_set $user_real_ip ' > > sub { > my $r = shift; > my $str = $r->header_in("X-Forwarded-For"); > my @fields = split /,/, $str; > my $real_ip = $fields[0]; > return $real_ip; > } > > '; Note well: this can be easily tricked by clients using a fake address in the initial X-Forwarded-For header.
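To illustrate that note: the first element of X-Forwarded-For is whatever the client chose to send before the proxy chain started appending addresses. A minimal Python sketch of the same first-element logic as the perl_set snippet (the helper name is made up for illustration):

```python
def first_xff(header: str) -> str:
    # Mimics the perl_set snippet: take the first comma-separated element.
    return header.split(",")[0].strip()

# Normal case: the first hop recorded the real client address first.
print(first_xff("555.182.61.171, 333.101.98.188"))   # 555.182.61.171

# Spoofed case: the client sent its own X-Forwarded-For header, so the
# first element (and thus the limit_req key) is attacker-controlled.
print(first_xff("1.2.3.4, 555.182.61.171, 333.101.98.188"))  # 1.2.3.4
```

An attacker can therefore rotate the rate-limit key at will, which is why a trusted-proxy scheme such as the realip module is preferable.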
You may want to use the realip module instead; it lets you trust only known proxies, see here: http://nginx.org/en/docs/http/ngx_http_realip_module.html -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Mar 18 14:48:56 2016 From: nginx-forum at forum.nginx.org (malish8632) Date: Fri, 18 Mar 2016 10:48:56 -0400 Subject: limit_req is not working with dynamically extracted user address In-Reply-To: <20160318142132.GH12808@mdounin.ru> References: <20160318142132.GH12808@mdounin.ru> Message-ID: Hi Maxim, thank you for the quick response. > How did you found that limit_req uses a wrong element? We don't know if this is limit_req - in reality we were just looking into the logs, and I guess that's what confused us. We observed those IPs and rolled back the changes, as we assumed that all requests from the CDN or DDoS service were blocked. The only way, I guess, to verify that our current scheme works is to use some arbitrary IP and see whether our requests are blocked rather than the CDN service IP. We've looked into http://nginx.org/en/docs/http/ngx_http_realip_module.html and we are not sure if it is going to work. As you saw in one of the examples, we have other services in front of us. There are 2 cases: User -> DDOS Service -> Our NGINX - X-Forwarded-For ex: 555.182.61.171, 333.101.98.188 User -> CDN -> DDOS Service -> Our NGINX - X-Forwarded-For ex: 555.182.61.171, 444.1.3.56, 555.12.34.567, 333.101.98.188 Will the realip module be able to identify the real IP of the end user? Should we set the CIDRs of both the DDoS service and the CDN service as trusted addresses: set_real_ip_from 192.168.1.0/24; set_real_ip_from 192.168.2.1; set_real_ip_from 2001:0db8::/32; Thanks again.
Sergey Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265461,265491#msg-265491 From mdounin at mdounin.ru Fri Mar 18 15:10:30 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 Mar 2016 18:10:30 +0300 Subject: limit_req is not working with dynamically extracted user address In-Reply-To: References: <20160318142132.GH12808@mdounin.ru> Message-ID: <20160318151030.GK12808@mdounin.ru> Hello! On Fri, Mar 18, 2016 at 10:48:56AM -0400, malish8632 wrote: > > How did you found that limit_req uses a wrong element? > > We don't know if this is limit_req - in reality we were just looking into > logs and I guess that's what confused us. We observed those IPs and rolled > back the changes as we assumed that all requests from CDN or DDOS Service > were blocked. > > The only way to I guess to verify that our current schema works is to use > some arbitrary IP and see if our requests are blocked rather then CDN > service IP is blocked. Ok, so no problem here. > We've looked into http://nginx.org/en/docs/http/ngx_http_realip_module.html > and not sure if it is going to work. > > As you saw one of the examples we have other services in front of us. > There are 2 cases: > User -> DDOS Service -> Our NGINX - X-Forwarded-For ex: > 555.182.61.171, 333.101.98.188 > User -> CDN -> DDOS Service -> Our NGINX - X-Forwarded-For ex: > 555.182.61.171, 444.1.3.56, 555.12.34.567, 333.101.98.188 > > Will realip module able to identify real IP of end user? > Should we set CIDR of both DDOS Service and CDN Service as real ip tables: > > set_real_ip_from 192.168.1.0/24; > set_real_ip_from 192.168.2.1; > set_real_ip_from 2001:0db8::/32; The realip module uses last non-trusted address from the header (assuming real_ip_recursive is set). 
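That right-to-left selection can be sketched in Python (illustrative only, not the module's actual code; the trusted networks and addresses below are hypothetical documentation-range values, since the thread's examples like 555.x are not valid IPs):

```python
import ipaddress

TRUSTED = [ipaddress.ip_network(n) for n in
           ("203.0.113.0/24",      # hypothetical DDoS-mitigation range
            "198.51.100.0/24")]    # hypothetical CDN range

def is_trusted(addr: str) -> bool:
    return any(ipaddress.ip_address(addr) in net for net in TRUSTED)

def real_ip(remote_addr: str, xff: str) -> str:
    """Walk X-Forwarded-For right to left, skipping trusted hops
    (roughly what real_ip_recursive on does), and stop at the first
    address that is not a known proxy."""
    hops = [h.strip() for h in xff.split(",") if h.strip()]
    current = remote_addr
    while is_trusted(current) and hops:
        current = hops.pop()   # step back to the previous hop
    return current

# Connection arrives from the DDoS service; the CDN is the previous hop.
print(real_ip("203.0.113.8", "192.0.2.71, 198.51.100.5"))  # 192.0.2.71
```

Because only listed networks are skipped, a fake address a client prepends to X-Forwarded-For never becomes the result unless the client's own address is also trusted.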
So you have to instruct it to trust addresses of your DDoS mitigation service and CDN, e.g.: set_real_ip_from ; set_real_ip_from ; real_ip_header X-Forwarded-For; real_ip_recursive on; -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Sat Mar 19 20:10:19 2016 From: nginx-forum at forum.nginx.org (shiz) Date: Sat, 19 Mar 2016 16:10:19 -0400 Subject: nginx 1.9.12 proxy_cache always returns MISS Message-ID: <849b35c0b34b2e0dcdc98a83edbcf373.NginxMailingListEnglish@forum.nginx.org> Been playing with this for 2 days. proxy_pass is working correctly but the proxy_cache_path remains empty whatever I make. Here's the source I use for tests: root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I http://www.kuriyama-truck.com/images/parts/13375/thumbnail_0/1_1.jpg HTTP/1.1 200 OK Date: Sat, 19 Mar 2016 18:15:16 GMT Server: Apache/2.4.16 (Amazon) PHP/5.6.17 Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT ETag: "d0f3-52daab51fbe80" Accept-Ranges: bytes Content-Length: 53491 Content-Type: image/jpeg Now here's the response from the nginx: root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg HTTP/1.1 200 OK Server: nginx Date: Sat, 19 Mar 2016 18:14:46 GMT Content-Type: image/jpeg Content-Length: 53491 Connection: keep-alive Expires: Sun, 19 Mar 2017 18:14:46 GMT Cache-Control: max-age=31536000 Cache-Control: public X-Cache-Status: MISS Accept-Ranges: bytes Here are the request headers from my browser: GET /kuriyamacache/images/parts/13375/thumbnail_1/1_1.jpg HTTP/1.1 Host: dev.ts-export.com Connection: keep-alive Cache-Control: max-age=0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 Upgrade-Insecure-Requests: 1 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36 DNT: 1 Accept-Encoding: gzip, deflate, sdch Accept-Language: 
fr-CA,fr;q=0.8,en-US;q=0.6,en;q=0.4,ja;q=0.2,de;q=0.2 Cookie: PRUM_EPISODES=s=1458412951203 Part of my setup: proxy_cache_path /tmp/nginx/dev levels=1:2 keys_zone=my_zone:10m max_size=10g inactive=60m use_temp_path=off; server { set $rule_3 0; set $rule_4 0; set $rule_5 0; set $rule_8 0; set $rule_9 0; server_name dev.ts-export.com; listen 80; listen [::]:80; root /home/tsuchi/public_html; if ($reject_those_args) { return 403; } include snippets/filters.conf; error_page 404 /404.html; if ($request_uri ~ "^/index.(php|html?)$" ) { return 301 $scheme://dev.ts-export.com; } # no SSL for IE6-8 on XP and Android 2.3 if ($scheme = https) { set $rule_8 1$rule_8; } if ($http_user_agent ~ "MSIE (6|7|8).*Windows NT 5|Android 2\.3"){ set $rule_8 2$rule_8; } if ($rule_8 = "210"){ return 301 http://dev.ts-export.com$request_uri; } location = / { allow all; } location = /robots.txt { add_header X-Robots-Tag noindex; } location '/.well-known/acme-challenge' { default_type "text/plain"; root /tmp/letsencrypt-auto; } include snippets/proxyimg.conf; location / { try_files $uri $uri/ @rewrites; allow all; } (...) 
} Contents of proxyimg.conf: location ^~ /kuriyamacache { expires 1y; access_log off; log_not_found off; resolver 127.0.0.1; proxy_pass http://www.kuriyama-truck.com/; proxy_cache my_zone; proxy_cache_key "$scheme$request_method$host$request_uri"; proxy_buffering on; proxy_cache_valid 200 301 302 60m; proxy_cache_valid 404 1m; proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504; proxy_cache_revalidate on; proxy_cache_lock on; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_ignore_headers X-Accel-Redirect X-Accel-Expires X-Accel-Limit-Rate X-Accel-Buffering X-Accel-Charset Set-Cookie Cache-Control Vary Expires; proxy_pass_header ETag; proxy_hide_header Cache-Control; add_header Cache-Control "public, max-age=31536000"; add_header X-Cache-Status $upstream_cache_status; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265504,265504#msg-265504 From zxcvbn4038 at gmail.com Sat Mar 19 20:33:33 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Sat, 19 Mar 2016 16:33:33 -0400 Subject: nginx 1.9.12 proxy_cache always returns MISS In-Reply-To: <849b35c0b34b2e0dcdc98a83edbcf373.NginxMailingListEnglish@forum.nginx.org> References: <849b35c0b34b2e0dcdc98a83edbcf373.NginxMailingListEnglish@forum.nginx.org> Message-ID: I think I've run into the problem before - move the proxypass statement from the top of the location stanza to the bottom, and I think that will solve your issue. On Sat, Mar 19, 2016 at 4:10 PM, shiz wrote: > Been playing with this for 2 days. > > proxy_pass is working correctly but the proxy_cache_path remains empty > whatever I make. 
> > > Here's the source I use for tests: > root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I > http://www.kuriyama-truck.com/images/parts/13375/thumbnail_0/1_1.jpg > HTTP/1.1 200 OK > Date: Sat, 19 Mar 2016 18:15:16 GMT > Server: Apache/2.4.16 (Amazon) PHP/5.6.17 > Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT > ETag: "d0f3-52daab51fbe80" > Accept-Ranges: bytes > Content-Length: 53491 > Content-Type: image/jpeg > > Now here's the response from the nginx: > root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I > > http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg > HTTP/1.1 200 OK > Server: nginx > Date: Sat, 19 Mar 2016 18:14:46 GMT > Content-Type: image/jpeg > Content-Length: 53491 > Connection: keep-alive > Expires: Sun, 19 Mar 2017 18:14:46 GMT > Cache-Control: max-age=31536000 > Cache-Control: public > X-Cache-Status: MISS > Accept-Ranges: bytes > > Here are the request headers from my browser: > GET /kuriyamacache/images/parts/13375/thumbnail_1/1_1.jpg HTTP/1.1 > Host: dev.ts-export.com > Connection: keep-alive > Cache-Control: max-age=0 > Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 > Upgrade-Insecure-Requests: 1 > User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, > like Gecko) Chrome/49.0.2623.87 Safari/537.36 > DNT: 1 > Accept-Encoding: gzip, deflate, sdch > Accept-Language: fr-CA,fr;q=0.8,en-US;q=0.6,en;q=0.4,ja;q=0.2,de;q=0.2 > Cookie: PRUM_EPISODES=s=1458412951203 > > Part of my setup: > > proxy_cache_path /tmp/nginx/dev levels=1:2 keys_zone=my_zone:10m > max_size=10g inactive=60m use_temp_path=off; > > server { > > set $rule_3 0; > set $rule_4 0; > set $rule_5 0; > set $rule_8 0; > set $rule_9 0; > > server_name dev.ts-export.com; > listen 80; > listen [::]:80; > > root /home/tsuchi/public_html; > > if ($reject_those_args) { > return 403; > } > > include snippets/filters.conf; > > error_page 404 /404.html; > > if ($request_uri ~ "^/index.(php|html?)$" 
) { > return 301 $scheme://dev.ts-export.com; > } > > > # no SSL for IE6-8 on XP and Android 2.3 > if ($scheme = https) { > set $rule_8 1$rule_8; > } > if ($http_user_agent ~ "MSIE (6|7|8).*Windows NT 5|Android 2\.3"){ > set $rule_8 2$rule_8; > } > if ($rule_8 = "210"){ > return 301 http://dev.ts-export.com$request_uri; > } > > > location = / { > allow all; > } > > location = /robots.txt { > add_header X-Robots-Tag noindex; > } > > > location '/.well-known/acme-challenge' { > default_type "text/plain"; > root /tmp/letsencrypt-auto; > } > > include snippets/proxyimg.conf; > > location / { > try_files $uri $uri/ @rewrites; > allow all; > } > (...) > } > > Contents of proxyimg.conf: > > location ^~ /kuriyamacache { > expires 1y; > access_log off; > log_not_found off; > resolver 127.0.0.1; > > proxy_pass http://www.kuriyama-truck.com/; > > proxy_cache my_zone; > proxy_cache_key "$scheme$request_method$host$request_uri"; > proxy_buffering on; > > proxy_cache_valid 200 301 302 60m; > proxy_cache_valid 404 1m; > proxy_cache_use_stale error timeout http_500 http_502 http_503 > http_504; > proxy_cache_revalidate on; > proxy_cache_lock on; > > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_ignore_headers X-Accel-Redirect X-Accel-Expires > X-Accel-Limit-Rate > X-Accel-Buffering X-Accel-Charset Set-Cookie Cache-Control Vary Expires; > > proxy_pass_header ETag; > proxy_hide_header Cache-Control; > > add_header Cache-Control "public, max-age=31536000"; > add_header X-Cache-Status $upstream_cache_status; > > } > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265504,265504#msg-265504 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lucas at slcoding.com Sat Mar 19 20:43:48 2016 From: lucas at slcoding.com (Lucas Rolff) Date: Sat, 19 Mar 2016 21:43:48 +0100 Subject: nginx 1.9.12 proxy_cache always returns MISS In-Reply-To: References: <849b35c0b34b2e0dcdc98a83edbcf373.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56EDBA04.1050201@slcoding.com> Seems like it's resolved: $ curl -I http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg HTTP/1.1 200 OK Server: nginx Date: Sat, 19 Mar 2016 20:42:46 GMT Content-Type: image/jpeg Content-Length: 53491 Connection: keep-alive Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT ETag: "d0f3-52daab51fbe80" Expires: Sun, 19 Mar 2017 20:42:46 GMT Cache-Control: max-age=31536000 Cache-Control: public, max-age=31536000 X-Cache-Status: MISS Accept-Ranges: bytes $ curl -I http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg HTTP/1.1 200 OK Server: nginx Date: Sat, 19 Mar 2016 20:42:48 GMT Content-Type: image/jpeg Content-Length: 53491 Connection: keep-alive Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT ETag: "d0f3-52daab51fbe80" Expires: Sun, 19 Mar 2017 20:42:48 GMT Cache-Control: max-age=31536000 Cache-Control: public, max-age=31536000 X-Cache-Status: HIT Accept-Ranges: bytes CJ Ess wrote: > I think I've run into the problem before - move the proxypass > statement from the top of the location stanza to the bottom, and I > think that will solve your issue. > > > On Sat, Mar 19, 2016 at 4:10 PM, shiz > wrote: > > Been playing with this for 2 days. > > proxy_pass is working correctly but the proxy_cache_path remains empty > whatever I make. 
> > > Here's the source I use for tests: > root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I > http://www.kuriyama-truck.com/images/parts/13375/thumbnail_0/1_1.jpg > HTTP/1.1 200 OK > Date: Sat, 19 Mar 2016 18:15:16 GMT > Server: Apache/2.4.16 (Amazon) PHP/5.6.17 > Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT > ETag: "d0f3-52daab51fbe80" > Accept-Ranges: bytes > Content-Length: 53491 > Content-Type: image/jpeg > > Now here's the response from the nginx: > root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I > http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg > HTTP/1.1 200 OK > Server: nginx > Date: Sat, 19 Mar 2016 18:14:46 GMT > Content-Type: image/jpeg > Content-Length: 53491 > Connection: keep-alive > Expires: Sun, 19 Mar 2017 18:14:46 GMT > Cache-Control: max-age=31536000 > Cache-Control: public > X-Cache-Status: MISS > Accept-Ranges: bytes > > Here are the request headers from my browser: > GET /kuriyamacache/images/parts/13375/thumbnail_1/1_1.jpg HTTP/1.1 > Host: dev.ts-export.com > Connection: keep-alive > Cache-Control: max-age=0 > Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 > Upgrade-Insecure-Requests: 1 > User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 > (KHTML, > like Gecko) Chrome/49.0.2623.87 Safari/537.36 > DNT: 1 > Accept-Encoding: gzip, deflate, sdch > Accept-Language: fr-CA,fr;q=0.8,en-US;q=0.6,en;q=0.4,ja;q=0.2,de;q=0.2 > Cookie: PRUM_EPISODES=s=1458412951203 > > Part of my setup: > > proxy_cache_path /tmp/nginx/dev levels=1:2 keys_zone=my_zone:10m > max_size=10g inactive=60m use_temp_path=off; > > server { > > set $rule_3 0; > set $rule_4 0; > set $rule_5 0; > set $rule_8 0; > set $rule_9 0; > > server_name dev.ts-export.com ; > listen 80; > listen [::]:80; > > root /home/tsuchi/public_html; > > if ($reject_those_args) { > return 403; > } > > include snippets/filters.conf; > > error_page 404 /404.html; > > if ($request_uri ~ 
"^/index.(php|html?)$" ) { > return 301 $scheme://dev.ts-export.com ; > } > > > # no SSL for IE6-8 on XP and Android 2.3 > if ($scheme = https) { > set $rule_8 1$rule_8; > } > if ($http_user_agent ~ "MSIE (6|7|8).*Windows NT 5|Android 2\.3"){ > set $rule_8 2$rule_8; > } > if ($rule_8 = "210"){ > return 301 http://dev.ts-export.com$request_uri; > } > > > location = / { > allow all; > } > > location = /robots.txt { > add_header X-Robots-Tag noindex; > } > > > location '/.well-known/acme-challenge' { > default_type "text/plain"; > root /tmp/letsencrypt-auto; > } > > include snippets/proxyimg.conf; > > location / { > try_files $uri $uri/ @rewrites; > allow all; > } > (...) > } > > Contents of proxyimg.conf: > > location ^~ /kuriyamacache { > expires 1y; > access_log off; > log_not_found off; > resolver 127.0.0.1; > > proxy_pass http://www.kuriyama-truck.com/; > > proxy_cache my_zone; > proxy_cache_key "$scheme$request_method$host$request_uri"; > proxy_buffering on; > > proxy_cache_valid 200 301 302 60m; > proxy_cache_valid 404 1m; > proxy_cache_use_stale error timeout http_500 http_502 http_503 > http_504; > proxy_cache_revalidate on; > proxy_cache_lock on; > > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_ignore_headers X-Accel-Redirect X-Accel-Expires > X-Accel-Limit-Rate > X-Accel-Buffering X-Accel-Charset Set-Cookie Cache-Control Vary > Expires; > > proxy_pass_header ETag; > proxy_hide_header Cache-Control; > > add_header Cache-Control "public, max-age=31536000"; > add_header X-Cache-Status $upstream_cache_status; > > } > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265504,265504#msg-265504 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > 
http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Sat Mar 19 20:56:21 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Sat, 19 Mar 2016 16:56:21 -0400 Subject: nginx 1.9.12 proxy_cache always returns MISS In-Reply-To: <56EDBA04.1050201@slcoding.com> References: <849b35c0b34b2e0dcdc98a83edbcf373.NginxMailingListEnglish@forum.nginx.org> <56EDBA04.1050201@slcoding.com> Message-ID: Great! =) Make sure proxy buffering stays on - that will bypass the cache if turned off, and make sure your key space is large because you'll throw 500s for everything if it runs out (I figured it would evict a key if it ran out of space, and what was a wrong assumption) On Sat, Mar 19, 2016 at 4:43 PM, Lucas Rolff wrote: > Seems like it's resolved: > > $ curl -I > http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg > HTTP/1.1 200 OK > Server: nginx > Date: Sat, 19 Mar 2016 20:42:46 GMT > Content-Type: image/jpeg > Content-Length: 53491 > Connection: keep-alive > Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT > ETag: "d0f3-52daab51fbe80" > Expires: Sun, 19 Mar 2017 20:42:46 GMT > Cache-Control: max-age=31536000 > Cache-Control: public, max-age=31536000 > X-Cache-Status: MISS > Accept-Ranges: bytes > > $ curl -I > http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg > HTTP/1.1 200 OK > Server: nginx > Date: Sat, 19 Mar 2016 20:42:48 GMT > Content-Type: image/jpeg > Content-Length: 53491 > Connection: keep-alive > Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT > ETag: "d0f3-52daab51fbe80" > Expires: Sun, 19 Mar 2017 20:42:48 GMT > Cache-Control: max-age=31536000 > Cache-Control: public, max-age=31536000 > X-Cache-Status: HIT > Accept-Ranges: bytes > > > CJ Ess wrote: > > I think I've run into the problem before - move the proxypass statement > from the top of the location stanza to the bottom, and I think that will > solve your issue. 
> > > On Sat, Mar 19, 2016 at 4:10 PM, shiz wrote: > >> Been playing with this for 2 days. >> >> proxy_pass is working correctly but the proxy_cache_path remains empty >> whatever I make. >> >> >> Here's the source I use for tests: >> root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I >> http://www.kuriyama-truck.com/images/parts/13375/thumbnail_0/1_1.jpg >> HTTP/1.1 200 OK >> Date: Sat, 19 Mar 2016 18:15:16 GMT >> Server: Apache/2.4.16 (Amazon) PHP/5.6.17 >> Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT >> ETag: "d0f3-52daab51fbe80" >> Accept-Ranges: bytes >> Content-Length: 53491 >> Content-Type: image/jpeg >> >> Now here's the response from the nginx: >> root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I >> >> http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg >> HTTP/1.1 200 OK >> Server: nginx >> Date: Sat, 19 Mar 2016 18:14:46 GMT >> Content-Type: image/jpeg >> Content-Length: 53491 >> Connection: keep-alive >> Expires: Sun, 19 Mar 2017 18:14:46 GMT >> Cache-Control: max-age=31536000 >> Cache-Control: public >> X-Cache-Status: MISS >> Accept-Ranges: bytes >> >> Here are the request headers from my browser: >> GET /kuriyamacache/images/parts/13375/thumbnail_1/1_1.jpg HTTP/1.1 >> Host: dev.ts-export.com >> Connection: keep-alive >> Cache-Control: max-age=0 >> Accept: >> text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 >> Upgrade-Insecure-Requests: 1 >> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, >> like Gecko) Chrome/49.0.2623.87 Safari/537.36 >> DNT: 1 >> Accept-Encoding: gzip, deflate, sdch >> Accept-Language: fr-CA,fr;q=0.8,en-US;q=0.6,en;q=0.4,ja;q=0.2,de;q=0.2 >> Cookie: PRUM_EPISODES=s=1458412951203 >> >> Part of my setup: >> >> proxy_cache_path /tmp/nginx/dev levels=1:2 keys_zone=my_zone:10m >> max_size=10g inactive=60m use_temp_path=off; >> >> server { >> >> set $rule_3 0; >> set $rule_4 0; >> set $rule_5 0; >> set $rule_8 0; >> set $rule_9 0; >> >> 
server_name dev.ts-export.com; >> listen 80; >> listen [::]:80; >> >> root /home/tsuchi/public_html; >> >> if ($reject_those_args) { >> return 403; >> } >> >> include snippets/filters.conf; >> >> error_page 404 /404.html; >> >> if ($request_uri ~ "^/index.(php|html?)$" ) { >> return 301 $scheme://dev.ts-export.com; >> } >> >> >> # no SSL for IE6-8 on XP and Android 2.3 >> if ($scheme = https) { >> set $rule_8 1$rule_8; >> } >> if ($http_user_agent ~ "MSIE (6|7|8).*Windows NT 5|Android 2\.3"){ >> set $rule_8 2$rule_8; >> } >> if ($rule_8 = "210"){ >> return 301 http://dev.ts-export.com$request_uri; >> } >> >> >> location = / { >> allow all; >> } >> >> location = /robots.txt { >> add_header X-Robots-Tag noindex; >> } >> >> >> location '/.well-known/acme-challenge' { >> default_type "text/plain"; >> root /tmp/letsencrypt-auto; >> } >> >> include snippets/proxyimg.conf; >> >> location / { >> try_files $uri $uri/ @rewrites; >> allow all; >> } >> (...) >> } >> >> Contents of proxyimg.conf: >> >> location ^~ /kuriyamacache { >> expires 1y; >> access_log off; >> log_not_found off; >> resolver 127.0.0.1; >> >> proxy_pass http://www.kuriyama-truck.com/; >> >> proxy_cache my_zone; >> proxy_cache_key "$scheme$request_method$host$request_uri"; >> proxy_buffering on; >> >> proxy_cache_valid 200 301 302 60m; >> proxy_cache_valid 404 1m; >> proxy_cache_use_stale error timeout http_500 http_502 http_503 >> http_504; >> proxy_cache_revalidate on; >> proxy_cache_lock on; >> >> proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> >> proxy_ignore_headers X-Accel-Redirect X-Accel-Expires >> X-Accel-Limit-Rate >> X-Accel-Buffering X-Accel-Charset Set-Cookie Cache-Control Vary Expires; >> >> proxy_pass_header ETag; >> proxy_hide_header Cache-Control; >> >> add_header Cache-Control "public, max-age=31536000"; >> add_header X-Cache-Status $upstream_cache_status; >> >> } >> >> Posted at Nginx Forum: >> 
https://forum.nginx.org/read.php?2,265504,265504#msg-265504 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Sat Mar 19 21:02:16 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Sat, 19 Mar 2016 17:02:16 -0400 Subject: Nginx proxy_path key zone Message-ID: The value I specify for the size of my key zone in the proxy_path statement - is that a per-worker memory allocation or a shared memory zone? (i.e. if it's 64MB and I have 32 processors, does the zone consume 64MB of main memory or 2GB?) -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Mar 19 21:15:20 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 19 Mar 2016 22:15:20 +0100 Subject: nginx 1.9.12 proxy_cache always returns MISS In-Reply-To: References: <849b35c0b34b2e0dcdc98a83edbcf373.NginxMailingListEnglish@forum.nginx.org> Message-ID: Although you stated your problem was resolved, you need to understand what you are configuring/testing. In the first message you posted, the first response URL using the cache (starting with '/kuriyamacache') did not correspond to the alleged request headers' URL ('thumbnail_0' vs 'thumbnail_1'), so you probably got mixed up in your tests. Note that you configured the cache only for paths starting with '/kuriyamacache'; other requests will end up being served by another location block. Since you add headers in the response of requests being served at the proper location, look for their existence.
The cache will only get triggered if the request is served by this specific location, if the request URI matches the cache key defined with the proxy_cache_key directive, and if the status code of the response is cacheable, with the cache entry only remaining valid for the specified amount of time. I do not think the proxy_pass directive is sensitive to order; otherwise you would not have seen the headers added with add_header in any response. I encourage you to carefully re-test the configuration you provided here, and if any trouble arises, please provide a detailed step-by-step way of reproducing the problem. --- *B. R.* On Sat, Mar 19, 2016 at 9:33 PM, CJ Ess wrote: > I think I've run into the problem before - move the proxypass statement > from the top of the location stanza to the bottom, and I think that will > solve your issue. > > > On Sat, Mar 19, 2016 at 4:10 PM, shiz wrote: > >> Been playing with this for 2 days. >> >> proxy_pass is working correctly but the proxy_cache_path remains empty >> whatever I make.
>> >> >> Here's the source I use for tests: >> root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I >> http://www.kuriyama-truck.com/images/parts/13375/thumbnail_0/1_1.jpg >> HTTP/1.1 200 OK >> Date: Sat, 19 Mar 2016 18:15:16 GMT >> Server: Apache/2.4.16 (Amazon) PHP/5.6.17 >> Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT >> ETag: "d0f3-52daab51fbe80" >> Accept-Ranges: bytes >> Content-Length: 53491 >> Content-Type: image/jpeg >> >> Now here's the response from the nginx: >> root at NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I >> >> http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg >> HTTP/1.1 200 OK >> Server: nginx >> Date: Sat, 19 Mar 2016 18:14:46 GMT >> Content-Type: image/jpeg >> Content-Length: 53491 >> Connection: keep-alive >> Expires: Sun, 19 Mar 2017 18:14:46 GMT >> Cache-Control: max-age=31536000 >> Cache-Control: public >> X-Cache-Status: MISS >> Accept-Ranges: bytes >> >> Here are the request headers from my browser: >> GET /kuriyamacache/images/parts/13375/thumbnail_1/1_1.jpg HTTP/1.1 >> Host: dev.ts-export.com >> Connection: keep-alive >> Cache-Control: max-age=0 >> Accept: >> text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 >> Upgrade-Insecure-Requests: 1 >> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, >> like Gecko) Chrome/49.0.2623.87 Safari/537.36 >> DNT: 1 >> Accept-Encoding: gzip, deflate, sdch >> Accept-Language: fr-CA,fr;q=0.8,en-US;q=0.6,en;q=0.4,ja;q=0.2,de;q=0.2 >> Cookie: PRUM_EPISODES=s=1458412951203 >> >> Part of my setup: >> >> proxy_cache_path /tmp/nginx/dev levels=1:2 keys_zone=my_zone:10m >> max_size=10g inactive=60m use_temp_path=off; >> >> server { >> >> set $rule_3 0; >> set $rule_4 0; >> set $rule_5 0; >> set $rule_8 0; >> set $rule_9 0; >> >> server_name dev.ts-export.com; >> listen 80; >> listen [::]:80; >> >> root /home/tsuchi/public_html; >> >> if ($reject_those_args) { >> return 403; >> } >> >> include snippets/filters.conf; >> >> 
error_page 404 /404.html; >> >> if ($request_uri ~ "^/index.(php|html?)$" ) { >> return 301 $scheme://dev.ts-export.com; >> } >> >> >> # no SSL for IE6-8 on XP and Android 2.3 >> if ($scheme = https) { >> set $rule_8 1$rule_8; >> } >> if ($http_user_agent ~ "MSIE (6|7|8).*Windows NT 5|Android 2\.3"){ >> set $rule_8 2$rule_8; >> } >> if ($rule_8 = "210"){ >> return 301 http://dev.ts-export.com$request_uri; >> } >> >> >> location = / { >> allow all; >> } >> >> location = /robots.txt { >> add_header X-Robots-Tag noindex; >> } >> >> >> location '/.well-known/acme-challenge' { >> default_type "text/plain"; >> root /tmp/letsencrypt-auto; >> } >> >> include snippets/proxyimg.conf; >> >> location / { >> try_files $uri $uri/ @rewrites; >> allow all; >> } >> (...) >> } >> >> Contents of proxyimg.conf: >> >> location ^~ /kuriyamacache { >> expires 1y; >> access_log off; >> log_not_found off; >> resolver 127.0.0.1; >> >> proxy_pass http://www.kuriyama-truck.com/; >> >> proxy_cache my_zone; >> proxy_cache_key "$scheme$request_method$host$request_uri"; >> proxy_buffering on; >> >> proxy_cache_valid 200 301 302 60m; >> proxy_cache_valid 404 1m; >> proxy_cache_use_stale error timeout http_500 http_502 http_503 >> http_504; >> proxy_cache_revalidate on; >> proxy_cache_lock on; >> >> proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> >> proxy_ignore_headers X-Accel-Redirect X-Accel-Expires >> X-Accel-Limit-Rate >> X-Accel-Buffering X-Accel-Charset Set-Cookie Cache-Control Vary Expires; >> >> proxy_pass_header ETag; >> proxy_hide_header Cache-Control; >> >> add_header Cache-Control "public, max-age=31536000"; >> add_header X-Cache-Status $upstream_cache_status; >> >> } >> >> Posted at Nginx Forum: >> https://forum.nginx.org/read.php?2,265504,265504#msg-265504 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> 
http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Mar 19 21:19:35 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 19 Mar 2016 22:19:35 +0100 Subject: Nginx proxy_path key zone In-Reply-To: References: Message-ID: I suppose you are talking about the proxy_cache_path directive. Its docs state you are defining the size of a *shared memory zone*, most probably allocated by the master process at configuration loading time and then accessed by workers when needed. You should be able to draw the conclusion yourself. :o) --- *B. R.* On Sat, Mar 19, 2016 at 10:02 PM, CJ Ess wrote: > The value I specify for the size of my key zone in the proxy_path > statement - is that a per-worker memory allocation or a shared memory zone? > (i.e. if its 64mb and I have 32 processors, does the zone consume 64mb of > main memory or 2gb?) > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Sat Mar 19 21:28:45 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Sat, 19 Mar 2016 17:28:45 -0400 Subject: Nginx proxy_path key zone In-Reply-To: References: Message-ID: Great, thank you! On Sat, Mar 19, 2016 at 5:19 PM, B.R. wrote: > I suppose you are talking about the proxy_cache_path directive. Its docs > > state you are defining the size of a *shared memory zone*, most probably > allocated by the master process at configuration loading time and then > accessible/accessed by workers when needed. > > You will be able to make a conclusion by yourself. :o) > --- > *B.
R.* > > On Sat, Mar 19, 2016 at 10:02 PM, CJ Ess wrote: > >> The value I specify for the size of my key zone in the proxy_path >> statement - is that a per-worker memory allocation or a shared memory zone? >> (i.e. if its 64mb and I have 32 processors, does the zone consume 64mb of >> main memory or 2gb?) >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Mar 19 22:15:21 2016 From: nginx-forum at forum.nginx.org (shiz) Date: Sat, 19 Mar 2016 18:15:21 -0400 Subject: nginx 1.9.12 proxy_cache always returns MISS In-Reply-To: References: Message-ID: Yes, it's resolved. I changed the cache path yesterday, and a few minutes ago I noticed this error message: "2016/03/19 12:31:02 [emerg] 8984#8984: cache "my_zone" uses the "/tmp/nginx/dev" cache path while previously it used the "/tmp/nginx" cache path" It seems that was enough to prevent the cache from running. Interesting. BTW, do you see anything useless or redundant in my config? Thanks for the help! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265504,265510#msg-265510 From nginx-forum at forum.nginx.org Sun Mar 20 13:07:58 2016 From: nginx-forum at forum.nginx.org (ndrini) Date: Sun, 20 Mar 2016 09:07:58 -0400 Subject: sites overlap in a EC2 server Message-ID: <297ffd6f3181ee5dbd00eb92ca52a297.NginxMailingListEnglish@forum.nginx.org> Hi, I have an elastic virtual private server (an EC2 VPS by Amazon) on which I would like to host two websites (ndrini.eu and dradambrosio.eu). The first (ndrini.eu) is a dynamic Django website, and the second (dradambrosio.eu) is only a static file (to keep it as simple as possible).
If only ndrini.eu is enabled, the ndrini.eu site works all right, but if I enable dradambrosio.eu as well, the first one stops working too. These are my settings: ndrini.eu server { listen 80; server_name *.ndrini.eu; location /static { alias /home/ndrini/sites/superlists-staging.ndrini.eu/static; } location / { proxy_pass http://localhost:8000; } } ============================================ dradambrosio.eu server { listen 80; server_name *.dradambrosio.eu; location / { root /home/ndrini/sites/loomio; index home.html; } } Can someone tell me why and how to fix it? Thanks, Andrea Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265512,265512#msg-265512 From reallfqq-nginx at yahoo.fr Sun Mar 20 13:15:28 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 20 Mar 2016 14:15:28 +0100 Subject: sites overlap in a EC2 server In-Reply-To: <297ffd6f3181ee5dbd00eb92ca52a297.NginxMailingListEnglish@forum.nginx.org> References: <297ffd6f3181ee5dbd00eb92ca52a297.NginxMailingListEnglish@forum.nginx.org> Message-ID: Looks good, although you did not provide the whole configuration file (or chain, if there are include directives). You can test your configuration with nginx -t. When you reload, check the error log for errors. How do you 'enable' or 'disable' a domain? What tests are you doing? What results did you expect? What results did you get? --- *B. R.* On Sun, Mar 20, 2016 at 2:07 PM, ndrini wrote: > Hi, > > I have an elatic virtual private server (EC2 VPS by Amazon), in which I > would like to host two website (ndrini.eu and dradambrosio.eu). > > The first (ndrini.eu) is a django dinamic website, > and the second (dradambrosio.eu) is only a static file (to be simpler as > possible). > > If only ndrini.eu is enabled, the ndrini.eu site works all right, > but if I enable also dradambrosio.eu, the first also stops to work.
> > These are my settings: > > ndrini.eu > > server { > listen 80; > server_name *.ndrini.eu; > > location /static { > alias /home/ndrini/sites/superlists-staging.ndrini.eu/static; > } > > location / { > proxy_pass http://localhost:8000; > } > } > > ============================================ > > dradambrosio.eu > > server { > listen 80; > server_name *.dradambrosio.eu; > location / { > root /home/ndrini/sites/loomio; > index home.html; > } > } > > > > > Can someone tell me why and how to fix it? > > Thanks, > > Andrea > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265512,265512#msg-265512 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Mar 20 14:56:22 2016 From: nginx-forum at forum.nginx.org (shiz) Date: Sun, 20 Mar 2016 10:56:22 -0400 Subject: Please help rewrite proper parameter Message-ID: <7acf8f3aa49e0086c3e9a51f3396df1a.NginxMailingListEnglish@forum.nginx.org> Hi, Bots are notorious for this: sometimes (often, in fact) the arguments they send are over-urlencoded. If someone knows a way to rewrite them back to their normal state, that would be awesome. e.g. Normal URL: www.site.com/file.php?param=blahblah URLs that bots try (don't know where they get those strings, BTW): www.site.com/file.php?param%253Dblahblah www.site.com/file.php?param%25253Dblahblah www.site.com/file.php?param%2525253Dblahblah ... Tried the following in the general filter block, then in the php block; it didn't help: rewrite ^(.*)(param)\%(25)+(3D)(.*)$ $1$2=$5 redirect; rewrite ^(.*)(param)=(.*)$ $1$2=$3 redirect; Thanks!
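A likely reason those rewrite rules never fire: in nginx, `rewrite` matches only the request URI, never the query string, and `$args` is left undecoded. An untested sketch that matches the query string directly instead (the `param` name and the `/file.php` target are taken from the examples above; real bot traffic may need broader rules):

```nginx
# Hypothetical sketch, not from the thread. "rewrite" cannot see the
# query string, so match $args instead; nginx keeps $args raw, so the
# literal "%25...3D" sequences sent by the bots are visible to the regex.
if ($args ~ "^(.*)(param)%(?:25)+3D(.*)$") {
    # collapse all the stacked %25 layers in a single redirect
    return 301 $uri?$1$2=$3;
}
```

The `if` would go inside the location handling these requests; each matching request gets one redirect straight to the decoded form.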
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265514,265514#msg-265514 From nginx-forum at forum.nginx.org Sun Mar 20 20:34:14 2016 From: nginx-forum at forum.nginx.org (vizl) Date: Sun, 20 Mar 2016 16:34:14 -0400 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <2b9a636aa67a55241dc77e6f920a1e1b.NginxMailingListEnglish@forum.nginx.org> References: <20160309161912.GG31796@mdounin.ru> <2b9a636aa67a55241dc77e6f920a1e1b.NginxMailingListEnglish@forum.nginx.org> Message-ID: We found that we are running 'truncate -s 0' on files before removing them. Can it potentially cause the problems mentioned above? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,265516#msg-265516 From francis at daoine.org Sun Mar 20 20:59:59 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 20 Mar 2016 20:59:59 +0000 Subject: sites overlap in a EC2 server In-Reply-To: <297ffd6f3181ee5dbd00eb92ca52a297.NginxMailingListEnglish@forum.nginx.org> References: <297ffd6f3181ee5dbd00eb92ca52a297.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160320205959.GJ3340@daoine.org> On Sun, Mar 20, 2016 at 09:07:58AM -0400, ndrini wrote: Hi there, > If only ndrini.eu is enabled, the ndrini.eu site works all right, > but if I enable also dradambrosio.eu, the first also stops to work. > > > These are my settings: > > ndrini.eu > > server { > listen 80; > server_name *.ndrini.eu; http://nginx.org/r/server_name This server_name will match www.ndrini.eu but will not match ndrini.eu. > dradambrosio.eu > > server { > listen 80; > server_name *.dradambrosio.eu; And this one will match www.dradambrosio.eu but will not match dradambrosio.eu. If you make a request for a server_name which does not match, then the default server for the appropriate ip:port is chosen. > Can someone tell me why and how to fix it? My guess is that you are testing with a request for a url like http://dradambrosio.eu/, which you do not explicitly configure for. In that case, remove the "*".
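A minimal sketch of what that fix might look like (hypothetical; listing both the bare domain and the wildcard covers the bare name and all subdomains):

```nginx
server {
    listen 80;
    # "*.ndrini.eu" alone matches www.ndrini.eu but not ndrini.eu;
    # naming both covers the bare domain and any subdomain
    server_name ndrini.eu *.ndrini.eu;

    location / {
        proxy_pass http://localhost:8000;
    }
}
```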
f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Mar 20 22:12:06 2016 From: nginx-forum at forum.nginx.org (ndrini) Date: Sun, 20 Mar 2016 18:12:06 -0400 Subject: sites overlap in a EC2 server In-Reply-To: <20160320205959.GJ3340@daoine.org> References: <20160320205959.GJ3340@daoine.org> Message-ID: Thanks, I'll try it!! Andrea Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265512,265518#msg-265518 From nginx-forum at forum.nginx.org Mon Mar 21 07:29:36 2016 From: nginx-forum at forum.nginx.org (nguyenbatam90) Date: Mon, 21 Mar 2016 03:29:36 -0400 Subject: How to keep alive when upstream 2 internal server nginx Message-ID: <37d18633da6b7ba43428b3b022f8fe82.NginxMailingListEnglish@forum.nginx.org> This is my config; how can I keep requests alive from port *:83 and port *:7082? http{ upstream download_redirect { server 127.0.0.1:80 weight=1; keepalive 20; } server { listen 7082; server_name localhost; location ~ ^/x-keam/service/v0.5/videos/stream(.*)$ { if ( $args ) { return 307 http://111.30.211.144:8000/x-keam/service/v1.0/videos/stream$1?$args; } return 307 http://111.30.211.144:8000/x-keam/service/v1.0/videos/stream$1; } } server { listen 83; server_name localhost; location ~ ^/x-keam/service/v0.5/videos/stream(.*)$ { proxy_pass http://download_redirect; proxy_http_version 1.1; proxy_set_header Connection ""; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265524,265524#msg-265524 From maxim at nginx.com Mon Mar 21 08:10:31 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 21 Mar 2016 11:10:31 +0300 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: References: <20160309161912.GG31796@mdounin.ru> <2b9a636aa67a55241dc77e6f920a1e1b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56EFAC77.5030207@nginx.com> On 3/20/16 11:34 PM, vizl wrote: > We found, that we are running 'truncate -s 0' to file before removing them. > Can it potentially cause the mentioned above problems ?
> Yes, quite possible. The fix was committed to the mainline branch and will be available in 1.9.13: http://mailman.nginx.org/pipermail/nginx-devel/2016-March/008012.html -- Maxim Konovalov From nginx-forum at forum.nginx.org Mon Mar 21 12:00:36 2016 From: nginx-forum at forum.nginx.org (vizl) Date: Mon, 21 Mar 2016 08:00:36 -0400 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: <56EFAC77.5030207@nginx.com> References: <56EFAC77.5030207@nginx.com> Message-ID: <17ac25700d03fe4c7ce5cf5cbffc8006.NginxMailingListEnglish@forum.nginx.org> Thank you. Waiting for the 1.9.13 branch. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264764,265536#msg-265536 From mdounin at mdounin.ru Mon Mar 21 13:29:16 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Mar 2016 16:29:16 +0300 Subject: Workers CPU leak [epoll_wait,epoll_ctl] In-Reply-To: References: <20160309161912.GG31796@mdounin.ru> <2b9a636aa67a55241dc77e6f920a1e1b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160321132916.GA37718@mdounin.ru> Hello! On Sun, Mar 20, 2016 at 04:34:14PM -0400, vizl wrote: > We found, that we are running 'truncate -s 0' to file before removing them. > Can it potentially cause the mentioned above problems ? Yes, for sure. This is a modification of a file being served, and it's expected to cause the CPU hog in question when using sendfile in threads on Linux. In nginx 1.9.13 nginx will log an alert instead; see this commit: http://hg.nginx.org/nginx/rev/4df3d9fcdee8 -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Mar 21 16:29:20 2016 From: nginx-forum at forum.nginx.org (luocn99) Date: Mon, 21 Mar 2016 12:29:20 -0400 Subject: does nginx support linger_close as client? Message-ID: <668b711ae74a93109fcdd586b83b1057.NginxMailingListEnglish@forum.nginx.org> Hi guys, I run nginx as a proxy; it becomes a client when connecting to an upstream server.
I want to close the connection with the SO_LINGER option once I have read the full data, because a linger close sends RST to the upstream server and avoids too many TIME_WAIT states. Does nginx support this feature? (I think it is different from the existing lingering_close directive.) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265547,265547#msg-265547 From nginx-forum at forum.nginx.org Mon Mar 21 17:51:31 2016 From: nginx-forum at forum.nginx.org (zharvey) Date: Mon, 21 Mar 2016 13:51:31 -0400 Subject: nginx seems to just be serving default page Message-ID: <2bc7de59db1a918eb40eda7a26097cb9.NginxMailingListEnglish@forum.nginx.org> I am brand new to nginx and have it running on a VM (mynginx.example.com). I am trying to get it to serve content under /opt/mysite (where the homepage is located at /opt/mysite/index.html). Below is the nginx.conf that I'm using: user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 768; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; server { listen 80; server_name mynginx.example.com; location / { root /opt/mysite; index index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; include servers/*; } With this configuration, going to both http://mynginx.example.com and http://mynginx.example.com/index.html has the exact same effect: they take you to the default nginx page (**Welcome to nginx!**)... Can anybody spot why?
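One hypothesis, not confirmed in the thread: the trailing `include /etc/nginx/sites-enabled/*;` may pull in the distribution's stock default site, which also listens on port 80 and serves the welcome page. Marking the intended server block as the port's default is one way to rule that out (a sketch using the paths from the question above):

```nginx
server {
    # default_server makes this block answer requests whose Host
    # header matches no server_name on this ip:port
    listen 80 default_server;
    server_name mynginx.example.com;

    location / {
        root  /opt/mysite;
        index index.html index.htm;
    }
}
```

If the stock site also declares `default_server`, `nginx -t` will report a duplicate default server, which itself pinpoints the conflicting file.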
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265548,265548#msg-265548 From nginx-forum at forum.nginx.org Mon Mar 21 17:52:44 2016 From: nginx-forum at forum.nginx.org (zharvey) Date: Mon, 21 Mar 2016 13:52:44 -0400 Subject: nginx seems to just be serving default page In-Reply-To: <2bc7de59db1a918eb40eda7a26097cb9.NginxMailingListEnglish@forum.nginx.org> References: <2bc7de59db1a918eb40eda7a26097cb9.NginxMailingListEnglish@forum.nginx.org> Message-ID: As a followup to the original question, what I'm really looking for is for my homepage (index.html) to be served when you go to http://mynginx.example.com, *not* the nginx default page. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265548,265549#msg-265549 From reallfqq-nginx at yahoo.fr Tue Mar 22 00:23:51 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 22 Mar 2016 01:23:51 +0100 Subject: nginx seems to just be serving default page In-Reply-To: References: <2bc7de59db1a918eb40eda7a26097cb9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Have you checked your configuration has been loaded properly? Most probably, either: - your configuration is not loaded: check it with the -t nginx option, then check your error log on reload to spot any message there - your configuration file is not used, the default one is (you might ensure you overwrote the right file or use the -c nginx option to specify your configuration file). If you are using nginx >=1.9.2, you might wanna check the new -T option so the loaded configuration is dumped on standard output --- *B. R.* On Mon, Mar 21, 2016 at 6:52 PM, zharvey wrote: > As a followup to the original question, what I'm really looking for is for > my homepage (index.html) to be served when you go to > http://mynginx.example.com, *not* the nginx default page. 
> > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265548,265549#msg-265549 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joan.sdiwc at gmail.com Tue Mar 22 04:27:46 2016 From: joan.sdiwc at gmail.com (Joan R) Date: Tue, 22 Mar 2016 12:27:46 +0800 Subject: The Fourth International Conference on Digital Information Processing, E-Business and Cloud Computing (DIPECC2016) Message-ID: You are invited to participate in The Fourth International Conference on Digital Information Processing, E-Business and Cloud Computing (DIPECC2016) that will be held in Asia Pacific University of Technology and Innovation (APU), Kuala Lumpur, Malaysia, on September 6-8, 2016 as part of The Fifth World Congress on Computing, Engineering and Technology (WCCET). The event will be held over three days, with presentations delivered by researchers from the international community, including presentations from keynote speakers and state-of-the-art lectures. September 6-8, 2016 - Kuala Lumpur, Malaysia Asia Pacific University of Technology and Innovation (APU) Website: http://sdiwc.net/conferences/dipecc2016/ ================ *IMPORTANT DATES* Submission Dates Open from now until August 6, 2016 Notification of Acceptance August 20, 2016 or 4 weeks from the submission date Camera Ready Submission Open from now until August 26, 2016 Registration Deadline Open from now until August 26, 2016 Conference Dates September 6-8, 2016 The submission is open until August 6, 2016. Please consider submitting your papers to DIPECC2016. SUBMISSION LINK: http://sdiwc.net/conferences/dipecc2016/openconf/openconf.php EMAIL: dipecc16 at sdiwc.net -------------- next part -------------- An HTML attachment was scrubbed...
URL: From joan.sdiwc at gmail.com Tue Mar 22 04:28:31 2016 From: joan.sdiwc at gmail.com (Joan R) Date: Tue, 22 Mar 2016 12:28:31 +0800 Subject: The Second International Conference on Electronics and Software Science (ICESS2016) Message-ID: You are invited to participate in *The Second International Conference on Electronics and Software Science (ICESS2016)* that will be held at the Takamatsu Sunport Hall Building, Takamatsu, Japan on November 14-16, 2016. The event will be held over three days, with presentations delivered by researchers from the international community, including presentations from keynote speakers and state-of-the-art lectures. *Nov. 14-16, 2016* *Kagawa University, Takamatsu, Japan* Submission Deadline Open from now until Sept. 14, 2016 Notification of Acceptance 4-7 weeks from the Submission Date Camera Ready Submission Oct. 14, 2016 Registration Deadline Oct. 14, 2016 Conference Dates Nov. 14-16, 2016 All registered papers will be published in SDIWC Digital Library and in the proceedings of the conference. The conference welcomes papers on the following (but not limited to) research topics: Please check here: http://sdiwc.net/conferences/second-international-conference-on-electronics-and-software-science Contact email: icess16 at sdiwc.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From zeal at freecharge.com Tue Mar 22 12:05:19 2016 From: zeal at freecharge.com (Zeal Vora) Date: Tue, 22 Mar 2016 17:35:19 +0530 Subject: Vulnerability related Doubts in Nginx Message-ID: Hi We are running Nginx version 1.8 ( nginx-1.8.1-1.amzn1.ngx.x86_64 ) on our servers. In the vulnerability assessment, Nessus reported that it is vulnerable. *Current version :-* nginx-1.8.1-1.amzn1.ngx.x86_64 *Fix Version ( According to Nessus ) :-* nginx-1.8.1-1.26.amzn1 I can't seem to find the "Fix Version" of Nginx which Nessus suggested. Is there any workaround for this?
Is 1.8 the latest stable version available, or can we move forward with a higher one? Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-vul.png Type: image/png Size: 61550 bytes Desc: not available URL: From maxim at nginx.com Tue Mar 22 12:13:57 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 22 Mar 2016 15:13:57 +0300 Subject: Vulnerability related Doubts in Nginx In-Reply-To: References: Message-ID: <56F13705.2050803@nginx.com> Hi Zeal, On 3/22/16 3:05 PM, Zeal Vora wrote: > Hi > > We are running Nginx version 1.8 ( nginx-1.8.1-1.amzn1.ngx.x86_64 ) > in our servers. So in the Vulnerability Assessment, Nessus gave > report that it is vulnerable. > > *Current version :-* nginx-1.8.1-1.amzn1.ngx.x86_64 > > *Fix Version ( According to Nessus ) :-* nginx-1.8.1-1.26.amzn1 > > I don't seem to find the " Fix Version " of Nginx which Nessus > suggested. > > Is there any work around for this ? > > Is 1.8 the latest stable version which is available or we can move > forward with higher one ? > > > Any help will be appreciated! > Does this help? https://alas.aws.amazon.com/ALAS-2016-655.html -- Maxim Konovalov From vbart at nginx.com Tue Mar 22 12:13:22 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 22 Mar 2016 15:13:22 +0300 Subject: Vulnerability related Doubts in Nginx In-Reply-To: References: Message-ID: <5768487.1M8OX76KmA@vbart-workstation> On Tuesday 22 March 2016 17:35:19 Zeal Vora wrote: > Hi > > We are running Nginx version 1.8 ( nginx-1.8.1-1.amzn1.ngx.x86_64 ) in our > servers. So in the Vulnerability Assessment, Nessus gave report that it is > vulnerable. > > *Current version :-* nginx-1.8.1-1.amzn1.ngx.x86_64 > > *Fix Version ( According to Nessus ) :-* nginx-1.8.1-1.26.amzn1 > > I don't seem to find the " Fix Version " of Nginx which Nessus suggested.
> > Is there any work around for this ? > > Is 1.8 the latest stable version which is available or we can move forward > > with higher one ? > > > > Any help will be appreciated! The CVE-2016-0742 that is referenced in the report is fixed in nginx 1.8.1. See here for the official information: http://mailman.nginx.org/pipermail/nginx/2016-January/049700.html http://nginx.org/en/security_advisories.html http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-0742 wbr, Valentin V. Bartenev From zeal at freecharge.com Tue Mar 22 12:17:00 2016 From: zeal at freecharge.com (Zeal Vora) Date: Tue, 22 Mar 2016 17:47:00 +0530 Subject: Vulnerability related Doubts in Nginx In-Reply-To: <5768487.1M8OX76KmA@vbart-workstation> References: <5768487.1M8OX76KmA@vbart-workstation> Message-ID: @Maxim :- Thanks. Actually we compile Nginx ourselves to include additional modules. The solution mentioned on the Amazon page, "yum update nginx", will not help, as we will need the tar.gz / SRPM file for that version. @Valentin :- Thanks; actually we already have 1.8.1, but the reported fix is in nginx-1.8.1-1.26, for which I can't find any SRPM / tar.gz file. On Tue, Mar 22, 2016 at 5:43 PM, Valentin V. Bartenev wrote: > On Tuesday 22 March 2016 17:35:19 Zeal Vora wrote: > > Hi > > > > We are running Nginx version 1.8 ( nginx-1.8.1-1.amzn1.ngx.x86_64 ) in > our > > servers. So in the Vulnerability Assessment, Nessus gave report that it > is > > vulnerable. > > > > *Current version :-* nginx-1.8.1-1.amzn1.ngx.x86_64 > > > > *Fix Version ( According to Nessus ) :-* nginx-1.8.1-1.26.amzn1 > > > > I don't seem to find the " Fix Version " of Nginx which Nessus suggested.
> > See here for the official information: > http://mailman.nginx.org/pipermail/nginx/2016-January/049700.html > http://nginx.org/en/security_advisories.html > http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-0742 > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Tue Mar 22 12:22:48 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 22 Mar 2016 15:22:48 +0300 Subject: Vulnerability related Doubts in Nginx In-Reply-To: References: <5768487.1M8OX76KmA@vbart-workstation> Message-ID: <56F13918.20608@nginx.com> On 3/22/16 3:17 PM, Zeal Vora wrote: > @Maxim :- > > Thanks. Actually we compile Nginx so to include additional modules. > The solution mentioned in Amazon page is " yum update nginx " is > something which will not help as we will need the tar.gz / SRPM file > for that version. > > @Valentin :- > > Thanks, actually we already have 1.8.1 but the reported fix is > in nginx-1.8.1-1.26 for which I can't find any SRPM / tar.gz file. > The Nessus report is about the package version. "nginx-1.8.1-1.26" is something AWS-specific; it doesn't come from nginx.org. If you built your own package or compiled nginx from the nginx.org sources you are safe with 1.8.1. -- Maxim Konovalov From nginx at 2xlp.com Wed Mar 23 00:00:13 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Tue, 22 Mar 2016 20:00:13 -0400 Subject: simple auth question for nested sections Message-ID: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> Apologies for the simple question, but I could only find the opposite situation in the list archives and I haven't had to reconfigure some of these routes in years!
i have # works location /foo { proxy_pass http://127.0.0.1:6543; } I want to lock down /foo/admin with basic auth # works location /foo/admin { proxy_pass http://127.0.0.1:6543; auth_basic "Administrator Login"; auth_basic_user_file /etc/nginx/_htpasswd/well-known; } Is there a syntax for nesting the two together, so the /foo/admin would inherit the /foo configuration without the need to redeclare everything? # something like location /foo { proxy_pass http://127.0.0.1:6543; location /foo/admin { auth_basic "Administrator Login"; auth_basic_user_file /etc/nginx/_htpasswd/well-known; } } From nginx-forum at forum.nginx.org Wed Mar 23 01:03:14 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Tue, 22 Mar 2016 21:03:14 -0400 Subject: hello. can someone plz help me with a guide? In-Reply-To: References: Message-ID: <2ba29c9bcc4df77a16121eba93a42551.NginxMailingListEnglish@forum.nginx.org> Hello, I don't know CentOS, so I can't really help, but there are some instructions here: http://nginx.org/en/linux_packages.html And if you are getting an error, could you give more details (error, commands you are typing,...)? Best Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265568,265573#msg-265573 From chateau.xiao at gmail.com Wed Mar 23 04:12:07 2016 From: chateau.xiao at gmail.com (chateau Xiao) Date: Wed, 23 Mar 2016 12:12:07 +0800 Subject: hello. can someone plz help me with a guide? 
In-Reply-To: <2ba29c9bcc4df77a16121eba93a42551.NginxMailingListEnglish@forum.nginx.org> References: <2ba29c9bcc4df77a16121eba93a42551.NginxMailingListEnglish@forum.nginx.org> Message-ID: The nginx installation on CentOS can be done in two different ways: installed from a package, or built from source. nginx can be installed on CentOS using the package management tool named 'yum'; after you set up the download source properly, you can simply type the one-line command below to have nginx installed: yum install -y nginx This command will install the nginx executable binary and all the dependencies that nginx needs. After that, you can configure it by editing the file /etc/nginx/nginx.conf, then turn it on using the CentOS 'service' command. On Wed, Mar 23, 2016 at 9:03 AM, Alt wrote: > Hello, > > I don't know CentOS, so I can't really help, but there are some > instructions > here: http://nginx.org/en/linux_packages.html > And if you are getting an error, could you give more details (error, > commands you are typing,...)? > > Best Regards > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265568,265573#msg-265573 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Mar 23 04:59:32 2016 From: nginx-forum at forum.nginx.org (marcusy3k) Date: Wed, 23 Mar 2016 00:59:32 -0400 Subject: 502 Bad Gateway once running php on command line Message-ID: <9bf78ff908abf0c9bf4cead0e54056ef.NginxMailingListEnglish@forum.nginx.org> I have just installed: - FreeBSD 10.2 - Nginx 1.8.1 - PHP 5.5.33 Nginx works fine with PHP, in that the web sites seem OK running php pages. However, once I run php on the command line (e.g.
php -v), the web site will get a "502 Bad Gateway" error, and I find the following in the nginx error log: [error] 714#0: *3 upstream prematurely closed connection while reading response header from upstream, client: _____, server: www.____.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "_____" I have tried using either a sock or a port, but the problem still exists... Any idea about what's wrong? Thanks. "php -v" shows the following: PHP 5.5.33 (cli) (built: Mar 15 2016 01:22:17) Copyright (c) 1997-2015 The PHP Group Zend Engine v2.5.0, Copyright (c) 1998-2015 Zend Technologies with XCache v3.2.0, Copyright (c) 2005-2014, by mOo with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2015, by Zend Technologies with XCache Cacher v3.2.0, Copyright (c) 2005-2014, by mOo Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265576,265576#msg-265576 From lucas at slcoding.com Wed Mar 23 06:30:17 2016 From: lucas at slcoding.com (Lucas Rolff) Date: Wed, 23 Mar 2016 07:30:17 +0100 Subject: 502 Bad Gateway once running php on command line In-Reply-To: <9bf78ff908abf0c9bf4cead0e54056ef.NginxMailingListEnglish@forum.nginx.org> References: <9bf78ff908abf0c9bf4cead0e54056ef.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56F237F9.9080109@slcoding.com> Hi, What is the exact call you're trying to do? - Lucas marcusy3k wrote: > I have just installed: > - FreeBSD 10.2 > - Nginx 1.8.1 > - PHP 5.5.33 > > Nginx works fine with PHP that the web sites seems ok to run php pages. > However, once I run php on command line (e.g. php -v), the web site will get > "502 Bad Gateway" error, and I find the nginx error log as below: > > [error] 714#0: *3 upstream prematurely closed connection while reading > response header from > upstream, client: _____, server: www.____.com, request: "GET / HTTP/1.1", > upstream: "fastcgi://unix:/ > var/run/php5-fpm.sock:", host: "_____" > > I have tried to use either sock or port, but the problem still exist...
any > idea about what's wrong? thanks. > > "php -v" shows below: > PHP 5.5.33 (cli) (built: Mar 15 2016 01:22:17) > Copyright (c) 1997-2015 The PHP Group > Zend Engine v2.5.0, Copyright (c) 1998-2015 Zend Technologies > with XCache v3.2.0, Copyright (c) 2005-2014, by mOo > with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2015, by Zend > Technologies > with XCache Cacher v3.2.0, Copyright (c) 2005-2014, by mOo > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265576,265576#msg-265576 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Mar 23 06:35:49 2016 From: nginx-forum at forum.nginx.org (marcusy3k) Date: Wed, 23 Mar 2016 02:35:49 -0400 Subject: 502 Bad Gateway once running php on command line In-Reply-To: <56F237F9.9080109@slcoding.com> References: <56F237F9.9080109@slcoding.com> Message-ID: <90c586c7ba044a72481500ef09fcf16e.NginxMailingListEnglish@forum.nginx.org> hi, on the command line, when I type php or php -v and press the [Enter] key, the "502 Bad Gateway" occurs at once. It is very strange, because I always thought the php command line should not affect php-fpm...
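A quick way to see which cache extensions each PHP SAPI actually loads (a sketch; the binary names `php` and `php-fpm` are assumptions and the paths may differ on FreeBSD):

```shell
# List the opcode/object-cache extensions seen by the CLI and FPM SAPIs.
# Any SAPI binary that is not installed is simply skipped.
for sapi in php php-fpm; do
  command -v "$sapi" >/dev/null 2>&1 || continue
  printf '== %s ==\n' "$sapi"
  "$sapi" -m | grep -iE 'xcache|opcache' || echo '(no cache extension loaded)'
done
```

If both XCache and OPcache show up for the same SAPI, that is the double-cache setup that turned out to matter in this thread.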
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265576,265578#msg-265578 From nginx-forum at forum.nginx.org Wed Mar 23 09:05:05 2016 From: nginx-forum at forum.nginx.org (marcusy3k) Date: Wed, 23 Mar 2016 05:05:05 -0400 Subject: 502 Bad Gateway once running php on command line In-Reply-To: <9bf78ff908abf0c9bf4cead0e54056ef.NginxMailingListEnglish@forum.nginx.org> References: <9bf78ff908abf0c9bf4cead0e54056ef.NginxMailingListEnglish@forum.nginx.org> Message-ID: <37c16d733c3eb114deab06488c5cc5e0.NginxMailingListEnglish@forum.nginx.org> Eventually I found what went wrong: it seems to have been caused by having both Zend OPcache and XCache installed; they may conflict with each other in this case. Once I removed XCache, it works fine, and the php command line no longer causes the php-fpm error. XCache should be unnecessary when OPcache is running. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265576,265584#msg-265584 From lucas at slcoding.com Wed Mar 23 09:25:58 2016 From: lucas at slcoding.com (Lucas Rolff) Date: Wed, 23 Mar 2016 10:25:58 +0100 Subject: 502 Bad Gateway once running php on command line In-Reply-To: <37c16d733c3eb114deab06488c5cc5e0.NginxMailingListEnglish@forum.nginx.org> References: <9bf78ff908abf0c9bf4cead0e54056ef.NginxMailingListEnglish@forum.nginx.org> <37c16d733c3eb114deab06488c5cc5e0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56F26126.3040906@slcoding.com> When issuing php directly from the command line, you don't even go through nginx. php from the command line relies on the php-cli binary, which talks to neither your nginx process nor php-fpm. > marcusy3k > 23 March 2016 at 10:05 > Eventually I find what went wrong, it should be caused by both Zend > OPcache > and XCache are installed, they may conflict each other in this case, once > I've removed the XCache, it works fine, the php command line would no > longer > cause the php-fpm error. > > XCache should be unnecessary when OPcache is running.
> > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265576,265584#msg-265584 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > marcusy3k > 23 March 2016 at 05:59 > I have just installed: > - FreeBSD 10.2 > - Nginx 1.8.1 > - PHP 5.5.33 > > Nginx works fine with PHP that the web sites seems ok to run php pages. > However, once I run php on command line (e.g. php -v), the web site > will get > "502 Bad Gateway" error, and I find the nginx error log as below: > > [error] 714#0: *3 upstream prematurely closed connection while reading > response header from > upstream, client: _____, server: www.____.com, request: "GET / HTTP/1.1", > upstream: "fastcgi://unix:/ > var/run/php5-fpm.sock:", host: "_____" > > I have tried to use either sock or port, but the problem still > exist... any > idea about what's wrong? thanks. > > "php -v" shows below: > PHP 5.5.33 (cli) (built: Mar 15 2016 01:22:17) > Copyright (c) 1997-2015 The PHP Group > Zend Engine v2.5.0, Copyright (c) 1998-2015 Zend Technologies > with XCache v3.2.0, Copyright (c) 2005-2014, by mOo > with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2015, by Zend > Technologies > with XCache Cacher v3.2.0, Copyright (c) 2005-2014, by mOo > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265576,265576#msg-265576 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed Mar 23 10:07:19 2016 From: nginx-forum at forum.nginx.org (marcusy3k) Date: Wed, 23 Mar 2016 06:07:19 -0400 Subject: 502 Bad Gateway once running php on command line In-Reply-To: <56F26126.3040906@slcoding.com> References: <56F26126.3040906@slcoding.com> Message-ID: <0ad6d89a3b32b8f802b629e7a970bcec.NginxMailingListEnglish@forum.nginx.org> yes, agree. Thanks Lucas. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265576,265588#msg-265588 From reallfqq-nginx at yahoo.fr Wed Mar 23 11:20:17 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 23 Mar 2016 12:20:17 +0100 Subject: simple auth question for nested sections In-Reply-To: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> References: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> Message-ID: Why would you want to do that? Spaghetti configuration? Some advice from Igor Sysoev: https://youtu.be/YWRYbLKsS0I --- *B. R.* On Wed, Mar 23, 2016 at 1:00 AM, Jonathan Vanasco wrote: > apologies for the simple question, but i could only find the opposite > situation in the list archives and I haven't had to reconfigure some of > these routes in years! > > i have > > # works > location /foo { > proxy_pass http://127.0.0.1:6543; > } > > I want to lock down /foo/admin with basic auth > > # works > location /foo/admin { > proxy_pass http://127.0.0.1:6543; > auth_basic "Administrator Login"; > auth_basic_user_file /etc/nginx/_htpasswd/well-known; > } > > Is there a syntax for nesting the two together, so the /foo/admin would > inherit the /foo configuration without the need to redeclare everything? 
> > # something like > location /foo { > proxy_pass http://127.0.0.1:6543; > location /foo/admin { > auth_basic "Administrator Login"; > auth_basic_user_file > /etc/nginx/_htpasswd/well-known; > } > } > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jordan.Rakoske at cisecurity.org Wed Mar 23 12:55:34 2016 From: Jordan.Rakoske at cisecurity.org (Jordan C. Rakoske) Date: Wed, 23 Mar 2016 12:55:34 +0000 Subject: CIS NGINX Security Benchmark help Message-ID: Hello, The Center for Internet Security is looking for some folks to help in the creation of a security benchmark for NGINX. We have a wide range of benchmark guides that are created by the cyber security community and we offer them free to the world. All of our benchmarks are created from a consensus team comprised of subject matter experts, vendors, CIS members, and folks that just want to help out. If anyone is interested in helping out here is a link to our project page https://benchmarks.cisecurity.org/community/projects/index.cfm#123. We list all contributors/editors/authors inside the benchmark as well, we can even submit for CPE credits on your behalf. Please let me know if you have any additional questions or if you are willing to help. Thank you for your time :) Jordan C. Rakoske Technical Product Manager Center for Internet Security, Inc. 31 Tech Valley Drive East Greenbush, NY 12061 Jordan.Rakoske at cisecurity.org www.cisecurity.org Follow us on Twitter @CISecurity This message and attachments may contain confidential information. If it appears that this message was sent to you by mistake, any retention, dissemination, distribution or copying of this message and attachments is strictly prohibited. Please notify the sender immediately and permanently delete the message and any attachments. . . . 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From phil at dunlop-lello.uk Wed Mar 23 14:03:07 2016 From: phil at dunlop-lello.uk (Phil Lello) Date: Wed, 23 Mar 2016 14:03:07 +0000 Subject: CIS NGINX Security Benchmark help In-Reply-To: References: Message-ID: Are you paying people for their time? On 23 Mar 2016 12:55, "Jordan C. Rakoske" wrote: > Hello, > The Center for Internet Security is looking for some folks to help in the > creation of a security benchmark for NGINX. We have a wide range of > benchmark guides that are created by the cyber security community and we > offer them free to the world. All of our benchmarks are created from a > consensus team comprised of subject matter experts, vendors, CIS members, > and folks that just want to help out. If anyone is interested in helping > out here is a link to our project page > https://benchmarks.cisecurity.org/community/projects/index.cfm#123. We > list all contributors/editors/authors inside the benchmark as well, we can > even submit for CPE credits on your behalf. Please let me know if you > have any additional questions or if you are willing to help. > > > > Thank you for your time :) > > > > > > Jordan C. Rakoske > > Technical Product Manager > > Center for Internet Security, Inc. > > 31 Tech Valley Drive > > East Greenbush, NY 12061 > > Jordan.Rakoske at cisecurity.org > > *www.cisecurity.org > * > > *Follow us on Twitter @CISecurity * > > > This message and attachments may contain confidential information. If it > appears that this message was sent to you by mistake, any retention, > dissemination, distribution or copying of this message and attachments is > strictly prohibited. Please notify the sender immediately and permanently > delete the message and any attachments. > . . .
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jordan.Rakoske at cisecurity.org Wed Mar 23 15:01:51 2016 From: Jordan.Rakoske at cisecurity.org (Jordan C. Rakoske) Date: Wed, 23 Mar 2016 15:01:51 +0000 Subject: CIS NGINX Security Benchmark help In-Reply-To: References: Message-ID: Phil, We prefer that our contributors and editors are volunteers. Our guides have always been publicly available and we have recently started converting their licensing to Creative Commons, which allows for derivative works under certain circumstances. Also, each published guide explicitly acknowledges its contributors and we broadcast its availability over social media and other channels. In addition, we submit CPE credits on the volunteers' behalf. We have also reached out to NGINX directly on this, asking them to review the created guidance once completed and provide input. If you have any specific questions about volunteer or other opportunities, feel free to contact me directly. Jordan C. Rakoske Technical Product Manager Center for Internet Security, Inc. 31 Tech Valley Drive East Greenbush, NY 12061 Jordan.Rakoske at cisecurity.org www.cisecurity.org Follow us on Twitter @CISecurity -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Phil Lello Sent: Wednesday, March 23, 2016 10:03 AM To: nginx at nginx.org Subject: Re: CIS NGINX Security Benchmark help Are you paying people for their time? On 23 Mar 2016 12:55, "Jordan C. Rakoske" > wrote: Hello, The Center for Internet Security is looking for some folks to help in the creation of a security benchmark for NGINX. We have a wide range of benchmark guides that are created by the cyber security community and we offer them free to the world.
All of our benchmarks are created from a consensus team comprised of subject matter experts, vendors, CIS members, and folks that just want to help out. If anyone is interested in helping out here is a link to our project page https://benchmarks.cisecurity.org/community/projects/index.cfm#123. We list all contributors/editors/authors inside the benchmark as well, we can even submit for CPE credits on your behalf. Please let me know if you have any additional questions or if you are willing to help. Thank you for your time :) Jordan C. Rakoske Technical Product Manager Center for Internet Security, Inc. 31 Tech Valley Drive East Greenbush, NY 12061 Jordan.Rakoske at cisecurity.org www.cisecurity.org Follow us on Twitter @CISecurity This message and attachments may contain confidential information. If it appears that this message was sent to you by mistake, any retention, dissemination, distribution or copying of this message and attachments is strictly prohibited. Please notify the sender immediately and permanently delete the message and any attachments. . . . _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx ... This message and attachments may contain confidential information. If it appears that this message was sent to you by mistake, any retention, dissemination, distribution or copying of this message and attachments is strictly prohibited. Please notify the sender immediately and permanently delete the message and any attachments. . . .
From francis at daoine.org Wed Mar 23 18:14:53 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 23 Mar 2016 18:14:53 +0000 Subject: simple auth question for nested sections In-Reply-To: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> References: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> Message-ID: <20160323181453.GM3340@daoine.org> On Tue, Mar 22, 2016 at 08:00:13PM -0400, Jonathan Vanasco wrote: Hi there, > Is there a syntax for nesting the two together, so the /foo/admin would inherit the /foo configuration without the need to redeclare everything? > > # something like > location /foo { > proxy_pass http://127.0.0.1:6543; > location /foo/admin { > auth_basic "Administrator Login"; > auth_basic_user_file /etc/nginx/_htpasswd/well-known; > } > } You would do it exactly as you have done it. Any directives that inherit do not need to be repeated. If it does not work for you, that's probably due to proxy_pass not inheriting. So repeat that one, but not everything else. If your example is your exact config, then "everything else" is "nothing", and there is no immediate benefit to nesting. f -- Francis Daly francis at daoine.org From nginx at 2xlp.com Wed Mar 23 23:56:29 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Wed, 23 Mar 2016 19:56:29 -0400 Subject: simple auth question for nested sections In-Reply-To: <20160323181453.GM3340@daoine.org> References: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> <20160323181453.GM3340@daoine.org> Message-ID: <53DCD1BD-5F25-48A7-BD36-1943E9D85120@2xlp.com> On Mar 23, 2016, at 2:14 PM, Francis Daly wrote: > Any directives that inherit do not need to be repeated. > > If it does not work for you, that's probably due to proxy_pass not > inheriting. Thanks - that's it -- `proxy_pass` does not inherit, but all the `proxy_set_header` directives in that block do. Only the `proxy_pass` directive needed to be repeated in the location block (thank goodness!) 
location /foo { proxy_pass http://127.0.0.1:6543; # nearly 10 lines of proxy_set_header ... location /foo/admin { proxy_pass http://127.0.0.1:6543; auth_basic "Administrator Login"; auth_basic_user_file /etc/nginx/_htpasswd/foo; } } On Mar 23, 2016, at 7:20 AM, B.R. wrote: > Why would you want to do that? Spaghetti configuration? The proxy has a dozen lines of configuration. The `proxy_pass` line doesn't inherit, but the docs don't mention that. `proxy_pass` not inheriting was the last thing I expected. So I had to run duplicate blocks until things worked. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 24 03:47:19 2016 From: nginx-forum at forum.nginx.org (roslui12) Date: Wed, 23 Mar 2016 23:47:19 -0400 Subject: How do I install module Rate Limit Message-ID: <482a0c0f4f8a68adfbe8785a6fb5a597.NginxMailingListEnglish@forum.nginx.org> Hello friends, could you help me? How do I install the Rate Limit module for nginx? Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265617,265617#msg-265617 From nginx-forum at forum.nginx.org Thu Mar 24 13:14:06 2016 From: nginx-forum at forum.nginx.org (john_smith77) Date: Thu, 24 Mar 2016 09:14:06 -0400 Subject: Homepage cache and cookies Message-ID: I am trying to cache the home page of a site. I am only caching the home page. If I put in a condition to check for cookies existing before caching, everything works as expected, but there is a high BYPASS rate due to the client not having the cookies the first time they visit the site. Once I took out the check for cookies, clients started getting cached cookies from other users. Is there a way to still have the cookies set from the origin server, and have a cache hit?
server { listen 80; if ($request_uri !~* "home.html") { set $skip_cache 1; } location / { proxy_pass http://backend; proxy_set_header Host $host; proxy_cache_bypass $skip_cache; proxy_no_cache $skip_cache; proxy_pass_header Set-Cookie; proxy_cache one; proxy_cache_key $scheme$proxy_host$uri$is_args$args$cookie_myCookie$cookie_myOtherCookie; proxy_cache_valid 200 302 10m; proxy_cache_valid 404 1m; proxy_ignore_headers "Set-Cookie" "Cache-Control" "Expires"; proxy_hide_header "Set-Cookie"; proxy_cache_lock on; if ($http_user_agent ~* '(iPhone|Android|Phone|PalmSource|BOLT|Symbian|Fennec|GoBrowser|Maemo|MIB|Minimo|NetFront|Presto|SEMC|Skyfire|TeaShark|Teleca|uZard|palm|psp|openweb|mmp|novarra|nintendo ds|hiptop|ipod|blackberry|up.browser|up.link|wap|windows ce|blackberry|iemobile|bb10)' ) { proxy_pass http://backend_mobile; } if ($http_user_agent ~* '(iPad|playbook|hp-tablet|kindle|Silk)' ) { proxy_pass http://backend_tablet; } root /opt/rh/rh-nginx18/root/usr/share/nginx/html; index index.html index.htm; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265629,265629#msg-265629 From reallfqq-nginx at yahoo.fr Thu Mar 24 17:04:14 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 24 Mar 2016 18:04:14 +0100 Subject: Homepage cache and cookies In-Reply-To: References: Message-ID: The proxy_cache_key directive defines the key that differentiates cache objects. By using $cookie_name in the string, the *value* of the cookie 'name' is taken as part of the key. If multiple clients have the same combination of variable values (specifically, for a cookie: absence, presence, or the same value), they will grab the same object from the cache. If different content might get produced/grabbed with the same values in those variables, then you have a problem. As a sidenote, you are using proxy_pass_header Set-Cookie together with proxy_hide_header Set-Cookie, which conflict with each other. Remove them altogether. --- *B.
R.* On Thu, Mar 24, 2016 at 2:14 PM, john_smith77 wrote: > I am trying to cache the home page to a site. I am only caching the home > page. If I put in a condition to check for cookies existing before caching, > everything works as expected, but there is a high BYPASS rate due to the > client not having the cookies the first time they visit the site. Once I > took out the check for cookies, clients started getting cached cookies from > other users. Is there a way to still have the cookies set from the origin > server, and have a cache hit? > > server { > listen 80; > > if ($request_uri !~* "home.html") { > set $skip_cache 1; } > > location / { > proxy_pass http://backend; > proxy_set_header Host $host; > proxy_cache_bypass $skip_cache; > proxy_no_cache $skip_cache; > proxy_pass_header Set-Cookie; > proxy_cache one; > proxy_cache_key > $scheme$proxy_host$uri$is_args$args$cookie_myCookie$cookie_myOtherCookie; > proxy_cache_valid 200 302 10m; > proxy_cache_valid 404 1m; > proxy_ignore_headers "Set-Cookie" "Cache-Control" "Expires"; > proxy_hide_header "Set-Cookie"; > proxy_cache_lock on; > if ($http_user_agent ~* > > '(iPhone|Android|Phone|PalmSource|BOLT|Symbian|Fennec|GoBrowser|Maemo|MIB|Minimo|NetFront|Presto|SEMC|Skyfire|TeaShark|Teleca|uZard|palm|psp|openweb|mmp|novarra|nintendo > ds|hiptop|ipod|blackberry|up.browser|up.link|wap|windows > ce|blackberry|iemobile|bb10)' ) { > proxy_pass http://backend_mobile; } > if ($http_user_agent ~* '(iPad|playbook|hp-tablet|kindle|Silk)' ) { > proxy_pass http://backend_tablet; } > > root /opt/rh/rh-nginx18/root/usr/share/nginx/html; > index index.html index.htm; > } > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265629,265629#msg-265629 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From reallfqq-nginx at yahoo.fr Thu Mar 24 17:10:52 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 24 Mar 2016 18:10:52 +0100 Subject: How do I install module Rate Limit In-Reply-To: <482a0c0f4f8a68adfbe8785a6fb5a597.NginxMailingListEnglish@forum.nginx.org> References: <482a0c0f4f8a68adfbe8785a6fb5a597.NginxMailingListEnglish@forum.nginx.org> Message-ID: The ngx_http_limit_req_module module is built as part of the nginx core: every nginx binary has it. You thus do not need to do anything. --- *B. R.* On Thu, Mar 24, 2016 at 4:47 AM, roslui12 wrote: > Hello friends, could you help me? as I install module for nginx Rate Limit? > Thank you > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265617,265617#msg-265617 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Mar 24 17:17:44 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 24 Mar 2016 18:17:44 +0100 Subject: simple auth question for nested sections In-Reply-To: <53DCD1BD-5F25-48A7-BD36-1943E9D85120@2xlp.com> References: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> <20160323181453.GM3340@daoine.org> <53DCD1BD-5F25-48A7-BD36-1943E9D85120@2xlp.com> Message-ID: I guess the docs logic is reversed: it is explicitly stated when a directive inherits, which must be because that is not considered the default behavior (although I am not in Igor's head...). This product uses a different paradigm than 'minimal configuration lines', and Igor has nothing against duplicated blocks, which allow direct understanding of what is in effect in the location block you are looking at, compared to horizontal/similar ones (ofc server-wide directives shall not and won't be redeclared at location level).
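For instance, a minimal sketch of that duplicated-block style, reusing the backend address and htpasswd path from earlier in the thread (the Host header line is illustrative):

```nginx
location /foo {
    proxy_pass       http://127.0.0.1:6543;
    proxy_set_header Host $host;
}

location /foo/admin {
    # Everything this location needs is restated, so the block
    # can be understood on its own without looking elsewhere.
    proxy_pass           http://127.0.0.1:6543;
    proxy_set_header     Host $host;
    auth_basic           "Administrator Login";
    auth_basic_user_file /etc/nginx/_htpasswd/foo;
}
```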
In the modern world, use of configuration management tools, which allow templating, allows duplicated stanzas without trouble on configuration generation/deployment. That is a way of looking at configuration, and everyone has his/her views on it. Please, correct me if I am wrong. --- *B. R.* On Thu, Mar 24, 2016 at 12:56 AM, Jonathan Vanasco wrote: > > On Mar 23, 2016, at 2:14 PM, Francis Daly wrote: > > Any directives that inherit do not need to be repeated. > > If it does not work for you, that's probably due to proxy_pass not > inheriting. > > > Thanks - that's it -- `proxy_pass` does not inherit, but all the > `proxy_set_header` directives in that block do. > Only the `proxy_pass` directive needed to be repeated in the location > block (thank goodness!) > > location /foo { > proxy_pass http://127.0.0.1:6543; > # nearly 10 lines of proxy_set_header > ... > location /foo/admin { > proxy_pass http://127.0.0.1:6543; > auth_basic "Administrator Login"; > auth_basic_user_file /etc/nginx/_htpasswd/foo; > } > } > > On Mar 23, 2016, at 7:20 AM, B.R. wrote: > > Why would you want to do that? Spaghetti configuration? > > > The proxy has a dozen lines of configuration. > The `proxy_pass` line doesn't inherit, but the docs don't mention that. > Only the `proxy_pass` directive not inheriting was the last thing I > expected. So I had to run duplicate blocks until things worked. > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Thu Mar 24 17:27:58 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Mar 2016 20:27:58 +0300 Subject: simple auth question for nested sections In-Reply-To: References: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> <20160323181453.GM3340@daoine.org> <53DCD1BD-5F25-48A7-BD36-1943E9D85120@2xlp.com> Message-ID: <20160324172758.GM37718@mdounin.ru> Hello! On Thu, Mar 24, 2016 at 06:17:44PM +0100, B.R. wrote: > I guess the docs logic is reversed: it is explicitely stated when a > directive inherits, which must be that way because not considered the > default behavior (although I am not in Igor's head...). No, by default all directives are inherited from previous levels. The only exceptions are: - location content handlers (proxy_pass, fastcgi_pass, scgi_pass, uwsgi_pass, memcached_pass, empty_gif, stub_status, mp4, flv, perl); - rewrite directives (if, rewrite, break, return); - try_files. In most cases this is more or less obvious when directives are not inherited, though docs can be a bit more clear on this. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Thu Mar 24 17:36:37 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 24 Mar 2016 18:36:37 +0100 Subject: simple auth question for nested sections In-Reply-To: <20160324172758.GM37718@mdounin.ru> References: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> <20160323181453.GM3340@daoine.org> <53DCD1BD-5F25-48A7-BD36-1943E9D85120@2xlp.com> <20160324172758.GM37718@mdounin.ru> Message-ID: Thanks Maxim for your lights! Please discard all the crap I wrote before and apologies to Jonathan, then. :oP --- *B. R.* On Thu, Mar 24, 2016 at 6:27 PM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 24, 2016 at 06:17:44PM +0100, B.R. wrote: > > > I guess the docs logic is reversed: it is explicitely stated when a > > directive inherits, which must be that way because not considered the > > default behavior (although I am not in Igor's head...). 
> > No, by default all directives are inherited from previous levels. > The only exceptions are: > > - location content handlers (proxy_pass, fastcgi_pass, scgi_pass, > uwsgi_pass, memcached_pass, empty_gif, stub_status, mp4, flv, > perl); > > - rewrite directives (if, rewrite, break, return); > > - try_files. > > In most cases this is more or less obvious when directives are not > inherited, though docs can be a bit more clear on this. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at 2xlp.com Thu Mar 24 19:35:00 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Thu, 24 Mar 2016 15:35:00 -0400 Subject: simple auth question for nested sections In-Reply-To: <20160324172758.GM37718@mdounin.ru> References: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> <20160323181453.GM3340@daoine.org> <53DCD1BD-5F25-48A7-BD36-1943E9D85120@2xlp.com> <20160324172758.GM37718@mdounin.ru> Message-ID: <8B0D9137-F7E9-4EB5-810F-E6561B51BBA5@2xlp.com> On Mar 24, 2016, at 1:27 PM, Maxim Dounin wrote: > In most cases this is more or less obvious when directives are not > inherited, though docs can be a bit more clear on this. What is not-obvious / confusing is that the *_pass items are not inherited... but their associated directives from the same module are. On Mar 24, 2016, at 1:17 PM, B.R. wrote: > This product uses a different paradigm that 'minimal configuration lines', and Igor has nothing against duplicated blocks, which allow direct understanding of what is in effect in the location block you are looking at, compared to horizontal/similar ones (ofc server-wide directives shall not and won't be redeclared at location level). 
> In the modern world, use of configuration management tools, which allow templating, allows duplicated stanzas without trouble on configuration generation/deployment. I'm fine with that. I've been using nginx for 10 years now (!) and have a huge library of macros that are included into various blocks dozens of times. Right now I'm trying to open-source something I wrote to handle lets-encrypt certificate management behind nginx. For simplicity, I wanted to make a very concise nested block for the documents. When there are many header variables that need to be set, the docs can get hard for people to follow. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Mar 24 20:09:03 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 24 Mar 2016 21:09:03 +0100 Subject: simple auth question for nested sections In-Reply-To: <8B0D9137-F7E9-4EB5-810F-E6561B51BBA5@2xlp.com> References: <5456038E-CF9F-4C0F-A892-96C41979B6A5@2xlp.com> <20160323181453.GM3340@daoine.org> <53DCD1BD-5F25-48A7-BD36-1943E9D85120@2xlp.com> <20160324172758.GM37718@mdounin.ru> <8B0D9137-F7E9-4EB5-810F-E6561B51BBA5@2xlp.com> Message-ID: Jonathan, please read Maxim's answer, which is the one going the right way. My hypothesis proved to be wrong. --- *B. R.* On Thu, Mar 24, 2016 at 8:35 PM, Jonathan Vanasco wrote: > > On Mar 24, 2016, at 1:27 PM, Maxim Dounin wrote: > > In most cases this is more or less obvious when directives are not > inherited, though docs can be a bit more clear on this. > > > What is not-obvious / confusing is that the *_pass items are not > inherited... but their associated directives from the same module are. > > > On Mar 24, 2016, at 1:17 PM, B.R. 
wrote: > > This product uses a different paradigm that 'minimal configuration lines', > and Igor has nothing against duplicated blocks, which allow direct > understanding of what is in effect in the location block you are looking > at, compared to horizontal/similar ones (ofc server-wide directives shall > not and won't be redeclared at location level). > In the modern world, use of configuration management tools, which allow > templating, allows duplicated stanzas without trouble on configuration > generation/deployment. > > > I'm fine with that. I've been using nginx for 10 years now (!) and have a > huge library of macros that are included into various blocks dozens of > times. > > Right now I'm trying to open-source something I wrote to handle > lets-encrypt certificate management behind nginx. > > For simplicity, I wanted to make a very concise nested block for the > documents. When there are many header variables that need to be set, the > docs can get hard for people to follow. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 24 20:31:32 2016 From: nginx-forum at forum.nginx.org (john_smith77) Date: Thu, 24 Mar 2016 16:31:32 -0400 Subject: Homepage cache and cookies In-Reply-To: References: Message-ID: <7ca332704f852fac630b384362a536cb.NginxMailingListEnglish@forum.nginx.org> Thanks for the info. I have removed the redundant config. I suppose what I am really getting at is that I would like Set-Cookie to never be cached with a cache MISS so that the cached cookie values are then not there for subsequent HITS. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265629,265645#msg-265645 From reallfqq-nginx at yahoo.fr Fri Mar 25 08:37:57 2016 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Fri, 25 Mar 2016 09:37:57 +0100 Subject: Homepage cache and cookies In-Reply-To: <7ca332704f852fac630b384362a536cb.NginxMailingListEnglish@forum.nginx.org> References: <7ca332704f852fac630b384362a536cb.NginxMailingListEnglish@forum.nginx.org> Message-ID: That seems to match what the docs for proxy_hide_header states, just do not use proxy_pass_header at the same time... which does the exact opposite. --- *B. R.* On Thu, Mar 24, 2016 at 9:31 PM, john_smith77 wrote: > Thanks for the info. I have removed the redundant config. I suppose what I > am really getting at is that I would like Set-Cookie to never be cached > with > a cache MISS so that the cached cookie values are then not there for > subsequent HITS. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265629,265645#msg-265645 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 25 13:06:54 2016 From: nginx-forum at forum.nginx.org (john_smith77) Date: Fri, 25 Mar 2016 09:06:54 -0400 Subject: Homepage cache and cookies In-Reply-To: References: Message-ID: Indeed, but that prevents all cookies from being set. What I am looking for is to have a cache hit, but have the set-cookie served from the origin server. In Apache httpd cache, this can be accomplished with CacheIgnoreHeaders Set-Cookie. The back end server here is Apache and I have tried setting it to send back this header: Cache-Control:no-cache="set-cookie" There just does not seem to be a combination of options for nginx that allows this to work. I've tried this in nginx both with scenarios: nginx -> Apache httpd -> wordpress nginx-> Apache httpd -> tomcat In both cases, proxy_hide_header will prevent the cookies from ever being set. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265629,265652#msg-265652 From nginx-forum at forum.nginx.org Fri Mar 25 18:22:19 2016 From: nginx-forum at forum.nginx.org (twister5800) Date: Fri, 25 Mar 2016 14:22:19 -0400 Subject: unknown directive "flv" Message-ID: <76428b2666238056b8ec31ba1ed0d6f7.NginxMailingListEnglish@forum.nginx.org> Hi, I have compiled nginx with allmodules: nginx version: nginx/1.9.12 built by gcc 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2) built with OpenSSL 1.0.2d 9 Jul 2015 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-file-aio --with-pcre --with-file-aio --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -DTCP_FASTOPEN=23' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed -L /usr/lib' --with-ipv6 --with-debug --without-http_scgi_module --without-http_uwsgi_module --add-module=/home/ubuntuadmin/nginx-rtmp-module But why can't nginx start when I add this to nginx.conf: location ~ .flv$ { flv; } It gives this: 2016/03/25 18:59:14 [emerg] 4643#0: unknown directive "flv" in /usr/local/nginx/conf/nginx.conf:63 Best regards Martin Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265654,265654#msg-265654 From nginx-forum at forum.nginx.org Fri Mar 25 21:33:42 2016 From: nginx-forum at forum.nginx.org (john_smith77) Date: Fri, 25 Mar 2016 17:33:42 -0400 Subject: Homepage cache and cookies In-Reply-To: References: Message-ID: So I found a solution and it seems like there is unexpected behavior from nginx. 
proxy_hide_header Set-Cookie does not seem to work when the location block is set to / So: location / { #Lots of other proxy stuff here...... proxy_hide_header "Set-Cookie"; } does not allow a cookie to ever be set, but: location ~home.html { #Lots of other proxy stuff here...... proxy_hide_header "Set-Cookie"; } will allow cache HITS but won't cache Set-Cookie headers. Clients are still able to get the cookies from the back end and will never get another user's session cookies. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265629,265658#msg-265658 From ph.gras at worldonline.fr Fri Mar 25 22:09:01 2016 From: ph.gras at worldonline.fr (Ph. Gras) Date: Fri, 25 Mar 2016 23:09:01 +0100 Subject: Homepage cache and cookies In-Reply-To: References: Message-ID: Hello John! I think you can do this better with sessions in NginX. I've made a script that checks whether a client is a bot or not, by testing if the Google Analytics cookie is there or not. If it is, the client can post a comment or log in. If not, the server serves a fake page. In my PHP script : ***************************************************** $host = $_SERVER['REMOTE_ADDR']; $session_id = session_id(); $session_name = session_name(); [...] if ( '' == $session_name ) { $session_name = '0'; session_start(); echo '
server busy, come back later!
'; exit(); } [...] ***************************************************** All pages are cached ;-) Regards, Ph. Gras On 25 March 2016 at 22:33, john_smith77 wrote: > So I found a solution and it seems like there is unexpected behavior from > nginx. proxy_hide_header Set-Cookie does not seem to work when the location > block is set to / > > So: > > location / { > #Lots of other proxy stuff here...... > proxy_hide_header "Set-Cookie"; } > > does not allow a cookie to ever be set, but: > > location ~home.html { > #Lots of other proxy stuff here...... > proxy_hide_header "Set-Cookie"; } > > will allow cache HITS but won't cache Set-Cookie headers. Clients are still > able to get the cookies from the back end and will never get another user's > session cookies. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265629,265658#msg-265658 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rowland at river2sea.org Sat Mar 26 00:00:42 2016 From: rowland at river2sea.org (Rowland Smith) Date: Fri, 25 Mar 2016 20:00:42 -0400 Subject: Why is nginx returning a 301 redirect in my reverse-proxy to a REST service? Message-ID: <3CBD4EFD-15B8-4A70-BB4F-C332680802DD@river2sea.org> I have a REST service with an endpoint 'registrations' running in a Docker container and it maps port 5000 to port 5001 on the Docker host. My /etc/nginx/conf.d/default.conf contains: upstream registration-api { server registration:5001 ; } server { listen 80; server_name ""; # Registration Service location /registrations/ { proxy_pass http://registration_api/; } } [...] I'm using Docker, so 'registration-api' is defined in the nginx container's /etc/hosts file. My Docker host is running at 192.168.99.100. I can run: curl -v http://192.168.99.100:5001/registrations/ and I get the response I am expecting.
However, when I try to go through nginx reverse-proxy with the command: curl -v http://192.168.99.100/registrations/ I get the following redirect: * Trying 192.168.99.100... * Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) > GET /registrations/ HTTP/1.1 > Host: 192.168.99.100 > User-Agent: curl/7.43.0 > Accept: */* > < HTTP/1.1 301 Moved Permanently < Server: nginx/1.9.12 < Date: Fri, 25 Mar 2016 23:57:20 GMT < Content-Type: text/html < Transfer-Encoding: chunked < Connection: keep-alive < Location: http://localhost < Expires: Fri, 25 Mar 2016 23:57:19 GMT < Cache-Control: no-cache < 301 Moved Permanently
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.9.12</center>
</body>
</html>
* Connection #0 to host 192.168.99.100 left intact Any help would be appreciated, Thanks, Rowlad From francis at daoine.org Sat Mar 26 01:24:03 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 26 Mar 2016 01:24:03 +0000 Subject: unknown directive "flv" In-Reply-To: <76428b2666238056b8ec31ba1ed0d6f7.NginxMailingListEnglish@forum.nginx.org> References: <76428b2666238056b8ec31ba1ed0d6f7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160326012403.GD28270@daoine.org> On Fri, Mar 25, 2016 at 02:22:19PM -0400, twister5800 wrote: Hi there, > nginx version: nginx/1.9.12 > built by gcc 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2) > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx ... > --add-module=/home/ubuntuadmin/nginx-rtmp-module > But why can't nginx start when I add this to nginx.conf: > > location ~ .flv$ { > flv; > } > > It gives this: > 2016/03/25 18:59:14 [emerg] 4643#0: unknown directive "flv" in > /usr/local/nginx/conf/nginx.conf:63 Either: the nginx binary that you are running when reading the config file does not include the flv module; or your third-party module is broken. Try without the third-party module; if it works, then find the version of the third-party module that is appropriate for this nginx version. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Mar 26 02:01:28 2016 From: nginx-forum at forum.nginx.org (JoakimR) Date: Fri, 25 Mar 2016 22:01:28 -0400 Subject: nginx HttpSecureLinkModule php streaming Message-ID: <10801f1a50ef8788127ecf2e6fd7e8dd.NginxMailingListEnglish@forum.nginx.org> I have for hours been trying to understand and figure out how this is working, and searched and tried a lot, but still.... 
No hair left on my head :( Sources for trial and error: http://nginx.org/en/docs/http/ngx_http_secure_link_module.html#variables http://stackoverflow.com/questions/8848919/secure-pseudo-streaming-flv-files/8863219#8863219 http://stackoverflow.com/questions/27468743/nginx-secure-link-module-not-working-with-php-files-but-working-on-static-files http://stackoverflow.com/questions/30065870/nginx-httpsecurelinkmodule-php-example The question is how do I get Nginx and PHP to make a timed encrypted link to play a video? The final output should look like http://domain.tld/videos/529d86c4ff560cb559df7eb794aeb4b1/56f5bec4/5/4/d/f/e/54dfee5c43e6e.mp4 The real path for this video is /var/www/domain.tld/media/videos/5/4/d/f/e/54dfee5c43e6e.mp4 The PHP snippet should be left unaltered if possible, since it will be updated from time to time by the developer. So all modifications have to be made in NginX: //generate lighttpd hash function getMediaLink($filename) { global $modsec_secret; // example: s3cr3tk3y (from your lighttpd config) global $modsec_url; // example: http://media.asfd.com/dl/ with trailing slash $filename = rawurldecode($filename); $f = "/".$filename; $t = time(); $t_hex = sprintf("%08x", $t); $m = md5($modsec_secret.$f.$t_hex); $link = $modsec_url.$m.'/'.$t_hex.$f; return $link; } And in the NginX conf file I have this head: server { listen 80; # http2; server_name localhost; index index.php index.html index.htm; #charset koi8-r; access_log /var/log/nginx/access.log main; root /var/www/domain.tld; location / { ..... (the latest attempt) location /videos/ { secure_link $arg_st,$arg_e; secure_link_md5 secretstring$uri$arg_e; location ~ \.mp4$ { if ($secure_link = "") { return 403; } mp4; mp4_buffer_size 1m; mp4_max_buffer_size 5m; gzip off; } location ~ \.flv$ { if ($secure_link = "") { return 403; } flv; } } Could someone please try to tell me how to make this happen, but maybe more importantly why, in a fashion that makes me understand the how and why?
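[A side note, not from the original thread: one likely reason a lighttpd-style helper like the one above never matches is that it emits a hex MD5, while nginx's secure_link_md5 compares the token against the base64url encoding of the raw MD5 digest, with trailing "=" padding stripped. A minimal Python sketch of a generator matching the secure_link/secure_link_md5 lines quoted above; the secret, TTL, and function name are illustrative assumptions:]

```python
import base64
import hashlib
import time

def secure_video_link(uri: str, secret: str, ttl: int = 3600) -> str:
    """Build a URL for the config sketched above:
         secure_link $arg_st,$arg_e;
         secure_link_md5 secretstring$uri$arg_e;
    Note: $arg_e here is the raw decimal timestamp, not lighttpd's hex form."""
    expires = str(int(time.time()) + ttl)
    digest = hashlib.md5((secret + uri + expires).encode()).digest()
    # secure_link_md5 expects base64url(raw md5) with '=' padding removed
    token = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return f"{uri}?st={token}&e={expires}"

print(secure_video_link("/videos/5/4/d/f/e/54dfee5c43e6e.mp4", "secretstring"))
```

[The PHP equivalent would wrap md5($secret.$uri.$expires, true) in base64_encode(), then strtr('+/', '-_') and rtrim('='). Also note the desired URL in the question embeds the hash in the path, while the config reads it from query arguments; serving the path-embedded layout would additionally need a rewrite mapping the path segments onto $arg_st and $arg_e.]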
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265663,265663#msg-265663 From nginx-forum at forum.nginx.org Sat Mar 26 03:09:21 2016 From: nginx-forum at forum.nginx.org (JoakimR) Date: Fri, 25 Mar 2016 23:09:21 -0400 Subject: Why is nginx returning a 301 redirect in my reverse-proxy to a REST service? In-Reply-To: <3CBD4EFD-15B8-4A70-BB4F-C332680802DD@river2sea.org> References: <3CBD4EFD-15B8-4A70-BB4F-C332680802DD@river2sea.org> Message-ID: <76e4117f77e8085c45df71fa575d2f0c.NginxMailingListEnglish@forum.nginx.org> Hi, I'm pretty new to NginX, but the directives suggest you add proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; to your config http://nginx.org/en/docs/http/ngx_http_proxy_module.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265660,265664#msg-265664 From reallfqq-nginx at yahoo.fr Sat Mar 26 10:25:20 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 26 Mar 2016 11:25:20 +0100 Subject: Homepage cache and cookies In-Reply-To: References: Message-ID: Make sure you test this unexpected behavior with a minimal, reproducible configuration. If you succeed, please share it or (even better!) file a bug report. Otherwise, that means you probably did something wrong. You were previously saying proxy_hide_header did not pass cookies to the client. Now you are saying clients are able to grab their cookie. I am a bit confused here... --- *B. R.* On Fri, Mar 25, 2016 at 10:33 PM, john_smith77 wrote: > So I found a solution and it seems like there is unexpected behavior from > nginx. proxy_hide_header Set-Cookie does not seem to work when the location > block is set to / > > So: > > location / { > #Lots of other proxy stuff here...... > proxy_hide_header "Set-Cookie"; } > > does not allow a cookie to ever be set, but: > > location ~home.html { > #Lots of other proxy stuff here...... > proxy_hide_header "Set-Cookie"; } > > will allow cache HITS but won't cache Set-Cookie headers.
Clients are still > able to get the cookies from the back end and will never get another users > session cookies. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265629,265658#msg-265658 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Mar 26 11:35:40 2016 From: nginx-forum at forum.nginx.org (edtaa) Date: Sat, 26 Mar 2016 07:35:40 -0400 Subject: Sendy in foreign language Message-ID: <1a21a71cc033e823fe7b563ccc211bb7.NginxMailingListEnglish@forum.nginx.org> I have Sendy running fine on a Debian box and served by Nginx. I have installed French language files to enable Sendy to operate in French. This works perfectly under Apache on my local computer but not on the production site with Nginx. Sendy uses gettext so this should be fairly standard to get working. The Sendy forum has some suggestions on getting translations to work and I have followed these without success. I've had a discussion with Ben on this forum which is online here: https://sendy.co/forum/messages/5445#24416 I don't want to be forced to use Apache as Ben suggests. So I'd be eternally grateful if anyone can see how to get this working on Nginx. 
Here is my server configuration which I downloaded via the Sendy forum: server { listen 80; listen [::]:80; server_name example.com; autoindex off; index index.php index.html; root /var/www/example.com/public_html; access_log /var/www/example.com/log/access.log; error_log /var/www/example.com/log/error.log; location / { try_files $uri $uri/ $uri.php?$args; } location /l/ { rewrite ^/l/([a-zA-Z0-9/]+)$ /l.php?i=$1 last; } location /t/ { rewrite ^/t/([a-zA-Z0-9/]+)$ /t.php?i=$1 last; } location /w/ { rewrite ^/w/([a-zA-Z0-9/]+)$ /w.php?i=$1 last; } location /unsubscribe/ { rewrite ^/unsubscribe/(.*)$ /unsubscribe.php?i=$1 last; } location /subscribe/ { rewrite ^/subscribe/(.*)$ /subscribe.php?i=$1 last; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; log_not_found off; expires 30d; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265666,265666#msg-265666 From nginx-forum at forum.nginx.org Sat Mar 26 16:55:40 2016 From: nginx-forum at forum.nginx.org (Askancy) Date: Sat, 26 Mar 2016 12:55:40 -0400 Subject: mod_rewrite with Nginx? Message-ID: Hello, this is the first time I'm using NGINX, and I'm converting my .htaccess for it. Unfortunately the .htaccess is really long, and it is the first time I have done this. I converted the .htaccess with an online tool, loaded the changed conf, and restarted the nginx service. When I click a link on the page, it automatically downloads the .php files.
Here's my .htaccess file: http://pastebin.com/QiNGGbbn Below is my .conf instead of Nginx: http://pastebin.com/7DZKChGm Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265668,265668#msg-265668 From nginx-forum at forum.nginx.org Sat Mar 26 22:12:17 2016 From: nginx-forum at forum.nginx.org (JoakimR) Date: Sat, 26 Mar 2016 18:12:17 -0400 Subject: Sendy in foreign language In-Reply-To: <1a21a71cc033e823fe7b563ccc211bb7.NginxMailingListEnglish@forum.nginx.org> References: <1a21a71cc033e823fe7b563ccc211bb7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <28fda72a6dad7d22d57016156b51deeb.NginxMailingListEnglish@forum.nginx.org> have you seen or tested this module? https://www.nginx.com/resources/wiki/modules/accept_language/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265666,265669#msg-265669 From nginx-forum at forum.nginx.org Sat Mar 26 22:14:11 2016 From: nginx-forum at forum.nginx.org (JoakimR) Date: Sat, 26 Mar 2016 18:14:11 -0400 Subject: mod_rewrite with Nginx? In-Reply-To: References: Message-ID: This online tool convert your .htaccess to nice nginx stuff :) PS: IT's not failproff but would help you a lot. http://winginx.com/en/htaccess Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265668,265670#msg-265670 From nginx-forum at forum.nginx.org Sat Mar 26 22:26:33 2016 From: nginx-forum at forum.nginx.org (Askancy) Date: Sat, 26 Mar 2016 18:26:33 -0400 Subject: mod_rewrite with Nginx? In-Reply-To: References: Message-ID: <05362f2941cf45e9423415dc3e2d53e2.NginxMailingListEnglish@forum.nginx.org> Hi, thanks for the reply, I used it ... but it still does not work ... This is my new conf: http://pastebin.com/93b58G68 but it does not work... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265668,265671#msg-265671 From reallfqq-nginx at yahoo.fr Sun Mar 27 20:10:53 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 27 Mar 2016 22:10:53 +0200 Subject: mod_rewrite with Nginx? 
In-Reply-To: <05362f2941cf45e9423415dc3e2d53e2.NginxMailingListEnglish@forum.nginx.org> References: <05362f2941cf45e9423415dc3e2d53e2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Copy-pasting configuration or using a pre-cooked one is always a bad idea, since you do not understand what you are dealing with. The same applies to automatic conversion tools, since the paradigm behind the Apache Web server is totally different from the one behind nginx: there is no one-to-one relationship (bijection) between the way to write Apache configuration directives and nginx ones, so the very idea of such a tool is flawed. Too bad people keep trying to provide their attempt at it without thinking about that first. You deleted the first pastebins you provided. People reading the thread in its entirety thus cannot get a handle on your initial question. If you do the same with the last one, the whole thread will turn into junk for future reference. Now, about the configuration you provided in the last pastebin: 1. For the list of console names, a single rewrite at server level would have proven enough. You could have transformed it into a regex location + rewrite to take advantage of the precedence of location blocks depending on their type/modifiers 2. You could have grouped all the location blocks doing similar jobs together 3. You do not provide a location dealing with the index.php file, so the default behavior would be for nginx to serve that file 4. You do not provide the whole configuration, as there is no http block (which contains the server ones), nor do you provide what is in the included files at the end of it (particularly the php.conf one, which could prove useful) Please take some time to read the nginx docs to get a grasp of the basics. There even is a 'Beginner's Guide' to help you through.
Generic tip: it is usually a good idea to prepare a minimal configuration depicting your problem rather than sending a huge, incomplete configuration file which is, in the end, useless. You are not encouraging people to help. Do your homework to show people you tried solving your own problem, saving as much of their time as possible. My 2 cents, --- *B. R.* On Sat, Mar 26, 2016 at 11:26 PM, Askancy wrote: > Hi, thanks for the reply, I used it ... but it still does not work ... > > This is my new conf: http://pastebin.com/93b58G68 but it does not work... > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,265668,265671#msg-265671 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Mar 28 04:00:14 2016 From: nginx-forum at forum.nginx.org (JoakimR) Date: Mon, 28 Mar 2016 00:00:14 -0400 Subject: nginx http2 pictures timeout In-Reply-To: <1581cf4b18b8766531d09e9a86eeee65.NginxMailingListEnglish@forum.nginx.org> References: <2575407.3zCVz9Pomt@vbart-workstation> <1581cf4b18b8766531d09e9a86eeee65.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3e98c6c3ce5843c1339e93ca51b51dfa.NginxMailingListEnglish@forum.nginx.org> Having the same issue using http2 on both my image server and reverse proxy (both nginx). The backend is running on a SATA disk and the proxy on an SSD disk... this is why the setup is the way it is. With http2 enabled I get some weird signs, but disabling http2 makes things run... So I follow this thread :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265430,265679#msg-265679 From nginx-forum at forum.nginx.org Mon Mar 28 07:54:40 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Mon, 28 Mar 2016 03:54:40 -0400 Subject: deny in http {}, get 500 response, how to log this?
Message-ID: <672b12cc93e497044e2709c5631eb951.NginxMailingListEnglish@forum.nginx.org> Hi All, I'm using deny to deny some IPs for my server. http { deny 192.168.1.123; # this is an example server { error_page 403 /error/403.htm; error_page 404 /error/404.htm; error_page 502 /error/502.htm; error_page 503 /error/503.htm; location = /error/403.htm { index 403.htm; access_log /var/log/403.log main; } location ~* ^/(data|image)/.*.(php|php5)$ { deny all; } } I found that if 192.168.1.123 accesses my server, because this IP is blocked in http {}, it will get a 500 response. And if someone (whose IP is not blocked) tries to access my data/*.php, he will get a 403 response. All these 500 and 403 responses are put into my 403.log. Is it possible to put the 500 responses in a separate log? Then my 403 log would only log those who are trying to access the protected files. I understand that if I put "deny IP" into server {}, it will get a 403 response. But I want to deny some IPs at the whole-server level. Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265680,265680#msg-265680 From nginx-forum at forum.nginx.org Mon Mar 28 09:59:38 2016 From: nginx-forum at forum.nginx.org (JoakimR) Date: Mon, 28 Mar 2016 05:59:38 -0400 Subject: nginx HttpSecureLinkModule php streaming In-Reply-To: <10801f1a50ef8788127ecf2e6fd7e8dd.NginxMailingListEnglish@forum.nginx.org> References: <10801f1a50ef8788127ecf2e6fd7e8dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Have struggled with this for 4 days now... could someone please help me?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265663,265681#msg-265681 From nginx-forum at forum.nginx.org Mon Mar 28 10:11:05 2016 From: nginx-forum at forum.nginx.org (elanh) Date: Mon, 28 Mar 2016 06:11:05 -0400 Subject: proxy_ssl_certificate not working as expected In-Reply-To: <20160316201247.GM12808@mdounin.ru> References: <20160316201247.GM12808@mdounin.ru> Message-ID: Hello Maxim, The configuration is loaded correctly and is handling requests. "nginx -t" shows that all is OK and a 200 OK response is returned correctly. My front-end server is running version 1.9.10 (I ran "nginx -v"). So proxy_ssl_certificate is valid in my case. The backend server is running version 1.4.6 - but does this matter? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265281,265682#msg-265682 From nginx-forum at forum.nginx.org Mon Mar 28 10:44:03 2016 From: nginx-forum at forum.nginx.org (elanh) Date: Mon, 28 Mar 2016 06:44:03 -0400 Subject: proxy_ssl_certificate not working as expected In-Reply-To: References: <20160316201247.GM12808@mdounin.ru> Message-ID: <5c1a5f188c06526a7d2d291852c3a603.NginxMailingListEnglish@forum.nginx.org> Here is my full "nginx -V" output: nginx version: nginx/1.9.10 built by gcc 4.9.2 (Debian 4.9.2-10) built with OpenSSL 1.0.1k 8 Jan 2015 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module 
--with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_v2_module --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265281,265684#msg-265684 From mdounin at mdounin.ru Mon Mar 28 13:27:23 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Mar 2016 16:27:23 +0300 Subject: deny in http {}, get 500 response , how to log this? In-Reply-To: <672b12cc93e497044e2709c5631eb951.NginxMailingListEnglish@forum.nginx.org> References: <672b12cc93e497044e2709c5631eb951.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160328132723.GX37718@mdounin.ru> Hello! On Mon, Mar 28, 2016 at 03:54:40AM -0400, meteor8488 wrote: > Hi All, > > I'm using deny to deny some IPs for my server. > > http { > deny 192.168.1.123; # this is an example > > > server { > > error_page 403 /error/403.htm; > error_page 404 /error/404.htm; > error_page 502 /error/502.htm; > error_page 503 /error/503.htm; > > location = /error/403.htm { > index 403.htm; > access_log /var/log/403.log main; > } > > location ~* ^/(data|image)/.*.(php|php5)$ { > deny all; > } > } > > I found that if 192.168.1.123 access my server, due to this ip is blocked in > http {}, so it will get a 500 response. > And if someone (IP not blocked) try to access my data/*.php, he will get a > 403 response. > > And all these 500 and 403 response will be put into my 403.log. That's because all of the requests are redirected /error/403.htm by the error_page directive, and you have logging to 403.log configured in the corresponding location. 
The 500 error code is logged for requests from blocked IPs because: - "deny" rule works in the location /error/403.htm, hence 403 error is triggered again; - you have recursive_error_pages (http://nginx.org/r/recursive_error_pages) enabled somewhere in your configuration, and your configuration causes redirect loop which in turn results in error 500 after 10 iterations. To resolve the redirect loop, consider using "allow all" in the location /error/403.htm. > Is it possible to put 500 response to a separate log? Then my 403 log will > only log these who is trying to access the protected files. Yes. You can configure different error pages for protected files and the rest of the site, and log them separately. E.g.: deny 192.168.1.123; error_page 403 /error/403.nolog.htm; location = /error/403.htm { allow all; access_log /path/to/403.log; } location = /error/403.nolog.htm { allow all; alias /error/403.htm; access_log off; } location /protected/ { deny all; error_page 403 /error/403.htm; } > I understand that if I put "deny IP" in to server {}, it will get a 403 > response. But I want to deny some IPs on the whole server level. No, there is no difference between "deny" specified at http{} or server{} level. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Mar 28 13:51:40 2016 From: nginx-forum at forum.nginx.org (john_smith77) Date: Mon, 28 Mar 2016 09:51:40 -0400 Subject: Homepage cache and cookies In-Reply-To: References: Message-ID: Specifically I was writing that clients could get their cookies, but not served form the nginx cache. This has always been about whether or not the cached page was serving the cookies or not. With the exact same configuration for the cache, there is a difference between: location / and location /~home.html in regards to whether or not the cached version of the page has the Set-Cookie headers in it. I will try and research some more to see if that would be expected or not. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265629,265686#msg-265686 From francis at daoine.org Mon Mar 28 15:36:03 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 28 Mar 2016 16:36:03 +0100 Subject: nginx HttpSecureLinkModule php streaming In-Reply-To: <10801f1a50ef8788127ecf2e6fd7e8dd.NginxMailingListEnglish@forum.nginx.org> References: <10801f1a50ef8788127ecf2e6fd7e8dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160328153603.GE28270@daoine.org> On Fri, Mar 25, 2016 at 10:01:28PM -0400, JoakimR wrote: Hi there, > The question is how do I get Nginx and php to make a timed encrypted link to > play a video? I'm not sure what specifically your question is. If it is "how do I use the nginx secure-link module?", then the content at http://nginx.org/en/docs/http/ngx_http_secure_link_module.html should be helpful. If it is "how do I use the nginx secure-link module with mp4 streams?", then the first hit on a web search for "site:nginx.org secure link mp4" for me is https://forum.nginx.org/read.php?2,228161,228183 which seems to show a working example. If it is "how do I create the url in php?", then the shell examples on the first page should give a hint; just implement that in PHP. > The php snippet that should be left unaltered if possible, since it will be > updated from time to time from the developer. So all modifications have to > made in NginX That makes it sound like the question is "how do I use the nginx secure-link module, when the algorithm to create the link can be anything at all?". And the answer there is "you don't". If you want your own secure-link creation system, you write your own module to match that. If you want to use the nginx secure-link module, you write your external link-creation tool to match it. > location /videos/ { > secure_link $arg_st,$arg_e; > secure_link_md5 secretstring$uri$arg_e; What request do you make for this test? What values do these variables have, when nginx receives that request?
> Could some one please try to tell me how to make this happen, but maybe more > important why, in a passion that mae me understand how and why? Start small. What is the first specific thing you want to do? Do you have it working? If not, what do you have, what do you do, what response do you get, what response do you want? Repeat, for each other specific thing that you want to do. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Mar 28 22:32:28 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Mon, 28 Mar 2016 18:32:28 -0400 Subject: deny in http {}, get 500 response , how to log this? In-Reply-To: <20160328132723.GX37718@mdounin.ru> References: <20160328132723.GX37718@mdounin.ru> Message-ID: <6dcbaf38b16a8347884d4923b57af077.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Mon, Mar 28, 2016 at 03:54:40AM -0400, meteor8488 wrote: > > > Hi All, > > > > I'm using deny to deny some IPs for my server. > > > > http { > > deny 192.168.1.123; # this is an example > > > > > > server { > > > > error_page 403 /error/403.htm; > > error_page 404 /error/404.htm; > > error_page 502 /error/502.htm; > > error_page 503 /error/503.htm; > > > > location = /error/403.htm { > > index 403.htm; > > access_log /var/log/403.log main; > > } > > > > location ~* ^/(data|image)/.*.(php|php5)$ { > > deny all; > > } > > } > > > > I found that if 192.168.1.123 access my server, due to this ip is > blocked in > > http {}, so it will get a 500 response. > > And if someone (IP not blocked) try to access my data/*.php, he will > get a > > 403 response. > > > > And all these 500 and 403 response will be put into my 403.log. > > That's because all of the requests are redirected /error/403.htm > by the error_page directive, and you have logging to 403.log > configured in the corresponding location. 
> > The 500 error code is logged for requests from blocked IPs > because: > > - "deny" rule works in the location /error/403.htm, hence 403 > error is triggered again; > > - you have recursive_error_pages > (http://nginx.org/r/recursive_error_pages) enabled somewhere in your > > configuration, and your configuration causes redirect loop which > in turn results in error 500 after 10 iterations. > > To resolve the redirect loop, consider using "allow all" in the > location /error/403.htm. > > > Is it possible to put 500 response to a separate log? Then my 403 > log will > > only log these who is trying to access the protected files. > > Yes. You can configure different error pages for protected files > and the rest of the site, and log them separately. E.g.: > > deny 192.168.1.123; > > error_page 403 /error/403.nolog.htm; > > location = /error/403.htm { > allow all; > access_log /path/to/403.log; > } > > location = /error/403.nolog.htm { > allow all; > alias /error/403.htm; > access_log off; > } > > location /protected/ { > deny all; > error_page 403 /error/403.htm; > } > > > I understand that if I put "deny IP" in to server {}, it will get a > 403 > > response. But I want to deny some IPs on the whole server level. > > No, there is no difference between "deny" specified at http{} or > server{} level. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks for your quick response. It's quite clear and easy to understand! Thanks again Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265680,265695#msg-265695 From reallfqq-nginx at yahoo.fr Tue Mar 29 00:30:08 2016 From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 29 Mar 2016 02:30:08 +0200 Subject: limit_req_status & limit_conn_status code Message-ID: Following RFC 6585, there is an HTTP status code for 'too many requests', which applies well to the nginx module limiting requests, and by extension to the module limiting connections. Having in mind changing default values is somewhat tricky regarding backwards compatibility, would it be a possible enhancement to change the default value of limit_req_status and limit_conn_status from 503 to 429? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From cole.putnamhill at comcast.net Tue Mar 29 03:30:32 2016 From: cole.putnamhill at comcast.net (Cole Tierney) Date: Mon, 28 Mar 2016 23:30:32 -0400 Subject: multiple named captures in map regex Message-ID: <71553A9F-06E6-488A-88B6-F5F8990E50A6@comcast.net> Hello, Is it possible to have more than one named capture in a map regex? When I try the following in a map: ~^/(?<a>.)(?<b>.) $a$b; ...I get [emerg] 28486#0: unknown "a$b" variable. I've tried "$a$b" and "${a}${b}" with no change. I'm running nginx/1.9.6. Cole From nginx-forum at forum.nginx.org Tue Mar 29 12:36:37 2016 From: nginx-forum at forum.nginx.org (pirish) Date: Tue, 29 Mar 2016 08:36:37 -0400 Subject: Nginx generating different md5 sums for same key causing cache to become corrupt Message-ID: <216d004a0afc0177656d8f31c6cb8846.NginxMailingListEnglish@forum.nginx.org> Hello, Currently we use nginx 1.8.1 on redhat 7. We have reverse proxy caching enabled but are experiencing unexpected behaviour. Our caching key is $scheme$request_method$host$request_uri$. On the first visit to a page, a cache file with the correct md5 sum of the key is created. If you visit the same url from the same browser, but a different tab, a new cache file is created with different md5 sum, but same content. The next time you visit the url from that browser, it will be random which page is selected from the cache.
Now if you visit from a second browser a 3rd cache file will be created. Reloading from any browser or tab will randomly pick one of the created files. Are there other factors in creating the md5 other than the cache key? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265703,265703#msg-265703 From mdounin at mdounin.ru Tue Mar 29 13:07:17 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 29 Mar 2016 16:07:17 +0300 Subject: Nginx generating different md5 sums for same key causing cache to become corrupt In-Reply-To: <216d004a0afc0177656d8f31c6cb8846.NginxMailingListEnglish@forum.nginx.org> References: <216d004a0afc0177656d8f31c6cb8846.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160329130717.GE36620@mdounin.ru> Hello! On Tue, Mar 29, 2016 at 08:36:37AM -0400, pirish wrote: > Hello, > Currently we use nginx 1.8.1 on redhat 7. We have reverse proxy caching > enabled but are experiencing unexpected behaviour. Our caching key is > $scheme$request_method$host$request_uri$. On the first visit to a page, a > cache file with the correct md5 sum of the key is created. If you visit the > same url from the same browser, but a different tab, a new cache file is > created with different md5 sum, but same content. The next time you visit > the url from that browser, it will be random which page is selected from the > cache. Now if you visit from a second browser a 3rd cache file will be > created. Reloading from any browser or tab will randomly pick one of the > created files. Are there other factors in creating the md5 other than the > cache key? When a response uses the Vary mechanism, multiple representations may be stored in cache under secondary cache keys derived from the request headers listed in Vary. You can use proxy_ignore_headers to instruct nginx to ignore Vary headers in responses, see http://nginx.org/r/proxy_ignore_headers.
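As a sketch of the fix Maxim describes — assuming a cache zone named "shared_cache" and an upstream named "backend", both hypothetical names — ignoring Vary on proxied responses looks like:

```nginx
location / {
    proxy_cache shared_cache;
    proxy_cache_key $scheme$request_method$host$request_uri;

    # Without this, a response carrying e.g. "Vary: User-Agent" is stored
    # under a secondary key derived from that request header, producing
    # one cache file per browser even though the content is identical.
    proxy_ignore_headers Vary;

    proxy_pass https://backend;
}
```

This is only safe when the responses really are identical regardless of the Vary'd headers; otherwise a client may be served a representation negotiated for someone else (e.g. a gzip-compressed body cached under Vary: Accept-Encoding).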
-- Maxim Dounin http://nginx.org/ From gfrankliu at gmail.com Tue Mar 29 15:03:50 2016 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 29 Mar 2016 08:03:50 -0700 Subject: proxy_read_timeout vs proxy_next_upstream_timeout Message-ID: Hi If you set read timeout 2 min and next upstream timeout 50 seconds, will nginx break the current connection at 50 second or will it let the read finish until 2min? Thanks Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Mar 29 15:18:00 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Mar 2016 16:18:00 +0100 Subject: multiple named captures in map regex In-Reply-To: <71553A9F-06E6-488A-88B6-F5F8990E50A6@comcast.net> References: <71553A9F-06E6-488A-88B6-F5F8990E50A6@comcast.net> Message-ID: <20160329151800.GF28270@daoine.org> On Mon, Mar 28, 2016 at 11:30:32PM -0400, Cole Tierney wrote: Hi there, > Is it possible to have more than one named capture in a map regex? In a map regex, yes. In a "value" part of a map, no. http://nginx.org/r/map says The resulting value can be a string or another variable > When I try the following in a map: > ~^/(?<a>.)(?<b>.) $a$b; You can use $a and $b outside the map; but where you have "$a$b", you must instead have exactly one string or one variable. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Mar 29 15:32:25 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 29 Mar 2016 18:32:25 +0300 Subject: nginx-1.9.13 Message-ID: <20160329153225.GN36620@mdounin.ru> Changes with nginx 1.9.13 29 Mar 2016 *) Change: non-idempotent requests (POST, LOCK, PATCH) are no longer passed to the next server by default if a request has been sent to a backend; the "non_idempotent" parameter of the "proxy_next_upstream" directive explicitly allows retrying such requests. *) Feature: the ngx_http_perl_module can be built dynamically. *) Feature: UDP support in the stream module. *) Feature: the "aio_write" directive.
*) Feature: now cache manager monitors number of elements in caches and tries to avoid cache keys zone overflows. *) Bugfix: "task already active" and "second aio post" alerts might appear in logs when using the "sendfile" and "aio" directives with subrequests. *) Bugfix: "zero size buf in output" alerts might appear in logs if caching was used and a client closed a connection prematurely. *) Bugfix: connections with clients might be closed needlessly if caching was used. Thanks to Justin Li. *) Bugfix: nginx might hog CPU if the "sendfile" directive was used on Linux or Solaris and a file being sent was changed during sending. *) Bugfix: connections might hang when using the "sendfile" and "aio threads" directives. *) Bugfix: in the "proxy_pass", "fastcgi_pass", "scgi_pass", and "uwsgi_pass" directives when using variables. Thanks to Piotr Sikora. *) Bugfix: in the ngx_http_sub_filter_module. *) Bugfix: if an error occurred in a cached backend connection, the request was passed to the next server regardless of the proxy_next_upstream directive. *) Bugfix: "CreateFile() failed" errors when creating temporary files on Windows. -- Maxim Dounin http://nginx.org/ From thresh at nginx.com Tue Mar 29 17:10:10 2016 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 29 Mar 2016 20:10:10 +0300 Subject: packages for the dynamic modules. testing required. In-Reply-To: References: Message-ID: <56FAB6F2.4050309@nginx.com> Hello, We've just published the official packages for the 1.9.13 release and two more dynamic module packages were added on all supported platforms: - perl, nginx-module-perl - njs, nginx-module-njs Enjoy, On 24/02/2016 22:08, Sergey Budnevitch wrote: > Hello. > > Previously we built nginx with all modules, except those that required > extra libraries. With dynamic modules it is possible to build them as > the separate packages and nginx main package will not have extra > dependences. 
> > For nginx 1.9.12 we build additional packages with xslt, image-filter > and geoip modules. It is possible to install, for example, image filter > module on RHEL/CentOS with command: > > % yum install nginx-module-image-filter > > or on Ubuntu/Debian with command: > > % apt-get install nginx-module-image-filter > > then to enable module it is necessary to add the load_module directive: > > load_module modules/ngx_http_image_filter_module.so; > > to the main section of the nginx.conf > > Please test these modules, any feedback will be helpful. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Konstantin Pavlov From reallfqq-nginx at yahoo.fr Tue Mar 29 17:12:15 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 29 Mar 2016 19:12:15 +0200 Subject: proxy_read_timeout vs proxy_next_upstream_timeout In-Reply-To: References: Message-ID: Those directives work at very different levels. proxy_next_upstream_timeout allocates a time for nginx to find a proper upstream (configured with proxy_next_upstream). By default, there is no limit, making nginx attempt every upstream one after the other until finding one healthy or running out of valid upstreams, which could take a while (maybe even an infinite time with health checks? Speculating here). proxy_read_timeout sets a timer 'between two read operations', as the docs say, which is closer to your question. Those are to be defined, but I suppose that when there is nothing more to read in the buffer awaiting backend response, and the response is not complete, this timer kicks in and effectively closes the connection, returning an error. "They are not working at the same level at all: there is no way to mistake one for the other." --- *B.
R.* On Tue, Mar 29, 2016 at 5:03 PM, Frank Liu wrote: > Hi > > If you set read timeout 2 min and next upstream timeout 50 seconds, will > nginx break the current connection at 50 second or will it let the read > finish until 2min? > > Thanks > Frank > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue Mar 29 17:58:21 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 29 Mar 2016 13:58:21 -0400 Subject: nginx-1.9.13 In-Reply-To: <20160329153225.GN36620@mdounin.ru> References: <20160329153225.GN36620@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.13 for Windows https://kevinworthington.com/nginxwin1913 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Mar 29, 2016 at 11:32 AM, Maxim Dounin wrote: > Changes with nginx 1.9.13 29 Mar > 2016 > > *) Change: non-idempotent requests (POST, LOCK, PATCH) are no longer > passed to the next server by default if a request has been sent to a > backend; the "non_idempotent" parameter of the "proxy_next_upstream" > directive explicitly allows retrying such requests. > > *) Feature: the ngx_http_perl_module can be built dynamically. > > *) Feature: UDP support in the stream module. > > *) Feature: the "aio_write" directive. > > *) Feature: now cache manager monitors number of elements in caches and > tries to avoid cache keys zone overflows. 
> > *) Bugfix: "task already active" and "second aio post" alerts might > appear in logs when using the "sendfile" and "aio" directives with > subrequests. > > *) Bugfix: "zero size buf in output" alerts might appear in logs if > caching was used and a client closed a connection prematurely. > > *) Bugfix: connections with clients might be closed needlessly if > caching was used. > Thanks to Justin Li. > > *) Bugfix: nginx might hog CPU if the "sendfile" directive was used on > Linux or Solaris and a file being sent was changed during sending. > > *) Bugfix: connections might hang when using the "sendfile" and "aio > threads" directives. > > *) Bugfix: in the "proxy_pass", "fastcgi_pass", "scgi_pass", and > "uwsgi_pass" directives when using variables. > Thanks to Piotr Sikora. > > *) Bugfix: in the ngx_http_sub_filter_module. > > *) Bugfix: if an error occurred in a cached backend connection, the > request was passed to the next server regardless of the > proxy_next_upstream directive. > > *) Bugfix: "CreateFile() failed" errors when creating temporary files > on > Windows. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Wed Mar 30 01:34:59 2016 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 29 Mar 2016 18:34:59 -0700 Subject: 1.9.13 and non_idempotent Message-ID: If I explicitly configured to retry next upstream based on a certain http_xxx, will that stop working if a request is a POST with 1.9.13? For other http code, I like the idea of not retry if it is non idempotent but for one http_xxx, I want retry no matter what type of request. Thanks Frank -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed Mar 30 02:48:15 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Mar 2016 05:48:15 +0300 Subject: 1.9.13 and non_idempotent In-Reply-To: References: Message-ID: <20160330024815.GH36620@mdounin.ru> Hello! On Tue, Mar 29, 2016 at 06:34:59PM -0700, Frank Liu wrote: > If I explicitly configured to retry next upstream based on a > certain http_xxx, will that stop working if a request is a POST with > 1.9.13? Yes. There is no real difference between a network error and an HTTP error returned from idempotence point of view. E.g., 502 error just means that a network error happened somewhere else. > For other http code, I like the idea of not retry if it is non > idempotent but for one http_xxx, I want retry no matter what type of > request. Just curious - which one? Note well that if you want finer control of how various HTTP errors are handled, you can use proxy_intercept_errors with appropriate error_pages configured. See http://nginx.org/r/proxy_intercept_errors. -- Maxim Dounin http://nginx.org/ From gfrankliu at gmail.com Wed Mar 30 03:04:33 2016 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 29 Mar 2016 20:04:33 -0700 Subject: 1.9.13 and non_idempotent In-Reply-To: <20160330024815.GH36620@mdounin.ru> References: <20160330024815.GH36620@mdounin.ru> Message-ID: It's a custom error code, think of it as if http_404, so if the first upstream can't handle this request , it will send "404" saying it is not for me, please try next, nginx should then send the same request to next upstream. On Tuesday, March 29, 2016, Maxim Dounin wrote: > Hello! > > On Tue, Mar 29, 2016 at 06:34:59PM -0700, Frank Liu wrote: > > > If I explicitly configured to retry next upstream based on a > > certain http_xxx, will that stop working if a request is a POST with > > 1.9.13? > > Yes. There is no real difference between a network error and an > HTTP error returned from idempotence point of view. 
E.g., 502 > error just means that a network error happened somewhere else. > > > For other http code, I like the idea of not retry if it is non > > idempotent but for one http_xxx, I want retry no matter what type of > > request. > > Just curious - which one? > > Note well that if you want finer control of how various HTTP errors are > handled, you can use proxy_intercept_errors with appropriate > error_pages configured. See http://nginx.org/r/proxy_intercept_errors. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Mar 30 03:26:30 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Mar 2016 06:26:30 +0300 Subject: 1.9.13 and non_idempotent In-Reply-To: References: <20160330024815.GH36620@mdounin.ru> Message-ID: <20160330032629.GJ36620@mdounin.ru> Hello! On Tue, Mar 29, 2016 at 08:04:33PM -0700, Frank Liu wrote: > It's a custom error code, think of it as if http_404, so if the first > upstream can't handle this request, it will send "404" saying it is not > for me, please try next, nginx should then send the same request to next > upstream. Well, nginx can't handle custom error codes in proxy_next_upstream, so this is probably irrelevant anyway. Though I have considered excluding http_403 and http_404 from idempotence checks, it may make sense to do it if there are enough such use cases. > On Tuesday, March 29, 2016, Maxim Dounin wrote: > > > Hello! > > > > On Tue, Mar 29, 2016 at 06:34:59PM -0700, Frank Liu wrote: > > > > > If I explicitly configured to retry next upstream based on a > > > certain http_xxx, will that stop working if a request is a POST with > > > 1.9.13? > > > > Yes. There is no real difference between a network error and an > > HTTP error returned from idempotence point of view.
E.g., 502 > > error just means that a network error happened somewhere else. > > > > > For other http code, I like the idea of not retry if it is non > > > idempotent but for one http_xxx, I want retry no matter what type of > > > request. > > > > Just curious - which one? > > > > Note well that if you want finer control of how various HTTP errors are > > handled, you can use proxy_intercept_errors with appropriate > > error_pages configured. See http://nginx.org/r/proxy_intercept_errors. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Mar 30 07:54:11 2016 From: nginx-forum at forum.nginx.org (gitl) Date: Wed, 30 Mar 2016 03:54:11 -0400 Subject: pcre2 Message-ID: Are there any plans to move from pcre (8.x) to pcre2 (10.x)? I realize that the API changed quite a bit but it would be awesome if there was a migration plan for it. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265726,265726#msg-265726 From kvrico at gmail.com Wed Mar 30 08:24:53 2016 From: kvrico at gmail.com (Alexey S) Date: Wed, 30 Mar 2016 01:24:53 -0700 Subject: nginx counterpart of haproxy's acl dst Message-ID: Hi, does nginx have a variable, that represents the destination IP address and port, like it was seen/used by the client at the connection time? Thank you. WBR, Alexey -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Wed Mar 30 10:50:19 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 30 Mar 2016 12:50:19 +0200 Subject: nginx counterpart of haproxy's acl dst In-Reply-To: References: Message-ID: <4f6212811425018997f1b964d1cc79b0@none.at> Hi. 
On 30-03-2016 10:24, Alexey S wrote: > Hi, > > does nginx have a variable that represents the destination IP address > and port, like it was seen/used by the client at the connection time? Do you mean http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_addr ? Cheers Aleks > Thank you. > > WBR, > Alexey > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Mar 30 11:11:45 2016 From: nginx-forum at forum.nginx.org (neilstuartcraig) Date: Wed, 30 Mar 2016 07:11:45 -0400 Subject: $upstream_http_NAME and $sent_http_NAME vars not available in certain scopes Message-ID: <2cd29a1fdde6ddf5e7daf18b1cf43c05.NginxMailingListEnglish@forum.nginx.org> Hi all I am developing a proxy service which uses NGINX to reverse proxy, kind of CDN-like but very specific to our needs. During this, I have hit an issue which I *think* is a bug but wanted to ask in case anyone can point to a solution or some reading I can do. The problem I have is this: $upstream_http_NAME and $sent_http_NAME variables seem to be unpopulated/blank at some points in my config when other embedded variables *are* populated.
This is probably best illustrated via an example, so here's a simplified test case I created: user nginx; worker_processes auto; worker_priority -15; worker_rlimit_nofile 50000; events { # worker_connections benefits from a large value in that it reduces error counts worker_connections 20000; multi_accept on; } http { # for dynamic upstreams resolver 8.8.8.8; # Default includes include /etc/nginx/current/mime.types; default_type application/octet-stream; include /etc/nginx/current/proxy.conf; # Tuning options - these are mainly quite GTM-specific server_tokens off; keepalive_requests 1024; keepalive_timeout 120s 120s; sendfile on; tcp_nodelay on; tcp_nopush on; client_header_timeout 5s; open_file_cache max=16384 inactive=600s; open_file_cache_valid 600s; open_file_cache_min_uses 0; open_file_cache_errors on; output_buffers 64 128k; # NEW - AIO aio on; directio 512; # For small files and heavy load, this gives ~5-6x greater throughput (avoids swamping workers with one request) postpone_output 0; reset_timedout_connection on; send_timeout 3s; sendfile_max_chunk 1m; large_client_header_buffers 8 8k; connection_pool_size 4096; # client_body_buffer_size - Sets buffer size for reading client request body. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file client_body_buffer_size 8k; client_header_buffer_size 8k; # client_max_body_size - Sets the maximum allowed size of the client request body, specified in the "Content-Length" request header field. If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client client_max_body_size 8m; # We need to increase the hash bucket size as the R53 names are long!
server_names_hash_bucket_size 128; # Same for proxy_headers_hash_max_size and proxy_headers_hash_bucket_size proxy_headers_hash_max_size 4096; proxy_headers_hash_bucket_size 1024; # Logging # NOTE: $host may need to in fact be $hostname log_format standard '"$remote_addr" "$time_iso8601" "$request_method" "$scheme" "$host" "$request_uri" "$server_protocol" "$status" "$bytes_sent" "$http_referer" "$http_user_agent" "$ssl_protocol" "$ssl_cipher" "$ssl_server_name" "$ssl_session_reused"'; access_log /var/log/nginx/main-access.log standard; error_log /var/log/nginx/main-error.log warn; recursive_error_pages on; # GeoIP config # This appears to need an absolute path, despite the docs suggesting it doesn't. # Path is defined in bake script so changes to that will break this # geoip_country /usr/local/GeoIP.dat; geoip_city /var/lib/GeoIP/GeoLiteCity.dat; #geoip_proxy 0.0.0.0/0; #geoip_proxy_recursive on; # Proxy global configuration # NOTES: # proxy_cache_path is an http scope (global) directive # keys_zone=shared_cache:XXXXm; denotes the amount of RAM to allow for cache index, 1MB ~7k-8k cached objects - exceeding the number of cached objects possible due to index size results in LRU invocation proxy_cache_path /mnt/gtm_cache_data levels=1:2 use_temp_path=on keys_zone=shared_cache:256m inactive=1440m; proxy_temp_path /mnt/gtm_cache_temp; # NGINX recommends HTTP 1.1 for keepalive, proxied conns. 
(which we use) proxy_http_version 1.1; # NGINX recommends clearing the connection request header for keepalive http 1.1 conns proxy_set_header Connection ""; # Conditions under which we want to try then next (if there is one) upstream server in the list & timeouts proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_next_upstream_timeout 5s; proxy_next_upstream_tries 3; ssl_session_cache shared:global_ssl_cache:128m; ssl_session_timeout 120m; ssl_session_tickets on; #ssl_session_ticket_key /etc/nginx/current/tls/session/tkt.key; # TEST CASE: # Set up a DNS or hosts file entry to point to your NGINX instance running this config # Hit this with URL: https:///response-headers?Content-Type=text%2Fplain%3B+charset%3DUTF-8&via=1.1%20httpbin3&tester=hello # To try to check if we can work around the problem below (vars not existing (?) at time of use/need), we'll try to copy the var via a map map $upstream_http_via $copy_map_upstream_http_via { default $upstream_http_via; } upstream origin { server httpbin.org:443 resolve; } # Generic/common server for listen port configs server { # TMP removing fastopen=None backlog=-1 as the directives don't work! listen *:80 so_keepalive=120s:30s:20 default_server; listen *:443 ssl http2 reuseport deferred so_keepalive=120s:30s:20 default_server; # This cert & key will never actually be used but are needed to allow the :443 operation - without them the connection will be closed ssl_certificate /etc/nginx/current/tls/certs/default.crt; ssl_certificate_key /etc/nginx/current/tls/private/default.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; location / { # To try to check if we can work around the problem below (vars not existing (?) 
at time of use/need), we'll try to copy the var via a set set $copy_set_upstream_http_via $upstream_http_via; more_set_headers "SENT_HTTP_VIA: $sent_http_via"; more_set_headers "HTTP_VIA: $http_via"; more_set_headers "UPSTREAM_HTTP_VIA: $upstream_http_via"; # For ref, the upstream is sending e.g. "via: 1.1 string" # Problem: These do not match, $upstream_http_via appears not to be populated at this point # if ( $sent_http_via ~* "^[0-9]\.[0-9]\ .+$" ) { # if ($upstream_http_via ~* "^[0-9]\.[0-9]\ .+") { # if ($sent_http_via ~* "^[0-9]\.[0-9]\ .+") { # if ($upstream_http_via) { # This does match - for obvious reasons if ($upstream_http_via ~* ".*") { # Problem: $upstream_http_via appears not to be populated at this point... set $via_comp "$upstream_http_via 1.y BLAH"; more_set_headers "IF_VIA_COMP: Value is $via_comp"; # Just to demo the string concat - we'll use a different embedded var set $test_comp "$ssl_protocol BLAH"; more_set_headers "IF_TEST_COMP: Value is $test_comp"; # Does the map-copied var exist? set $via_comp_copy_map "$copy_map_upstream_http_via 1.y BLAH"; more_set_headers "IF_VIA_COMP_COPY_MAP: Value is $via_comp_copy_map"; # Does the set-copied var exist? set $via_comp_copy_set "$copy_set_upstream_http_via 1.y BLAH"; more_set_headers "IF_VIA_COMP_COPY_SET: Value is $via_comp_copy_set"; # Does a different $upstream_http_X var work? 
- NO set $alt_comp "$upstream_http_tester 1.y BLAH"; more_set_headers "IF_ALT_COMP: Value is $alt_comp"; # ...but $upstream_http_via IS populated at this point more_set_headers "UPSTREAM_HTTP_VIA_IF: $upstream_http_via"; more_set_headers "HTTP_VIA_IF: $http_via"; more_set_headers "SENT_HTTP_VIA_IF: $sent_http_via"; } proxy_pass https://origin; } # END TEST CASE } } (Sorry, couldn't figure out how to markup the config) The response headers from a request to this NGINX instance are, e.g.: access-control-allow-credentials:true access-control-allow-origin:* content-length:161 content-type:text/plain; charset=UTF-8 date:Wed, 30 Mar 2016 10:31:55 GMT if_alt_comp:Value is 1.y BLAH if_test_comp:Value is TLSv1.2 BLAH if_via_comp:Value is 1.y BLAH if_via_comp_copy_map:Value is 1.y BLAH if_via_comp_copy_set:Value is 1.y BLAH sent_http_via:1.1 httpbin3 sent_http_via_if:1.1 httpbin3 server:nginx status:200 tester:hello upstream_http_via:1.1 httpbin3 upstream_http_via_if:1.1 httpbin3 via:1.1 httpbin3 So you can see from the section: if_alt_comp:Value is 1.y BLAH if_test_comp:Value is TLSv1.2 BLAH if_via_comp:Value is 1.y BLAH if_via_comp_copy_map:Value is 1.y BLAH if_via_comp_copy_set:Value is 1.y BLAH That there's a blank space where the value from $upstream_http_via or $sent_http_via should be. So my questions are: Can anyone see something I have done wrong? Is this expected behaviour (if yes, why? Seems strange some vars behave differently) Does anyone have a workaround or solution? Many thanks in advance if anyone can offer any help. Cheers Neil Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265734,265734#msg-265734 From cole.putnamhill at comcast.net Wed Mar 30 13:05:37 2016 From: cole.putnamhill at comcast.net (Cole Tierney) Date: Wed, 30 Mar 2016 09:05:37 -0400 Subject: multiple named captures in map regex Message-ID: Thank you, Francis! That's great.
Cole > On Mon, Mar 28, 2016 at 11:30:32PM -0400, Cole Tierney wrote: > > Hi there, > > > Is it possible to have more than one named capture in a map regex? > > In a map regex, yes. > > In a "value" part of a map, no. > > http://nginx.org/r/map says > > The resulting value can be a string or another variable > > > When I try the following in a map: > > ~^/(?<a>.)(?<b>.) $a$b; > > You can use $a and $b outside the map; but where you have "$a$b", you > must instead have exactly one string or one variable. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Mar 30 15:36:45 2016 From: nginx-forum at forum.nginx.org (neilstuartcraig) Date: Wed, 30 Mar 2016 11:36:45 -0400 Subject: $upstream_http_NAME and $sent_http_NAME vars not available in certain scopes In-Reply-To: <2cd29a1fdde6ddf5e7daf18b1cf43c05.NginxMailingListEnglish@forum.nginx.org> References: <2cd29a1fdde6ddf5e7daf18b1cf43c05.NginxMailingListEnglish@forum.nginx.org> Message-ID: Also, for info: nginx -V nginx version: nginx/1.9.13 built with OpenSSL 1.0.2g 1 Mar 2016 TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/current/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/default-error.log --http-log-path=/var/log/nginx/default-access.log --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=gtmdaemon --group=gtmdaemon --with-http_realip_module --with-http_v2_module --with-http_ssl_module --with-http_geoip_module --with-pcre-jit --with-ipv6 --with-file-aio --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64
-mtune=generic' --add-module=/tmp/tmpOzxjVK/BUILD/nginx-1.9.13/headers-more-nginx-module --add-module=/tmp/tmpOzxjVK/BUILD/nginx-1.9.13/naxsi/naxsi_src --add-module=/tmp/tmpOzxjVK/BUILD/nginx-1.9.13/nginx-module-vts --add-module=/tmp/tmpOzxjVK/BUILD/nginx-1.9.13/nginx-upstream-dynamic-servers --with-openssl=/tmp/tmpOzxjVK/BUILD/nginx-1.9.13/openssl-1.0.2g

uname -a
Linux ip-10-13-149-100.eu-west-1.compute.internal 3.10.0-327.4.4.el7.x86_64 #1 SMP Tue Jan 5 16:07:00 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265734,265736#msg-265736

From mdounin at mdounin.ru Wed Mar 30 15:37:33 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 30 Mar 2016 18:37:33 +0300
Subject: $upstream_http_NAME and $sent_http_NAME vars not available in certain scopes
In-Reply-To: <2cd29a1fdde6ddf5e7daf18b1cf43c05.NginxMailingListEnglish@forum.nginx.org>
References: <2cd29a1fdde6ddf5e7daf18b1cf43c05.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20160330153733.GO36620@mdounin.ru>

Hello!

On Wed, Mar 30, 2016 at 07:11:45AM -0400, neilstuartcraig wrote:

> Hi all
>
> I am developing a proxy service which uses NGINX to reverse proxy, kind of
> CDN-like but very specific to our needs. During this, I have hit an issue
> which I *think* is a bug but wanted to ask in case anyone can point to a
> solution or some reading I can do. The problem I have is this:
>
> $upstream_http_NAME and $sent_http_NAME variables seem to be
> unpopulated/blank at some points in my config when other embedded variables
> *are* populated. This is probably best illustrated via an example, so here's
> a simplified test case I created:

[...]

> # This does match - for obvious reasons
> if ($upstream_http_via ~* ".*") {
>
>     # Problem: $upstream_http_via appears not to be populated at this point...
>     set $via_comp "$upstream_http_via 1.y BLAH";
>     more_set_headers "IF_VIA_COMP: Value is $via_comp";

[...]
> So my questions are:
> Can anyone see something I have done wrong?
> Is this expected behaviour (if yes, why? Seems strange some vars behave
> differently)
> Does anyone have a workaround or solution?

You are trying to use $upstream_http_via in the "if" and "set" directives; this cannot work. The $upstream_http_via variable is only available once a response has been returned by an upstream server, but the "if" and "set" directives are evaluated before the request is sent to the upstream server; see here:

http://nginx.org/en/docs/http/ngx_http_rewrite_module.html

As far as I understand from your config, you want to add something to the Via response header. The correct approach is to use the header only after the response is returned, e.g., in the "add_header" directive. Try something like this:

map $upstream_http_via $is_via {
    ""      "";
    default ", ";
}

server {
    ...

    location / {
        proxy_pass http://upstream;
        proxy_hide_header Via;
        add_header Via "$upstream_http_via${is_via}1.1 foo";
    }
}

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org Wed Mar 30 20:05:16 2016
From: nginx-forum at forum.nginx.org (marcosbontempo)
Date: Wed, 30 Mar 2016 16:05:16 -0400
Subject: Nginx configuration
Message-ID: <7eeadf48bd82d6ab999c3aa45531164b.NginxMailingListEnglish@forum.nginx.org>

Hello,

I need to create a web interface to configure my nginx reverse proxy. I only know how to configure nginx with the configuration file. Is there another way to change the configuration, like a REST API, so I can make changes dynamically?

Any tip will be very helpful,
Thanks.
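[Editor's note: stock nginx of this era (1.9.x) exposes no configuration API; the usual pattern behind a web interface is to generate config files from a template, verify them with `nginx -t`, and then run `nginx -s reload`. A minimal sketch of the rendering step — the template, pool name, hostname, backend list, and file paths below are all hypothetical, not from this thread:]

```python
from string import Template

# Hypothetical template for one reverse-proxied site. Everything here
# (pool name, hostname, backends) is illustrative, not from the thread.
CONF_TEMPLATE = Template("""\
upstream $name {
$servers}

server {
    listen 80;
    server_name $host;

    location / {
        proxy_pass http://$name;
    }
}
""")

def render_proxy_conf(name, host, backends):
    """Render an nginx snippet for one upstream pool and its server block."""
    # One "server host:port;" line per backend, indented for readability.
    servers = "".join("    server %s;\n" % b for b in backends)
    return CONF_TEMPLATE.substitute(name=name, host=host, servers=servers)

if __name__ == "__main__":
    print(render_proxy_conf("app_pool", "example.com",
                            ["10.0.0.5:8080", "10.0.0.6:8080"]))
    # A management layer would write this under conf.d/, run `nginx -t`,
    # and only then `nginx -s reload` (not done here).
```

A REST layer would call something like this on each change; the important part is validating with `nginx -t` before reloading, so a bad render never takes the proxy down.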
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265742,265742#msg-265742

From anoopalias01 at gmail.com Thu Mar 31 03:29:41 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Thu, 31 Mar 2016 08:59:41 +0530
Subject: Nginx configuration
In-Reply-To: <7eeadf48bd82d6ab999c3aa45531164b.NginxMailingListEnglish@forum.nginx.org>
References: <7eeadf48bd82d6ab999c3aa45531164b.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

No, there is no REST API for generating nginx config dynamically. Please see the link below for other options:

http://stackoverflow.com/questions/15277453/any-good-way-to-programmatically-change-nginx-config-file-from-python

-- 
*Anoop P Alias*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pat at suwalski.net Thu Mar 31 05:23:24 2016
From: pat at suwalski.net (Pat Suwalski)
Date: Thu, 31 Mar 2016 01:23:24 -0400
Subject: DNSBL with mail proxy
Message-ID: <56FCB44C.8050305@suwalski.net>

Hello,

I started using nginx as a proxy for incoming mail, for DDoS protection and hiding of origin. I have it set up as follows:

mail {
    server_name foo.bar.com;
    auth_http localhost:8080/auth-smtppass.php;

    server {
        listen 25;
        protocol smtp;
        proxy on;
        timeout 5s;
        xclient off;
        smtp_auth none;
    }
}

And then I have a location handler that tells it where to actually go:

location ~ \.php$ {
    add_header Auth-Server 111.222.111.222;
    add_header Auth-Port 25;
    return 200;
}

This works great, except that the real mail server (111.222.111.222 in this example) doesn't see where the mail is actually coming from, and therefore loses its ability to apply the DNSBL.

One obvious way to use the DNSBL would be to have an actual auth script that does the DNSBL checking. However, it's really nice to have it all handled without calling out to PHP or Perl.

I could also have a local postfix that does nothing but DNSBL and relay to the real server, but that seems like just another layer of complication.
Anyone have any creative ideas on how this could be implemented right in nginx? Maybe someone's written an auth script that does DNSBL?

Thanks,
--Pat

From kvrico at gmail.com Thu Mar 31 06:59:40 2016
From: kvrico at gmail.com (Alexey S)
Date: Wed, 30 Mar 2016 23:59:40 -0700
Subject: nginx counterpart of haproxy's acl dst
In-Reply-To: <4f6212811425018997f1b964d1cc79b0@none.at>
References: <4f6212811425018997f1b964d1cc79b0@none.at>
Message-ID:

Hi Aleks,

I think it's not the one. AFAIU the closest match for HAProxy "dst" is $server_addr, but it doesn't work as well with DNAT, because it hides the original destination IP used by the client, even though there is a way to retrieve this information [1]. My use case is the following:

a) I create iptables rules on the host:
iptables -t nat -A OUTPUT -p tcp -d 192.168.170.1 --dport 7654 -j DNAT --to-destination 127.0.0.1:11123
iptables -t nat -A OUTPUT -p tcp -d 192.168.170.2 --dport 7654 -j DNAT --to-destination 127.0.0.1:11123
b) Run the load balancer on localhost port 11123
c) Use telnet to hit 192.168.170.1:7654 and 192.168.170.2:7654
d) I need the load balancer to choose different upstreams depending on the address I specified in step (c)

It works with HAProxy, but unfortunately I can't find how to make it work with NGINX :(

WBR,
Alexey.

[1] https://github.com/haproxy/haproxy/blob/master/src/proto_tcp.c#L600

On Wed, Mar 30, 2016 at 3:50 AM, Aleksandar Lazic wrote:

> Hi.
>
> On 30-03-2016 10:24, Alexey S wrote:
>
>> Hi,
>>
>> does nginx have a variable that represents the destination IP address
>> and port, like it was seen/used by the client at the connection time?
>
> Could you mean
>
> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_addr
>
> Cheers Aleks

>> Thank you.
>> WBR,
>> Alexey
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gfrankliu at gmail.com Thu Mar 31 16:54:39 2016
From: gfrankliu at gmail.com (Frank Liu)
Date: Thu, 31 Mar 2016 09:54:39 -0700
Subject: proxy_read_timeout vs proxy_next_upstream_timeout
In-Reply-To:
References:
Message-ID:

Given this config:

proxy_next_upstream timeout;
proxy_next_upstream_timeout 50;
proxy_connect_timeout 10;
proxy_read_timeout 100;

If an upstream has issues causing a connect timeout, nginx will retry up to 5 upstream servers until hitting 50, then fail. If one upstream has issues causing a read timeout, nginx will keep waiting to read until 100, then time out, then check proxy_next_upstream_timeout, which is 50 and already passed, so nginx won't retry the next upstream.

I am trying to set up nginx to retry only on connect timeout, not read timeout; will the above work?

On Tue, Mar 29, 2016 at 10:12 AM, B.R. wrote:

> Those directives work at very different levels.
>
> proxy_next_upstream_timeout allocates a time for nginx to find a proper
> upstream (configured with proxy_next_upstream). By default, there is no
> limit, making nginx attempt every upstream one after the other until
> finding one healthy or running out of valid upstreams, which could take
> a while (maybe even an infinite time with health checks? Speculating here).
>
> proxy_read_timeout sets a timer 'between two read operations', say the
> docs, which is closer to your question. Those are to be defined, but I
> suppose that when there is nothing more to read in the buffer awaiting
> the backend response, and the response is not complete, this timer kicks
> in and effectively closes the connection, returning an error.
>
> "They are not working at the same level at all: there is no way to
> mistake one for the other."

---
*B. R.*

> On Tue, Mar 29, 2016 at 5:03 PM, Frank Liu wrote:
>
>> Hi
>>
>> If you set the read timeout to 2 min and the next upstream timeout to 50
>> seconds, will nginx break the current connection at 50 seconds or will
>> it let the read finish until 2 min?
>>
>> Thanks
>> Frank
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zxcvbn4038 at gmail.com Thu Mar 31 17:21:02 2016
From: zxcvbn4038 at gmail.com (CJ Ess)
Date: Thu, 31 Mar 2016 13:21:02 -0400
Subject: Nginx servers on both *:80 and :80? also duplicate listen parameters error when binding by just specific ips
Message-ID:

I would like to have an Nginx setup where I have specific logic depending on which interface (IP) the request arrived on. I was able to make this work by having a server stanza for each IP on the server, but wasn't able to do a combination of a specific IP and a wildcard IP (as a catchall) - is there a way to do that with some option combination (i.e. nginx listens on *:80, but matches the server stanza by IP)?

The scenario I'm playing towards is that I have a dedicated connection to a CDN and I want to pass through certain headers if they arrive via the dedicated interface, and strip them if they arrive on any other interface.

When I did the server{} per IP approach, nginx complained about duplicate listen settings for the second IP even though both server stanzas were bound to a specific port/interface. Is this a bug by chance? This was with Nginx 1.9.0 btw; perhaps newer versions have a different behavior?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pat at suwalski.net Thu Mar 31 18:17:56 2016
From: pat at suwalski.net (Pat Suwalski)
Date: Thu, 31 Mar 2016 14:17:56 -0400
Subject: DNSBL with mail proxy
In-Reply-To: <56FCB44C.8050305@suwalski.net>
References: <56FCB44C.8050305@suwalski.net>
Message-ID: <56FD69D4.3020405@suwalski.net>

I managed to solve my own problem with the use of XCLIENT. There isn't a whole lot of information out there, so maybe search engines will pick up on this post and help someone else.

It was very easy to set up. Simply change "xclient off" to "xclient on" in the nginx configuration as quoted below. Then, in the postfix configuration, enable XCLIENT for the proxy's IP:

smtpd_authorized_xclient_hosts = 1.2.3.4

(It can be turned on globally, but I like being specific.)

This seemed to work immediately with the DNSBL/RBL rules already in postfix.

--Pat

On 2016-03-31 01:23 AM, Pat Suwalski wrote:
> Hello,
>
> I started using nginx as a proxy for incoming mail, for DDoS protection
> and hiding of origin.
>
> I have it set up as follows:
>
> mail {
>     server_name foo.bar.com;
>     auth_http localhost:8080/auth-smtppass.php;
>
>     server {
>         listen 25;
>         protocol smtp;
>         proxy on;
>         timeout 5s;
>         xclient off;
>         smtp_auth none;
>     }
> }
>
> And then I have a location handler that tells it where to actually go:
>
> location ~ .php$ {
>     add_header Auth-Server 111.222.111.222;
>     add_header Auth-Port 25;
>     return 200;
> }
>
> This works great, except that the real mail server (111.222.111.222 in
> this example) doesn't see where the mail is actually coming from, and
> therefore loses its ability to apply the DNSBL.
>
> One obvious way to use the DNSBL would be to have an actual auth script
> that does the DNSBL checking. However, it's really nice to have it all
> handled without calling out to php or perl.
>
> I could also have a local postfix that does nothing but DNSBL and relay
> to the real server, but that seems like just another layer of complication.
> Anyone have any creative ideas on how this could be implemented right in
> nginx? Maybe someone's written an auth script that does DNSBL?
>
> Thanks,
> --Pat
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Thu Mar 31 20:05:14 2016
From: nginx-forum at forum.nginx.org (marcosbontempo)
Date: Thu, 31 Mar 2016 16:05:14 -0400
Subject: Nginx configuration
In-Reply-To:
References:
Message-ID: <174d5754af89382c80d4afbed595027d.NginxMailingListEnglish@forum.nginx.org>

Thanks for your answer! It was exactly what I wanted. I'll use nginx-conf.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265742,265769#msg-265769

From ingwie2000 at googlemail.com Thu Mar 31 20:14:28 2016
From: ingwie2000 at googlemail.com (Kevin "Ingwie Phoenix" Ingwersen)
Date: Thu, 31 Mar 2016 22:14:28 +0200
Subject: Nginx configuration
In-Reply-To: <174d5754af89382c80d4afbed595027d.NginxMailingListEnglish@forum.nginx.org>
References: <174d5754af89382c80d4afbed595027d.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

> On 31.03.2016 at 22:05, marcosbontempo wrote:
>
> Thanks for your answer! It was exactly what I wanted. I'll use nginx-conf.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,265742,265769#msg-265769
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org Thu Mar 31 20:29:28 2016
From: francis at daoine.org (Francis Daly)
Date: Thu, 31 Mar 2016 21:29:28 +0100
Subject: Nginx servers on both *:80 and :80?
also duplicate listen parameters error when binding by just specific ips
In-Reply-To:
References:
Message-ID: <20160331202928.GG28270@daoine.org>

On Thu, Mar 31, 2016 at 01:21:02PM -0400, CJ Ess wrote:

Hi there,

> I would like to have an Nginx setup where I have specific logic depending
> on which interface (ip) the request arrived on.

multiple server{} with different "listen"; possibly with an "include common-config" entry.

Note: "listen" is on an ip, not an interface.

> I was able to make this work by having a server stanza for each ip on the
> server, but wasn't able to do a combination of a specific ip and a wildcard
> ip (as a catchall) - is there a way to do that with some option combination
> (i.e. nginx listens on *:80, but matches the server stanza by ip?)

I don't understand what you are describing. Could you try again, perhaps with a config example?

When I use

===
server {
    listen 127.0.0.1:8088;
    return 200 "listen 127.0.0.1:8088\n";
}
server {
    listen 10.0.1.2:8088;
    return 200 "listen 10.0.1.2:8088\n";
}
server {
    listen 8088;
    return 200 "listen 8088\n";
}
===

I get the following output, which is what I expect:

$ curl http://127.0.0.1:8088/
listen 127.0.0.1:8088
$ curl http://127.0.0.2:8088/
listen 8088

> The scenario I'm playing towards is that I have a dedicated connection to a
> CDN and I want to pass thru certain headers if they arrive via the
> dedicated interface, strip them if they arrive on other interface.

As above, if "interface" is replaced with "ip", this can work with two server{} blocks.

> When I did the server{} per IP approach nginx complained about duplicate
> listen settings for the second IP even though both server stanzas were
> bound to a specific port/interface. Is this a bug by chance?

What short server{} config can I use to reproduce the complaint?

f
-- 
Francis Daly        francis at daoine.org

From reallfqq-nginx at yahoo.fr Thu Mar 31 22:32:35 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Fri, 1 Apr 2016 00:32:35 +0200
Subject: proxy_read_timeout vs proxy_next_upstream_timeout
In-Reply-To:
References:
Message-ID:

On Thu, Mar 31, 2016 at 6:54 PM, Frank Liu wrote:

> Given this config:
> proxy_next_upstream timeout;
> proxy_next_upstream_timeout 50;
> proxy_connect_timeout 10;
> proxy_read_timeout 100;
> If an upstream has issues causing a connect timeout, nginx will retry up
> to 5 upstream servers until hitting 50, then fail.

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream_timeout
'[...] hitting 50s [...]' (cf. http://nginx.org/en/docs/syntax.html)?

> If one upstream has issues causing a read timeout, nginx will keep
> waiting to read until 100, then time out,

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout
nginx will wait for a maximum of 100s between two successful read operations (or before the first one) and will fail if that is exceeded.

> then check proxy_next_upstream_timeout, which is 50 and already passed,
> so nginx won't retry the next upstream.
>
> I am trying to set up nginx to retry only on connect timeout, not read
> timeout; will the above work?

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
The 'timeout' condition covers both the connection timeout and the header read timeout. The content read timeout is set by proxy_read_timeout (default 60s).

---
*B. R.*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
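[Editor's note: pulling the last exchange together, here is a commented sketch of the directives discussed; the values are illustrative, not recommendations. Per the proxy module documentation, the `timeout` condition in `proxy_next_upstream` covers timeouts while establishing the connection, sending the request, or reading the response *header*, so there is no built-in condition that retries on connect timeout only.]

```nginx
# Sketch only -- directive values are examples, not recommendations.
location / {
    # Which failures make nginx try the next upstream server.
    # "timeout" covers connect, send, and response-header read timeouts.
    proxy_next_upstream         error timeout;

    # Total time budget and attempt cap for trying further servers
    # (both default to 0, i.e. unlimited).
    proxy_next_upstream_timeout 50s;
    proxy_next_upstream_tries   3;

    # Per-attempt limit on establishing a connection.
    proxy_connect_timeout       10s;

    # Maximum gap between two successive read operations from the upstream.
    proxy_read_timeout          100s;

    proxy_pass http://backend;
}
```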