From mdounin at mdounin.ru Thu Aug 1 11:06:24 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Aug 2019 14:06:24 +0300 Subject: Crash in mail module during SMTP setup In-Reply-To: <676c08b5-d10f-4cf5-a756-a98494f46d5f@www.fastmail.com> References: <20190729182642.GF1877@mdounin.ru> <3a8fd0ea-463a-4018-92f3-5a9082297e61@www.fastmail.com> <20190730153243.GJ1877@mdounin.ru> <20190730161132.GK1877@mdounin.ru> <676c08b5-d10f-4cf5-a756-a98494f46d5f@www.fastmail.com> Message-ID: <20190801110624.GQ1877@mdounin.ru> Hello! On Thu, Aug 01, 2019 at 09:32:20AM +1000, Rob N ? wrote: > On Wed, 31 Jul 2019, at 2:11 AM, Maxim Dounin wrote: > > > I think I see the problem - when using SMTP with SSL and > > > resolver, read events might be enabled during address > > > resolving, leading to duplicate > > > ngx_mail_ssl_handshake_handler() calls if something arrives > > > from the client, and duplicate session initialization - > > > including starting another resolving. > > That neatly explains why the problem became more noticeable as > number of connections went up. With the load a little higher, > DNS resolution could conceivable take a little longer, making it > more likely that the bug would be triggered. > > > > The following patch should resolve this: > > I've been running the second patch you posted for ~22hrs with no > crashes, compared to one every 10-20mins previously. So I think > you got it! Thank you so much! Thanks for testing, committed: http://hg.nginx.org/nginx/rev/fcd92ad76b7b -- Maxim Dounin http://mdounin.ru/ From cello86 at gmail.com Thu Aug 1 11:16:45 2019 From: cello86 at gmail.com (Marcello Lorenzi) Date: Thu, 1 Aug 2019 13:16:45 +0200 Subject: Nginx and conditional logformat In-Reply-To: <20190729130220.GY1877@mdounin.ru> References: <267aad0446536981eff69f91ab88da58.NginxMailingListEnglish@forum.nginx.org> <20190725131931.GU1877@mdounin.ru> <20190729130220.GY1877@mdounin.ru> Message-ID: Hi Max. In our idea with this configuration all the requests use the sslclient logformat and in case of failure with the certificate the logformat that will be used is sslclientfull. Into the location / we enable the ssl_verify_client to optional and actually we have some if with the variabile $ssl_client_verify to provide a customized message to the users. It seems that the set variable is not working into the if statement. We can?t use the ?map? parameter because it works only to a http block and not into location block. Thanks, Marcello Regards, Marcello On Mon, Jul 29, 2019 at 3:02 PM Maxim Dounin wrote: > Hello! > > On Fri, Jul 26, 2019 at 03:49:05PM +0200, Marcello Lorenzi wrote: > > > Hi Maxim, > > I tried to configure the location with this example: > > > > server { > > access_log logs/access_log sslclient; > > > > location / { > > > > if ($ssl_client_verify != "SUCCESS") { > > set $loggingcert 1; > > } > > > > access_log logs/access_log sslclientfull if=$loggingcert; > > > > } > > > > } > > > > I noticed that all the request inherit the first access_log configuration > > and not the conditional. > > In the configuration in question, all requests handled in > "location /" will either use the "sslclientfull" log format, or > won't be logged at all. > > If this is not what you observe, most likely you've missed > something an requests are either not handled in the server in > question, or not handled in "location /". 
For example, this > may happen if you are using "ssl_verify_client on;" and all > requests without proper client certificates are terminated with > error 400 in the server context before any processing, hence they > are not handled in "location /". > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongtao_you at yahoo.com Thu Aug 1 12:09:28 2019 From: yongtao_you at yahoo.com (=?utf-8?B?eW9uZ3Rhb195b3VAeWFob28uY29t?=) Date: Thu, 01 Aug 2019 20:09:28 +0800 Subject: Timeout when upstream response is big Message-ID: Hi, I have a backend server behind nginx (version 1.12.1) which acts as a reserve proxy. Everything works fine when the responses from the backend is small. However, if the backend server's response is big (> 85K), then the nginx will return the first 85941 bytes in the response body, and then timeout while trying to read the rest of the response from upstream. This happens consistently. And if I hit the backend server directly with the same request, I get back the whole response, instantly. Did I run into some kind of limit? The nginx config is minimal. Pretty much everything is default. Any hints? Thanks! Yongtao -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Aug 1 13:44:20 2019 From: nginx-forum at forum.nginx.org (abkulkar) Date: Thu, 01 Aug 2019 09:44:20 -0400 Subject: =?UTF-8?Q?Does_compiling_nginx_with_=E2=80=9C--with-debug=E2=80=9D_option_?= =?UTF-8?Q?reduce_performance_even_when_not_enabling_debug_level=3F?= Message-ID: I want to build nginx with the debug options and use it in production. But the instructions provided here: https://nginx.org/en/docs/debugging_log.html do not say if it will have performance issues even when I DON'T enable debug level logging. The reason to build nginx with debug options is to quickly debug any issues that come up in production rather than patching nginx after building it again. It reduces one painful step. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285066,285066#msg-285066 From francis at daoine.org Thu Aug 1 16:54:29 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 1 Aug 2019 17:54:29 +0100 Subject: Why 301 permanent redirect with appended slash? In-Reply-To: <20190731164558.dcyoyi3n2shgwgaq@mink.imca.aps.anl.gov> References: <20190730221201.lypgmwxq5kg7ydxo@mink.imca.aps.anl.gov> <20190731120527.6nrhmod7u5z4ekq7@daoine.org> <20190731164558.dcyoyi3n2shgwgaq@mink.imca.aps.anl.gov> Message-ID: <20190801165429.fxbsjcfrmxbdkmhg@daoine.org> On Wed, Jul 31, 2019 at 11:45:58AM -0500, J. Lewis Muir wrote: > On 07/31, Francis Daly wrote: > > On Tue, Jul 30, 2019 at 05:12:01PM -0500, J. Lewis Muir wrote: Hi there, > > As in: your request for "/foo" does not match any location{}, and so is > > handled at server-level, which runs the filesystem handler and returns > > 200 if the file "foo" exists, 301 if the directory "foo" exists, and > > 404 otherwise. > > Yes, thank you very much for the explanation! That all makes sense. > I couldn't find this behavior documented anywhere; is it documented > somewhere that I've missed? I've not looked much for documentation on what nginx's filesystem handler does recently, mostly because it seems to do what I expect. 
https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/ seems to describe some of the above. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Aug 1 16:57:53 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 1 Aug 2019 17:57:53 +0100 Subject: Nginx and conditional logformat In-Reply-To: References: <267aad0446536981eff69f91ab88da58.NginxMailingListEnglish@forum.nginx.org> <20190725131931.GU1877@mdounin.ru> <20190729130220.GY1877@mdounin.ru> Message-ID: <20190801165753.u7qawc3cfhtwenz3@daoine.org> On Thu, Aug 01, 2019 at 01:16:45PM +0200, Marcello Lorenzi wrote: Hi there, just as a small interruption... > It seems that the set variable is not > working into the if statement. "if" inside "location" is not great to use, unless you know what it does. > We can?t use the ?map? parameter because it works only to a http block and > not into location block. You probably can use "map" at http level to define the access_log-if-variable. Try it, you might be pleasantly surprised. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Aug 1 17:12:40 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 1 Aug 2019 18:12:40 +0100 Subject: =?UTF-8?Q?Re=3A_Does_compiling_nginx_with_=E2=80=9C--with-debug=E2=80=9D_o?= =?UTF-8?Q?ption_reduce_performance_even_when_not_enabling_debug_level=3F?= In-Reply-To: References: Message-ID: <20190801171240.hdlfnctacoiplyir@daoine.org> On Thu, Aug 01, 2019 at 09:44:20AM -0400, abkulkar wrote: Hi there, > I want to build nginx with the debug options and use it in production. > But the instructions provided here: > https://nginx.org/en/docs/debugging_log.html do not say if it will have > performance issues even when I DON'T enable debug level logging. I think that the answer to every performance / optimisation question is the same: if you have not measured a problem, then there is not a problem that you care about. The page you link to does not appear to say anything about performance issues, apart from the line """ Logging to the memory buffer on the debug level does not have significant impact on performance even under high load. """ I suspect that the only way for you to know is for you to test. On your hardware, with your configuration, maybe the theoretical maximum throughput from nginx will change with debugging compiled in; but if it still exceeds the practical maximum that you observe, then the change is probably not important to you. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Aug 1 17:27:02 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 1 Aug 2019 18:27:02 +0100 Subject: Nginx cache-control headers issue In-Reply-To: References: <3cb9e736e154956bdbd93dac89b536cd.NginxMailingListEnglish@forum.nginx.org> <20190719224715.orupm3jbijnkjh5g@daoine.org> <20190720073855.5f425332gqreo6x6@daoine.org> <20190721122634.7rsaa2e746qybvsh@daoine.org> Message-ID: <20190801172702.oet6gtshgeegjdm6@daoine.org> On Thu, Jul 25, 2019 at 12:02:27PM +0000, Andrew Andonopoulos wrote: Hi there, > Nginx decide which content to cache based on the configuration under "Location" + the cache key? For example I have proxy_cache which means will cache everything which match the specific location? 
> nginx caching is described at, for example, https://www.nginx.com/blog/nginx-caching-guide/ with some more information at http://nginx.org/r/proxy_cache_valid Whether the response from upstream is cached by nginx, depends on the configuration and the details of the response. Whether nginx responds to a client request with something from the cache, also depends on the configuration. And if you use third party modules, it is possible that they do something that interferes with nginx's caching policy. > I don't yet why I am getting cache miss for all the token based requests (m3u8 & ts), but I am wondering if is related to cache key and if will need to instruct neginx to check the token first and then cache it? Can this be done? > I'm afraid "try it and see" is the best I can suggest. If you see everything using the cache as desired when the tokens are not used, but failing to do that when the tokens are used, then you get to play "spot the difference" between the two pairs of requests and responses. Can you see -- does the upstream response go into the nginx cache in both cases? If not, what is different in the case where it does not go in? If the response does do into the nginx cache, then presumably it is not taken from the cache in one case -- what is different there, instead? Is the "token" part of the cache key? And do different requests have different tokens for the same content? If so, then the second request will not try to read from the cache entry of the first request. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Aug 1 17:31:24 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 1 Aug 2019 18:31:24 +0100 Subject: How do I add multiple proxy_pass In-Reply-To: <12b2e98878b9c601a5ea97e5fe044d7f.NginxMailingListEnglish@forum.nginx.org> References: <12b2e98878b9c601a5ea97e5fe044d7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190801173124.fop3wkivdp5ka7iu@daoine.org> On Thu, Jul 25, 2019 at 04:07:57AM -0400, blason wrote: Hi there, > My reverse proxy set it up as www.example.com and location / is set it as > location / { > proxy_pass https://www.example.com:8084; > Now URL is getting opened properly when I login it again diverts to port 88 > on the same server so my query is how do I add multiple proxy pass for same > server > > like proxy_pass https://www.example.com:88 How would *you* know that the first request should be proxy_pass'ed to port 8084, and the second to port 88? Tell nginx the same thing. Probably using either a different server{} block or a different location{} block. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Aug 1 20:07:07 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 1 Aug 2019 21:07:07 +0100 Subject: Setting Charset on Nginx PHP virtual host In-Reply-To: <16fc46cb-e503-9cdd-686a-80920d4ab711@free.fr> References: <16fc46cb-e503-9cdd-686a-80920d4ab711@free.fr> Message-ID: <20190801200707.vmbtbqhojlpqeiz2@daoine.org> On Wed, Jul 31, 2019 at 05:29:37PM +0200, Vincent M. wrote: Hi there, > I have specified charset and overried_charset on both server and location > and yet, it was still sending headers in UTF-8. What does the error log say? Is there something like no "charset_map" between the charsets "utf-8" and "iso-8859-1" while reading response header from upstream there? Because that could explain why the conversion does not happen. > On Apache we can do: > > > ... > ??? Header set Content-Type "text/html; charset=iso-8859-1" > > > How to do the same on Nginx? 
I think that that Apache config will set the Content-Type header on responses it sends; but will not do anything to actually make the response body be valid iso-8859-1. The nginx "charset" module expects to modify the response body if necessary. You can try adding a charset_map (http://nginx.org/r/charset_map) -- either a full one that maps between the one-byte iso-8859-1 values and utf-8 values that differ; or just an empty one and let the &# conversion happen instead. charset_map iso-8859-1 utf-8 { } or include extra pieces like E9 C3A9 ; # LATIN SMALL LETTER E WITH ACUTE if you want explicit conversion. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Aug 2 03:22:50 2019 From: nginx-forum at forum.nginx.org (blason) Date: Thu, 01 Aug 2019 23:22:50 -0400 Subject: How do I add multiple proxy_pass In-Reply-To: <20190801173124.fop3wkivdp5ka7iu@daoine.org> References: <20190801173124.fop3wkivdp5ka7iu@daoine.org> Message-ID: <02816ed6861ad24600cb0eb8156fccaa.NginxMailingListEnglish@forum.nginx.org> yeah that's a good point and let me try out that. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284983,285078#msg-285078 From nginx-forum at forum.nginx.org Fri Aug 2 04:23:55 2019 From: nginx-forum at forum.nginx.org (vz19) Date: Fri, 02 Aug 2019 00:23:55 -0400 Subject: Client Certificate subject information Message-ID: Hi, My application uses NGINX as its web server and I am adding support for client certificate authentication. I have a requirement where after NGINX validates the client certificate and provides access to my application, I need to obtain the Subject field of the client certificate to parse certain certificate details from my application. Is there a way to obtain this information from the application level or does this information reside only on the NGINX layer? I tried using APIs like ngx_ssl_get_subject_dn from my application but that didn't work. Please provide some inputs or point me in the right direction if I'm missing something. Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285079,285079#msg-285079 From kohenkatz at gmail.com Fri Aug 2 05:36:56 2019 From: kohenkatz at gmail.com (Moshe Katz) Date: Fri, 2 Aug 2019 01:36:56 -0400 Subject: Client Certificate subject information In-Reply-To: References: Message-ID: If your application is using fastcgi or proxy configuration in nginx, you need to have nginx put the information from the certificate into a FastCGI parameter or an http header that your application can read. Use something like `fastcgi_param DN $ssl_client_s_dn;` for FastCGI or `proxy_set_header X-ClientCert-DN $ssl_client_s_dn;` for proxy. This is a good resource I have used in the past for configuring client certificates: http://blog.nategood.com/client-side-certificate-authentication-in-ngi Alternatively, you can pass the entire certificate to your application and let the application parse it all over again to extract what it wants with something like this: `proxy_set_header X-SSL-CERT $ssl_client_escaped_cert`. See here for more about that: https://serverfault.com/a/629017/105107 On Fri, Aug 2, 2019, 12:24 AM vz19 wrote: > Hi, > > My application uses NGINX as its web server and I am adding support for > client certificate authentication. I have a requirement where after NGINX > validates the client certificate and provides access to my application, I > need to obtain the Subject field of the client certificate to parse certain > certificate details from my application. 
Is there a way to obtain this > information from the application level or does this information reside only > on the NGINX layer? I tried using APIs like ngx_ssl_get_subject_dn from my > application but that didn't work. Please provide some inputs or point me in > the right direction if I'm missing something. > > Thanks > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,285079,285079#msg-285079 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Fri Aug 2 05:57:01 2019 From: peter_booth at me.com (Peter Booth) Date: Fri, 2 Aug 2019 01:57:01 -0400 Subject: Resident memory not released In-Reply-To: <20190729191013.GG1877@mdounin.ru> References: <20190729170228.GD1877@mdounin.ru> <8583e275a38b5b5202f18b7bf648e0fe.NginxMailingListEnglish@forum.nginx.org> <20190729191013.GG1877@mdounin.ru> Message-ID: I?m wondering if you are overthinking this. You said that the memory was reused when the workload increased again. Linux memory management is unintuitive. What would happen if you used a different metric, say # active connections, as your autoscaling metric? It sounds like this would behave ?better?. > On 29 Jul 2019, at 3:10 PM, Maxim Dounin wrote: > > Hello! > > On Mon, Jul 29, 2019 at 02:52:47PM -0400, aledbf wrote: > >>> on your system allocator and its settings. >> >> Do you have a suggestion to enable this behavior (release of memory) using a >> particular allocator or setting? >> Thanks! > > On FreeBSD and/or on any system with jemalloc(), I would expect > memory to be returned to the OS more or less effectively. > > On Linux with standard glibc allocator, consider tuning > MALLOC_MMAP_THRESHOLD_ and MALLOC_TRIM_THRESHOLD_ environment > variables, as documented here: > > http://man7.org/linux/man-pages/man3/mallopt.3.html > > Note that you may need to use the "env" directive to > make sure these are passed to nginx worker processes as well. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Aug 2 06:15:33 2019 From: nginx-forum at forum.nginx.org (vz19) Date: Fri, 02 Aug 2019 02:15:33 -0400 Subject: Client Certificate subject information In-Reply-To: References: Message-ID: <0413f99ac403cd702f7138c42a777024.NginxMailingListEnglish@forum.nginx.org> This is perfect, just what I needed! Thanks a lot Moshe! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285079,285082#msg-285082 From nginx-forum at forum.nginx.org Fri Aug 2 08:34:37 2019 From: nginx-forum at forum.nginx.org (cello86@gmail.com) Date: Fri, 02 Aug 2019 04:34:37 -0400 Subject: Nginx and conditional logformat In-Reply-To: <20190801165753.u7qawc3cfhtwenz3@daoine.org> References: <20190801165753.u7qawc3cfhtwenz3@daoine.org> Message-ID: Hi Francis, I need to use if to configure some behavior during the ssl client authentication because it's enabled at server block and we need to exclude some locations from the authentication. Marcello Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284984,285085#msg-285085 From mouseless at free.fr Fri Aug 2 13:11:05 2019 From: mouseless at free.fr (Vincent M.) 
Date: Fri, 2 Aug 2019 15:11:05 +0200 Subject: Setting Charset on Nginx PHP virtual host In-Reply-To: <20190801200707.vmbtbqhojlpqeiz2@daoine.org> References: <16fc46cb-e503-9cdd-686a-80920d4ab711@free.fr> <20190801200707.vmbtbqhojlpqeiz2@daoine.org> Message-ID: <37961db8-3ec5-a320-9bef-1060c06c9b2b@free.fr> Hello Francis, You are right, the charset map is missing and specified in the nginx log error file: /2019/08/02 12:53:42 [error] 19151#19151: *28013 no "charset_map" between the charsets "UTF-8" and "iso-8859-1" while reading response header from upstream, client: .../ I have found 3 charset maps in my nginx default install, but none for iso-8859-1 to utf8 conversion: koi-utf, koi-win, win-utf So I tried in http with empty charset_map: ??? ??? charset_map iso-8859-1 utf-8 { } But special characters like ? are displayed with ? Where to find a charset_map? Thank you for your help, Vincent. Le 01/08/2019 ? 22:07, Francis Daly a ?crit?: > On Wed, Jul 31, 2019 at 05:29:37PM +0200, Vincent M. wrote: > > Hi there, > >> I have specified charset and overried_charset on both server and location >> and yet, it was still sending headers in UTF-8. > What does the error log say? > > Is there something like > > no "charset_map" between the charsets "utf-8" and "iso-8859-1" while reading response header from upstream > > there? Because that could explain why the conversion does not happen. > >> On Apache we can do: >> >> >> ... >> ??? Header set Content-Type "text/html; charset=iso-8859-1" >> >> >> How to do the same on Nginx? > I think that that Apache config will set the Content-Type header > on responses it sends; but will not do anything to actually make the > response body be valid iso-8859-1. > > The nginx "charset" module expects to modify the response body if > necessary. > > You can try adding a charset_map (http://nginx.org/r/charset_map) -- > either a full one that maps between the one-byte iso-8859-1 values and > utf-8 values that differ; or just an empty one and let the &# conversion > happen instead. > > charset_map iso-8859-1 utf-8 { } > > or include extra pieces like > > E9 C3A9 ; # LATIN SMALL LETTER E WITH ACUTE > > if you want explicit conversion. > > f -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Aug 2 14:01:02 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 2 Aug 2019 15:01:02 +0100 Subject: Nginx and conditional logformat In-Reply-To: References: <20190801165753.u7qawc3cfhtwenz3@daoine.org> Message-ID: <20190802140102.jl3fhcko5gnflzlx@daoine.org> On Fri, Aug 02, 2019 at 04:34:37AM -0400, cello86 at gmail.com wrote: Hi there, > I need to use if to configure some behavior during the ssl client > authentication because it's enabled at server block and we need to exclude > some locations from the authentication. The report I thought I was responding to was that a variable that is "set" inside an "if" inside a "location", does not appear to have the "set" value outside that "if" (for the access_log directive). In that case, defining the variable in a "map" at "http" level and only referring to it in the access_log directive, should Just Work. If it doesn't work for you, then I guess you'll need to find some alternative. 
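As an untested sketch of that idea -- using an illustrative variable name, and assuming your "sslclient" and "sslclientfull" log formats are already defined -- it might look something like:

    # at http level
    map $ssl_client_verify $log_sslclientfull {
        default      1;
        "SUCCESS"    0;
    }

    server {
        ...
        # every request logged with the normal format
        access_log logs/access_log sslclient;
        # extra log line only when client-certificate verification did not succeed
        access_log logs/access_log_full sslclientfull if=$log_sslclientfull;
        ...
    }

The map lives at http level, and because "if=" skips logging when the variable is empty or "0", the second access_log should only record requests where $ssl_client_verify is something other than SUCCESS -- no "set" inside an "if" needed.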
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 2 15:05:54 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 2 Aug 2019 16:05:54 +0100 Subject: Setting Charset on Nginx PHP virtual host In-Reply-To: <37961db8-3ec5-a320-9bef-1060c06c9b2b@free.fr> References: <16fc46cb-e503-9cdd-686a-80920d4ab711@free.fr> <20190801200707.vmbtbqhojlpqeiz2@daoine.org> <37961db8-3ec5-a320-9bef-1060c06c9b2b@free.fr> Message-ID: <20190802150554.e6frf63iq7yi47cb@daoine.org> On Fri, Aug 02, 2019 at 03:11:05PM +0200, Vincent M. wrote: Hi there, > So I tried in http with empty charset_map: > ??? ??? charset_map iso-8859-1 utf-8 { } > But special characters like ? are displayed with ? It seems to work for me as-is. What is different for you? "work for me" means "the utf-8 character ? becomes the 6 characters é, which the html-viewer is expected to display as LATIN SMALL LETTER E WITH ACUTE". nginx.conf: === http { charset_map iso-8859-1 utf-8 { } server { listen 9876; charset utf-8; } server { listen 9877; charset iso-8859-1; override_charset on; location /x/ { proxy_pass http://127.0.0.1:9876/; } } } === $ cat html/a/index.html little e: ?; big E: ? $ od -bc html/a/index.html 0000000 154 151 164 164 154 145 040 145 072 040 303 251 073 040 142 151 l i t t l e e : 303 251 ; b i 0000020 147 040 105 072 040 303 211 040 012 g E : 303 211 \n 0000031 $ curl -i http://127.0.0.10:9876/a/ # headers edited HTTP/1.1 200 OK Server: nginx/1.17.2 Content-Type: text/html; charset=utf-8 little e: ?; big E: ? $ curl -i http://127.0.0.10:9877/x/a/ # headers edited HTTP/1.1 200 OK Server: nginx/1.17.2 Content-Type: text/html; charset=iso-8859-1 little e: é; big E: É And when I change nginx.conf to include a partial "correct" charset map: === charset_map iso-8859-1 utf-8 { E9 C3A9; } === $ curl -i http://127.0.0.10:9877/x/a/ HTTP/1.1 200 OK Server: nginx/1.17.2 Content-Type: text/html; charset=iso-8859-1 little e: ?; big E: É $ curl -i http://127.0.0.10:9877/x/a/ | tail -n 1 | od -bc 0000000 154 151 164 164 154 145 040 145 072 040 351 073 040 142 151 147 l i t t l e e : 351 ; b i g 0000020 040 105 072 040 046 043 062 060 061 073 040 012 E : & # 2 0 1 ; \n 0000034 The utf-8 e-acute was changed to the correct iso-8859-1 octet (octal 351/hex e9/decimal 233), which my terminal renders as "unknown" because it is invalid utf-8. > Where to find a charset_map? It should not be necessary, according to the nginx docs, due to the html-replacement; but if you want one, you can find-or-create one. Basically, every octet from A0 to FF maps to the utf-8 equivalent from C2A0 to C2BF and from C380 to C3BF. The format matches the three example charset-map files that nginx provides. Oh - as one other wrinkle -- it is possible that the visual character e-acute is *not* sent as the octets C3A9; but is instead sent as the octets 65CC81 (e, following by a combining acute accent) -- and off-hand, I don't know nginx will convert that. Possibly é, which might not render very nicely in your html viewer. But before you worry about that extra wrinkle, see what octets are sent, and see where the problem comes in that makes something show as the ? Cheers, f -- Francis Daly francis at daoine.org From roger at netskrt.io Fri Aug 2 23:05:28 2019 From: roger at netskrt.io (Roger Fischer) Date: Fri, 2 Aug 2019 16:05:28 -0700 Subject: NGINX send 206 but wget retries Message-ID: <17F6B0C1-7DD7-4AAD-8C5B-08A25445F45B@netskrt.io> Hello, I am making a byte range request to NGINX using wget. 
NGINX responds with status code 206 (partial content), but instead of downloading the content, wget retries. Request: wget -S 'http://cache.example.com/video5.ts' --header="Range: bytes=0-1023" Output from wget: --2019-08-02 15:36:44-- http://cache.example.com/video5.ts Resolving cache.example.com (cache.example.com)... 172.16.200.5 Connecting to cache.example.com (cache.example.com)|172.16.200.5|:80... connected. HTTP request sent, awaiting response... HTTP/1.1 206 Partial Content Server: nginx/1.17.0a Date: Fri, 02 Aug 2019 22:36:45 GMT Content-Type: video/MP2T Content-Length: 1024 Connection: keep-alive X-Server-IP: 172.16.200.5 Access-Control-Allow-Origin: * Access-Control-Allow-Headers: Range Access-Control-Allow-Methods: GET Timing-Allow-Origin: * Access-Control-Max-Age: 86400 Access-Control-Expose-Headers: X-Server-IP,Location Age: 86555 Content-Range: bytes 0-1023/135090032 Retrying. Then wget sends the request again, with the same result. The same wget request to the origin succeeds. The data is cached properly (wget without the range header succeeds). Adding ?debug to the wget provides not much more. Registered socket 3 for persistent reuse. Disabling further reuse of socket 3. Closed fd 3 Retrying. There is nothing in the NGINX error log. I am speculating that NGINX closes the connection before the data is delivered. But why? Thanks? Roger -------------- next part -------------- An HTML attachment was scrubbed... URL: From mouseless at free.fr Sun Aug 4 13:11:36 2019 From: mouseless at free.fr (Vincent M.) Date: Sun, 4 Aug 2019 15:11:36 +0200 Subject: Setting Charset on Nginx PHP virtual host In-Reply-To: <20190802150554.e6frf63iq7yi47cb@daoine.org> References: <16fc46cb-e503-9cdd-686a-80920d4ab711@free.fr> <20190801200707.vmbtbqhojlpqeiz2@daoine.org> <37961db8-3ec5-a320-9bef-1060c06c9b2b@free.fr> <20190802150554.e6frf63iq7yi47cb@daoine.org> Message-ID: <5a043507-786b-5973-30a2-453136f956e0@free.fr> Unfortunately, I couldn't make it work by this way! I have my server running on port 443 and specified on nginx.conf: So I tried : http { ??? charset_map iso-8859-1 utf-8 { } ??? server { ??????? listen 9876; ??????? charset utf-8; ??? } } On mywebsite conf file: ??? server { ??????? charset_map iso-8859-1 utf-8 { }; ??????? override_charset on; ??????? location /var/www/mywebsite.com/ { ??????????????? proxy_pass http://127.0.0.1:9876/; ??????? } ? ? } But the special characters was displayed with "?" not with ? . Anyway, it's a PHP issue not Nginx. The default PHP charset config is set to "utf-8" and to overwrite it, I have added on the beginning of my script: ??? ini_set('default_charset', 'iso-8859-1'); And it's working fine... Thanks all for your help, Vincent. Le 02/08/2019 ? 17:05, Francis Daly a ?crit?: > On Fri, Aug 02, 2019 at 03:11:05PM +0200, Vincent M. wrote: > > Hi there, > >> So I tried in http with empty charset_map: >> ??? ??? charset_map iso-8859-1 utf-8 { } >> But special characters like ? are displayed with ? > It seems to work for me as-is. What is different for you? > > "work for me" means "the utf-8 character ? becomes the 6 characters > é, which the html-viewer is expected to display as LATIN SMALL > LETTER E WITH ACUTE". > > nginx.conf: > === > http { > charset_map iso-8859-1 utf-8 { } > server { > listen 9876; > charset utf-8; > } > server { > listen 9877; > charset iso-8859-1; > override_charset on; > location /x/ { > proxy_pass http://127.0.0.1:9876/; > } > } > } > === > > $ cat html/a/index.html > little e: ?; big E: ? 
> $ od -bc html/a/index.html > 0000000 154 151 164 164 154 145 040 145 072 040 303 251 073 040 142 151 > l i t t l e e : 303 251 ; b i > 0000020 147 040 105 072 040 303 211 040 012 > g E : 303 211 \n > 0000031 > > $ curl -i http://127.0.0.10:9876/a/ # headers edited > HTTP/1.1 200 OK > Server: nginx/1.17.2 > Content-Type: text/html; charset=utf-8 > > little e: ?; big E: ? > > $ curl -i http://127.0.0.10:9877/x/a/ # headers edited > HTTP/1.1 200 OK > Server: nginx/1.17.2 > Content-Type: text/html; charset=iso-8859-1 > > little e: é; big E: É > > > And when I change nginx.conf to include a partial "correct" charset map: > > === > charset_map iso-8859-1 utf-8 { > E9 C3A9; > } > === > > $ curl -i http://127.0.0.10:9877/x/a/ > HTTP/1.1 200 OK > Server: nginx/1.17.2 > Content-Type: text/html; charset=iso-8859-1 > > little e: ?; big E: É > > $ curl -i http://127.0.0.10:9877/x/a/ | tail -n 1 | od -bc > 0000000 154 151 164 164 154 145 040 145 072 040 351 073 040 142 151 147 > l i t t l e e : 351 ; b i g > 0000020 040 105 072 040 046 043 062 060 061 073 040 012 > E : & # 2 0 1 ; \n > 0000034 > > The utf-8 e-acute was changed to the correct iso-8859-1 octet (octal > 351/hex e9/decimal 233), which my terminal renders as "unknown" because > it is invalid utf-8. > >> Where to find a charset_map? > It should not be necessary, according to the nginx docs, due to the > html-replacement; but if you want one, you can find-or-create one. > > Basically, every octet from A0 to FF maps to the utf-8 equivalent from > C2A0 to C2BF and from C380 to C3BF. > > The format matches the three example charset-map files that nginx > provides. > > Oh - as one other wrinkle -- it is possible that the visual character > e-acute is *not* sent as the octets C3A9; but is instead sent as the > octets 65CC81 (e, following by a combining acute accent) -- and off-hand, > I don't know nginx will convert that. Possibly é, which might not > render very nicely in your html viewer. > > But before you worry about that extra wrinkle, see what octets are sent, > and see where the problem comes in that makes something show as the ? > > Cheers, > > f From francis at daoine.org Sun Aug 4 21:57:15 2019 From: francis at daoine.org (Francis Daly) Date: Sun, 4 Aug 2019 22:57:15 +0100 Subject: Setting Charset on Nginx PHP virtual host In-Reply-To: <5a043507-786b-5973-30a2-453136f956e0@free.fr> References: <16fc46cb-e503-9cdd-686a-80920d4ab711@free.fr> <20190801200707.vmbtbqhojlpqeiz2@daoine.org> <37961db8-3ec5-a320-9bef-1060c06c9b2b@free.fr> <20190802150554.e6frf63iq7yi47cb@daoine.org> <5a043507-786b-5973-30a2-453136f956e0@free.fr> Message-ID: <20190804215715.4nlz5bvli5iun6w3@daoine.org> On Sun, Aug 04, 2019 at 03:11:36PM +0200, Vincent M. wrote: Hi there, > But the special characters was displayed with "?" not with ? . I wonder... The nginx docs say """Missing characters in the range 80-FF are replaced with ???. """ Is there any chance that the response body is actually iso-8859-1, but the header claims that it is utf-8? In that case, you would want to change the header, but *not* convert the body. (Strictly: you would want to fix the source, so that it does not lie about its content. But if you can't do that, then "just" changing the header in nginx may be adequate.) > Anyway, it's a PHP issue not Nginx. The default PHP charset config is set to > "utf-8" and to overwrite it, I have added on the beginning of my script: > ??? ini_set('default_charset', 'iso-8859-1'); > And it's working fine... 
Great that you have an overall configuration that works for you. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Aug 5 08:30:58 2019 From: nginx-forum at forum.nginx.org (blason) Date: Mon, 05 Aug 2019 04:30:58 -0400 Subject: Need help on Oauth-2.0 Token with Nginx reverse proxy In-Reply-To: References: Message-ID: <51c52a54acad9a49be13a350db12d985.NginxMailingListEnglish@forum.nginx.org> Hi Folks, Really no solution for this? Can someone please help? Now I am seeing beloe error in access.log and my file is like this 11.22.33.44 - - [05/Aug/2019:14:50:58 +0530] "POST /connect/token HTTP/1.1" 404 191 "https://test.example.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36" location = /connect/token { internal; proxy_set_header Authorization "bearer xxxxx"; proxy_set_header Content-Type "application/x-www-form-urlencoded"; proxy_method POST; # proxy_pass_header Authorization; proxy_pass https://test.example.net:99/connect/token; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285048,285107#msg-285107 From nginx-forum at forum.nginx.org Mon Aug 5 10:00:33 2019 From: nginx-forum at forum.nginx.org (blason) Date: Mon, 05 Aug 2019 06:00:33 -0400 Subject: Need help on Oauth-2.0 Token with Nginx reverse proxy In-Reply-To: References: Message-ID: <82b2dc3a77673e918674a513a6655d99.NginxMailingListEnglish@forum.nginx.org> Hi Folks, Really no solution for this? Can someone please help? Now I am seeing beloe error in access.log and my file is like this 11.22.33.44 - - [05/Aug/2019:14:50:58 +0530] "POST /connect/token HTTP/1.1" 404 191 "https://test.example.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36" location = /connect/token { internal; proxy_set_header Authorization "bearer xxxxx"; proxy_set_header Content-Type "application/x-www-form-urlencoded"; proxy_method POST; # proxy_pass_header Authorization; proxy_pass https://test.example.net:99/connect/token; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285048,285108#msg-285108 From nginx-forum at forum.nginx.org Mon Aug 5 11:12:55 2019 From: nginx-forum at forum.nginx.org (fredr) Date: Mon, 05 Aug 2019 07:12:55 -0400 Subject: Resident memory not released In-Reply-To: References: Message-ID: I guess you are right. The main reason I want to scale on memory rather than number of connections, is that we wouldn't have to calculate how many connections a node can handle. Eg, if we change the memory size of each node, we also have to update the automatic scaling metric, or lets say there is a new version of nginx that uses less memory per connection, then we would have to re-calibrate the scaling. I did read up a bit on the MALLOC_CHECK_ variable, and it sound like that should not be used in production, as it reduces the overall performance. I've also tried to compile jemalloc and load that via the LD_PRELOAD environment variable. That seems to work pretty good, memory is released/reclaimed as I would expect, but only when running nginx as root. Not sure why it doesn't work when starting nginx as an other user though. But I think I'll do as you suggested and scale it on number of connections for now. I'm a bit out of my depth here :) Peter Booth via nginx Wrote: ------------------------------------------------------- > I?m wondering if you are overthinking this. You said that the memory > was reused when the workload increased again. 
Linux memory management > is unintuitive. What would happen if you used a different metric, say > # active connections, as your autoscaling metric? It sounds like this > would behave ?better?. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285025,285109#msg-285109 From mouseless at free.fr Mon Aug 5 12:31:00 2019 From: mouseless at free.fr (Vincent M.) Date: Mon, 5 Aug 2019 14:31:00 +0200 Subject: Setting Charset on Nginx PHP virtual host In-Reply-To: <20190804215715.4nlz5bvli5iun6w3@daoine.org> References: <16fc46cb-e503-9cdd-686a-80920d4ab711@free.fr> <20190801200707.vmbtbqhojlpqeiz2@daoine.org> <37961db8-3ec5-a320-9bef-1060c06c9b2b@free.fr> <20190802150554.e6frf63iq7yi47cb@daoine.org> <5a043507-786b-5973-30a2-453136f956e0@free.fr> <20190804215715.4nlz5bvli5iun6w3@daoine.org> Message-ID: <9328f454-d0ac-45e9-27b1-ff63caab35b6@free.fr> Le 04/08/2019 ? 23:57, Francis Daly a ?crit?: > On Sun, Aug 04, 2019 at 03:11:36PM +0200, Vincent M. wrote: > > Hi there, > >> But the special characters was displayed with "?" not with ? . > I wonder... > > The nginx docs say """Missing characters in the range 80-FF are replaced > with ???. """ > > Is there any chance that the response body is actually iso-8859-1, > but the header claims that it is utf-8? In that case, you would want to > change the header, but *not* convert the body. The header tag for the charset has not been changed is set to "iso-8859-1": > (Strictly: you would want to fix the source, so that it does not lie > about its content. But if you can't do that, then "just" changing the > header in nginx may be adequate.) > >> Anyway, it's a PHP issue not Nginx. The default PHP charset config is set to >> "utf-8" and to overwrite it, I have added on the beginning of my script: >> ??? ini_set('default_charset', 'iso-8859-1'); >> And it's working fine... > Great that you have an overall configuration that works for you. Yes, it's an old website for which I don't want to loose too many time. > > Cheers, > > f -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajkumaradass at avaya.com Mon Aug 5 12:55:39 2019 From: rajkumaradass at avaya.com (R, Rajkumar (Raj)) Date: Mon, 5 Aug 2019 12:55:39 +0000 Subject: Resident memory not released In-Reply-To: References: Message-ID: I'm also facing this problem of memory not being released in Kubernetes, but we are using alpine image which includes musl library. Could you please provide your thoughts in this case. thanks, raj -----Original Message----- From: nginx On Behalf Of fredr Sent: Monday, August 5, 2019 4:43 PM To: nginx at nginx.org Subject: Re: Resident memory not released I guess you are right. The main reason I want to scale on memory rather than number of connections, is that we wouldn't have to calculate how many connections a node can handle. Eg, if we change the memory size of each node, we also have to update the automatic scaling metric, or lets say there is a new version of nginx that uses less memory per connection, then we would have to re-calibrate the scaling. I did read up a bit on the MALLOC_CHECK_ variable, and it sound like that should not be used in production, as it reduces the overall performance. I've also tried to compile jemalloc and load that via the LD_PRELOAD environment variable. That seems to work pretty good, memory is released/reclaimed as I would expect, but only when running nginx as root. Not sure why it doesn't work when starting nginx as an other user though. 
But I think I'll do as you suggested and scale it on number of connections for now. I'm a bit out of my depth here :) Peter Booth via nginx Wrote: ------------------------------------------------------- > I?m wondering if you are overthinking this. You said that the memory > was reused when the workload increased again. Linux memory management > is unintuitive. What would happen if you used a different metric, say > # active connections, as your autoscaling metric? It sounds like this > would behave ?better?. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285025,285109#msg-285109 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Aug 5 16:37:47 2019 From: nginx-forum at forum.nginx.org (aledbf) Date: Mon, 05 Aug 2019 12:37:47 -0400 Subject: Resident memory not released In-Reply-To: References: Message-ID: > I've also tried to compile jemalloc and load that via the LD_PRELOAD environment variable. We used jemalloc in the past, but that approach also introduces different issues like not being able to use third-party monitory agents like dynatrace. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285025,285122#msg-285122 From nginx-forum at forum.nginx.org Tue Aug 6 07:52:45 2019 From: nginx-forum at forum.nginx.org (fredr) Date: Tue, 06 Aug 2019 03:52:45 -0400 Subject: Resident memory not released In-Reply-To: References: Message-ID: <4937d04a002d386830adf612c77023dd.NginxMailingListEnglish@forum.nginx.org> aledbf Wrote: ------------------------------------------------------- > We used jemalloc in the past, but that approach also introduces > different issues like not being able to use third-party monitory > agents like dynatrace. Was it for the same reason as me you tried jemalloc? did you find any other solutions? Also, did you set it up via LD_PRELOAD, or how did you set it up? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285025,285127#msg-285127 From nginx-forum at forum.nginx.org Tue Aug 6 14:27:57 2019 From: nginx-forum at forum.nginx.org (jvanetten) Date: Tue, 06 Aug 2019 10:27:57 -0400 Subject: Proxy Caching ignore path Message-ID: I have a situation where I need to enable proxy cache for a gateway but anything that goes to /gateway/public/files/ I do not want to cache. I have tried nested locations and all kinds of configurations with no success. It usually lands up with 404 errors on /gateway/public/files/ the complete url includes a file reference like so /gateway/public/files/kd774831ldja3 caching is working for anything going to /gateway/ location /gateway/ { proxy_cache gateway_cache; proxy_cache_valid 200 302 10m; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_hide_header Access-Control-Allow-Origin; add_header 'Access-Control-Allow-Origin' '*' always; rewrite ^/gateway/(.*) /$1 break; proxy_pass $elb; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285128,285128#msg-285128 From nginx-forum at forum.nginx.org Tue Aug 6 15:27:21 2019 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 06 Aug 2019 11:27:21 -0400 Subject: Proxy Caching ignore path In-Reply-To: References: Message-ID: The first location match is the active one, add another location before this one to change its caching behavior. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285128,285130#msg-285130 From nginx-forum at forum.nginx.org Tue Aug 6 16:12:34 2019 From: nginx-forum at forum.nginx.org (jvanetten) Date: Tue, 06 Aug 2019 12:12:34 -0400 Subject: Proxy Caching ignore path In-Reply-To: References: Message-ID: itpp2012 Wrote: ------------------------------------------------------- > The first location match is the active one, add another location > before this one to change its caching behavior. So adding the following below the other location should work?: location /gateway/public/files/ { expires -1; add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0'; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285128,285131#msg-285131 From nginx-forum at forum.nginx.org Tue Aug 6 17:08:57 2019 From: nginx-forum at forum.nginx.org (aledbf) Date: Tue, 06 Aug 2019 13:08:57 -0400 Subject: Resident memory not released In-Reply-To: <4937d04a002d386830adf612c77023dd.NginxMailingListEnglish@forum.nginx.org> References: <4937d04a002d386830adf612c77023dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Was it for the same reason as me you tried jemalloc? Yes > did you find any other solutions? No > Also, did you set it up via LD_PRELOAD, or how did you set it up? No. I added the -ljemalloc option in --with-ld-opt flag in the build process Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285025,285134#msg-285134 From nginx-forum at forum.nginx.org Tue Aug 6 20:04:44 2019 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 06 Aug 2019 16:04:44 -0400 Subject: Proxy Caching ignore path In-Reply-To: References: Message-ID: Above not below. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285128,285135#msg-285135 From nginx-forum at forum.nginx.org Wed Aug 7 04:28:24 2019 From: nginx-forum at forum.nginx.org (atulsharma1989) Date: Wed, 07 Aug 2019 00:28:24 -0400 Subject: =?UTF-8?Q?NGINX_open_source_reverse_proxy_is_not_connected__with_same_ip_a?= =?UTF-8?Q?s_incoming_request_=2C_it=E2=80=99s_changing_ip_due_to_this_cli?= =?UTF-8?Q?ent_connnection_is_not_happening_-_UDP?= Message-ID: <7f122d3531b02f5e50dc7184515d072c.NginxMailingListEnglish@forum.nginx.org> We are using NGINX open source version 1.12.2 and we have NaT ip configure on vm physical interface. Issue we have , NGINX is changing IP address during reverse proxy step. udp Incoming request is coming from client in internet and request transfer to connected NAT interface ip Then it split connect to private ip on other physical interface and forward request to backend Client, But during same reverse process while going out NGINX is changing IP address. It should be same nat IP as request was recieved, but it?s trying to connect with client via private Ip due to this connection is not happening and our service is not working Note: We are using UDP protocol. Please suggest how to fix this Does open source NGINX support UDP load balancing ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285136,285136#msg-285136 From francis at daoine.org Wed Aug 7 07:15:28 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 7 Aug 2019 08:15:28 +0100 Subject: Proxy Caching ignore path In-Reply-To: References: Message-ID: <20190807071528.mjd5uead45ttcrsi@daoine.org> On Tue, Aug 06, 2019 at 11:27:21AM -0400, itpp2012 wrote: Hi there, > The first location match is the active one, add another location before this > one to change its caching behavior. No. 
*regex* locations care about the order in the config. Other locations do not -- longest match is best (plus some more details). Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Aug 7 07:28:28 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 7 Aug 2019 08:28:28 +0100 Subject: Proxy Caching ignore path In-Reply-To: References: Message-ID: <20190807072828.6ejuttecfh4itknk@daoine.org> On Tue, Aug 06, 2019 at 10:27:57AM -0400, jvanetten wrote: Hi there, > I have a situation where I need to enable proxy cache for a gateway but > anything that goes to /gateway/public/files/ I do not want to cache. I have > tried nested locations and all kinds of configurations with no success. It > usually lands up with 404 errors on /gateway/public/files/ the complete url > includes a file reference like so /gateway/public/files/kd774831ldja3 > > caching is working for anything going to /gateway/ In nginx, one request is handled in one location. Only the config in, or inherited into, that location, matters. And you can nest some location{}s, if you want to inherit config between them. So if I've understood what you want to do, the simplest way is probably to add a nested location /gateway/public/files/ { proxy_cache off; rewrite ^/gateway/(.*) /$1 break; proxy_pass $elb; } within your current "location /gateway/ {" block. "proxy_pass" does not inherit, so needs to be there. "rewrite" does not inherit, so needs to be there. "proxy_cache" does inherit, so needs to be disabled if you do not want to proxy_cache things from upstream's /public/files/. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Aug 7 10:16:29 2019 From: nginx-forum at forum.nginx.org (neomaq) Date: Wed, 07 Aug 2019 06:16:29 -0400 Subject: slow connection on SSL port (TTFB) Message-ID: Hello there is a problem: slow connection to nginx server telnet server 443 1-8 random sec before TTFB all possible network stack tunings are applied, similar problems are not observed on other(non nginx) ports 32 vCPU Intel(R) Xeon(R) CPU E5-2630 v4 96 GB RAM avg CPU load -20% 1 GB network (tested on local internal network) there are over 1400 virtual hosts with SSL the problem is observed during busy hours nginx: user www-data; worker_processes 64; pid /run/nginx.pid; worker_rlimit_nofile 16384; events { use epoll; worker_connections 16384; multi_accept on;} http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; server_names_hash_max_size 524280; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; } ---------------------------------------- there are 5-15K ESTANLISHED connections and over 17K open/TIME_WAIT ports What can be done to reduce the connection time to the server? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285142,285142#msg-285142 From mdounin at mdounin.ru Wed Aug 7 10:53:33 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 7 Aug 2019 13:53:33 +0300 Subject: =?UTF-8?Q?Re=3A_NGINX_open_source_reverse_proxy_is_not_connected__with_sam?= =?UTF-8?Q?e_ip_as_incoming_request_=2C_it=E2=80=99s_changing_ip_due_to_th?= =?UTF-8?Q?is_client_connnection_is_not_happening_-_UDP?= In-Reply-To: <7f122d3531b02f5e50dc7184515d072c.NginxMailingListEnglish@forum.nginx.org> References: <7f122d3531b02f5e50dc7184515d072c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190807105333.GV1877@mdounin.ru> Hello! 
On Wed, Aug 07, 2019 at 12:28:24AM -0400, atulsharma1989 wrote: > We are using NGINX open source version 1.12.2 and we have NaT ip configure > on vm physical interface. > > Issue we have , NGINX is changing IP address during reverse proxy step. Changes with nginx 1.13.0, 25 Apr 2017: *) Bugfix: if a server in the stream module listened on a wildcard address, the source address of a response UDP datagram could differ from the original datagram destination address. Upgrade, it should fix this. -- Maxim Dounin http://mdounin.ru/ From anoopalias01 at gmail.com Wed Aug 7 11:30:43 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 7 Aug 2019 17:00:43 +0530 Subject: slow connection on SSL port (TTFB) In-Reply-To: References: Message-ID: Do you see a large ttfb on a static html page ? , if an upstream like proxy/fastcgi is involved and they are slow to respond the ttfb also will be high 17K open/TIME_WAIT -- investigate this as this dont seem normal On Wed, Aug 7, 2019 at 3:46 PM neomaq wrote: > Hello > there is a problem: > slow connection to nginx server > > telnet server 443 > 1-8 random sec before TTFB > > all possible network stack tunings are applied, similar problems are not > observed on other(non nginx) ports > > 32 vCPU Intel(R) Xeon(R) CPU E5-2630 v4 > 96 GB RAM > avg CPU load -20% > 1 GB network (tested on local internal network) > > there are over 1400 virtual hosts with SSL > the problem is observed during busy hours > > nginx: > user www-data; > worker_processes 64; > pid /run/nginx.pid; > worker_rlimit_nofile 16384; > events { > use epoll; > worker_connections 16384; > multi_accept on;} > http { > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > server_names_hash_max_size 524280; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE > ssl_prefer_server_ciphers on; > } > ---------------------------------------- > there are 5-15K ESTANLISHED connections and over 17K open/TIME_WAIT ports > > What can be done to reduce the connection time to the server? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,285142,285142#msg-285142 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From 15555513217 at 163.com Fri Aug 9 05:52:15 2019 From: 15555513217 at 163.com (=?GBK?B?vaqyrtHz?=) Date: Fri, 9 Aug 2019 13:52:15 +0800 (CST) Subject: Static resource failed to pass through cookie Message-ID: <75e4c884.5b3b.16c74ef4e04.Coremail.15555513217@163.com> map $cookie_test_debug $forward_to_gray { # forward to gray1 9cb88042edc55bf85c22e89cf880c63b 10.0.0.1; } location ~ ^/test/ { root /data/www/project; index index.html; if ( $uri !~ (css|js)$ ) { rewrite ^.*$ /test/index.html break; } if ( $forward_to_gray != '' ) { proxy_pass http://$forward_to_gray$request_uri; break; } } location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; try_files $uri $uri/ /index.php?$query_string; if ( $forward_to_gray != '' ) { proxy_pass http://$forward_to_gray$request_uri; break; } } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From 15555513217 at 163.com Fri Aug 9 08:26:13 2019 From: 15555513217 at 163.com (=?GBK?B?vaqyrtHz?=) Date: Fri, 9 Aug 2019 16:26:13 +0800 (CST) Subject: Static resource failed to pass through cookie Message-ID: <24a06404.7e53.16c757c4137.Coremail.15555513217@163.com> map $cookie_test_debug $forward_to_gray { # forward to gray1 9cb88042edc55bf85c22e89cf880c63b 10.0.0.1; } location ~ ^/test/ { root /data/www/project; index index.html; if ( $uri !~ (css|js)$ ) { rewrite ^.*$ /test/index.html break; } if ( $forward_to_gray != '' ) { proxy_pass http://$forward_to_gray$request_uri; break; } } location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; try_files $uri $uri/ /index.php?$query_string; if ( $forward_to_gray != '' ) { proxy_pass http://$forward_to_gray$request_uri; break; } } According to the above configuration, my php request can be routed through a cookie, but the static resource will report 404, I don't know if there is a problem with my configuration. -------------- next part -------------- An HTML attachment was scrubbed... URL: From koocr at mailc.net Fri Aug 9 15:54:26 2019 From: koocr at mailc.net (koocr at mailc.net) Date: Fri, 09 Aug 2019 08:54:26 -0700 Subject: Fallback default server sharing cert information about other domains than for the URL you visit ? Message-ID: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> Hi, My own domain, let's say 'example.com', is registered in the HSTS preload database (https://hstspreload.org). I setup my domain as virtual host in Nginx, /etc/nginx/sites-enabled/example.conf server { listen 172.17.0.1:80; server_name example.com www.example.com; location / { return 301 https://example.com$request_uri; } } server { listen 172.17.0.1:443 ssl http2; server_name example.com www.example.com; ssl_trusted_certificate "/etc/ssl/trusted.crt.pem"; ssl_certificate "/etc/ssl/chain.crt.pem"; ssl_certificate_key "/etc/ssl/privkey.pem"; add_header Strict-Transport-Security "max-age=315360000; includeSubDomains; preload"; location / {...} } The cert is good for example.com + www.example.com. When I go to https://example.com it works like you would expect. I also set up a fallback, default server in my main nginx config /etc/nginx/nginx.conf ... server { listen 80 default_server; listen [::]:80 ipv6only=on default_server; server_name _; return 301 https://$host; } server { listen 443 ssl http2 default_server; listen [::]:443 ssl http2 ipv6only=on default_server; server_name _; ssl_trusted_certificate "/etc/ssl/trusted.crt.pem"; ssl_certificate "/etc/ssl/null.crt.pem"; ssl_certificate_key "/etc/ssl/nullkey.pem"; return 444; } include sites-enabled/*.conf; If I go to a subdomain of my domain that has a DNS A-record pointing to the same IP, but no Nginx virtual hosted site, https://subdomain.example.com in the browser I get this message Did Not Connect: Potential Security Issue Firefox detected a potential security threat and did not continue to subdomain.example.com because this website requires a secure connection. What can you do about it? subdomain.example.com has a security policy called HTTP Strict Transport Security (HSTS), which means that Firefox can only connect to it securely. You can?t add an exception to visit this site. The issue is most likely with the website, and there is nothing you can do to resolve it. You can notify the website?s administrator about the problem. Learn more? Websites prove their identity via certificates. 
Firefox does not trust this site because it uses a certificate that is not valid for subdomain.example.com. The certificate is only valid for the following names: example.com, www.example.com Error code: SSL_ERROR_BAD_CERT_DOMAIN View Certificate I expect it to fail with a 444, and only have info about the failed subdomain. Why does it respond with cert info about the "example.com, www.example.com " certs at all? Those are only for the full-domain site. What do I need to set up to just get a fallback 444 response and NO information about any other domain's certs etc, when I visit the un-hosted subdomain.example.com? From r at roze.lv Fri Aug 9 17:06:16 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 9 Aug 2019 20:06:16 +0300 Subject: Fallback default server sharing cert information about other domains than for the URL you visit ? In-Reply-To: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> References: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> Message-ID: <002a01d54ed4$c16d80b0$44488210$@roze.lv> > I expect it to fail with a 444, and only have info about the failed subdomain. The SSL handshake happens before the http status and since the browser doesn't get a valid certificate it immediately throws an error and ignores the rest. Unless the users override the error on the browser side (iirc with HSTS it's not even possible as with invalid or expired certs) you can't expect that they will get the return code. > Why does it respond with cert info about the "example.com, www.example.com > " certs at all? Those are only for the full-domain site. I might be wrong (needs a clarification from nginx dev/support people) but if the configuration is as you have included in the email it might be that the default_server directive doesn't work as expected since you have different listen blocks: listen 172.17.0.1:443 ssl http2; vs listen 443 ssl http2 default_server; Since nginx can have a different default_server for different 'address:port' pairs depending on what 'listen 443' is actually expanded to (just a guess 0.0.0.0:443) it could be that the nginx decides that it has to use the certificates from first server {} (in the order in configuration) rather than the catch all fallback. Just for testing purposes (if possible) you could either add the IP to both listen directives or remove the ip part from the full-domain server {} block to see if it changes anything. Other than that depending on the requirements the other options are just to make a matching server block with a valid certificate (with Lets Encrypt it's quite simple and free) or have an *.example.com wildcard SSL so the browsers are satisfied with whateversubdomain.example.com. rr From koocr at mailc.net Fri Aug 9 17:27:38 2019 From: koocr at mailc.net (koocr at mailc.net) Date: Fri, 09 Aug 2019 10:27:38 -0700 Subject: Fallback default server sharing cert information about other domains than for the URL you visit ? In-Reply-To: <002a01d54ed4$c16d80b0$44488210$@roze.lv> References: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> <002a01d54ed4$c16d80b0$44488210$@roze.lv> Message-ID: <67bd658e-08cf-4d92-8a4a-133fba3f7e2f@www.fastmail.com> Hi, > you can't expect that they will get the return code. Okay I guess that makes sense. Is there any other way to get an attempt to connect to a un-hosted site to get a "nobody home, go away" response? Something other than the current "there's a problem with the cert" mis-message? > I might be wrong (needs a clarification from nginx dev/support people) No worry. 
Hope somebody that's sure will chime in eventually. > Just for testing purposes (if possible) you could either add the IP to > both listen directives or remove the ip part from the full-domain > server {} block to see if it changes anything. Hm. That doesn't really make sense to me. This server has multiple IPs. The hosted server needs to respond on a specific IP, so it needs the specific IP. The fallback is supposed to work for all "whenever it doesn't match" cases, so it doesn't get an IP, right? Did I misunderstand your point? > Other than that depending on the requirements the other options are > just to make a matching server block with a valid certificate (with > Lets Encrypt it's quite simple and free) or have an *.example.com > wildcard SSL so the browsers are satisfied with A subdomain wildcard like that assumes that ALL subdomains of example.com are unhosted. That's not true here. There are an infinite number of possible mismatches. I can't really set up a "valid cert" for each one. This is about the fallback. I thought that's what the fallback is supposed to handle. Let's see if a 'dev' has some other comments. Thanks! From r at roze.lv Fri Aug 9 18:14:05 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 9 Aug 2019 21:14:05 +0300 Subject: Fallback default server sharing cert information about other domains than for the URL you visit ? In-Reply-To: <67bd658e-08cf-4d92-8a4a-133fba3f7e2f@www.fastmail.com> References: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> <002a01d54ed4$c16d80b0$44488210$@roze.lv> <67bd658e-08cf-4d92-8a4a-133fba3f7e2f@www.fastmail.com> Message-ID: <002d01d54ede$3b2319c0$b1694d40$@roze.lv> > > Just for testing purposes (if possible) you could either add the IP to > > both listen directives or remove the ip part from the full-domain > > server {} block to see if it changes anything. > > Hm. That doesn't really make sense to me. > > This server has multiple IPs. The hosted server needs to respond on a specific IP, > so it needs the specific IP. > > The fallback is supposed to work for all "whenever it doesn't match" cases, so it > doesn't get an IP, right? Yes and no the 'default' fallback works for particular 'address:port' and listen ip:443 seems to be different than just listen 443; Out of personal interest I spun up an instance to replicate your setup and it kind of confirms my suspicion: If you have: server { listen 443 ssl http2; ssl_certificate realdomain.crt; ssl_certificate_key realdomain.key; server_name realdomain; return 403; } server { listen 443 ssl http2 default; ssl_certificate dummy.crt; ssl_certificate_key dummy.key; server_name _; return 402; } Everything works as expected - you get first server for https://realdomain and dummy cert for anything else. The moment you change the first listen to listen real.ip:443 ssl http2; the 'default_server' doesn't work anymore and you always get the 'realdomain' certificate (and also the test 403 response) for nondefined subdomain requests and the order of server {} block The workaround I found is then is to also define a dummy listen ip:port for the catch server then the real certificate is not "leaked" in random requests: server { listen real.ip:443 ssl http2 default; ssl_certificate dummy.crt; ..... } Unless there are (old) clients which don't support SNI (server name indication) in general specifying the IP only on dns-level and using just 'listen 443 will make the configuration more simple. 
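Put together, the whole thing would look roughly like this (only a sketch from the test setup above - 'real.ip', 'realdomain' and the certificate file names are placeholders):

server {
    listen real.ip:443 ssl http2 default_server;
    server_name _;
    ssl_certificate dummy.crt;
    ssl_certificate_key dummy.key;
    return 444;
}

server {
    listen real.ip:443 ssl http2;
    server_name realdomain;
    ssl_certificate realdomain.crt;
    ssl_certificate_key realdomain.key;
    ...
}

That way any request whose SNI/Host doesn't match realdomain lands on the dummy certificate instead of leaking the real one.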
rr From r at roze.lv Fri Aug 9 18:17:34 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 9 Aug 2019 21:17:34 +0300 Subject: Fallback default server sharing cert information about other domains than for the URL you visit ? References: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> <002a01d54ed4$c16d80b0$44488210$@roze.lv> <67bd658e-08cf-4d92-8a4a-133fba3f7e2f@www.fastmail.com> Message-ID: <003101d54ede$b7a04d60$26e0e820$@roze.lv> > certificate (and also the test 403 response) for nondefined subdomain requests > and the order of server {} block Missed the ending of sentence - .. the order of server {} blocks doesn't matter (in the test case). rr From koocr at mailc.net Fri Aug 9 18:25:06 2019 From: koocr at mailc.net (koocr at mailc.net) Date: Fri, 09 Aug 2019 11:25:06 -0700 Subject: Fallback default server sharing cert information about other domains than for the URL you visit ? In-Reply-To: <003101d54ede$b7a04d60$26e0e820$@roze.lv> References: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> <002a01d54ed4$c16d80b0$44488210$@roze.lv> <67bd658e-08cf-4d92-8a4a-133fba3f7e2f@www.fastmail.com> <003101d54ede$b7a04d60$26e0e820$@roze.lv> Message-ID: <1f351ecc-080c-498e-825b-5d88422b6812@www.fastmail.com> I'll get a set up I can fool around with that more easily and see how that works here. I notice that you're not using 'default_server" in your listen directive, just 'default'. Reading here https://nginx.org/en/docs/http/ngx_http_core_module.html#listen It's not a listed option and it says "In versions prior to 0.8.21 this parameter is named simply default. " Was that a typo? Or is there a new or different usage now ? From r at roze.lv Fri Aug 9 18:43:17 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 9 Aug 2019 21:43:17 +0300 Subject: Fallback default server sharing cert information about other domains than for the URL you visit ? In-Reply-To: <1f351ecc-080c-498e-825b-5d88422b6812@www.fastmail.com> References: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> <002a01d54ed4$c16d80b0$44488210$@roze.lv> <67bd658e-08cf-4d92-8a4a-133fba3f7e2f@www.fastmail.com> <003101d54ede$b7a04d60$26e0e820$@roze.lv> <1f351ecc-080c-498e-825b-5d88422b6812@www.fastmail.com> Message-ID: <003201d54ee2$4f15c5a0$ed4150e0$@roze.lv> > "In versions prior to 0.8.21 this parameter is named simply default. " > > Was that a typo? Or is there a new or different usage now ? Not a typo just nginx being backwards compatible and me using it since 0.5.x or even earlier (and being lazy). As far as I remember the directive has been renamed to better convey the meaning. rr From koocr at mailc.net Fri Aug 9 18:48:58 2019 From: koocr at mailc.net (koocr at mailc.net) Date: Fri, 09 Aug 2019 11:48:58 -0700 Subject: Fallback default server sharing cert information about other domains than for the URL you visit ? In-Reply-To: <003201d54ee2$4f15c5a0$ed4150e0$@roze.lv> References: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> <002a01d54ed4$c16d80b0$44488210$@roze.lv> <67bd658e-08cf-4d92-8a4a-133fba3f7e2f@www.fastmail.com> <003101d54ede$b7a04d60$26e0e820$@roze.lv> <1f351ecc-080c-498e-825b-5d88422b6812@www.fastmail.com> <003201d54ee2$4f15c5a0$ed4150e0$@roze.lv> Message-ID: Thanks for the help. 
I'm really feeling pretty stupid atm since I can't seem to find & understand a how-to document to get this right :-/ So I have this config server { listen 80 http2 default_server; listen [::]:80 http2 ipv6only=on default_server; server_name _; return 301 https://$host; } server { listen 172.17.0.1:443 ssl http2 default_server; listen [FE80:...:0001]:443 ssl http2 ipv6only=on default_server; server_name _; ssl_trusted_certificate "/etc/ssl/trusted.crt.pem"; ssl_certificate "/etc/ssl/dummy.crt.pem"; ssl_certificate_key "/etc/ssl/dummy.key.pem"; return 444; } server { listen 443 ssl http2 default_server; listen [::]:443 ssl http2 ipv6only=on default_server; server_name _; ssl_trusted_certificate "/etc/ssl/trusted.crt.pem"; ssl_certificate "/etc/ssl/dummy.crt.pem"; ssl_certificate_key "/etc/ssl/dummy.key.pem"; return 444; } server { listen 172.17.0.1:80 http2; listen [FE80:...:0001]:80 http2; server_name example.com www.example.com; location / { return 301 https://example.com$request_uri; } } server { listen 172.17.0.1:443 ssl http2; listen [FE80:...:0001]:443 ssl http2 ipv6only=on default_server; server_name example.com www.example.com; ssl_trusted_certificate "/etc/ssl/trusted.crt.pem"; ssl_certificate "/etc/ssl/chain.crt.pem"; ssl_certificate_key "/etc/ssl/privkey.pem"; add_header Strict-Transport-Security "max-age=315360000; includeSubDomains; preload"; location / {...} } With that config when I try to launch nginx it fails with these errors Aug 09 11:29:21 myhost nginx[10095]: nginx: [emerg] bind() to [::]:443 failed (98: Address already in use) If I comment out the IP-less listener # server { # listen 443 ssl http2 default_server; # listen [::]:443 ssl http2 ipv6only=on default_server; # server_name _; # ssl_trusted_certificate "/etc/ssl/trusted.crt.pem"; # ssl_certificate "/etc/ssl/dummy.crt.pem"; # ssl_certificate_key "/etc/ssl/dummy.key.pem"; # return 444; # } and try again, I do get a site fail with that "Websites prove their identity via certificates. Firefox does not trust this site because it uses a certificate that is not valid for ..." error again. From r at roze.lv Fri Aug 9 19:19:58 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 9 Aug 2019 22:19:58 +0300 Subject: Fallback default server sharing cert information about other domains than for the URL you visit ? In-Reply-To: References: <8147cade-58e9-44bc-b4a7-04ff2e9ce8a9@www.fastmail.com> <002a01d54ed4$c16d80b0$44488210$@roze.lv> <67bd658e-08cf-4d92-8a4a-133fba3f7e2f@www.fastmail.com> <003101d54ede$b7a04d60$26e0e820$@roze.lv> <1f351ecc-080c-498e-825b-5d88422b6812@www.fastmail.com> <003201d54ee2$4f15c5a0$ed4150e0$@roze.lv> Message-ID: <003801d54ee7$6f6eec50$4e4cc4f0$@roze.lv> > With that config when I try to launch nginx it fails with these errors > > Aug 09 11:29:21 myhost nginx[10095]: nginx: [emerg] bind() to [::]:443 > failed (98: Address already in use) Try to remove the ipv6only=on option it should work just fine without. Imo the [FE80:...:0001]:443 conflicts with [::]:443 in linux since the ipv6only option forces nginx to try to create a separate listening socket while the port is in use (hence the error). rr From al-nginx at none.at Sat Aug 10 08:46:29 2019 From: al-nginx at none.at (Aleksandar Lazic) Date: Sat, 10 Aug 2019 10:46:29 +0200 Subject: Static resource failed to pass through cookie In-Reply-To: <24a06404.7e53.16c757c4137.Coremail.15555513217@163.com> References: <24a06404.7e53.16c757c4137.Coremail.15555513217@163.com> Message-ID: <9225b98d-070a-2e0a-a56c-9e546d332f0a@none.at> Hi. 
Am 09.08.2019 um 10:26 schrieb ???:
> map $cookie_test_debug $forward_to_gray {

default 10.0.0.2;

I would suggest to add this entry according to the doc and remove the if blocks.

https://nginx.org/en/docs/http/ngx_http_map_module.html#map

>     # forward to gray1
>     9cb88042edc55bf85c22e89cf880c63b 10.0.0.1;
> }
>     location ~ ^/test/ {
>         root /data/www/project;
>         index index.html;
>         if ( $uri !~ (css|js)$ ) {
>             rewrite ^.*$ /test/index.html break;
>         }
>         if ( $forward_to_gray != '' ) {
>             proxy_pass http://$forward_to_gray$request_uri;
>             break;
>         }
>     }
>     location / {
>         proxy_set_header Host $host;
>         proxy_set_header X-Real-IP $remote_addr;
>         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>         try_files $uri $uri/ /index.php?$query_string;
>         if ( $forward_to_gray != '' ) {
>             proxy_pass http://$forward_to_gray$request_uri;
>             break;
>         }
>     }
>
> According to the above configuration, my php request can be routed through a
> cookie, but the static resource will report 404, I don't know if there is a
> problem with my configuration.

From nginx-forum at forum.nginx.org Sun Aug 11 10:26:11 2019
From: nginx-forum at forum.nginx.org (atulsharma1989)
Date: Sun, 11 Aug 2019 06:26:11 -0400
Subject: =?UTF-8?Q?Re=3A_NGINX_open_source_reverse_proxy_is_not_connected__with_sam?= =?UTF-8?Q?e_ip_as_incoming_request_=2C_it=E2=80=99s_changing_ip_due_to_th?= =?UTF-8?Q?is_client_connnection_is_not_happening_-_UDP?=
In-Reply-To: <20190807105333.GV1877@mdounin.ru>
References: <20190807105333.GV1877@mdounin.ru>
Message-ID: <70d70770b9f63b5d96a0e479d80cc008.NginxMailingListEnglish@forum.nginx.org>

I am a registered user and we are planning to upgrade; it means we are on the right track... thanks for the response.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285136,285186#msg-285186

From nginx-forum at forum.nginx.org Mon Aug 12 05:14:46 2019
From: nginx-forum at forum.nginx.org (blason)
Date: Mon, 12 Aug 2019 01:14:46 -0400
Subject: Can we use JWT authentication with Nginx Open source version?
Message-ID: <596a74e6773041d5f876ed824fc6475d.NginxMailingListEnglish@forum.nginx.org>

Hi Folks,

I was referring to a lot of other articles on the internet, and it seems that
JWT authentication is only possible with the Nginx Plus version; wondering if
this is possible with the Nginx open source version as well?

TIA
Blason R

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285190,285190#msg-285190

From nginx-forum at forum.nginx.org Mon Aug 12 08:44:46 2019
From: nginx-forum at forum.nginx.org (Danila)
Date: Mon, 12 Aug 2019 04:44:46 -0400
Subject: Nginx + ldap auth
Message-ID: 

Hello, I have nginx 1.16.0 and some modules: nginx-auth-ldap,
nginx-dav-ext-module, headers-more-nginx-module, nginx-upload-module.
I am trying to do ldap auth on some directory.
config http { ####Block_integration_with_ldap ############## ldap_server mydomain{ url "ldap://mydomain:3268/DC=mydimain,DC=local?sAMAccountName?sub?(objectClass=person)"; binddn 'admin at mydomain.local'; binddn_passwd 'adm_pass'; require valid_user; } ldap_server mydomain2{ url "ldap://mydomain:3268/DC=mydimain,DC=local?sAMAccountName?sub?(objectClass=person)"; require user "CN=test,DC=MYDOMAIN,DC=LOCAL"; group_attribute uniquemember; group_attribute_is_dn on; referral on; } ############Block log ######################## log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; ##############Block gzip settings######################## gzip on; gzip_comp_level 2; gzip_vary on; gzip_min_length 1; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript application/json; ########################################################## sendfile on; tcp_nopush on; charset utf-8; keepalive_timeout 65; include /etc/nginx/mime.types; default_type application/octet-stream; include /etc/nginx/conf.d/*.conf; } With first auth "mydomain" on location / all Ok. But With second auth "mydomain2" on location /user ask login and password but not work Log: http_auth_ldap: Initial bind failed (49: Invalid credentials [80090308: LdapErr: DSID-0C090400, comment: AcceptSecurityContext error, data 52e, v1db1]) 49: Invalid credentials talk about incorrect password. But i sure what password is correct. Has anyone had such problems? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285191,285191#msg-285191 From nginx-forum at forum.nginx.org Mon Aug 12 10:53:15 2019 From: nginx-forum at forum.nginx.org (rambabuy) Date: Mon, 12 Aug 2019 06:53:15 -0400 Subject: new connection for POST request and keepalive idle connection for GET Message-ID: <58ea693fedc4f8e7947db819d0408865.NginxMailingListEnglish@forum.nginx.org> Hi I am facing some issue with POST requests. I am adding some delay to POST request , after completing processing , it trying to use idle connection where its idle timeout is near to zero. then upstream close/reset connection . causing nginx 502 error. How can I use idle connections to GET requests and new connection to POST requests ? Thanks Ram Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285193,285193#msg-285193 From koocr at mailc.net Mon Aug 12 16:37:49 2019 From: koocr at mailc.net (koocr at mailc.net) Date: Mon, 12 Aug 2019 09:37:49 -0700 Subject: How to get nginx + uwsgi to exec, not display, perl cgi script? Message-ID: <1cdee3da-90b5-4d9c-b0e1-f52c25d93e51@www.fastmail.com> Hi all. I'm setting up a local Git server, with Gitweb + Gitolite. The gitolite wrapper is installed & working. Now I'm working on the Gitweb frontend. I run Nginx as my webserver. Usually with PHP, using fpm. Gitweb's gitweb.cgi looks like it needs perl CGI. 
For perl cgi I'm trying to get it working with UWSGI, https://uwsgi-docs.readthedocs.io/en/latest/Nginx.html https://nginx.org/en/docs/http/ngx_http_uwsgi_module.html#example I installed git --version git version 2.22.0 ls -al /usr/share/gitweb/gitweb.cgi -rwxr-xr-x 1 root root 247K Jul 24 05:27 /usr/share/gitweb/gitweb.cgi grep "\$version =" /usr/share/gitweb/gitweb.cgi our $version = "2.22.0"; nginx -v nginx version: nginx/1.17.1 uwsgi --version 2.0.18 I set up the nginx vhost server { listen 127.0.0.1:60080 http2; root /usr/share/gitweb; index gitweb.cgi; location / { try_files $uri $uri/ @gitweb; } location @gitweb { root /usr/share/gitweb; include uwsgi_params; gzip off; uwsgi_param UWSGI_SCRIPT gitweb; uwsgi_param GITWEB_CONFIG /etc/gitweb/gitweb.conf; uwsgi_pass unix:/run/uwsgi/uwsgi.sock; uwsgi_modifier1 5; } } and the uwsgi server /etc/uwsgi/uwsgi.ini [uwsgi] strict = 1 master = true processes = 2 binary-path = /usr/sbin/uwsgi plugin-dir = /usr/lib64/uwsgi logto = /var/log/uwsgi/uwsgi.log uid = wwwrun gid = www umask = 022 uwsgi-socket = /run/uwsgi/uwsgi.sock chmod-socket = 660 chown-socket = wwwrun:www plugins = http,psgi chdir = /usr/share/gitweb psgi = gitweb.cgi nginx & uwsgi services are both running ps aux | egrep "nginx|uwsgi" wwwrun 17463 0.0 0.1 89468 23704 ? Ss 07:03 0:00 /usr/sbin/uwsgi --autoload --ini /etc/uwsgi/uwsgi.ini wwwrun 17465 0.0 0.1 97664 17184 ? Sl 07:03 0:00 /usr/sbin/uwsgi --autoload --ini /etc/uwsgi/uwsgi.ini wwwrun 17468 0.0 0.1 97664 17184 ? Sl 07:03 0:00 /usr/sbin/uwsgi --autoload --ini /etc/uwsgi/uwsgi.ini root 18006 0.0 0.0 211264 4276 ? Ss 07:10 0:00 nginx: master process /opt/nginx/sbin/nginx -c /etc/nginx/nginx.conf -g pid /run/nginx.pid; wwwrun 18007 0.0 0.0 211416 5492 ? S 07:10 0:00 nginx: worker process wwwrun 18008 0.0 0.0 212068 10300 ? S 07:10 0:00 nginx: worker process wwwrun 18009 0.0 0.0 211416 5492 ? S 07:10 0:00 nginx: worker process wwwrun 18011 0.0 0.0 211416 5492 ? S 07:10 0:00 nginx: worker process wwwrun 18012 0.0 0.0 211452 5052 ? S 07:10 0:00 nginx: cache manager process ls -al /run/uwsgi/uwsgi.sock srw-rw---- 1 wwwrun www 0 Aug 12 07:03 /run/uwsgi/uwsgi.sock= when I go to the site http://127.0.0.1:60080/ I just get the script listing in the browser #!/usr/bin/perl # gitweb - simple web interface to track changes in git repositories # # (C) 2005-2006, Kay Sievers # (C) 2005, Christian Gierke # # This program is licensed under the GPLv2 use 5.008; use strict; use warnings; ... no errors anywhere, just the script display. I'm missing something basic since it's not running the script. :-/ Anyone have any experience with gitweb + uwsgi on nginx? Or know a good working example? Thanks! From nginx-forum at forum.nginx.org Mon Aug 12 22:14:34 2019 From: nginx-forum at forum.nginx.org (rjonesatl) Date: Mon, 12 Aug 2019 18:14:34 -0400 Subject: Configuring NGINX to proxy IBM MQ Message-ID: I would like to configure NGINX as a reverse proxy for (WebSphere) IBM MQ message broker. I need to support bi-directional messaging, including both P2P and pub/sub. Clients are JMS clients. Is it possible to configure NGINX to support such a configuration? What would such an NGINX configuration look like? And what would the JMS client connection string look like? I can't seem to find this information anywhere. Any help is greatly appreciated. 
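The only direction I can think of so far is a plain TCP pass-through in the stream module, something like the sketch below (the host name and the 1414 listener port are just my guesses), but I don't know whether that is even the right approach for JMS clients:

stream {
    upstream mq {
        server mq-host.example.com:1414;
    }
    server {
        listen 1414;
        proxy_pass mq;
    }
}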
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285206,285206#msg-285206 From wizard at bnnorth.net Tue Aug 13 03:35:12 2019 From: wizard at bnnorth.net (Ken Wright) Date: Mon, 12 Aug 2019 23:35:12 -0400 Subject: 502 Bad Gateway Message-ID: <4f90b7cd-5260-d34c-df7e-bc0d699ec543@bnnorth.net> I'm running nginx 1.14.0 on Ubuntu Server 18.04 with PHP 7.2.19 and as of this morning I'm getting 502 errors when I try to log into Nextcloud (16.0.3, if it matters).? I know I've seen fixes for 502 before, but nothing I've been able to find thus far has helped.? Further information available on request, if anyone wants to help.? Thanks in advance! Ken -- Registered Linux user #483005 If you ever think international relations make sense, remember this: because a Serb shot an Austrian in Bosnia, Germany invaded Belgium. From mdounin at mdounin.ru Tue Aug 13 11:44:58 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Aug 2019 14:44:58 +0300 Subject: 502 Bad Gateway In-Reply-To: <4f90b7cd-5260-d34c-df7e-bc0d699ec543@bnnorth.net> References: <4f90b7cd-5260-d34c-df7e-bc0d699ec543@bnnorth.net> Message-ID: <20190813114458.GJ1877@mdounin.ru> Hello! On Mon, Aug 12, 2019 at 11:35:12PM -0400, Ken Wright wrote: > I'm running nginx 1.14.0 on Ubuntu Server 18.04 with PHP 7.2.19 and as > of this morning I'm getting 502 errors when I try to log into Nextcloud > (16.0.3, if it matters).? I know I've seen fixes for 502 before, but > nothing I've been able to find thus far has helped.? Further information > available on request, if anyone wants to help.? Thanks in advance! The 502 error suggests that your backend isn't responding properly. nginx error log might contain some additional details about the problem, though in general you have to look into what's happened with your backend and how to fix it. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Tue Aug 13 13:20:47 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 13 Aug 2019 14:20:47 +0100 Subject: How to get nginx + uwsgi to exec, not display, perl cgi script? In-Reply-To: <1cdee3da-90b5-4d9c-b0e1-f52c25d93e51@www.fastmail.com> References: <1cdee3da-90b5-4d9c-b0e1-f52c25d93e51@www.fastmail.com> Message-ID: <20190813132047.l4oq7fnscjfhndov@daoine.org> On Mon, Aug 12, 2019 at 09:37:49AM -0700, koocr at mailc.net wrote: Hi there, > I run Nginx as my webserver. Usually with PHP, using fpm. > > Gitweb's gitweb.cgi looks like it needs perl CGI. > > For perl cgi I'm trying to get it working with UWSGI, Why? UWSGI and CGI are different things. For what it's worth, when I search Google for "nginx gitweb", the first few results all suggest to use "fastcgi". (Which is also different from CGI; but there are some well-known fastcgi-wrapper services that handle those differences.) When I search for "nginx gitweb uwsgi" there are not a lot of immediately-obviously-relevant results. So if the aim is "run gitweb, behind nginx", then probably "use fastcgi" is the path of least resistance. If the aim is to use uwsgi, then you will probably want to investigate how to make *this* cgi script accessible via the uwsgi protocol -- maybe there is a generic uwsgi/cgi wrapping tool; or maybe this cgi script has a works-with-another-protocol mode. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Aug 13 13:53:25 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 13 Aug 2019 14:53:25 +0100 Subject: Can we use JWT authentication with Nginx Open source version? 
In-Reply-To: <596a74e6773041d5f876ed824fc6475d.NginxMailingListEnglish@forum.nginx.org> References: <596a74e6773041d5f876ed824fc6475d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190813135325.z7ecawite7ltwqzv@daoine.org> On Mon, Aug 12, 2019 at 01:14:46AM -0400, blason wrote: Hi there, > I was referring lot of other articles on internet and seems that jwt > authentication is only possible with Nginx plus version; wondering if this > is possible with Nginx Open source version as well? When I search in Google for "nginx jwt", the first few results are on nginx.com domains which eventually refer to http://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html which says it is in the commercial subscription. The next few results are on github.com domains; one is a third-party module which claims to "do" jwt; and another is a Lua script that does the same in conjunction with the "openresty" distribution of nginx. Perhaps one of those can be used to do what you want? Good luck with it, f -- Francis Daly francis at daoine.org From xeioex at nginx.com Tue Aug 13 16:10:33 2019 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 13 Aug 2019 19:10:33 +0300 Subject: njs-0.3.4 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release proceeds to extend the coverage of ECMAScript specifications. Apart from specs conformance fuzzing under Memory-Sanitizer is introduced which allowed to catch new types of bugs. Notable new features: - Shorthand method names (ES2015): : > ({foo(){return 123}}).foo() // ({foo:function(){return 123}}) : 123 - Computed property names (ES2015) : > ({['b' + 'ar']:123}).bar : 123 - added getter/setter literal support: : > ({get foo(){return 123}}).foo : 123 : > ({get ['f' + 'oo'](){return 123}}).foo : 123 You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.3.4 13 Aug 2019 Core: *) Feature: added Object shorthand methods and computed property names. Thanks to ??? (Hong Zhi Dao) and Artem S. Povalyukhin. *) Feature: added getter/setter literal support. Thanks to ??? (Hong Zhi Dao) and Artem S. Povalyukhin. *) Feature: added fs.renameSync(). *) Feature: added String.prototype.trimStart() and String.prototype.trimEnd(). *) Improvement: added memory-sanitizer support. *) Improvement: Unicode case tables updated to version 12.1. *) Improvement: added UTF8 validation for string literals. *) Bugfix: fixed reading files with zero size in fs.readFileSync(). *) Bugfix: extended the list of space separators in String.prototype.trim(). *) Bugfix: fixed using of uninitialized value in String.prototype.padStart(). *) Bugfix: fixed String.prototype.replace() for '$0' and '$&' replacement string. *) Bugfix: fixed String.prototype.replace() for byte strings with regex argument. *) Bugfix: fixed global match in String.prototype.replace() with regexp argument. *) Bugfix: fixed Array.prototype.slice() for primitive types. *) Bugfix: fixed heap-buffer-overflow while importing module. *) Bugfix: fixed UTF-8 character escaping. *) Bugfix: fixed Object.values() and Object.entries() for shared objects. *) Bugfix: fixed uninitialized memory access in String.prototype.match(). *) Bugfix: fixed String.prototype.match() for byte strings with regex argument. 
*) Bugfix: fixed Array.prototype.lastIndexOf() with undefined arguments. *) Bugfix: fixed String.prototype.substring() with empty substring. *) Bugfix: fixed invalid memory access in String.prototype.substring(). *) Bugfix: fixed String.fromCharCode() for code points > 65535 and NaN. *) Bugfix: fixed String.prototype.toLowerCase() and String.prototype.toUpperCase(). *) Bugfix: fixed Error() constructor with no arguments. *) Bugfix: fixed "in" operator for values with accessor descriptors. *) Bugfix: fixed Object.defineProperty() for non-boolean descriptor props. *) Bugfix: fixed Error.prototype.toString() with UTF8 string properties. *) Bugfix: fixed Error.prototype.toString() with non-string values for "name" and "message". From mdounin at mdounin.ru Tue Aug 13 17:03:53 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Aug 2019 20:03:53 +0300 Subject: nginx-1.17.3 Message-ID: <20190813170353.GN1877@mdounin.ru> Changes with nginx 1.17.3 13 Aug 2019 *) Security: when using HTTP/2 a client might cause excessive memory consumption and CPU usage (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516). *) Bugfix: "zero size buf" alerts might appear in logs when using gzipping; the bug had appeared in 1.17.2. *) Bugfix: a segmentation fault might occur in a worker process if the "resolver" directive was used in SMTP proxy. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Aug 13 17:04:16 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Aug 2019 20:04:16 +0300 Subject: nginx-1.16.1 Message-ID: <20190813170416.GR1877@mdounin.ru> Changes with nginx 1.16.1 13 Aug 2019 *) Security: when using HTTP/2 a client might cause excessive memory consumption and CPU usage (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516). -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Aug 13 17:04:40 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Aug 2019 20:04:40 +0300 Subject: nginx security advisory (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516) Message-ID: <20190813170440.GV1877@mdounin.ru> Hello! Several security issues were identified in nginx HTTP/2 implementation, which might cause excessive memory consumption and CPU usage (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516). The issues affect nginx compiled with the ngx_http_v2_module (not compiled by default) if the "http2" option of the "listen" directive is used in a configuration file. The issues affect nginx 1.9.5 - 1.17.2. The issues are fixed in nginx 1.17.3, 1.16.1. Thanks to Jonathan Looney from Netflix for discovering these issues. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Aug 13 18:20:34 2019 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 13 Aug 2019 14:20:34 -0400 Subject: [nginx-announce] nginx-1.16.1 In-Reply-To: <20190813170422.GS1877@mdounin.ru> References: <20190813170422.GS1877@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.16.1 for Windows https://kevinworthington.com/nginxwin1161 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available here: Twitter http://twitter.com/kworthington Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington On Tue, Aug 13, 2019 at 1:05 PM Maxim Dounin wrote: > Changes with nginx 1.16.1 13 Aug > 2019 > > *) Security: when using HTTP/2 a client might cause excessive memory > consumption and CPU usage (CVE-2019-9511, CVE-2019-9513, > CVE-2019-9516). > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue Aug 13 18:20:43 2019 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 13 Aug 2019 14:20:43 -0400 Subject: [nginx-announce] nginx-1.17.3 In-Reply-To: <20190813170358.GO1877@mdounin.ru> References: <20190813170358.GO1877@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.17.3 for Windows https://kevinworthington.com/nginxwin1173 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington On Tue, Aug 13, 2019 at 1:04 PM Maxim Dounin wrote: > Changes with nginx 1.17.3 13 Aug > 2019 > > *) Security: when using HTTP/2 a client might cause excessive memory > consumption and CPU usage (CVE-2019-9511, CVE-2019-9513, > CVE-2019-9516). > > *) Bugfix: "zero size buf" alerts might appear in logs when using > gzipping; the bug had appeared in 1.17.2. > > *) Bugfix: a segmentation fault might occur in a worker process if the > "resolver" directive was used in SMTP proxy. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx+phil.pennock at spodhuis.org Tue Aug 13 19:14:18 2019 From: nginx+phil.pennock at spodhuis.org (Phil Pennock) Date: Tue, 13 Aug 2019 15:14:18 -0400 Subject: 3rd party module move: nginx-openssl-version Message-ID: <20190813191417.GA9640@spodhuis.org> This is about a third-party module: nginx-openssl-version and its sudden new home. Back when HeartBleed struck, I wrote an nginx module to provide for configuration to be able to specify a minimum acceptable version of the OpenSSL library and turn non-matches into fatal configuration errors, trading off availability for security. I know that a few people started using it. It's not massively popular, but it is used. My employer at the time was Apcera and the module was published under their GitHub repo. Apcera was purchased a few years ago, and today the new owner suddenly closed all non-fork GitHub repos without notice. A few people have forks; the code has not seen updates, but only because it _works_ and hasn't needed changes. I still routinely build nginx using this module. If there are further changes needed, then I will make my changes available under the same (MIT) license. 
Since I wrote the code in the first place, I think that I can get away with decreeing that my GitHub fork is now the canonical home. https://github.com/PennockTech/nginx-openssl-version Replace `--add-module` references: old: github.com/apcera/nginx-openssl-version new: github.com/PennockTech/nginx-openssl-version I will submit a wiki PR shortly. Thanks for reading, -Phil From francis at daoine.org Tue Aug 13 21:50:15 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 13 Aug 2019 22:50:15 +0100 Subject: Nginx + ldap auth In-Reply-To: References: Message-ID: <20190813215015.2n5bflm42bqgrqed@daoine.org> On Mon, Aug 12, 2019 at 04:44:46AM -0400, Danila wrote: Hi there, > Hello i have nginx 1.16.0 and some modules: nginx-auth-ldap, > nginx-dav-ext-module, headers-more-nginx-module, nginx-upload-module. > ldap_server mydomain{ > url > "ldap://mydomain:3268/DC=mydimain,DC=local?sAMAccountName?sub?(objectClass=person)"; > binddn 'admin at mydomain.local'; > binddn_passwd 'adm_pass'; > require valid_user; > } You report that that one works. Note that it does have a binddn and a binddn_passwd. > ldap_server mydomain2{ > url > "ldap://mydomain:3268/DC=mydimain,DC=local?sAMAccountName?sub?(objectClass=person)"; > require user "CN=test,DC=MYDOMAIN,DC=LOCAL"; > group_attribute uniquemember; > group_attribute_is_dn on; > referral on; > } You report that that one fails on the initial bind. It has no binddn and no binddn_passwd. If you copy the matching lines from the other block to here, does that make a difference? (Or: if you remove the bind* lines from the first block, does that one stay working?) Note that nginx-auth-ldap is not in stock-nginx; possibly the documentation for whatever module you are using will have more information. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Aug 14 05:01:30 2019 From: nginx-forum at forum.nginx.org (Radjin) Date: Wed, 14 Aug 2019 01:01:30 -0400 Subject: 502 Bad Gateway In-Reply-To: <20190813114458.GJ1877@mdounin.ru> References: <20190813114458.GJ1877@mdounin.ru> Message-ID: I am also having this problem on my Linux box I run at home. I had the webserver running perfectly then followed directions to activate virtual hosting. That also started out working perfectly for a while then suddenly I was getting the 502 error when launching anything from the static site. I run Wordpress and Piwigo in frames within my static site. I have tried the sock to :9000 listen option with no change. nginx version: nginx/1.14.2 PHP 7.3.4-2 (cli) (built: Apr 13 2019 19:05:48) ( NTS ) Copyright (c) 1997-2018 The PHP Group Zend Engine v3.3.4, Copyright (c) 1998-2018 Zend Technologies with Zend OPcache v7.3.4-2, Copyright (c) 1999-2018, by Zend Technologies Linux webserver 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 Any help would be much appreciated. I am quite the noob when it comes to setting up a raw Linux webserver so am learning as I go. Radjin~ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285209,285261#msg-285261 From wizard at bnnorth.net Wed Aug 14 05:47:23 2019 From: wizard at bnnorth.net (Ken Wright) Date: Wed, 14 Aug 2019 01:47:23 -0400 Subject: 502 Bad Gateway In-Reply-To: References: Message-ID: Maxim and anyone else who cares to chime in, I'm still enough of a newbie that I have trouble understanding the error logs.? 
The one for nginx reads the following at the end: 2019/08/12 22:48:51 [error] 8274#8274: *1 upstream sent too big header while reading response header from upstream, client: 192.168.1.133, server: _, request: "GET /nextcloud/index.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "192.168.1.101", referrer: "http://192.168.1.101/nextcloud/" I don't understand how to make the header smaller.? I really don't understand what's going on; nginx says it's working, and php shows the phpinfo page, but when I actually try to run an application nothing works! Ken -- Registered Linux user #483005 If you ever think international relations make sense, remember this: because a Serb shot an Austrian in Bosnia, Germany invaded Belgium. From iippolitov at nginx.com Wed Aug 14 07:37:30 2019 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Wed, 14 Aug 2019 10:37:30 +0300 Subject: 502 Bad Gateway In-Reply-To: References: Message-ID: Ken, Try setting 'proxy_buffer_size' to a higher value. Say 128k. On 14.08.2019 8:47, Ken Wright wrote: > Maxim and anyone else who cares to chime in, > > I'm still enough of a newbie that I have trouble understanding the error > logs.? The one for nginx reads the following at the end: > > 2019/08/12 22:48:51 [error] 8274#8274: *1 upstream sent too big header > while reading response header from upstream, client: 192.168.1.133, > server: _, request: "GET /nextcloud/index.php HTTP/1.1", upstream: > "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "192.168.1.101", > referrer: "http://192.168.1.101/nextcloud/" > > I don't understand how to make the header smaller.? I really don't > understand what's going on; nginx says it's working, and php shows the > phpinfo page, but when I actually try to run an application nothing works! > > Ken > From nginx-forum at forum.nginx.org Wed Aug 14 08:11:31 2019 From: nginx-forum at forum.nginx.org (Radjin) Date: Wed, 14 Aug 2019 04:11:31 -0400 Subject: 502 Bad Gateway In-Reply-To: References: Message-ID: <4a05942406d62f3c0432d021550e54c3.NginxMailingListEnglish@forum.nginx.org> > understand what's going on; nginx says it's working, and php shows the phpinfo page Thanks for that. Reading your comment about the phpinfo page made me repoint my root to the default html directory root web directory to test for the phpinfo page and I do not get it. I checked my sights-available files in case I had left out a comment or had commented out a critical item but it appeared ok. I checked the syntax with sudo ?nginx -t? and it was ok. So what could I have broke while creating the virtual hosts that would have stopped php and how can I restart it? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285209,285264#msg-285264 From nginx-forum at forum.nginx.org Wed Aug 14 19:34:50 2019 From: nginx-forum at forum.nginx.org (hhypnos) Date: Wed, 14 Aug 2019 15:34:50 -0400 Subject: Caching Method Message-ID: <01a1e772b71d5615783b32996c7cb29e.NginxMailingListEnglish@forum.nginx.org> I have configured nginx to cache static content, but i cant see any file in caching folder, also when i'm opening page in DevTool on network tab it show me: Response Header: access-control-allow-credentials: true access-control-allow-methods: GET access-control-allow-origin: http://google.com cache-control: no-cache, private cf-ray: 50655a0c2cf0d413-BUD content-encoding: br content-type: text/html; charset=UTF-8 date: Wed, 14 Aug 2019 19:31:55 GMT expect-ct: max-age=604800 set-cookie: XSRF-TOKEN=eyJpdiI6ImdoMmRFbmFnTldMSUUrYUlJZm1LWmc9PSIsInZhbHVlIjoiUkY1akJlK0FHRktnYkZMeFVxRXA5dnZHSXlwTTlrdlwvVHhvMkpHcFlUbTBab0xYaEpCbm1RVmdEQWNVT0NKdVwvIiwibWFjIjoiZDliYWI2NjQ1MTliOGZkZTA1OWU4OTlkYTFlYjBlY2ExYmI5NjQ0OGI0YjVkMDFiMmI0ODUzOWYxMGI5MTUwZCJ9; expires=Wed, 14-Aug-2019 21:31:54 GMT; Max-Age=7200; path=/ set-cookie: watchbox_session=(heregoesmycookies); expires=Wed, 14-Aug-2019 21:31:54 GMT; Max-Age=7200; path=/; httponly status: 200 vary: Accept-Encoding Request Header: :authority: watchbox.ge :method: GET :path: / :scheme: https accept: image/webp,image/apng,image/*,*/*;q=0.8 accept-encoding: gzip, deflate, br accept-language: en-US,en;q=0.9 cache-control: no-cache cookie: __cfduid=dfbc73480dfead53f4eabc68bb277065c1565737966; remember_web_59ba36addc2b2f9401580f014c7f58ea4e30989d=eyJpdiI6IjE3cGJtVGk3RkY2NGU0cVJwOXhsa0E9PSIsInZhbHVlIjoiZDJYUm1TRVlXZHpKd0t0Znh1UFYxVmpRWmJJMm8wRzVFVVwvMzlrcHdSXC9QNVkxd2NOWklZcWJvOElCRHpHVDQ3YjVcL1Q2alIxMjhqeHY5emJIV2pcL0txMGFPMW5pK1JpVTB1cjN0eGdhaEdscjZIN1JodDluNXAyV1JsbUhZUHIwU3lsUFpEUzE1VlFZZ0NrVFEya3hTRVREcjJmdHRvU3JSUFo5ZlQ1NjgxVT0iLCJtYWMiOiI3MzBjMjQzYjgwMjlmMzY2ZTI4NDdhYTgzOTE1YWVlYjc5OTAwN2ViODIxM2NmZTRmY2RiODIyYTMzYjJjYWMxIn0%3D; XSRF-TOKEN=eyJpdiI6IiswV0VGeThJdmlySHV5TzZ1MTF6XC9RPT0iLCJ2YWx1ZSI6Ik9xdzFjOTlKczhTamlVbExkdEJqeVFvRExTcVdMRlZVaXpoM3ppY0tWb0pWUHRzOWJ5ZlhHTlF5VnRcL05HRTZsIiwibWFjIjoiM2VhYTAyYjBjNjFkYzkyYjEzOWE4NTEzMTQyZDQyMWMxMTY2MzJkMjQ4ZTc5MjljZTI4ZTNmODg4NzZjNmE2ZiJ9; watchbox_session=(heregoesmysession) pragma: no-cache referer: https://watchbox.ge/ user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36 My nginx configuration: ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; proxy_cache_path /etc/nginx/cached_content levels=1:2 keys_zone=watchbox_cache:10m max_size=4g inactive=24h; server { listen 80; listen [::]:80; server_name 46.101.166.170; # Redirect all traffic comming from your-server-ip to your domain return 301 $scheme://watchbox.ge; } server { listen 80; server_name www.watchbox.ge watchbox.ge; return 301 https://watchbox.ge$request_uri; #redirect to https } server { add_header 'X-GG-Cache-Status' $upstream_cache_status; add_header 'Access-Control-Allow-Origin' 'http://google.com'; add_header 'Access-Control-Allow-Credentials' 'true'; add_header 'Access-Control-Allow-Methods' 'GET'; # SSL configuration # listen 443 ssl; ssl_certificate /etc/nginx/ssl/watchbox.ge/certificate.crt; ssl_certificate_key /etc/nginx/ssl/watchbox.ge/private.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers AES256+EECDH:AES256+EDH:!aNULL; #IMPROVE PERFOMANCE OF PAGE WITH GZIP + CACHING gzip on; gzip_comp_level 5; 
gzip_min_length 256; gzip_proxied any; gzip_vary on; gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; # text/html is always compressed by gzip module location ~* \.(jpg|jpeg|png|gif|ico|css|js|pdf)$ { expires 7d; } #END IMPROVE PERFOMANCE OF PAGE WITH GZIP + CACHING root /var/www/watchbox.ge/public_html; # Add index.php to the list if you are using PHP index index.php index.html index.htm index.nginx-debian.html; server_name www.watchbox.ge watchbox.ge; keepalive_timeout 70; location / { add_header Access-Control-Allow-Origin "*"; proxy_pass https://watchbox.ge; proxy_set_header Host $host; proxy_buffering on; proxy_cache watchbox_cache; proxy_ignore_headers Cache-Control; proxy_cache_valid 200 1d; proxy_ignore_headers "Set-Cookie"; proxy_hide_header "Set-Cookie"; proxy_cache_valid 200 1d; proxy_cache_min_uses 3; proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment; proxy_cache_revalidate on; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; proxy_cache_background_update on; proxy_cache_lock on; try_files $uri $uri/ /index.php$is_args$args; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { #try_files $uri =404; include snippets/fastcgi-php.conf; # # # With php7.0-cgi alone: # fastcgi_pass 127.0.0.1:9000; # With php7.0-fpm: fastcgi_pass unix:/run/php/php7.2-fpm.sock; # TIMEOUT fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; #fastcgi_read_timeout 300; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285284,285284#msg-285284 From r at roze.lv Wed Aug 14 21:30:13 2019 From: r at roze.lv (Reinis Rozitis) Date: Thu, 15 Aug 2019 00:30:13 +0300 Subject: Caching Method In-Reply-To: <01a1e772b71d5615783b32996c7cb29e.NginxMailingListEnglish@forum.nginx.org> References: <01a1e772b71d5615783b32996c7cb29e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000201d552e7$75148620$5f3d9260$@roze.lv> > I have configured nginx to cache static content, but i cant see any file in caching > folder, also when i'm opening page in DevTool on network tab it show Unless you have somehow messed up the configuration in the email, something like: server { listen 443 ssl; server_name www.watchbox.ge watchbox.ge; location / { proxy_pass https://watchbox.ge; } doesn't make sense (to me) as it would make an infinite nested loop - the server proxies itself (unless internally the watchbox.ge resolves to some other server). Also by looking at the headers: cf-ray: 50655a0c2cf0d413-BUD is there also CloudFlare somewhere in-between? Maybe the object is already cached on CF and isn't even requested from the origin server? At least in the response I don't see this header at all: add_header 'X-GG-Cache-Status' $upstream_cache_status; I would suggest to try with simplified configuration. It's quite hard to help in this case. 
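As a starting point, something as bare as this (just a sketch - 'backend.internal' is a placeholder for a separate upstream and must not point back at this same server block):

proxy_cache_path /etc/nginx/cached_content levels=1:2 keys_zone=watchbox_cache:10m max_size=4g inactive=24h;

server {
    listen 443 ssl;
    server_name www.watchbox.ge watchbox.ge;
    # ssl_certificate / ssl_certificate_key as you already have them

    location / {
        proxy_pass http://backend.internal;
        proxy_cache watchbox_cache;
        proxy_ignore_headers Cache-Control Set-Cookie;
        proxy_cache_valid 200 1d;
        add_header X-GG-Cache-Status $upstream_cache_status;
    }
}

If X-GG-Cache-Status starts showing MISS/HIT with that, add the rest back piece by piece.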
rr

From wizard at bnnorth.net Wed Aug 14 22:59:26 2019
From: wizard at bnnorth.net (Ken Wright)
Date: Wed, 14 Aug 2019 18:59:26 -0400
Subject: 502 Bad Gateway
In-Reply-To: 
References: 
Message-ID: <40abd274-1d55-6c93-866b-e0618f319c30@bnnorth.net>

On 8/14/19 3:37 AM, Igor A. Ippolitov wrote:
> Ken,
>
> Try setting 'proxy_buffer_size' to a higher value. Say 128k.

Umm, what file would I find that in? I've seen so many similar
statements lately I can't keep them straight. Sorry for being so dumb!

Ken

--
Registered Linux user #483005

If you ever think international relations make sense, remember this:
because a Serb shot an Austrian in Bosnia, Germany invaded Belgium.

From nginx-forum at forum.nginx.org Thu Aug 15 02:55:36 2019
From: nginx-forum at forum.nginx.org (justcode)
Date: Wed, 14 Aug 2019 22:55:36 -0400
Subject: CORS Error
Message-ID: 

Here is the code:

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    fastcgi_param SCRIPT_FILENAME /var/www/test/public$fastcgi_script_name;

    #CORS SETTINGS
    add_header 'Access-Control-Allow-Origin' '*' always;
    add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, OPTIONS, DELETE';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range,Authorization';
}

ERROR
invalid number of arguments in "add_header"

CAUSE
add_header 'Access-Control-Allow-Origin' '*' always;
when I do add the word "always"

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285289,285289#msg-285289

From iippolitov at nginx.com Thu Aug 15 04:52:13 2019
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Thu, 15 Aug 2019 07:52:13 +0300
Subject: 502 Bad Gateway
In-Reply-To: <40abd274-1d55-6c93-866b-e0618f319c30@bnnorth.net>
References: <40abd274-1d55-6c93-866b-e0618f319c30@bnnorth.net>
Message-ID: 

Ken,

proxy_buffer_size and proxy_buffers are very similar. These configure buffers
allocated for a response from an upstream. Both are documented here:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html . Please, have a look.

The difference is that the proxy_buffer_size buffer is always allocated to read
the response header. Then nginx allocates memory for the response body using
'proxy_buffers' rules.

So, where to put 'proxy_buffer_size': put it right next to your 'proxy_pass'
statement. This should work.

Regards,
Igor

On 15.08.2019 1:59, Ken Wright wrote:
> On 8/14/19 3:37 AM, Igor A. Ippolitov wrote:
>> Ken,
>>
>> Try setting 'proxy_buffer_size' to a higher value. Say 128k.
> Umm, what file would I find that in? I've seen so many similar
> statements lately I can't keep them straight. Sorry for being so dumb!
>
> Ken
>

From nginx-forum at forum.nginx.org Thu Aug 15 13:05:42 2019
From: nginx-forum at forum.nginx.org (TC_Hessen)
Date: Thu, 15 Aug 2019 09:05:42 -0400
Subject: nginx-1.17.3 and TLS v1.3
Message-ID: 

Hi,

I am new to this forum, but not new to nginx. I am running multiple debian
servers (stretch) with nginx 1.14.1 and TLS 1.3 support, i.e.

nginx version: nginx/1.14.1
built with OpenSSL 1.1.0f 25 May 2017 (running with OpenSSL 1.1.1c 28 May 2019)
TLS SNI support enabled

To protect the servers against the new bugs, I tried to upgrade directly to
1.17.3 provided by nginx.org.
That works without any problems, but TLS 1.3 is not running anymore:

nginx version: nginx/1.17.3
built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
built with OpenSSL 1.1.0j 20 Nov 2018 (running with OpenSSL 1.1.1c 28 May 2019)
TLS SNI support enabled

Where is the error?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285294,285294#msg-285294

From targon at technologist.com Thu Aug 15 13:30:31 2019
From: targon at technologist.com (targon at technologist.com)
Date: Thu, 15 Aug 2019 21:30:31 +0800
Subject: nginx-1.17.3 and TLS v1.3
In-Reply-To: 
References: 
Message-ID: <5D91E862-DB06-41A5-97E2-D2D02FB6D4A5@technologist.com>

I suggest you consider investigating Intel's Clear Linux.

https://docs.01.org/clearlinux/latest/index.html
https://docs.01.org/clearlinux/latest/about.html#
https://docs.01.org/clearlinux/latest/reference/bundles/bundles.html

Read specifically about swupd and bundles. This is a 'Stateless' OS.

In particular to your issues: on Clear Linux you'd install the nginx-mainline
bundle; all the source packages and dependencies are tested with the bundle
before distribution to swupd. For example, the nginx-mainline bundle version
requires lib-openssl, and only the compatible, tested lib-openssl package
version will be included.

This strategy eliminates all those fragmented dependency issues of every other
Linux distro, where you install nginx but you've no real idea what openssl
version is going to work with it.

Admittedly, Clear Linux is a little unfamiliar at first but give it a try;
there are far fewer headaches to deal with than with the other 'popular'
distros.

Apologies for not addressing your issue directly.

> On 15 Aug 2019, at 21:05, TC_Hessen wrote:
>
> Hi,
>
> I am new to this forum, but not new to nginx. I am running multiple debian
> servers (stretch) with nginx 1.14.1 and TLS 1.3 support, i.e.
>
> nginx version: nginx/1.14.1
> built with OpenSSL 1.1.0f 25 May 2017 (running with OpenSSL 1.1.1c 28 May
> 2019)
> TLS SNI support enabled
>
> To prevent the servers agains the new bugs, I tried to upgrade directly to
> 1.17.3 provided by nginx.org.
That works without any problems, but TLS 1.3 > is not running anymore: > > nginx version: nginx/1.17.3 > built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) > built with OpenSSL 1.1.0j 20 Nov 2018 (running with OpenSSL 1.1.1c 28 May > 2019) > TLS SNI support enabled > > Where is the error? OS you are using is shipped with OpenSSL 1.1.0j, and nginx is built with this old OpenSSL version. As such, TLSv1.3 is not available. There was a bug which made TLSv1.3 always enabled when was compiled with OpenSSL 1.1.0 and running with OpenSSL 1.1.1, it was fixed in nginx 1.15.6 and 1.14.2 (quote from http://nginx.org/en/CHANGES-1.14): *) Bugfix: if nginx was built with OpenSSL 1.1.0 and used with OpenSSL 1.1.1, the TLS 1.3 protocol was always enabled. Since you were using nginx 1.14.1 previously, TLS 1.3 was enabled due to this bug. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Aug 15 16:28:59 2019 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 15 Aug 2019 12:28:59 -0400 Subject: Proxy stream pop3 110 to secure 995 Message-ID: Considering this example https://docs.nginx.com/nginx/admin-guide/security-controls/securing-tcp-traffic-upstream/ How would you stream plain unsecured pop3 traffic to a secure endpoint elsewhere ? (without the backend certificates) ea. stream { listen 110; proxy_ssl on; proxy_pass site.com:995; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285302,285302#msg-285302 From xeioex at nginx.com Thu Aug 15 17:06:55 2019 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 15 Aug 2019 20:06:55 +0300 Subject: njs-0.3.5 Message-ID: This is a bugfix release that eliminates heap-use-after-free introduced in 0.3.4. What installations are affected: - Importing built-in modules (crypto, fs) using require(). You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.3.5 15 Aug 2019 Core: *) Bugfix: fixed module importing using require(). The bug was introduced in 0.3.4. *) Bugfix: fixed [[SetPrototypeOf]]. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Aug 15 23:38:14 2019 From: nginx-forum at forum.nginx.org (hhypnos) Date: Thu, 15 Aug 2019 19:38:14 -0400 Subject: Caching Method In-Reply-To: <000201d552e7$75148620$5f3d9260$@roze.lv> References: <000201d552e7$75148620$5f3d9260$@roze.lv> Message-ID: <8d20779f19419bfd2fb65b8619c217db.NginxMailingListEnglish@forum.nginx.org> can i use catching without proxy_pass? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285284,285310#msg-285310 From randy at randy.cc Fri Aug 16 05:01:50 2019 From: randy at randy.cc (Randy Johnson) Date: Fri, 16 Aug 2019 01:01:50 -0400 Subject: Location Rewrite Issue Message-ID: Here is the locations part of my nginx host file: server { root /var/www/html; index index.php index.html index.htm index.nginx-debian.html; location / { try_files $uri $uri/ @extensionless-php; } location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass unix:/var/run/php/php7.2-fpm.sock; } location @extensionless-php { rewrite ^(.*)$ $1.php last; } I tried adding the following line in there in a couple different places but all it does is download the php file. 
location /blog { rewrite ^/blog/([A-Za-z0-9-]+)/?$ /blog-article.php?slug=$1 break; } In addition it does not load the /blog page. It throws a 404. I am not quite sure what I need to do to get it working. Thank You, Randy -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Aug 16 06:22:31 2019 From: nginx-forum at forum.nginx.org (hhypnos) Date: Fri, 16 Aug 2019 02:22:31 -0400 Subject: Caching Method In-Reply-To: <000201d552e7$75148620$5f3d9260$@roze.lv> References: <000201d552e7$75148620$5f3d9260$@roze.lv> Message-ID: hi , can i use catching without proxy_pass? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285284,285309#msg-285309 From francis at daoine.org Fri Aug 16 07:43:05 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Aug 2019 08:43:05 +0100 Subject: Caching Method In-Reply-To: References: <000201d552e7$75148620$5f3d9260$@roze.lv> Message-ID: <20190816074305.vdojd6nbjpnim55r@daoine.org> On Fri, Aug 16, 2019 at 02:22:31AM -0400, hhypnos wrote: Hi there, > hi , can i use catching without proxy_pass? proxy_cache is for nginx to cache the response from a proxy_pass request to an upstream server. fastcgi_cache is for nginx to cache the response from a fastcgi_pass request to an upstream server. "Sending http headers so that the client can be invited to cache the response from nginx" is independent of both. So the answer is "yes, depending on what exactly you want to do". f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 16 08:02:09 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Aug 2019 09:02:09 +0100 Subject: CORS Error In-Reply-To: References: Message-ID: <20190816080209.em4m2gahhjduodr5@daoine.org> On Wed, Aug 14, 2019 at 10:55:36PM -0400, justcode wrote: Hi there, > add_header 'Access-Control-Allow-Origin' '*' always; > add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, OPTIONS, DELETE'; > ERROR > invalid number of arguments in "add_header" > > CAUSE > add_header 'Access-Control-Allow-Origin' '*' always; > when I do add the word "always" If add_header 'Access-Control-Allow-Origin' '*' always; causes stock-nginx to reply "invalid number of arguments", and just removing the "always" causes nginx to accept the configuration, then (from http://nginx.org/r/add_header) you probably have nginx older than 1.7.5. What does nginx -v or nginx -V say? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Aug 16 08:25:19 2019 From: nginx-forum at forum.nginx.org (b8077691) Date: Fri, 16 Aug 2019 04:25:19 -0400 Subject: nginx-1.16.1 In-Reply-To: <20190813170416.GR1877@mdounin.ru> References: <20190813170416.GR1877@mdounin.ru> Message-ID: Can the patches be safely applied on the nginx-1.14.2? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285234,285314#msg-285314 From r at roze.lv Fri Aug 16 09:14:50 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 16 Aug 2019 12:14:50 +0300 Subject: Location Rewrite Issue In-Reply-To: References: Message-ID: <000201d55413$0ef35ec0$2cda1c40$@roze.lv> > I tried adding the following line in there in a couple different places but all it does is download the php file. > > location /blog { > rewrite ^/blog/([A-Za-z0-9-]+)/?$ /blog-article.php?slug=$1 break; > } Try to switch from 'break' to 'last'. By using 'break' it means that nginx stops the rewrite and also doesn't search for any other location so the request doesn't land in the 'location ~ \.php$' and is never processed by php. 
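To illustrate, the block from the earlier message would then read like this (only the flag changes, everything else stays as posted):

location /blog {
    rewrite ^/blog/([A-Za-z0-9-]+)/?$ /blog-article.php?slug=$1 last;
}

With 'last' the rewritten URI /blog-article.php?slug=... is matched against the locations once more, so it ends up in 'location ~ \.php$' and gets passed to php-fpm.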
rr From randy at randy.cc Fri Aug 16 17:01:29 2019 From: randy at randy.cc (Randy Johnson) Date: Fri, 16 Aug 2019 13:01:29 -0400 Subject: Location Rewrite Issue In-Reply-To: <000201d55413$0ef35ec0$2cda1c40$@roze.lv> References: <000201d55413$0ef35ec0$2cda1c40$@roze.lv> Message-ID: Thank you. That was indeed the issue. Now I can see the individual blog entries at /blog/slug-of-blog but /blog and /blog/ urls are both throwing a 404. Is that an easy fix? -Randy On Fri, Aug 16, 2019 at 5:15 AM Reinis Rozitis wrote: > > I tried adding the following line in there in a couple different places > but all it does is download the php file. > > > > location /blog { > > rewrite ^/blog/([A-Za-z0-9-]+)/?$ /blog-article.php?slug=$1 break; > > } > > Try to switch from 'break' to 'last'. > > By using 'break' it means that nginx stops the rewrite and also doesn't > search for any other location so the request doesn't land in the 'location > ~ \.php$' and is never processed by php. > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tiago4orion at gmail.com Fri Aug 16 17:56:41 2019 From: tiago4orion at gmail.com (Tiago Natel de Moura) Date: Fri, 16 Aug 2019 18:56:41 +0100 Subject: Unit process isolation / namespaces Message-ID: Hi, I would like to present a new feature I'm working on that adds OS based process isolation to Unit. For now, it implements just the basic building block of containers: Linux namespaces. Let me know what you think, if it's useful or not, etc. To start using it, you just need to add a new "isolation" field to your app's config: { "type": "external", "executable": "/bin/app", "isolation": { "namespaces": { "user": true, "mount": true } } } The list of allowed namespaces are: user, mount, network, pid, uts, cgroup. The ipc namespace is not allowed because Unit uses shared memory to communicate with workers. In the future, if Unit could proxy general processes (and manage them also), we can allow the ipc namespace as well, them giving full isolation. Linux namespaces require CAP_SYS_ADMIN to be created if not used in conjunction with user namespace. Then, if you want to keep running Unit as an unprivileged user, you need to set "user" namespace in addition to the other flags. The PR is here (still working on it): https://github.com/nginx/unit/pull/289 When using user namespace, you can set mapping files for uid and gid ranges inside the namespace. For uid, the file is /proc//uid_map and for gid it is /proc//gid_map. Then, you can map an unprivileged user id in the host (parent ns) to a privileged id inside the child namespace. I added two config fields for this mappings. { "isolation": { "namespaces": { "user": true, "mount": true }, "uidmap": [ {"containerID": 0, "hostID": 1000, "size": 1} ], "gidmap": [ {"containerID": 0, "hostID": 1000, "size": 1} ], } The config is an array because you can map several ranges. For now, if you don't set a map config, Unit will use a common default (the example above, but using process current euid instead of 1000). Some distributions come with an /etc/subuid and /etc/subgid file with application's mappings. We can make unit lookup for a mapping from this file also in the future. 
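(For reference, entries in /etc/subuid and /etc/subgid are plain name:start:count triples, one per line, e.g.

unit:100000:65536

would delegate uids 100000-165535 to a host user named "unit" - the user name and range here are only an example, not something Unit ships.)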
The config is based on the OCI Spec: https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#user-namespace-mappings I don't like it much, let me know if you know a better way of configuring it. The uid/gid mapping affects the user and group you pass in the application config. Then, my first question: If the user pass a "user" or "group" that's not mapped inside the container, what should we do? I would like to keep user experience very simple, but having to deal with uid/gid mappings seems a bit complex. What do you folks think about doing some auto mappings in case the user pass a user from host (without setting any mapping)? Is this confuse? If you think it's useful, what can be the next steps? I would like to add a "rootfs" field to chroot applications, also a "mounts" field to mount additional filesystems inside the rootfs (kernfs, tmpfs, procfs and also user defined bind mounts from the host filesystem). About the isolation mechanism, I did some experiments with FreeBSD jails and maybe we can deliver something useful there also. Jails are significantly more secure than Linux namespaces, and I think we can implement it relatively easy. That's all folks! From nginx-forum at forum.nginx.org Fri Aug 16 18:15:22 2019 From: nginx-forum at forum.nginx.org (benztoy) Date: Fri, 16 Aug 2019 14:15:22 -0400 Subject: nginx 1.17.3 and TLSv1.3 Message-ID: <05dbf7d1ed8ac27bf5327d1028cc16ac.NginxMailingListEnglish@forum.nginx.org> I want to run two nginx services on one host. They are nginxA and nginxB nginxA listening on https443 port. Only the tslv1.3 protocol is available. The configuration file is as follows: ##################### #user nobody; Worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; Events { ????Worker_connections 1024; } Http { ????Include mime.types; ????Default_type application/octet-stream; ????#log_format main '$remote_addr - $remote_user [$time_local] "$request" ' ????# '$status $body_bytes_sent "$http_referer" ' ????# '"$http_user_agent" "$http_x_forwarded_for"'; ????#access_log logs/access.log main; ????Sendfile on; ????#tcp_nopush on; ????#keepalive_timeout 0; ????Keepalive_timeout 65; ????#gzip on; ????Server { ????????Listen 80; ????????Server_name localhost; ????????#charset koi8-r; ????????#access_log logs/host.access.log main; ????????Location / { ????????????Root html; ????????????Index index.html index.htm; ????????} ????????#error_page 404 /404.html; ????????# redirect server error pages to the static page /50x.html ????????# ????????Error_page 500 502 503 504 /50x.html; ????????Location = /50x.html { ????????????Root html; ????????} ????????#?? the PHP scripts to Apache listening on 127.0.0.1:80 ????????# ????????#location ~ \.php$ { ????????# proxy_pass http://127.0.0.1; ????????#} ????????# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 ????????# ????????#location ~ \.php$ { ????????# root html; ????????# fastcgi_pass 127.0.0.1:9000; ????????# fastcgi_index index.php; ????????# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; ????????#include fastcgi_params; ????????#} ????????# deny access to .htaccess files, if Apache's document root ????????# concurs with nginx's one ????????# ????????#location ~ /\.ht { ????????# deny all; ????????#} ????} ????# another virtual host using mix of IP-, name-, and port-based configuration ????# ????#server { ????#? 8000; ????#? 
somename:8080; ????# server_name somename alias another.alias; ????# location / { ????# root html; ????# index index.html index.htm; ????# } ????#} ????# HTTPS server ????# ????Server { ????????Listen 443 ssl; ????????Server_name localhost; ????????Ssl_certificate cert.pem; ????????Ssl_certificate_key cert.key; ????????Ssl_session_cache shared: SSL: 1m; ????????Ssl_session_timeout 5m; Ssl_protocols TLSv1.3; Ssl_ciphers TLS13-AES-128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4; ????????Ssl_prefer_server_ciphers on; ????????Location / { ????????????Root html; ????????????Index index.html index.htm; ????????} ????} } ############### nginxB listening on the https444 port. Just provide the proxy function, redirect to the https443 port(nginxA), and only provide the tslv1.3 protocol, the configuration file is as follows: ############### #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443 ssl; # server_name localhost; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} server { listen 444 ssl; server_name localhost; ssl_certificate cert.pem; ssl_certificate_key cert.key; ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3; ssl_ciphers TLS13-AES-128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4; ssl_prefer_server_ciphers on; location / { proxy_pass https://127.0.0.1/; proxy_ssl_session_reuse off; } } } ############### In fact, their relationship is nginxB(listening on 444 tslv1.3)=> proxy => nginxA(listening on 443 tslv1.3 ) But when I visit https://127.0.0.1:444 Return to 502 Bad Gateway Among them, nginx serving port 444 has error.log: SSL_do_handshake() failed (SSL: error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol 
version:SSL alert number 70) while SSL handshaking to upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1 ", upstream: "https://127.0.0.1:443/", host: "127.0.0.1:444" Dear friends, What is the reason for this? My first service ssl protocol version of nginxA must be tslv1.3 only. There is no other lower version. Can I successfully access https://127.0.0.1:444 by modifying the nginxA or nginxB configuration file? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285325,285325#msg-285325 From mdounin at mdounin.ru Fri Aug 16 18:32:40 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 Aug 2019 21:32:40 +0300 Subject: nginx 1.17.3 and TLSv1.3 In-Reply-To: <05dbf7d1ed8ac27bf5327d1028cc16ac.NginxMailingListEnglish@forum.nginx.org> References: <05dbf7d1ed8ac27bf5327d1028cc16ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190816183240.GO1877@mdounin.ru> Hello! On Fri, Aug 16, 2019 at 02:15:22PM -0400, benztoy wrote: > I want to run two nginx services on one host. They are nginxA and nginxB > nginxA listening on https443 port. Only the tslv1.3 protocol is available. > The configuration file is as follows: [...] > ????Server { > ????????Listen 443 ssl; > ????????Server_name localhost; > > ????????Ssl_certificate cert.pem; > ????????Ssl_certificate_key cert.key; > > ????????Ssl_session_cache shared: SSL: 1m; > ????????Ssl_session_timeout 5m; > Ssl_protocols TLSv1.3; So only TLSv1.3 is enabled on the 443 port. [...] > location / { > proxy_pass https://127.0.0.1/; > proxy_ssl_session_reuse off; > } And no proxy_ssl_protocols set for proxying, so it only has TLSv1, TLSv1.1, and TLSv1.2 enabled by default. [...] > But when I visit https://127.0.0.1:444 > Return to 502 Bad Gateway > Among them, nginx serving port 444 has error.log: > SSL_do_handshake() failed (SSL: error:1409442E:SSL > routines:ssl3_read_bytes:tlsv1 alert protocol version:SSL alert number 70) > while SSL handshaking to upstream, client: 127.0.0.1, server: localhost, > request: "GET / HTTP/1.1 ", upstream: "https://127.0.0.1:443/", host: > "127.0.0.1:444" > > > Dear friends, What is the reason for this? > My first service ssl protocol version of nginxA must be tslv1.3 only. There > is no other lower version. Can I successfully access https://127.0.0.1:444 > by modifying the nginxA or nginxB configuration file? The problem is that you are trying to connect to a TLSv1.3-only port by using the proxy not configured to use TLSv1.3. You have to enable TLSv1.3 in your proxy configuration, something like: proxy_ssl_protocol TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; should work. See http://nginx.org/r/proxy_ssl_protocols for additional details. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Sat Aug 17 01:05:05 2019 From: nginx-forum at forum.nginx.org (benztoy) Date: Fri, 16 Aug 2019 21:05:05 -0400 Subject: nginx 1.17.3 and TLSv1.3 In-Reply-To: <20190816183240.GO1877@mdounin.ru> References: <20190816183240.GO1877@mdounin.ru> Message-ID: The problem has been solved, thank you very much Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285325,285327#msg-285327 From beckmann.maik at googlemail.com Sat Aug 17 08:23:46 2019 From: beckmann.maik at googlemail.com (Maik Beckmann) Date: Sat, 17 Aug 2019 10:23:46 +0200 Subject: try_files doubleslash mystery Message-ID: Hi everyone, I'm putting some hours into better understanding how Nginx works. While doing so I came across something that I can't explain nor find anything about in web-searches. 
To reproduce on your Linux machine (I'm using Arch, if you want to check how nginx was build on their Website) do the following: With your normal user account (we are never root in this), enter /tmp or a place in your home folder where you like to experiment. Execute these commands - mkdir try_files_test && cd try_files_test - mkdir public blog-public client-body fastcgi uwsgi scgi - touch nginx.conf - echo "does not matter here" > public/index.html - echo "Blog" > blog-public/index.html Put this content into the nginx.conf ## ## pid "nginx.pid"; daemon off; events { worker_connections 1024; } error_log /dev/stdout debug; http { client_body_temp_path "client-body"; fastcgi_temp_path "fastcgi"; uwsgi_temp_path "uwsgi"; scgi_temp_path "scgi"; access_log /dev/stdout; rewrite_log on; root "public"; server { listen 8080; location / { return 200 "Homepage\n"; } location /blog { root "blog-public"; set $foo /; try_files $foo $foo/ $foo/index.html =404; } } } ## ## and start it with - nginx -p $PWD -c nginx.conf When requesting / via curl, we get "Hompage" as expected. However, if we request /blog/ we get "Homepage as well. For the convinence, here to curl command - curl -i http://localhost:8080/blog/ If we change the try_files line inside /blog's location blog try_files $foo $foo/index.html =404; by removing the $foo/ the curl request returns the intended "Blog". Instead of putting in the $foo/ again, we just put // there, like this try_files $foo // $foo/index.html =404; and we get "Homepage" for an /blog/ request again. Now my Question: Is there something about double slash as the $uri that causes nginx to do a magical internal redirect? I don't understand. Thanks in advance for your time and have a good weekend if you're reading this today. Regards Maik Beckmann -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at cretaforce.gr Sat Aug 17 17:17:37 2019 From: chris at cretaforce.gr (Christos Chatzaras) Date: Sat, 17 Aug 2019 20:17:37 +0300 Subject: LiteSpeed 5.4 vs Nginx 1.16 benchmarks Message-ID: <911B9207-EF07-47DB-B778-09959AD6CF6B@cretaforce.gr> Today I read this post: http://www.webhostingtalk.com/showthread.php?t=1775139 In their changelog ( https://www.litespeedtech.com/products/litespeed-web-server/release-log ) I see that they did changes related to HTTP/2. Any idea how they did it? From francis at daoine.org Sat Aug 17 18:04:23 2019 From: francis at daoine.org (Francis Daly) Date: Sat, 17 Aug 2019 19:04:23 +0100 Subject: try_files doubleslash mystery In-Reply-To: References: Message-ID: <20190817180423.6dymvhs6njvzbaxe@daoine.org> On Sat, Aug 17, 2019 at 10:23:46AM +0200, Maik Beckmann via nginx wrote: Hi there, > location /blog { > root "blog-public"; > set $foo /; > try_files $foo $foo/ $foo/index.html =404; > } > When requesting / via curl, we get "Hompage" as expected. However, if we > request /blog/ we get "Homepage as well. > Now my Question: Is there something about double slash as the $uri that > causes nginx to do a magical internal redirect? I don't understand. It is not "double slash". It is "the argument to try files (before variable expansion) ends in slash". https://nginx.org/r/try_files """ The path to a file is constructed from the file parameter according to the root and alias directives. It is possible to check directory?s existence by specifying a slash at the end of a name, e.g. ?$uri/?. """ $foo does not end in slash, so try_files looks for a file of that (expanded) name, and fails to find it. 
$foo/ does end in slash, so try_files looks for a directory of that (expanded) name, and finds it and serves it (which involves a subrequest/internal redirect). f -- Francis Daly francis at daoine.org From r at roze.lv Sat Aug 17 20:42:17 2019 From: r at roze.lv (Reinis Rozitis) Date: Sat, 17 Aug 2019 23:42:17 +0300 Subject: Location Rewrite Issue In-Reply-To: References: <000201d55413$0ef35ec0$2cda1c40$@roze.lv> Message-ID: <000001d5553c$42832f70$c7898e50$@roze.lv> > Thank you. That was indeed the issue. Now I can see the individual blog entries at /blog/slug-of-blog > > but /blog and /blog/ urls are both throwing a 404. > > Is that an easy fix? > >> rewrite ^/blog/([A-Za-z0-9-]+)/?$ /blog-article.php?slug=$1 break; You have to tweak the regex - currently it expects that there will be always '/' after blog and something afterwards. Depending on what you actually want to pass in the slug something like this might work: rewrite ^/blog(/.*)? /blog-article.php?slug=$1 last; Or if the if the first slash is not needed then ^/blog/?(.*)? (writing out of head so testing required) rr From beckmann.maik at googlemail.com Sun Aug 18 09:40:59 2019 From: beckmann.maik at googlemail.com (Maik Beckmann) Date: Sun, 18 Aug 2019 11:40:59 +0200 Subject: try_files doubleslash mystery In-Reply-To: <20190817180423.6dymvhs6njvzbaxe@daoine.org> References: <20190817180423.6dymvhs6njvzbaxe@daoine.org> Message-ID: Am Sa., 17. Aug. 2019 um 20:04 Uhr schrieb Francis Daly : > """ > The path to a file is constructed from the file parameter according to > the root and alias directives. It is possible to check directory?s > existence by specifying a slash at the end of a name, e.g. ?$uri/?. > """ > > $foo does not end in slash, so try_files looks for a file of that > (expanded) name, and fails to find it. > > $foo/ does end in slash, so try_files looks for a directory of that > (expanded) name, and finds it and serves it (which involves a > subrequest/internal redirect). > Thank you for your answer, Francis. I've read the quoted documentation about try_files before. That it searches for an directory with $uri/ . I did not understand that it isn't about the content of the variable, but only about the appended slash. Thank you. What I still don't understand is how the existence of an directory results in subrequest/internal redirect. Is that documented somewhere? Have a good Sunday. Regards Maik Beckmann PS: It has been years since I've used mailing lists. I do not trust GMail to handle it properly nowadays. It used to "just work". Sorry if I mess up. From mark.mielke at gmail.com Sun Aug 18 16:27:30 2019 From: mark.mielke at gmail.com (Mark Mielke) Date: Sun, 18 Aug 2019 12:27:30 -0400 Subject: LiteSpeed 5.4 vs Nginx 1.16 benchmarks In-Reply-To: <911B9207-EF07-47DB-B778-09959AD6CF6B@cretaforce.gr> References: <911B9207-EF07-47DB-B778-09959AD6CF6B@cretaforce.gr> Message-ID: Any idea how they did what? Misconfigure Nginx and use an obsolete distro version of Nginx? ? On Sat., Aug. 17, 2019, 1:17 p.m. Christos Chatzaras, wrote: > Today I read this post: > > http://www.webhostingtalk.com/showthread.php?t=1775139 > > In their changelog ( > https://www.litespeedtech.com/products/litespeed-web-server/release-log ) > I see that they did changes related to HTTP/2. > > Any idea how they did it? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lucas at lucasrolff.com Sun Aug 18 21:00:09 2019 From: lucas at lucasrolff.com (Lucas Rolff) Date: Sun, 18 Aug 2019 21:00:09 +0000 Subject: LiteSpeed 5.4 vs Nginx 1.16 benchmarks In-Reply-To: References: <911B9207-EF07-47DB-B778-09959AD6CF6B@cretaforce.gr>, Message-ID: > Misconfigure Nginx Which parts are misconfigured? If I run the tests and tweak it to use CloudFlares suggested SSL settings for example then it still doesn?t really change anything. And I?d assume CloudFlare want good SSL performance. So I?m curious what settings would be configured wrong, at least they accept PR?s to correct the config in case it?s wrong. > and use an obsolete distro version of Nginx? ? It uses nginx stable repository, which isn?t exactly obsolete. - Lucas Get Outlook for iOS ________________________________ From: nginx on behalf of Mark Mielke Sent: Sunday, August 18, 2019 6:27:30 PM To: nginx at nginx.org Subject: Re: LiteSpeed 5.4 vs Nginx 1.16 benchmarks Any idea how they did what? Misconfigure Nginx and use an obsolete distro version of Nginx? ? On Sat., Aug. 17, 2019, 1:17 p.m. Christos Chatzaras, > wrote: Today I read this post: http://www.webhostingtalk.com/showthread.php?t=1775139 In their changelog ( https://www.litespeedtech.com/products/litespeed-web-server/release-log ) I see that they did changes related to HTTP/2. Any idea how they did it? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From zeeshanopel at gmail.com Mon Aug 19 06:53:56 2019 From: zeeshanopel at gmail.com (Zeeshan Opel) Date: Mon, 19 Aug 2019 11:53:56 +0500 Subject: Help required Message-ID: Dear members, need your help regarding below nginx configuration. I have below config on my nginx server but I am unable to rewrite the link. Please help me. worker_processes 1; worker_rlimit_nofile 30000; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; server_name 192.168.17.53;#This is my nginx server rewrite ^ 172.17.5.157/mailsvr/mail$ 172.17.5.157/mail permanent;#trying to rewrite 172.17.5.157/mailsvr/mail to 172.17.5.157 /mail client_max_body_size 100m; location /{ proxy_pass http://172.17.5.157; proxy_redirect off; proxy_buffering off; proxy_set_header X-Real-IP $192.168.17.53; proxy_set_header X-Forwarded-For $172.17.5.157; proxy_set_header Host $host; proxy_set_header X-Forwarded-Proto $scheme; #proxy_pass $authenticated; proxy_read_timeout 240; }#end location }#end server }#end http -- Zeeshan Qaiser Opel +92-301-8446630 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Aug 19 12:48:28 2019 From: nginx-forum at forum.nginx.org (ramber) Date: Mon, 19 Aug 2019 08:48:28 -0400 Subject: SMTP Proxy - STARTTLS offer on per IP base Message-ID: <5e2beb511dcccc6f799d8c37b0e4c919.NginxMailingListEnglish@forum.nginx.org> Hello list, We've setup a nginx reverse smtp proxy to load balance incoming access to our mailservers. Everything is fine... until Some remote sites have broken tls setups and can't deliver mails anymore. Some didn't accept Let's Encrypt as CA for instance. Now I'm searching a way to not provide STARTTLS to them. The AUTH Methode is to late here because it will be started after "rcpto to:". 
Is there way to call an "Auth Script" after Client-Helo and decide whether dto send STARTTLS Option or not? I know i can do some redirect with the firewall but i would like to add some logic to the desition to provide STARTTLS or not. Tnx for reading . /ramber Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285338,285338#msg-285338 From vincent26 at email.com Mon Aug 19 17:35:57 2019 From: vincent26 at email.com (Vincent Chen) Date: Mon, 19 Aug 2019 19:35:57 +0200 Subject: openssl engine is not initialized properly Message-ID: An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Aug 19 22:32:25 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Aug 2019 01:32:25 +0300 Subject: openssl engine is not initialized properly In-Reply-To: References: Message-ID: <20190819223225.GR1877@mdounin.ru> Hello! On Mon, Aug 19, 2019 at 07:35:57PM +0200, Vincent Chen wrote: > Hi, > > I am trying to implement an openssl (1.1.1c) engine. However, after the > openssl is initialized by nginx 1.17.2, the engine does not initialized > properly. When I am using 'openssl' command it works file. > > After a bit debugging, I realized that nginx 1.17.2 initialize openssl > with function call 'OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL)'. > However, inside openssl function OPENSSL_init_crypto() (called from > OPENSSL_init_ssl), it needs the following flags to register all openssl > functions: > ``` > > if (opts & (OPENSSL_INIT_ENGINE_ALL_BUILTIN > > | OPENSSL_INIT_ENGINE_OPENSSL > > | OPENSSL_INIT_ENGINE_AFALG)) { > > ENGINE_register_all_complete(); > > } > > ``` > > The easiest way to fix this issue is to initialize openssl with > multiple flags like 'OPENSSL_INIT_LOAD_CONFIG > | OPENSSL_INIT_ENGINE_ALL_BUILTIN'. Will there be a fix in near future > about this issue? Unlikely. To load engines, you can use OpenSSL config, or the "ssl_engine" directive in nginx configuration, see http://nginx.org/r/ssl_engine. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Aug 20 09:53:23 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Aug 2019 12:53:23 +0300 Subject: SMTP Proxy - STARTTLS offer on per IP base In-Reply-To: <5e2beb511dcccc6f799d8c37b0e4c919.NginxMailingListEnglish@forum.nginx.org> References: <5e2beb511dcccc6f799d8c37b0e4c919.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190820095323.GX1877@mdounin.ru> Hello! On Mon, Aug 19, 2019 at 08:48:28AM -0400, ramber wrote: > We've setup a nginx reverse smtp proxy to load balance incoming access to > our mailservers. > Everything is fine... until > > Some remote sites have broken tls setups and can't deliver mails anymore. > Some didn't accept Let's Encrypt as CA for instance. > Now I'm searching a way to not provide STARTTLS to them. > The AUTH Methode is to late here because it will be started after "rcpto > to:". > Is there way to call an "Auth Script" after Client-Helo and decide whether > dto send STARTTLS Option or not? > > I know i can do some redirect with the firewall but i would like to add some > logic to the desition to provide STARTTLS or not. No, there is no way to conditionally provide STARTTLS or not. STARTTLS is always provided as long as it is enabled in the relevant server block. 
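For reference, the switch lives in the mail server block, roughly like this (certificate paths are placeholders and this is not a complete proxy config):

mail {
    server {
        listen              25;
        protocol            smtp;
        starttls            on;    # advertised to every client on this listener
        ssl_certificate     /etc/ssl/example.crt;
        ssl_certificate_key /etc/ssl/example.key;
    }
}

So the only granularity available is per listen socket - e.g. a second server block on another port with "starttls off;" that selected clients are steered to at the firewall level, as mentioned earlier in the thread.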
-- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Tue Aug 20 13:23:31 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Aug 2019 14:23:31 +0100 Subject: try_files doubleslash mystery In-Reply-To: References: <20190817180423.6dymvhs6njvzbaxe@daoine.org> Message-ID: <20190820132331.i7ubxjjvfvnnzgp7@daoine.org> On Sun, Aug 18, 2019 at 11:40:59AM +0200, Maik Beckmann via nginx wrote: > Am Sa., 17. Aug. 2019 um 20:04 Uhr schrieb Francis Daly : Hi there, > What I still don't understand is how the existence of an directory > results in subrequest/internal redirect. Is that documented > somewhere? The 100% accurate documentation is in the directory "src"; but it may not be in an easy-to-understand format. So I'd suggest trying to understand what try_files is intended to do. (Which is approximately: use this file if it exists. The subtleties will matter sometimes.) And what should it mean for the administrator to ask it to "try" a directory? I expect it means something like "use this directory as the response to the request". And "use a directory" is implemented by the filesystem handler as "make a subrequest for the first element of 'index' that exists in that directory". f -- Francis Daly francis at daoine.org From vincent26 at email.com Tue Aug 20 14:07:58 2019 From: vincent26 at email.com (Vincent Chen) Date: Tue, 20 Aug 2019 16:07:58 +0200 Subject: openssl engine is not initialized properly In-Reply-To: <20190819223225.GR1877@mdounin.ru> References: <20190819223225.GR1877@mdounin.ru> Message-ID: An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Aug 20 19:18:42 2019 From: nginx-forum at forum.nginx.org (grayaii) Date: Tue, 20 Aug 2019 15:18:42 -0400 Subject: limit_req_zone based on subdomain key Message-ID: <469e225c52d835f6d71f5cd72282fdca.NginxMailingListEnglish@forum.nginx.org> We want to rate limit based on subdomain. Is this possible? If so, how do you it? Basically, what goes here? limit_req_zone $XXXXXXX zone=mylimit:10m rate=10r/s; server { location /login/ { limit_req zone=mylimit; proxy_pass http://my_upstream; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285355,285355#msg-285355 From nginx-forum at forum.nginx.org Wed Aug 21 10:30:08 2019 From: nginx-forum at forum.nginx.org (ramber) Date: Wed, 21 Aug 2019 06:30:08 -0400 Subject: SMTP Proxy - STARTTLS offer on per IP base In-Reply-To: <20190820095323.GX1877@mdounin.ru> References: <20190820095323.GX1877@mdounin.ru> Message-ID: Hello Maxim, tnx for your answer. Is there maybe a way to use the lua stuff and set the starttls option dynamicly on some conditions or will the config only read ones during startup? -- Ramber Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285338,285359#msg-285359 From nginx-forum at forum.nginx.org Thu Aug 22 11:44:24 2019 From: nginx-forum at forum.nginx.org (glareboa) Date: Thu, 22 Aug 2019 07:44:24 -0400 Subject: Is there a limitation in nginx on the number of simultaneous via proxy_pass Message-ID: <5aba4c73dc2b4e43eab84657f13d6069.NginxMailingListEnglish@forum.nginx.org> Is there a limitation in nginx on the number of simultaneous via proxy_pass "http://192.168.1.2:90xx"? >From the source http://192.168.1.2:90xx MJPEG is transmitted. Speed 200 kB/sec. The browser displays only 6 video streams (id=1 ... id=6). Threads id=7 ... id=9 not visible. index.html -----------------------------
nginx.conf --------------------------- http { ... server { listen 80 default_server; ... location /aaa/ { proxy_pass "http://192.168.1.2:9000"; } location /bbb/ { proxy_pass "http://192.168.1.2:9001"; } location /ccc/ { proxy_pass "http://192.168.1.2:9002"; } ... location /xxx/ { proxy_pass "http://192.168.1.2:9012"; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285362,285362#msg-285362 From nginx-forum at forum.nginx.org Thu Aug 22 12:00:38 2019 From: nginx-forum at forum.nginx.org (fevangelou) Date: Thu, 22 Aug 2019 08:00:38 -0400 Subject: Setting a custom header based on another upstream/sent header Message-ID: Hi there, I'm trying (unsuccessfully) to read an upstream/sent response header and set an additional one based on some regex. Let's say I want to check if the site served is WordPress. WordPress will usually output a link header like this: link: ; rel="https://api.w.org/", ; rel=shortlink So if I did this: [code] set $IS_WORDPRESS "false"; # Now lookup "wp-json" in the (response) link header if ($sent_http_link ~* "wp-json") { set $IS_WORDPRESS "true"; } add_header X-Sent-Header "$sent_http_link"; add_header X-Is-WordPress $IS_WORDPRESS; [/code] You'd probably expect to see 2 headers output here, but in reality you only get 1: x-is-wordpress: false x-sent-header is empty and is not output. Additionally, the regex does not match at all, that's why x-is-wordpress returns false. Now dig this. If I comment out the if block, the "$sent_http_link" value is output just fine x-sent-header: ; rel="https://api.w.org/", ; rel=shortlink x-is-wordpress: false It's as if the sent header is nulled if I just call it! Is this expected behaviour? Could there be another way to do this? The same happens if I use $upstream_http_X as this is a proxy setup (Nginx to Apache). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285363,285363#msg-285363 From nginx-forum at forum.nginx.org Thu Aug 22 12:05:05 2019 From: nginx-forum at forum.nginx.org (aiaa5505) Date: Thu, 22 Aug 2019 08:05:05 -0400 Subject: =?UTF-8?Q?Nginx_agent_grpc_error_=C2=A0?= Message-ID: <2acf1964360063f5e1ffd936b900901d.NginxMailingListEnglish@forum.nginx.org> [error] 8#8: *256283 upstream rejected request with error 0 while reading response header from upstream, HTTP/2.0 I encountered such an error during the process of using nginx The version I am using is 1.17.3 docker image ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285364,285364#msg-285364 From r at roze.lv Thu Aug 22 12:26:12 2019 From: r at roze.lv (Reinis Rozitis) Date: Thu, 22 Aug 2019 15:26:12 +0300 Subject: Setting a custom header based on another upstream/sent header In-Reply-To: References: Message-ID: <000001d558e4$c8d29790$5a77c6b0$@roze.lv> > Is this expected behaviour? Could there be another way to do this? 'if' (the rewrite module) is executed in early stages when $sent_* variables are not available that?s why the regex doesn't match. What you could do is use 'map' instead: (http://nginx.org/en/docs/http/ngx_http_map_module.html ) map $sent_http_link $IS_WORDPRESS { ~*.*wp-json.* "true"; default "false"; } And then: add_header X-Is-WordPress $IS_WORDPRESS; rr From nginx-forum at forum.nginx.org Thu Aug 22 12:37:48 2019 From: nginx-forum at forum.nginx.org (b8077691) Date: Thu, 22 Aug 2019 08:37:48 -0400 Subject: nginx-1.16.1 In-Reply-To: References: <20190813170416.GR1877@mdounin.ru> Message-ID: <45d2b1037d56b2acf2a2ed9b07695401.NginxMailingListEnglish@forum.nginx.org> Hello, Can you please comment on the question? 
Or at least say that there are no guarantees of safety in applying the patches on the nginx-1.14.2 branch? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285234,285366#msg-285366 From jdfischer at cedreo.com Thu Aug 22 13:22:38 2019 From: jdfischer at cedreo.com (Jean-Daniel FISCHER) Date: Thu, 22 Aug 2019 15:22:38 +0200 Subject: Automatic trailing slash redirect and scheme Message-ID: Hi, I an trying to set the sheme used in automatic redirect generates by nginx when trailing slash is missing. The nginx server is behind a proxy that handles ssl, hence all requests are made using http so nginx use http in absolute redirect. Is there a way to configure nginx to use the value of "$http_x_forwarded_proto" ? The server conf: server { listen 8080; server_name _; gzip on; gzip_disable "msie6"; root /usr/share/nginx/www; # Prevent redirect to have port 8080 port_in_redirect off; # 404 error_page 404 /404.html; # Redir auto to http if ($http_x_forwarded_proto = http) { return 301 https://$host$request_uri; } # Ensure remote ip is the right one set_real_ip_from 0.0.0.0/0; real_ip_header X-Forwarded-For; real_ip_recursive on; # Cache control on image location ~ ^/fr/(.*\.(bmp|gif|jpeg|jpg|jxr|hdp|wdp|png|svg|svgz|tif|tiff|wbmp|webp|jng|cur|ico|woff|woff2))$ { add_header Cache-Control public,max-age=86400; alias /usr/share/nginx/www/$1; } # Serving data configuration location ~ ^/fr/(.*) { include /etc/nginx/redirect/*; alias /usr/share/nginx/www/$1; } } Regards, -- *Jean-Daniel Fischer* Developer +33 (0)2 40 18 04 77 16 Bd Charles de Gaulle, B?t. B 44800 Saint-Herblain, France [image: LinkedIn] [image: Facebook] [image: YouTube] [image: Instagram] *Cedreo est not?* [image: Trustpilot Stars] sur [image: Trustpilot Logo] -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Thu Aug 22 13:59:42 2019 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 22 Aug 2019 16:59:42 +0300 Subject: =?UTF-8?Q?Re=3A_Nginx_agent_grpc_error_=C2=A0?= In-Reply-To: <2acf1964360063f5e1ffd936b900901d.NginxMailingListEnglish@forum.nginx.org> References: <2acf1964360063f5e1ffd936b900901d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6575970A-58CA-4DC3-A2C3-0DABBA67CEB6@nginx.com> > On 22 Aug 2019, at 15:05, aiaa5505 wrote: > > [error] 8#8: *256283 upstream rejected request with error 0 while reading > response header from upstream, HTTP/2.0 > It is not something nginx gRPC proxy currently supports. See https://trac.nginx.org/nginx/ticket/1792 for details and proposed patch. From hungnv at opensource.com.vn Thu Aug 22 14:54:59 2019 From: hungnv at opensource.com.vn (Hung Nguyen) Date: Thu, 22 Aug 2019 21:54:59 +0700 Subject: Is there a limitation in nginx on the number of simultaneous via proxy_pass In-Reply-To: <5aba4c73dc2b4e43eab84657f13d6069.NginxMailingListEnglish@forum.nginx.org> References: <5aba4c73dc2b4e43eab84657f13d6069.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5B206D9D-18C9-4790-BC2C-403497CA2EFE@opensource.com.vn> No, it?s browser limitation -- H?ng > On Aug 22, 2019, at 18:44, glareboa wrote: > > Is there a limitation in nginx on the number of simultaneous via proxy_pass > "http://192.168.1.2:90xx"? > > From the source http://192.168.1.2:90xx MJPEG is transmitted. Speed 200 > kB/sec. > > The browser displays only 6 video streams (id=1 ... id=6). > Threads id=7 ... id=9 not visible. > > index.html > ----------------------------- > >
> > > nginx.conf > --------------------------- > http { > > ... > > server { > listen 80 default_server; > > ... > > location /aaa/ > { > proxy_pass "http://192.168.1.2:9000"; > } > > location /bbb/ > { > proxy_pass "http://192.168.1.2:9001"; > } > > location /ccc/ > { > proxy_pass "http://192.168.1.2:9002"; > } > > ... > > location /xxx/ > { > proxy_pass "http://192.168.1.2:9012"; > } > } > } > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285362,285362#msg-285362 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From iippolitov at nginx.com Thu Aug 22 15:08:24 2019 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Thu, 22 Aug 2019 18:08:24 +0300 Subject: Automatic trailing slash redirect and scheme In-Reply-To: References: Message-ID: <7ebd824d-9517-dc8b-881d-f1f5dd5c3bc3@nginx.com> Hello, You can try adding an 'error_page 301 @returnme' and then a location like this: location @returnme { ??? return 301 https://$host$uri/$is_args$args; } Regards, Igor On 22.08.2019 16:22, Jean-Daniel FISCHER wrote: > Hi, > > I an trying to set the sheme used in automatic redirect generates by > nginx when trailing slash is missing. The nginx server is behind a > proxy that handles ssl, hence all requests are made using http so > nginx use http in absolute redirect. > > Is there a way to configure nginx to use the value of > "$http_x_forwarded_proto" ? > > The server conf: > server { > listen 8080; > server_name _; > gzip on; > gzip_disable "msie6"; > root /usr/share/nginx/www; > # Prevent redirect to have port 8080 > port_in_redirect off; > > # 404 > error_page 404 /404.html; > > # Redir auto to http > if ($http_x_forwarded_proto = http) { > return 301 https://$host$request_uri; > } > > # Ensure remote ip is the right one > set_real_ip_from0.0.0.0/0 ; > real_ip_header X-Forwarded-For; > real_ip_recursive on; > > # Cache control on image > location ~ ^/fr/(.*\.(bmp|gif|jpeg|jpg|jxr|hdp|wdp|png|svg|svgz|tif|tiff|wbmp|webp|jng|cur|ico|woff|woff2))$ { > add_header Cache-Control public,max-age=86400; > alias /usr/share/nginx/www/$1; > } > > # Serving data configuration > location ~ ^/fr/(.*) { > include /etc/nginx/redirect/*; > alias /usr/share/nginx/www/$1; > } > } > Regards, > > -- > > > *Jean-Daniel Fischer* > Developer > > +33 (0)2 40 18 04 77 > 16 Bd Charles de Gaulle, B?t. B > 44800 Saint-Herblain, France > > LinkedIn Facebook > YouTube > Instagram > > > * > Cedreo est not?* Trustpilot Stars > > sur Trustpilot Logo > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Aug 22 15:16:33 2019 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 22 Aug 2019 11:16:33 -0400 Subject: Nginx + Lua Anti-DDoS Script Authentication page like Cloudflare Bitmitigate sucuri etc Message-ID: <95c93184f9ef27aa9fb6d5b37571ea07.NginxMailingListEnglish@forum.nginx.org> So i thought i would share this here to get some feedback help bring to light some improvements and bugs and to get the ball rolling on how we can make this script better :) I made this because I like the way Cloudflare, BitMitigate and such sites protect their backends with a HTML, Javascript authentication puzzle for those who have seen Cloudflares I am under attack mode! 
you know what this will do :) You no longer need the third party services like cloudflare you can now protect your own Nginx servers with it. https://github.com/C0nw0nk/Nginx-Lua-Anti-DDoS I was inspired by Cloudflare HTML Javascript based puzzle authentication page to create this so i can use it on my own servers I hope some people find it useful and will help make it better! <3 It is the same as I am under attack mode and the authentication pages of websites under attack on services like Cloudflare, Bitmitigate, Sucuri etc. It takes knowledge of Javascript, HTML, Nginx, Lua, HTTP Headers, Browsers, Cookies to make this function. Installation is as simple as download the .lua script put it in your nginx config and then add this to a location or http block or server block in your nginx config to protect those websites or the entire nginx server. ``` access_by_lua_file anti_ddos_challenge.lua; ``` Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285373,285373#msg-285373 From nginx-forum at forum.nginx.org Thu Aug 22 15:25:21 2019 From: nginx-forum at forum.nginx.org (fevangelou) Date: Thu, 22 Aug 2019 11:25:21 -0400 Subject: Setting a custom header based on another upstream/sent header In-Reply-To: <000001d558e4$c8d29790$5a77c6b0$@roze.lv> References: <000001d558e4$c8d29790$5a77c6b0$@roze.lv> Message-ID: <0f9f56902f36447fb042212583c51421.NginxMailingListEnglish@forum.nginx.org> Thank you Reinis - that did it :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285363,285374#msg-285374 From nginx-forum at forum.nginx.org Thu Aug 22 17:33:53 2019 From: nginx-forum at forum.nginx.org (glareboa) Date: Thu, 22 Aug 2019 13:33:53 -0400 Subject: Is there a limitation in nginx on the number of simultaneous via proxy_pass In-Reply-To: <5B206D9D-18C9-4790-BC2C-403497CA2EFE@opensource.com.vn> References: <5B206D9D-18C9-4790-BC2C-403497CA2EFE@opensource.com.vn> Message-ID: <79473455b15aa721e6719f068e959960.NginxMailingListEnglish@forum.nginx.org> If do index.html and nginx.conf like below, that displays all index.html -------------------------
... nginx.conf --------------------------- location /aaa/ { rewrite ^/aaa/ http://mysite.org:9000"; } ... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285362,285377#msg-285377 From vbart at nginx.com Thu Aug 22 19:51:55 2019 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 22 Aug 2019 22:51:55 +0300 Subject: Unit 1.10.0 release Message-ID: <2398451.nAbZxZBboX@vbart-laptop> Hi, I'm glad to announce a new release of NGINX Unit. This release includes a number of improvements in various language modules and, finally, basic handling of incoming WebSocket connections, currently only for Node.js. Next in line to obtain WebSocket support is the Java module; it's almost ready but requires some polishing. To handle WebSocket connections in your Node.js app via Unit, use the server object from the 'unit-http' module instead of the default one: var webSocketServer = require('unit-http/websocket').server; Another interesting and long-awaited feature in this release is the splitting of PATH_INFO in the PHP module. Now, Unit can properly handle requests like /app.php/some/path?some=args, which are often used to implement "user-friendly" URLs in PHP applications. Changes with Unit 1.10.0 22 Aug 2019 *) Change: matching of cookies in routes made case sensitive. *) Change: decreased log level of common errors when clients close connections. *) Change: removed the Perl module's "--include=" ./configure option. *) Feature: built-in WebSocket server implementation for Node.js module. *) Feature: splitting PATH_INFO from request URI in PHP module. *) Feature: request routing by scheme (HTTP or HTTPS). *) Feature: support for multipart requests body in Java module. *) Feature: improved API compatibility with Node.js 11.10 or later. *) Bugfix: reconfiguration failed if "listeners" or "applications" objects were missing. *) Bugfix: applying a large configuration might have failed. Please welcome our new junior developer, Axel Duch. For this release, he implemented scheme matching in request routing; now, he works to further extend the request routing capabilities with source and destination address matching. In parallel, Tiago Natel de Moura, who also joined the development recently, has achieved significant progress in the effort to add various process isolation features to Unit. You can follow his recent work on Linux namespaces support in the following pull request: - https://github.com/nginx/unit/pull/289 See also his email about the feature: - https://mailman.nginx.org/pipermail/nginx/2019-August/058321.html In the meantime, we are about to finish the first round of adding basic support for serving static media assets and proxying in Unit. Stay tuned! wbr, Valentin V. Bartenev From francis at daoine.org Thu Aug 22 22:07:07 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 22 Aug 2019 23:07:07 +0100 Subject: Automatic trailing slash redirect and scheme In-Reply-To: References: Message-ID: <20190822220707.3u4yaxgmuec3z5of@daoine.org> On Thu, Aug 22, 2019 at 03:22:38PM +0200, Jean-Daniel FISCHER wrote: Hi there, > I an trying to set the sheme used in automatic redirect generates by nginx > when trailing slash is missing. The nginx server is behind a proxy that > handles ssl, hence all requests are made using http so nginx use http in > absolute redirect. > > Is there a way to configure nginx to use the value of > "$http_x_forwarded_proto" ? I think "not directly". 
So, if the ssl-handling proxy does not have the equivalent of proxy_redirect (http://nginx.org/r/proxy_redirect) to modify the Location: header before it goes to the client; then you could use "absolute_redirect off" (http://nginx.org/r/absolute_redirect) so that nginx will omit the scheme and host and port from the Location: header, which all current clients should Just Work with. f -- Francis Daly francis at daoine.org From robipolli at gmail.com Fri Aug 23 17:26:17 2019 From: robipolli at gmail.com (Roberto Polli) Date: Fri, 23 Aug 2019 19:26:17 +0200 Subject: PoC: patch for x-ratelimit headers Message-ID: Hi all, I stubbed an implementation of ratelimit-headers in nginx. It's just a PoC and it's not intented to be merged, but just to be a basis for discussion. The patch is here: - https://github.com/ioggstream/nginx/pull/1/files Feedback is welcome! Have a nice day, R. From nunojpg at gmail.com Fri Aug 23 17:38:54 2019 From: nunojpg at gmail.com (=?UTF-8?Q?Nuno_Gon=C3=A7alves?=) Date: Fri, 23 Aug 2019 19:38:54 +0200 Subject: proxy_pass redirect for address without trailing slash disregards Host port Message-ID: I am using proxy_pass and I'm facing a issue which I'm not sure it's a bug. It is in regard to the behaviour specified by the documentation [1]: If a location is defined by a prefix string that ends with the slash character, and requests are processed by one of proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, or grpc_pass, then the special processing is performed. In response to a request with URI equal to this string, but without the trailing slash, a permanent redirect with the code 301 will be returned to the requested URI with the slash appended. If this is not desired, an exact match of the URI and location could be defined like this: Consider that there is a proxy pass for location /abcd/ { proxy_pass ... } for a server listening on port 80. If I make a request for /abcd with Host header "example.com:8080", then I receive a 301 for example.com/abcd/ and not for the expected example.com:8080/abcd/. In fact NGINX is considering the Host header domain part, but disregarding the port part. I believe this is a bug. Thanks, Nuno [1] http://nginx.org/en/docs/http/ngx_http_core_module.html From mlybarger at gmail.com Fri Aug 23 19:04:14 2019 From: mlybarger at gmail.com (Mark Lybarger) Date: Fri, 23 Aug 2019 15:04:14 -0400 Subject: lzw compression Message-ID: Hi, I have embedded clients using my REST api (HTTP POST/GET etc). We want to be able to compress the client data over the wire so that there are fewer packets. Apparently, in some markets, people still pay by the MB. The embedded client can only support LZW compression due to available memory/space on the device. I see the option to enable gzip compression, but that's not going to work for me. Any help or tips would be appreciated on possible solutions for me. I'd like to transparently decompress the traffic before it gets to my application layer. Thanks! -mark- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfs.world at gmail.com Sat Aug 24 06:23:33 2019 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Sat, 24 Aug 2019 14:23:33 +0800 Subject: proxy_pass redirect for address without trailing slash disregards Host port In-Reply-To: References: Message-ID: The host is defined by the server, surely, and not by what the client tells the server it is? 
And you tell the server what host it is by the server_name directive ( https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name). -jf On Sat, 24 Aug 2019, 01:39 Nuno Gon?alves, wrote: > I am using proxy_pass and I'm facing a issue which I'm not sure it's a bug. > > It is in regard to the behaviour specified by the documentation [1]: > > If a location is defined by a prefix string that ends with the slash > character, and requests are processed by one of proxy_pass, > fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, or grpc_pass, > then the special processing is performed. In response to a request > with URI equal to this string, but without the trailing slash, a > permanent redirect with the code 301 will be returned to the requested > URI with the slash appended. If this is not desired, an exact match of > the URI and location could be defined like this: > > Consider that there is a proxy pass for location /abcd/ { proxy_pass > ... } for a server listening on port 80. > > If I make a request for /abcd with Host header "example.com:8080", > then I receive a 301 for example.com/abcd/ and not for the expected > example.com:8080/abcd/. > > In fact NGINX is considering the Host header domain part, but > disregarding the port part. > > I believe this is a bug. > > Thanks, > Nuno > > [1] http://nginx.org/en/docs/http/ngx_http_core_module.html > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nunojpg at gmail.com Sat Aug 24 09:17:34 2019 From: nunojpg at gmail.com (=?UTF-8?Q?Nuno_Gon=C3=A7alves?=) Date: Sat, 24 Aug 2019 11:17:34 +0200 Subject: proxy_pass redirect for address without trailing slash disregards Host port In-Reply-To: References: Message-ID: On Sat, Aug 24, 2019 at 8:24 AM Jeffrey 'jf' Lim wrote: > > The host is defined by the server, surely, and not by what the client tells the server it is? And you tell the server what host it is by the server_name directive (https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name). That's not correct, the server is taking the Host domain part from the client Host header. It's just not taking the port part. This inconsistency is why I believe it's a bug. For my exact case it was enough to set "absolute_redirect off;", since only the absolute redirect is affected by this issue. Thanks, Nuno > > -jf > > On Sat, 24 Aug 2019, 01:39 Nuno Gon?alves, wrote: >> >> I am using proxy_pass and I'm facing a issue which I'm not sure it's a bug. >> >> It is in regard to the behaviour specified by the documentation [1]: >> >> If a location is defined by a prefix string that ends with the slash >> character, and requests are processed by one of proxy_pass, >> fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, or grpc_pass, >> then the special processing is performed. In response to a request >> with URI equal to this string, but without the trailing slash, a >> permanent redirect with the code 301 will be returned to the requested >> URI with the slash appended. If this is not desired, an exact match of >> the URI and location could be defined like this: >> >> Consider that there is a proxy pass for location /abcd/ { proxy_pass >> ... } for a server listening on port 80. >> >> If I make a request for /abcd with Host header "example.com:8080", >> then I receive a 301 for example.com/abcd/ and not for the expected >> example.com:8080/abcd/. 
>> >> In fact NGINX is considering the Host header domain part, but >> disregarding the port part. >> >> I believe this is a bug. >> >> Thanks, >> Nuno >> >> [1] http://nginx.org/en/docs/http/ngx_http_core_module.html >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From jfs.world at gmail.com Sat Aug 24 13:32:43 2019 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Sat, 24 Aug 2019 21:32:43 +0800 Subject: proxy_pass redirect for address without trailing slash disregards Host port In-Reply-To: References: Message-ID: On Sat, Aug 24, 2019 at 5:18 PM Nuno Gon?alves wrote: > > On Sat, Aug 24, 2019 at 8:24 AM Jeffrey 'jf' Lim wrote: > > > > The host is defined by the server, surely, and not by what the client tells the server it is? And you tell the server what host it is by the server_name directive (https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name). > > That's not correct, the server is taking the Host domain part from the > client Host header. It's just not taking the port part. > > This inconsistency is why I believe it's a bug. > > For my exact case it was enough to set "absolute_redirect off;", since > only the absolute redirect is affected by this issue. > ok this is interesting. I see "port_in_redirect" as well (https://nginx.org/en/docs/http/ngx_http_core_module.html#port_in_redirect) as well, but from my testing (you can also verify this in the source) the port here would refer to the port *as specified by the server* (specifically the listen directive), and NOT as specified by the client in the Host header. The exception to 'port_in_redirect on' would be if the server is listening at the standard 80; you will not see ':80' in your redirect (i.e. no "Location: http://example.com:80/abcd/") When you say that "only the absolute redirect is affected by this issue", I assume you mean that absolute redirects are affected *because* they specify the hostname (and indirectly, the port), vs relative redirects don't? -jf > Thanks, > Nuno > > > > > -jf > > > > On Sat, 24 Aug 2019, 01:39 Nuno Gon?alves, wrote: > >> > >> I am using proxy_pass and I'm facing a issue which I'm not sure it's a bug. > >> > >> It is in regard to the behaviour specified by the documentation [1]: > >> > >> If a location is defined by a prefix string that ends with the slash > >> character, and requests are processed by one of proxy_pass, > >> fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, or grpc_pass, > >> then the special processing is performed. In response to a request > >> with URI equal to this string, but without the trailing slash, a > >> permanent redirect with the code 301 will be returned to the requested > >> URI with the slash appended. If this is not desired, an exact match of > >> the URI and location could be defined like this: > >> > >> Consider that there is a proxy pass for location /abcd/ { proxy_pass > >> ... } for a server listening on port 80. > >> > >> If I make a request for /abcd with Host header "example.com:8080", > >> then I receive a 301 for example.com/abcd/ and not for the expected > >> example.com:8080/abcd/. > >> > >> In fact NGINX is considering the Host header domain part, but > >> disregarding the port part. > >> > >> I believe this is a bug. 
> >> > >> Thanks, > >> Nuno > >> > >> [1] http://nginx.org/en/docs/http/ngx_http_core_module.html > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nunojpg at gmail.com Sat Aug 24 13:34:49 2019 From: nunojpg at gmail.com (=?UTF-8?Q?Nuno_Gon=C3=A7alves?=) Date: Sat, 24 Aug 2019 15:34:49 +0200 Subject: proxy_pass redirect for address without trailing slash disregards Host port In-Reply-To: References: Message-ID: On Sat, Aug 24, 2019 at 3:33 PM Jeffrey 'jf' Lim wrote: > When you say that "only the absolute redirect is affected by this > issue", I assume you mean that absolute redirects are affected > *because* they specify the hostname (and indirectly, the port), vs > relative redirects don't? Yes. From jfs.world at gmail.com Sat Aug 24 13:59:59 2019 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Sat, 24 Aug 2019 21:59:59 +0800 Subject: proxy_pass redirect for address without trailing slash disregards Host port In-Reply-To: References: Message-ID: On Sat, Aug 24, 2019 at 9:32 PM Jeffrey 'jf' Lim wrote: > > On Sat, Aug 24, 2019 at 5:18 PM Nuno Gon?alves wrote: > > > > On Sat, Aug 24, 2019 at 8:24 AM Jeffrey 'jf' Lim wrote: > > > > > > The host is defined by the server, surely, and not by what the client tells the server it is? And you tell the server what host it is by the server_name directive (https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name). > > > > That's not correct, the server is taking the Host domain part from the > > client Host header. It's just not taking the port part. > > > > This inconsistency is why I believe it's a bug. > > > > For my exact case it was enough to set "absolute_redirect off;", since > > only the absolute redirect is affected by this issue. > > > > ok this is interesting. I see "port_in_redirect" as well > (https://nginx.org/en/docs/http/ngx_http_core_module.html#port_in_redirect) > as well, but from my testing (you can also verify this in the source) > the port here would refer to the port *as specified by the server* > (specifically the listen directive), and NOT as specified by the > client in the Host header. The exception to 'port_in_redirect on' > would be if the server is listening at the standard 80; you will not > see ':80' in your redirect (i.e. no "Location: > http://example.com:80/abcd/") > Sorry; actually browsing the source, and to be more specific: if SSL *and* port 443, ':443' is not specified as well (makes sense!). If http *and* port 80, ':80' is similarly not specified in the Location (https://trac.nginx.org/nginx/browser/nginx/src/http/ngx_http_header_filter_module.c#L354). You can also see from the code just above that that the port in the Location (if any, as mentioned) is taken from the actual connection of the server (which would be the actual port that the server took this request at, as opposed to the port that the client is reporting). Is this a bug? well I would say that this will only have an effect when you have nginx behind a proxy (in your example, listening at :8080), as opposed to hitting nginx directly with your client. 
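To make that concrete, here is a rough sketch (the ports and the backend address are only illustrative, not taken from Nuno's actual setup). With a front proxy forwarding example.com:8080 to an nginx that itself listens on 80, the trailing-slash redirect is built from the listen side, so the usual fix is to make the redirect relative:

    server {
        listen 80;                # redirects use this listen port (or none at all),
                                  # not the port in the client's Host header
        server_name example.com;

        absolute_redirect off;    # send "Location: /abcd/" instead of
                                  # "Location: http://example.com/abcd/", so the
                                  # client stays on whatever host:port it used

        location /abcd/ {
            proxy_pass http://127.0.0.1:8000;
        }
    }
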
I imagine that the code was written from the perspective that you should be hitting nginx directly (I'll leave Igor or somebody from the nginx team to have the final word on this). While I see no security risks redirecting based on the port that a client gives, I'm not so sure I welcome this. But feel free to propose a directive if you want, I guess. -jf > When you say that "only the absolute redirect is affected by this > issue", I assume you mean that absolute redirects are affected > *because* they specify the hostname (and indirectly, the port), vs > relative redirects don't? > > -jf > > > > Thanks, > > Nuno > > > > > > > > -jf > > > > > > On Sat, 24 Aug 2019, 01:39 Nuno Gon?alves, wrote: > > >> > > >> I am using proxy_pass and I'm facing a issue which I'm not sure it's a bug. > > >> > > >> It is in regard to the behaviour specified by the documentation [1]: > > >> > > >> If a location is defined by a prefix string that ends with the slash > > >> character, and requests are processed by one of proxy_pass, > > >> fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, or grpc_pass, > > >> then the special processing is performed. In response to a request > > >> with URI equal to this string, but without the trailing slash, a > > >> permanent redirect with the code 301 will be returned to the requested > > >> URI with the slash appended. If this is not desired, an exact match of > > >> the URI and location could be defined like this: > > >> > > >> Consider that there is a proxy pass for location /abcd/ { proxy_pass > > >> ... } for a server listening on port 80. > > >> > > >> If I make a request for /abcd with Host header "example.com:8080", > > >> then I receive a 301 for example.com/abcd/ and not for the expected > > >> example.com:8080/abcd/. > > >> > > >> In fact NGINX is considering the Host header domain part, but > > >> disregarding the port part. > > >> > > >> I believe this is a bug. > > >> > > >> Thanks, > > >> Nuno > > >> > > >> [1] http://nginx.org/en/docs/http/ngx_http_core_module.html > > >> _______________________________________________ > > >> nginx mailing list > > >> nginx at nginx.org > > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx From phillip.odam at nitorgroup.com Mon Aug 26 05:06:25 2019 From: phillip.odam at nitorgroup.com (Phillip Odam) Date: Mon, 26 Aug 2019 01:06:25 -0400 Subject: Needing TLS handshake to fail In-Reply-To: <1161f164-ea62-7ffb-8ba9-97f87c37a2d7@rosettahealth.com> References: <1161f164-ea62-7ffb-8ba9-97f87c37a2d7@rosettahealth.com> Message-ID: Hi, I have a project that involves mutual / two way TLS and one of the requirements is that the TLS handshake must fail ie. be terminated before completion if the handshake is in anyway unsuccessful, eg. no client certificate provided or client certificate not trusted. After having no success getting nginx (v1.16.1) & openssl (v1.0.2k-fips) to fail the handshake I ended up looking at the nginx source code, in particular src/event/ngx_event_openssl.c, and from what I read here https://www.openssl.org/docs/man1.0.2/man3/SSL_CTX_set_verify.html I think a small but necessary code change is required. 
Some possible approaches when choosing to remain using nginx as the server end of the mutual TLS connection * in *static int ngx_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store)* make it configurable whether *1* is always returned or the value of *ok* * in *ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, ngx_int_t depth)* make it configurable whether *SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, ngx_ssl_verify_callback);* is called or *SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, NULL);* Is a code change required or is there a way for the handshake failure to be 'enabled' as opposed to ending up with a successfully established TLS connection. Admittedly within nginx there's all the detail that the TLS connection doesn't conform to the configured requirements of the TLS connection but this doesn't satisfy the requirements for the project. I won't bother going in to the details of the project but will just say it's a third party certification body that requires the TLS handshake to be terminated before completion if the handshake is in anyway unsuccessful. Regards, Phillip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdfischer at cedreo.com Mon Aug 26 07:57:58 2019 From: jdfischer at cedreo.com (Jean-Daniel FISCHER) Date: Mon, 26 Aug 2019 09:57:58 +0200 Subject: Automatic trailing slash redirect and scheme In-Reply-To: <20190822220707.3u4yaxgmuec3z5of@daoine.org> References: <20190822220707.3u4yaxgmuec3z5of@daoine.org> Message-ID: Thanks for all the reply, I activate "absolute_redirect off". Le ven. 23 ao?t 2019 ? 00:07, Francis Daly a ?crit : > On Thu, Aug 22, 2019 at 03:22:38PM +0200, Jean-Daniel FISCHER wrote: > > Hi there, > > > I an trying to set the sheme used in automatic redirect generates by > nginx > > when trailing slash is missing. The nginx server is behind a proxy that > > handles ssl, hence all requests are made using http so nginx use http in > > absolute redirect. > > > > Is there a way to configure nginx to use the value of > > "$http_x_forwarded_proto" ? > > I think "not directly". > > So, if the ssl-handling proxy does not have the equivalent of > proxy_redirect (http://nginx.org/r/proxy_redirect) to modify the Location: > header before it goes to the client; then you could use "absolute_redirect > off" (http://nginx.org/r/absolute_redirect) so that nginx will omit the > scheme and host and port from the Location: header, which all current > clients should Just Work with. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Jean-Daniel Fischer* Developer +33 (0)2 40 18 04 77 16 Bd Charles de Gaulle, B?t. B 44800 Saint-Herblain, France [image: LinkedIn] [image: Facebook] [image: YouTube] [image: Instagram] *Cedreo est not?* [image: Trustpilot Stars] sur [image: Trustpilot Logo] -------------- next part -------------- An HTML attachment was scrubbed... URL: From louisgtwo at gmail.com Tue Aug 27 00:55:31 2019 From: louisgtwo at gmail.com (Louis Garcia) Date: Mon, 26 Aug 2019 20:55:31 -0400 Subject: stream server name question Message-ID: I am able to use $ssl_preread_server_name to get the server name. This is with https requests. Is there a corresponding embedded variable for http requests? I would like to setup streams to different backend servers based on http requests. Example below works for https but not http. Thanks. 
stream { map $ssl_preread_server_name $name { plex.montclaire.lan app1; transmission.montclaire.lan app2; default default; } upstream app1 { server 127.0.0.1:32400 max_fails=3 fail_timeout=10s; } upstream app2 { server 127.0.0.1:9091 max_fails=3 fail_timeout=10s; } server { listen 172.16.0.5:80; listen 172.16.0.5:443; proxy_pass $name; ssl_preread on; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From phillip.odam at nitorgroup.com Tue Aug 27 01:06:07 2019 From: phillip.odam at nitorgroup.com (Phillip Odam) Date: Mon, 26 Aug 2019 21:06:07 -0400 Subject: stream server name question In-Reply-To: References: Message-ID: <5fd582f6-9111-0b80-02e7-cacc4ad4eb95@nitorgroup.com> Hi Louis The variable I think you're looking for is $host - http://nginx.org/en/docs/http/ngx_http_core_module.html#variables On 8/26/19 8:55 PM, Louis Garcia wrote: > I am able to use $ssl_preread_server_name to get the server name. This > is with https requests. Is there a corresponding embedded variable for > http requests? I would like to setup streams to different backend > servers based on http requests. Example below works for https but not > http. > Thanks. > > stream { > ? ? ? ? map $ssl_preread_server_name $name { > plex.montclaire.lan app1; > transmission.montclaire.lan app2; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? default default; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?} > ? ? ? ? upstream app1 { > ? ? ? ? ? ? ? ? ? ? ? ?server 127.0.0.1:32400 > max_fails=3 fail_timeout=10s; > ? ? ? ? ? ? ? ? ? ? ? } > ? ? ? ? upstream app2 { > ? ? ? ? ? ? ? ? ? ? ? ?server 127.0.0.1:9091 > max_fails=3 fail_timeout=10s; > ? ? ? ? ? ? ? ? ? ? ? } > ? ? ? ? server { > ??????????????? listen 172.16.0.5:80 ; > ? ? ? ? ? ? ? ? listen 172.16.0.5:443 ; > ? ? ? ? ? ? ? ? proxy_pass $name; > ? ? ? ? ? ? ? ? ssl_preread on; > ? ? ? ? ? ? ? ?} > ? ? ? ?} > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From louisgtwo at gmail.com Tue Aug 27 02:05:02 2019 From: louisgtwo at gmail.com (Louis Garcia) Date: Mon, 26 Aug 2019 22:05:02 -0400 Subject: stream server name question In-Reply-To: <5fd582f6-9111-0b80-02e7-cacc4ad4eb95@nitorgroup.com> References: <5fd582f6-9111-0b80-02e7-cacc4ad4eb95@nitorgroup.com> Message-ID: Does not work. stream { map $host $name { plex.montclaire.lan app1; transmission.montclaire.lan app2; default default; } upstream app1 { server 127.0.0.1:32400 max_fails=3 fail_timeout=10s; } upstream app2 { server 127.0.0.1:9091 max_fails=3 fail_timeout=10s; } server { listen 172.16.0.5:80; listen 172.16.0.5:443; proxy_pass $name; ssl_preread on; } } nginx[31436]: nginx: [emerg] unknown "host" variable nginx[31436]: nginx: configuration file /etc/nginx/nginx.conf test failed On Mon, Aug 26, 2019 at 9:06 PM Phillip Odam wrote: > Hi Louis > > The variable I think you're looking for is $host - > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables > > On 8/26/19 8:55 PM, Louis Garcia wrote: > > I am able to use $ssl_preread_server_name to get the server name. This is > with https requests. Is there a corresponding embedded variable for http > requests? I would like to setup streams to different backend servers based > on http requests. Example below works for https but not http. > Thanks. 
> > stream { > map $ssl_preread_server_name $name { > plex.montclaire.lan app1; > transmission.montclaire.lan > app2; > default default; > } > upstream app1 { > server 127.0.0.1:32400 max_fails=3 > fail_timeout=10s; > } > upstream app2 { > server 127.0.0.1:9091 max_fails=3 fail_timeout=10s; > } > server { > listen 172.16.0.5:80; > listen 172.16.0.5:443; > proxy_pass $name; > ssl_preread on; > } > } > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From phillip.odam at nitorgroup.com Tue Aug 27 02:15:27 2019 From: phillip.odam at nitorgroup.com (Phillip Odam) Date: Mon, 26 Aug 2019 22:15:27 -0400 Subject: stream server name question In-Reply-To: References: <5fd582f6-9111-0b80-02e7-cacc4ad4eb95@nitorgroup.com> Message-ID: <065636db-ed38-dabe-d505-806e6047a561@nitorgroup.com> Not inside the stream it won't... you'll need the map to at least be inside http and probably server. [RosettaHealth] Phillip Odam Principal Engineer, RosettaHealth rosettahealth.com e: phillip.odam at rosettahealth.com o: 202.350.4343 a: 2 Wisconsin Circle, Chevy Chase, MD [twitter] [facebook] [Instagram] [linkedin] Connecting a Whole World of Healthcare On 8/26/19 10:05 PM, Louis Garcia wrote: > Does not work. > > stream { > ? ? ? ? map $host $name { > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? plex.montclaire.lan app1; > transmission.montclaire.lan app2; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? default default; > ? ? ? ? ? ? ? ? ? ? ? ??? ? ? ? ?? } > ? ? ? ? upstream app1 { > ? ? ? ? ? ? ? ? ? ? ? ?server 127.0.0.1:32400 > max_fails=3 fail_timeout=10s; > ? ? ? ? ? ? ? ? ? ? ? } > ? ? ? ? upstream app2 { > ? ? ? ? ? ? ? ? ? ? ? ?server 127.0.0.1:9091 > max_fails=3 fail_timeout=10s; > ? ? ? ? ? ? ? ? ? ? ? } > ? ? ? ? server { > ??????????????? listen 172.16.0.5:80 ; > ? ? ? ? ? ? ? ? listen 172.16.0.5:443 ; > ? ? ? ? ? ? ? ? proxy_pass $name; > ? ? ? ? ? ? ? ? ssl_preread on; > ? ? ? ? ? ? ? ?} > ? ? ? ?} > > nginx[31436]: nginx: [emerg] unknown "host" variable > nginx[31436]: nginx: configuration file /etc/nginx/nginx.conf test failed > > On Mon, Aug 26, 2019 at 9:06 PM Phillip Odam > > wrote: > > Hi Louis > > The variable I think you're looking for is $host - > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables > > > On 8/26/19 8:55 PM, Louis Garcia wrote: >> I am able to use $ssl_preread_server_name to get the server name. >> This is with https requests. Is there a corresponding embedded >> variable for http requests? I would like to setup streams to >> different backend servers based on http requests. Example below >> works for https but not http. >> Thanks. >> >> stream { >> ? ? ? ? map $ssl_preread_server_name $name { >> plex.montclaire.lan app1; >> transmission.montclaire.lan app2; >> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? default default; >> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?} >> ? ? ? ? upstream app1 { >> ? ? ? ? ? ? ? ? ? ? ? ?server 127.0.0.1:32400 >> max_fails=3 fail_timeout=10s; >> ? ? ? ? ? ? ? ? ? ? ? } >> ? ? ? ? upstream app2 { >> ? ? ? ? ? ? ? ? ? ? ? ?server 127.0.0.1:9091 >> max_fails=3 fail_timeout=10s; >> ? ? ? ? ? ? ? ? ? ? ? } >> ? ? ? ? server { >> ??????????????? listen 172.16.0.5:80 ; >> ? ? ? ? ? ? ? ? listen 172.16.0.5:443 ; >> ? ? ? ? ? ? ? ? 
proxy_pass $name; >> ? ? ? ? ? ? ? ? ssl_preread on; >> ? ? ? ? ? ? ? ?} >> ? ? ? ?} >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Aug 27 11:08:48 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Aug 2019 14:08:48 +0300 Subject: nginx-1.16.1 In-Reply-To: <45d2b1037d56b2acf2a2ed9b07695401.NginxMailingListEnglish@forum.nginx.org> References: <20190813170416.GR1877@mdounin.ru> <45d2b1037d56b2acf2a2ed9b07695401.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190827110848.GA1877@mdounin.ru> Hello! On Thu, Aug 22, 2019 at 08:37:48AM -0400, b8077691 wrote: > Can you please comment on the question? Or at least say that there are no > guarantees of safety in applying the patches on the nginx-1.14.2 branch? The 1.14.x branch is obsolete and not supported. If you want to apply these patches to 1.14.2 - it is your responsibility to check if it is safe or not. If you are not qualified enough to check it yourself, please consider upgrading to a supported version - nginx 1.16.1 or nginx 1.17.3. -- Maxim Dounin http://mdounin.ru/ From soon.hyouk at gmail.com Fri Aug 30 02:54:33 2019 From: soon.hyouk at gmail.com (Soon Hyouk Lee) Date: Thu, 29 Aug 2019 22:54:33 -0400 Subject: Reverse proxy 404 error help! Message-ID: <20190830025433.oiugtwtabejfrywz@thinkarch.localdomain> Trying to get Unifi (ubiquiti networks) video controller up via nginx reverse proxy, but I keep getting 404 errors by my app. Seems it is trying to pull css from the wrong location and getting 404 errors. when I request [hostname]/unifi/ it goes to a white page but doesn't give me an explicit error, so the location block is working to pull up a base static page, but clearly not anything else is functioning. Error log shows that the css files needed are being referenced using the root directory defined in the '/' location block rather than the root specified in the '/unifi/' location block. Looking at network developer tools in chrome / firefox shows 404 errors for css pages being called by the app. Can anyone provide some much needed assistance? I've been at this for hours, changing various things in the config without any progress! Not sure if it's just syntax associated with the rewrite or if it's something more fundamental. If I change the root to the app's appropriate directory at the '/' location block then some of the 404 errors are eliminated. I will also need to figure how to get websocket to work for this once I can get the login page to load. nginx config below: https://pastebin.com/0KihFgEP error log from nginx below: https://pastebin.com/aWBmLSX3 From nginx-forum at forum.nginx.org Fri Aug 30 11:45:34 2019 From: nginx-forum at forum.nginx.org (milanleon) Date: Fri, 30 Aug 2019 07:45:34 -0400 Subject: How to add Multiple sites with ipv6 and SSL on Nginx ? Message-ID: I have three websites on one Linode IP and I want to add ipv6 with SSL 1. Wordpress 2. Django1 3. Django2 All of them have SSL certificates from Letsencrypt and I have test them and they working. 
In testing of SSL I have an error with Mismatch and in Debugging error Curl error: 51 (SSL_PEER_CERTIFICATE) So my Nginx block are next : Wordpress: > server { listen 80; listen [::]:80; server_name wpexample.org www.wpexample.org; return 301 https://www.wpexample.org$request_uri; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name www.wpexample.org; root /var/www/html/wpexample/src; index index.php; ssl_certificate /etc/letsencrypt/live/wpexample.org/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/wpexample.org/privkey.pem; ssl_trusted_certificate /etc/letsencrypt/live/wpexample.org/chain.pem; include snippets/ssl.conf; include snippets/letsencrypt.conf; First Django Site >server { listen 80; listen [::]:80; server_name django1.org www.django1.org; rewrite ^(.*) https://www.django1.org$request_uri; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name django1.org www.django1.org; index index.html index.htm; ssl_certificate /etc/letsencrypt/live/django1.org/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/django1.org/privkey.pem; ssl_trusted_certificate /etc/letsencrypt/live/django1.org/chain.pem; include snippets/ssl.conf; include snippets/letsencrypt.conf; Second Django Site >server { listen 80; listen [::]:80; server_name django2.rs www.django2.rs; include /etc/nginx/snippets/letsencrypt.conf; rewrite ^(.*) https://django2.rs$request_uri; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name django2.rs www.django2.rs; index index.html index.htm; ssl_certificate /etc/letsencrypt/live/django2.rs/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/django2.rs/privkey.pem; ssl_trusted_certificate /etc/letsencrypt/live/django2.rs/chain.pem; include snippets/ssl.conf; include snippets/letsencrypt.conf; The problem is comming when I try to test both Django sites with ssllabs.com >Certificate #2: RSA 2048 bits (SHA256withRSA) No SNI The error what I see is "Alternative names wpexample.org www.wpexample.org MISMATCH" And this error is comes for both of Django sites when i test them I have trying to add for both of sites in Nginx blocks **listen [::]:443; default_server and ipv6conly** but then my sites are unavailable and it's shows same Mismatch in testing. Also I got all A+ for both Django sites in ssllabs.com Does anyone have an idea how to solve this issues? Thanks a lot in advance Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285450,285450#msg-285450 From r at roze.lv Fri Aug 30 12:05:06 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 30 Aug 2019 15:05:06 +0300 Subject: How to add Multiple sites with ipv6 and SSL on Nginx ? In-Reply-To: References: Message-ID: <000d01d55f2b$2a039d10$7e0ad730$@roze.lv> > The problem is comming when I try to test both Django sites with ssllabs.com > > >Certificate #2: RSA 2048 bits (SHA256withRSA) No SNI > The error what I see is "Alternative names wpexample.org > www.wpexample.org > MISMATCH" It is normal for clients which don't support SNI (server name indication) and SSLabs tests what happens in such case. Depending if you need to server web for old clients (like Android 2.3.7, IE 8 / XP, Java 6u45) the only way is to set up a separate IP (both ipv4/ipv6) for each domain, if not - you can ignore the MISMATCH error (also it doesn't impact the SSLabs rating). 
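For completeness, the separate-IP setup would look roughly like this (the addresses below are documentation placeholders, not real ones). Each certificate gets its own IPv4/IPv6 pair, so even a client that never sends SNI is handed the matching certificate:

    server {
        listen 203.0.113.10:443 ssl http2;
        listen [2001:db8::10]:443 ssl http2;
        server_name wpexample.org www.wpexample.org;
        ssl_certificate     /etc/letsencrypt/live/wpexample.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/wpexample.org/privkey.pem;
    }

    server {
        listen 203.0.113.11:443 ssl http2;
        listen [2001:db8::11]:443 ssl http2;
        server_name django1.org www.django1.org;
        ssl_certificate     /etc/letsencrypt/live/django1.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/django1.org/privkey.pem;
    }

(and the same pattern again, on a third pair of addresses, for django2.rs)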
rr From aweber at comcast.net Fri Aug 30 14:07:47 2019 From: aweber at comcast.net (AJ Weber) Date: Fri, 30 Aug 2019 10:07:47 -0400 Subject: ssl client auth trouble Message-ID: <2e8b76e7-88e0-e561-759e-6dff98f9eeb0@comcast.net> I have been trying to configure client certificates (really just one cert for now) for two days on CentOS 7, Nginx 1.16.1, and have had very limited success. I have tried various online guides and they are mostly the same - but all have resulted in the same exact scenario.? One such guide is here, for example: https://gist.github.com/mtigas/952344 (Another is here: https://www.guyrutenberg.com/2015/09/15/securing-access-using-tlsssl-client-certificates/) When this is all done, and I import the p12 client certificate on my Windows PCs (tested 2) Chrome and Firefox show me the "400 Bad Request\n No required SSL certificate was sent".? The very strange thing is IE11 on one of the two PCs, actually prompts me to use my newly-installed cert the first time, and it works.? No other browser (including IE on a different PC) works. I have exhausted my Google-foo and am frustrated. I don't think this should be so hard. Does anyone have any suggestions to troubleshoot this? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Fri Aug 30 16:33:35 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 30 Aug 2019 19:33:35 +0300 Subject: ssl client auth trouble In-Reply-To: <2e8b76e7-88e0-e561-759e-6dff98f9eeb0@comcast.net> References: <2e8b76e7-88e0-e561-759e-6dff98f9eeb0@comcast.net> Message-ID: <003101d55f50$ab542ae0$01fc80a0$@roze.lv> > When this is all done, and I import the p12 client certificate on my Windows PCs (tested 2) Chrome and Firefox show me the "400 Bad Request\n No required SSL certificate was sent". The very strange thing is IE11 on one of the two PCs, actually prompts me to use my newly-installed cert the first time, and it works. No other browser (including IE on a different PC) works. Afaik Chrome uses Windows certificate store (and iirc as of FF49 there is an optional setting for firefox too) so if IE11 works it could be that rather than nginx configuration it is browser related. For example - some time ago when I had to implement client certificate authentication myself one such caveat turned out to be how Chrome handles http2 - I had several virtualhosts, but client auth only for one domain and it randomly didn't work. When I inspected the http2 stream I noticed that if the resolved IP for the domain matched an existing connection Chrome happily reused/pipelined the request through it without sending the certificate. When the particular domain was placed on a separate ip everything started to work as expected. While there might not be a technical issue for such behavior (not sure?) it wasn't very obvious at first. I would suggest to share at least minimal nginx configuration snippet - it's hard to help without that. Maybe try with ssl_verify_client optional_no_ca; - depending on how the client certificate was created/signed there might be intermediate CAs (not sure if you followed the guides directly about self-made CAs etc) and then the default ssl_verify_depth 1; would also fail at verification. Also log if $ssl_client_s_dn / $ssl_client_escaped_cert actually contain anything. 
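Putting those suggestions in one place, a rough debugging sketch (the certificate paths are placeholders, not taken from your setup):

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate         /etc/nginx/ssl/server.crt;
        ssl_certificate_key     /etc/nginx/ssl/server.key;

        # the CA you signed the client certificates with
        ssl_client_certificate  /etc/nginx/ssl/ca.crt;

        # let the handshake complete even when verification fails,
        # so you can inspect what the browser actually sent
        ssl_verify_client       optional_no_ca;
        ssl_verify_depth        2;

        location / {
            # temporary: dump the verification result instead of the real content
            return 200 "verify=$ssl_client_verify dn=$ssl_client_s_dn\n";
        }
    }
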
rr From nginx-forum at forum.nginx.org Fri Aug 30 17:03:57 2019 From: nginx-forum at forum.nginx.org (stmx38) Date: Fri, 30 Aug 2019 13:03:57 -0400 Subject: proxy_set_header on HTTP or Server level Message-ID: Hello, We recently made some order in our configuration to make it cleaner and readable. We have moved all reverse proxy related parameters on the HTTP level from the vhosts locations: ---- proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_read_timeout 65; proxy_send_timeout 65; proxy_connect_timeout 30; proxy_buffering off; proxy_buffers 8 64k; proxy_buffer_size 64k; proxy_busy_buffers_size 128k; proxy_http_version 1.1; proxy_intercept_errors off; ---- As a result we got some issue with the at leas some of them: ---- proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; ---- As per documentation - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header: Context: http, server, location > These directives are inherited from the previous level if and only if there are no proxy_set_header directives defined on the current level. Does it mean that if at least one of the proxy_set_header is defined on location level we should define all other on this level because it broke inheritance? Per our experience, these directives only work on location level. They do not apply when we set them up on HTTP or Server level. Why may be wrong with our configuration? Thank you! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285462,285462#msg-285462 From jlmuir at imca-cat.org Fri Aug 30 17:33:17 2019 From: jlmuir at imca-cat.org (J. Lewis Muir) Date: Fri, 30 Aug 2019 12:33:17 -0500 Subject: Allow internal redirect to URI x, but deny external request for x? Message-ID: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> Hello! I'm using nginx 1.12.2 on RHEL 7, and I've got a FastCGI web app that uses a deployment structure which uses an atomic symlink change for an atomic app deploy, and I'm wishing to be able to do an internal redirect in nginx to URL x, but deny an external request to the same URL x so that I don't serve the same content at more than one URL. Is there a way to do that? For example, given the external request URI /my-app/index.html I want to do an internal redirect to /my-app/current/index.html but deny an external request for that same URI /my-app/current/index.html because I don't want to serve the app from two different URLs (e.g., /my-app/ and /my-app/current/). The app structure on disk is like the Capistrano structure https://capistranorb.com/documentation/getting-started/structure/ That is, it's like /srv/www/my-app current -> releases/1.0.2 releases 1.0.0 1.0.1 1.0.2 "current" is a symlink. In my nginx config, I've changed $document_root to $realpath_root in the appropriate FastCGI parameters, and have the following locations: location /my-app/ { rewrite ^/my-app/(?!current/)(.*)$ /my-app/current/$1 last; index index.php; } location /my-app/current/ { return 404; } location /my-app/releases/ { return 404; } location ~ ^/my-app/.*?[^/]\.php(?:/|$) { include php-fpm-realpath.conf; } Given an external request for a URI that starts with /my-app/ this returns 404 after the internal redirect. 
If I remove the two locations that return 404, then it serves the app, but it also allows external requests such as /my-app/current/ which I don't want to allow since that's a duplicate of /my-app/ I initially tried using the alias directive which I thought was a better fit for what I wanted to do location /my-app/ { alias /srv/www/my-app/current/; index index.php; } location /my-app/current/ { return 404; } location /my-app/releases/ { return 404; } location ~ ^/my-app/(.*?[^/]\.php(?:/.*|$)) { alias /srv/www/my-app/current/$1; include php-fpm-realpath.conf; } But that didn't seem to work with the nginx FastCGI implementation. Thank you! Lewis From jlmuir at imca-cat.org Fri Aug 30 18:20:31 2019 From: jlmuir at imca-cat.org (J. Lewis Muir) Date: Fri, 30 Aug 2019 13:20:31 -0500 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> Message-ID: <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> On 08/30, J. Lewis Muir wrote: > I initially tried using the alias directive which I thought was a better > fit for what I wanted to do > > location /my-app/ { > alias /srv/www/my-app/current/; > index index.php; > } > > location /my-app/current/ { > return 404; > } > > location /my-app/releases/ { > return 404; > } > > location ~ ^/my-app/(.*?[^/]\.php(?:/.*|$)) { > alias /srv/www/my-app/current/$1; > include php-fpm-realpath.conf; > } > > But that didn't seem to work with the nginx FastCGI implementation. What exactly didn't work when I tried the alias directive, based on the error log, seems to be that somewhere there's a file op on /srv/www/my-app/releases/1.0.2/index.php/my-app/index.php which is a wrong path; it should be /srv/www/my-app/releases/1.0.2/index.php In my php-fpm-realpath.conf, I have fastcgi_split_path_info ^(.+?\.php)(/.*)$; if (!-f $realpath_root$fastcgi_script_name) { return 404; } ... fastcgi_param DOCUMENT_ROOT $realpath_root; fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; ... I'm wondering if the fastcgi_split_path_info function or the if directive is what's emitting the file op message in the error log which would mean that one or both of $realpath_root or $fastcgi_script_name are not set to what I expect. (?) Here are the relevant lines from the error log with the debug option: test location: "/my-app/" test location: ~ "^/my-app/(.*?[^/]\.php(?:/.*|$))" using configuration "/my-app/" open index "/srv/www/my-app/current/index.php" internal redirect: "/my-app/index.php?" rewrite phase: 1 test location: "/my-app/" test location: ~ "^/my-app/(.*?[^/]\.php(?:/.*|$))" using configuration "^/my-app/(.*?[^/]\.php(?:/.*|$))" rewrite phase: 3 http script complex value http script copy: "/srv/www/my-app/current/" http script capture: "index.php" http script copy: "/srv/www/my-app/current/" http script capture: "index.php" http script var: "/srv/www/my-app/releases/1.0.2/index.php" http script var: "/my-app/index.php" http script copy: "" http script var: "/my-app/index.php" http script copy: "" http script file op 0000000000000001 "/srv/www/my-app/releases/1.0.2/index.php/my-app/index.php" http script if http finalize request: 404, "/my-app/index.php?" a:1, c:2 http special response: 404, "/my-app/index.php?" http set discard body xslt filter header HTTP/1.1 404 Not Found Thank you! 
Lewis From roughlea at hotmail.co.uk Fri Aug 30 18:26:15 2019 From: roughlea at hotmail.co.uk (rough lea) Date: Fri, 30 Aug 2019 18:26:15 +0000 Subject: Fwd: confirm 5f9be349e631f958ba756da43c02aa760f8cc2e3 References: Message-ID: Begin forwarded message: From: nginx-request at nginx.org Subject: confirm 5f9be349e631f958ba756da43c02aa760f8cc2e3 Date: 30 August 2019 at 19:24:24 BST To: roughlea at hotmail.co.uk Reply-To: nginx-request at nginx.org Mailing list removal confirmation notice for mailing list nginx We have received a request from 92.13.144.190 for the removal of your email address, "roughlea at hotmail.co.uk" from the nginx at nginx.org mailing list. To confirm that you want to be removed from this mailing list, simply reply to this message, keeping the Subject: header intact. Or visit this web page: http://mailman.nginx.org/mailman/confirm/nginx/5f9be349e631f958ba756da43c02aa760f8cc2e3 Or include the following line -- and only the following line -- in a message to nginx-request at nginx.org: confirm 5f9be349e631f958ba756da43c02aa760f8cc2e3 Note that simply sending a `reply' to this message should work from most mail readers, since that usually leaves the Subject: line in the right form (additional "Re:" text in the Subject: is okay). If you do not wish to be removed from this list, please simply disregard this message. If you think you are being maliciously removed from the list, or have any other questions, send them to nginx-owner at nginx.org. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlmuir at imca-cat.org Fri Aug 30 18:37:43 2019 From: jlmuir at imca-cat.org (J. Lewis Muir) Date: Fri, 30 Aug 2019 13:37:43 -0500 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> Message-ID: <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> On 08/30, J. Lewis Muir wrote: > I'm wondering if the fastcgi_split_path_info function or the if > directive is what's emitting the file op message in the error log which > would mean that one or both of $realpath_root or $fastcgi_script_name > are not set to what I expect. (?) Adding return 200 "document_root: $document_root\nfastcgi_script_name: $fastcgi_script_name\n"; to the location like this location ~ ^/my-app/(.*?[^/]\.php(?:/.*|$)) { alias /srv/www/my-app/current/$1; return 200 "realpath_root: $realpath_root\nfastcgi_script_name: $fastcgi_script_name\n"; } yields the following: $ curl http://localhost/my-app/ realpath_root: /srv/www/my-app/releases/1.0.2/index.php fastcgi_script_name: /my-app/index.php So, that doesn't seem right! I was expecting them to be something like: realpath_root: /srv/www/my-app/releases/1.0.2 fastcgi_script_name: /index.php Puzzled, Lewis From jlmuir at imca-cat.org Fri Aug 30 18:58:23 2019 From: jlmuir at imca-cat.org (J. Lewis Muir) Date: Fri, 30 Aug 2019 13:58:23 -0500 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> Message-ID: <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> On 08/30, J. Lewis Muir wrote: > On 08/30, J. 
Lewis Muir wrote: > > I'm wondering if the fastcgi_split_path_info function or the if > > directive is what's emitting the file op message in the error log which > > would mean that one or both of $realpath_root or $fastcgi_script_name > > are not set to what I expect. (?) > > Adding > > return 200 "document_root: $document_root\nfastcgi_script_name: $fastcgi_script_name\n"; > > to the location like this > > location ~ ^/my-app/(.*?[^/]\.php(?:/.*|$)) { > alias /srv/www/my-app/current/$1; > return 200 "realpath_root: $realpath_root\nfastcgi_script_name: $fastcgi_script_name\n"; > } Hmm, I think I need to call fastcgi_split_path_info first, so now I did location ~ ^/my-app/(.*?[^/]\.php(?:/.*|$)) { alias /srv/www/my-app/current/$1; fastcgi_split_path_info ^(.+?\.php)(/.*)$; return 200 "realpath_root: $realpath_root\nfastcgi_script_name: $fastcgi_script_name\nfastcgi_path_info: $fastcgi_path_info\n"; } which yields the following: $ curl http://localhost/my-app/ realpath_root: /srv/www/my-app/releases/1.0.2/index.php fastcgi_script_name: /my-app/index.php fastcgi_path_info: That doesn't seem right. Still puzzled, Lewis From hobson42 at gmail.com Fri Aug 30 19:01:01 2019 From: hobson42 at gmail.com (Ian Hobson) Date: Fri, 30 Aug 2019 20:01:01 +0100 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> Message-ID: <964ffd9d-b664-2425-9332-83e46244dde6@gmail.com> Hi Lewis, On 30/08/19 18:33, J. Lewis Muir wrote: > Hello! > > I'm using nginx 1.12.2 on RHEL 7, and I've got a FastCGI web app that > uses a deployment structure which uses an atomic symlink change for an > atomic app deploy, and I'm wishing to be able to do an internal redirect > in nginx to URL x, but deny an external request to the same URL x so > that I don't serve the same content at more than one URL. Is there a > way to do that? > You could place the different versions away from the root so they cannot be obtained from the web. Then they can be served by setting up a symlink to the desired version. This can be changed using "ln -sfn version/dir serving/root" and then restarting nginx to pick up the new version. By not using redirects, this method should be more efficient. Regards Ian -- Ian Hobson From aweber at comcast.net Fri Aug 30 19:13:07 2019 From: aweber at comcast.net (AJ Weber) Date: Fri, 30 Aug 2019 15:13:07 -0400 Subject: ssl client auth trouble In-Reply-To: <003101d55f50$ab542ae0$01fc80a0$@roze.lv> References: <2e8b76e7-88e0-e561-759e-6dff98f9eeb0@comcast.net> <003101d55f50$ab542ae0$01fc80a0$@roze.lv> Message-ID: On 8/30/2019 12:33 PM, Reinis Rozitis wrote: >> When this is all done, and I import the p12 client certificate on my Windows PCs (tested 2) Chrome and Firefox show me the "400 Bad Request\n No required SSL certificate was sent". The very strange thing is IE11 on one of the two PCs, actually prompts me to use my newly-installed cert the first time, and it works. No other browser (including IE on a different PC) works. > > Afaik Chrome uses Windows certificate store (and iirc as of FF49 there is an optional setting for firefox too) so if IE11 works it could be that rather than nginx configuration it is browser related. The tricky thing there is that it works on one PC's IE and not another.? But you are correct, Chrome does use the Windows cert store.? I have checked a dozen times that the cert is in there as correctly as I can surmise.? 
And when the initial tests show that 1 out of 5 browsers are successful, it is not something I can go forward with before I solve it. :) > > For example - some time ago when I had to implement client certificate authentication myself one such caveat turned out to be how Chrome handles http2 - I had several virtualhosts, but client auth only for one domain and it randomly didn't work. When I inspected the http2 stream I noticed that if the resolved IP for the domain matched an existing connection Chrome happily reused/pipelined the request through it without sending the certificate. > When the particular domain was placed on a separate ip everything started to work as expected. While there might not be a technical issue for such behavior (not sure?) it wasn't very obvious at first. > > > I would suggest to share at least minimal nginx configuration snippet - it's hard to help without that. I can do that.? This is initial setup of nginx. The default nginx.conf has only been edited by certbot (trying Lets Encrypt), and I have zero virtual hosts...in fact nothing in default.d/ > > Maybe try with ssl_verify_client optional_no_ca; - depending on how the client certificate was created/signed there might be intermediate CAs (not sure if you followed the guides directly about self-made CAs etc) I tried following both of those links precisely.? From my eye, they both do the same exact set of steps...just some syntactically separated. The client cert appears to be signed correctly with the output of "openssl verify -verbose -CAfile ..." > and then the default ssl_verify_depth 1; would also fail at verification. I actually set this to 2 based on a recommendation in SO post, but it did not make a difference either way. > Also log if $ssl_client_s_dn / $ssl_client_escaped_cert actually contain anything. I will search for this.? Not sure how to add this info to my logs, or whether it logs failures too? Thank you for your help! From r at roze.lv Fri Aug 30 19:21:53 2019 From: r at roze.lv (Reinis Rozitis) Date: Fri, 30 Aug 2019 22:21:53 +0300 Subject: ssl client auth trouble In-Reply-To: References: <2e8b76e7-88e0-e561-759e-6dff98f9eeb0@comcast.net> <003101d55f50$ab542ae0$01fc80a0$@roze.lv> Message-ID: <004301d55f68$2e413300$8ac39900$@roze.lv> > I will search for this. Not sure how to add this info to my logs, or > whether it logs failures too? $ssl_client_verify - contains the verification status You have to define a custom log_format (http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format ) For example: log_format clientcerts '$remote_addr $ssl_client_s_dn $ssl_client_verify'; access_log logs/ssl.log clientcerts; .. and you can add whatever variables to better identify what's going on http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables rr From francis at daoine.org Fri Aug 30 19:45:14 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 30 Aug 2019 20:45:14 +0100 Subject: proxy_pass redirect for address without trailing slash disregards Host port In-Reply-To: References: Message-ID: <20190830194514.3dzw3eesueytmcow@daoine.org> On Sat, Aug 24, 2019 at 11:17:34AM +0200, Nuno Gon?alves wrote: Hi there, > That's not correct, the server is taking the Host domain part from the > client Host header. It's just not taking the port part. > > This inconsistency is why I believe it's a bug. Before "absolute_redirect", I would have called it a bug too. Now I just consider it "a feature". 
I think that nginx has never used the port from the Host header (or from the request line, if present) in its redirects. I think that it might be useful for nginx to be able to use that -- as in "a redirect could optionally use exactly the host:port provided in the string from the request that nginx used to choose the server{}" -- but I guess that no-one who cared enough about it provided a justification and a patch. And now, "absolute_redirect off;" probably makes it unnecessary. If you know that you want to use exactly one "unusual" port in the url, I think that it is possible to fake it using server_name and the various *_in_redirect directives. But that is not a general solution. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 30 19:49:58 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 30 Aug 2019 20:49:58 +0100 Subject: stream server name question In-Reply-To: References: Message-ID: <20190830194958.cvo6lkxls3koiadu@daoine.org> On Mon, Aug 26, 2019 at 08:55:31PM -0400, Louis Garcia wrote: Hi there, > I am able to use $ssl_preread_server_name to get the server name. This is > with https requests. Is there a corresponding embedded variable for http > requests? No. "stream" does not know about http or https. "stream" knows about tcp and ssl (and udp). $ssl_preread_server_name is the ssl name, not the https name -- the difference might seem subtle, but it is important here. If you want to proxy based on https name (or http name), use "http" not "stream". f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 30 19:54:52 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 30 Aug 2019 20:54:52 +0100 Subject: Reverse proxy 404 error help! In-Reply-To: <20190830025433.oiugtwtabejfrywz@thinkarch.localdomain> References: <20190830025433.oiugtwtabejfrywz@thinkarch.localdomain> Message-ID: <20190830195452.fmus3jzl6xhmtpi5@daoine.org> On Thu, Aug 29, 2019 at 10:54:33PM -0400, Soon Hyouk Lee wrote: Hi there, some web services are not set up to be friendly to be reverse-proxied at a different part of the local url hierarchy than they know about. Perhaps this is one of them. If you can configure the back-end server to believe that it is rooted at /unifi/ instead of at /, then perhaps it can work. Otherwise, you may have more luck using a dedicated server{} block that reverse proxies everything to the unifi service without changing the local url. The other option, of trying to rewrite the content on-the-fly, is unlikely to work reliably. f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 30 20:25:11 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 30 Aug 2019 21:25:11 +0100 Subject: proxy_set_header on HTTP or Server level In-Reply-To: References: Message-ID: <20190830202511.7ijnxejkmpatftio@daoine.org> On Fri, Aug 30, 2019 at 01:03:57PM -0400, stmx38 wrote: Hi there, > As per documentation - > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header: > Context: http, server, location > > > These directives are inherited from the previous level if and only if > there are no proxy_set_header directives defined on the current level. > Does it mean that if at least one of the proxy_set_header is defined on > location level we should define all other on this level because it broke > inheritance? Yes, that is what it means. The request is handled in a location{}, with proxy_pass. If there is any proxy_set_header in that location{}, then only those proxy_set_header values are used. 
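A quick illustration (the upstream name and the headers are made up, just to show the rule):

    http {
        proxy_set_header Host            $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        server {
            location / {
                # because this location has its own proxy_set_header, the two
                # http-level directives above are not inherited here; only
                # X-Real-IP (plus the built-in defaults) is sent upstream
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://backend;
            }
        }
    }
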
If not, then if there is any proxy_set_header in the surrounding server{}, then only those proxy_set_header values are used. If not, then if there is any proxy_set_header in the surrounding http{}, then only those proxy_set_header values are used. In the same way, if "proxy_busy_buffers_size" is set in the location{}, that value is used; if not, if "proxy_busy_buffers_size" is set in the server{}, that value is used; if not, if "proxy_busy_buffers_size" is set in http{}, that value is used. In general in nginx (with a few exceptions) directive inheritance is "not at all", or "by replacement". > Per our experience, these directives only work on location level. They do > not apply when we set them up on HTTP or Server level. > > Why may be wrong with our configuration? Can you show a sample configuration, if there is a problem? But if it has "proxy_set_header" in the location{}, then any proxy_set_header outside that location is irrelevant for this request. f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 30 20:33:19 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 30 Aug 2019 21:33:19 +0100 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> Message-ID: <20190830203319.6cviksoncag2utum@daoine.org> On Fri, Aug 30, 2019 at 12:33:17PM -0500, J. Lewis Muir wrote: Hi there, > I'm wishing to be able to do an internal redirect > in nginx to URL x, but deny an external request to the same URL x so > that I don't serve the same content at more than one URL. Is there a > way to do that? > > For example, given the external request URI > > /my-app/index.html > > I want to do an internal redirect to > > /my-app/current/index.html > > but deny an external request for that same URI > > /my-app/current/index.html It sounds like you want "internal": http://nginx.org/r/internal > In my nginx config, I've changed $document_root to $realpath_root in the > appropriate FastCGI parameters, and have the following locations: > > location /my-app/ { > rewrite ^/my-app/(?!current/)(.*)$ /my-app/current/$1 last; > index index.php; > } > > location /my-app/current/ { > return 404; > } > > location /my-app/releases/ { > return 404; > } > > location ~ ^/my-app/.*?[^/]\.php(?:/|$) { > include php-fpm-realpath.conf; > } Note that you might want things like "location ^~ /my-app/current/", if you want that location to handle (and reject) an external request for /my-app/current/app.php. f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 30 20:54:40 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 30 Aug 2019 21:54:40 +0100 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> Message-ID: <20190830205440.2ztlmmzfy3j57gmu@daoine.org> On Fri, Aug 30, 2019 at 01:58:23PM -0500, J. 
Lewis Muir wrote: Hi there, > location ~ ^/my-app/(.*?[^/]\.php(?:/.*|$)) { > alias /srv/www/my-app/current/$1; > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > return 200 "realpath_root: $realpath_root\nfastcgi_script_name: $fastcgi_script_name\nfastcgi_path_info: $fastcgi_path_info\n"; > } > > which yields the following: > > $ curl http://localhost/my-app/ > realpath_root: /srv/www/my-app/releases/1.0.2/index.php > fastcgi_script_name: /my-app/index.php > fastcgi_path_info: > > That doesn't seem right. Why not? http://nginx.org/r/$realpath_root says is it the current root or alias value, resolving symlinks. The request was /my-app/, the current request is /my-app/index.php, and you have alias'ed that to /srv/www/my-app/current/index.php http://nginx.org/r/$fastcgi_script_name (and what follows) describes the other variables. The request is /my-app/index.php and your fastcgi_split_path_info sets $fastcgi_script_name to "everything up to .php" and $fastcgi_path_info to "everything after .php", so long as .php is followed by / -- which it isn't, so both are unchanged from their defaults of "the uri" and "empty". (I'm somewhat guessing about the last part there; a test can probably demonstrate whether it is incorrect.) Cheers, f -- Francis Daly francis at daoine.org From lists at lazygranch.com Fri Aug 30 21:23:43 2019 From: lists at lazygranch.com (lists) Date: Fri, 30 Aug 2019 14:23:43 -0700 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <964ffd9d-b664-2425-9332-83e46244dde6@gmail.com> Message-ID: I've been following this thread not really out of need but rather that it is really interesting. That said, I don't think for security you want to "escape" the web root. The risk is that might aid a traversal attack. ? Original Message ? From: hobson42 at gmail.com Sent: August 30, 2019 12:01 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: Allow internal redirect to URI x, but deny external request for x? Hi Lewis, On 30/08/19 18:33, J. Lewis Muir wrote: > Hello! > > I'm using nginx 1.12.2 on RHEL 7, and I've got a FastCGI web app that > uses a deployment structure which uses an atomic symlink change for an > atomic app deploy, and I'm wishing to be able to do an internal redirect > in nginx to URL x, but deny an external request to the same URL x so > that I don't serve the same content at more than one URL.? Is there a > way to do that? > You could place the different versions away from the root so they cannot be obtained from the web. Then they can be served by setting up a symlink to the desired version. This can be changed using "ln -sfn version/dir serving/root" and then restarting nginx to pick up the new version. By not using redirects, this method should be more efficient. Regards Ian -- Ian Hobson _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jlmuir at imca-cat.org Fri Aug 30 21:59:36 2019 From: jlmuir at imca-cat.org (J. Lewis Muir) Date: Fri, 30 Aug 2019 16:59:36 -0500 Subject: Allow internal redirect to URI x, but deny external request for x? 
In-Reply-To: <20190830205440.2ztlmmzfy3j57gmu@daoine.org> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> <20190830205440.2ztlmmzfy3j57gmu@daoine.org> Message-ID: <20190830215936.mdqfwbz3ouna5ove@mink.imca.aps.anl.gov> On 08/30, Francis Daly wrote: > On Fri, Aug 30, 2019 at 01:58:23PM -0500, J. Lewis Muir wrote: > > Hi there, > > > location ~ ^/my-app/(.*?[^/]\.php(?:/.*|$)) { > > alias /srv/www/my-app/current/$1; > > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > > return 200 "realpath_root: $realpath_root\nfastcgi_script_name: $fastcgi_script_name\nfastcgi_path_info: $fastcgi_path_info\n"; > > } > > > > which yields the following: > > > > $ curl http://localhost/my-app/ > > realpath_root: /srv/www/my-app/releases/1.0.2/index.php > > fastcgi_script_name: /my-app/index.php > > fastcgi_path_info: > > > > That doesn't seem right. > > Why not? > > http://nginx.org/r/$realpath_root says is it the current root or alias > value, resolving symlinks. > > The request was /my-app/, the current request is /my-app/index.php, > and you have alias'ed that to /srv/www/my-app/current/index.php Yes, you're absolutely right. This is where I went wrong. I was initially wishing to use the root directive location ~ ^/my-app/(.*?[^/]\.php(?:/.*|$)) { root /srv/www/my-app/current; include php-fpm-realpath.conf; } but then the URI would be appended to that, so for a request of /my-app/index.php it would result in /srv/www/my-app/current/my-app/index.php which wasn't what I wanted; instead I wanted /srv/www/my-app/current/index.php I was wishing for a way to specify a new root but with a modified request URI. So, I tried the alias directive, and I assumed that $document_root and $realpath_root would refer to the aliased document root, but obviously that can't be since nginx has no way of knowing what the aliased document root should be when all it has is a location alias which is the full path to the resource. Sorry for the trouble. > http://nginx.org/r/$fastcgi_script_name (and what follows) describes > the other variables. > > The request is /my-app/index.php and your fastcgi_split_path_info sets > $fastcgi_script_name to "everything up to .php" and $fastcgi_path_info to > "everything after .php", so long as .php is followed by / -- which it > isn't, so both are unchanged from their defaults of "the uri" and "empty". > > (I'm somewhat guessing about the last part there; a test can probably > demonstrate whether it is incorrect.) Yep, I'm sure you're right here as well. Sorry for the trouble; I just totally missed how this worked. Thank you for your help! Lewis From nginx-forum at forum.nginx.org Fri Aug 30 22:28:48 2019 From: nginx-forum at forum.nginx.org (j94305) Date: Fri, 30 Aug 2019 18:28:48 -0400 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> Message-ID: I've been following this, and I would take a slightly different approach. 1. Serve all apps under /{app}/releases/{version}/{path} as you have them organized in the deployment structure in the file system. 2. Forget about symbolic links and other makeshift versioning/defaulting in the file system. 3. 
Use a keyval mapping to handle redirections (307) of /{app}/current/{stuff} to /{app}/releases/{currentVersion}/{stuff}, where the keyval mapping provides {app} => {currentVersion}. You can update an manage this during deployment. We usually include this in a CI/CD pipeline after deployment to dynamically switch to the last version (using a curl request to the NGINX API). If you can't use keyvals, use a static map and dynamically generate that "map" directive's mapping. Restart NGINX to reflect changes. Keyvals let you do this on the fly. The major advantage of this approach is with updates. You are most likely going to run into issues with browser or proxy caching if you provide different versions of files/apps under the same path. By having a canonical form that respects the version structure, you are avoiding this altogether. Yet, you have the flexibility to run hotfixes (replace existing files in an existing version without creating a new one), or experimental versions (which won't update the "current" pointer). I would try to keep the complexity low. Cheers, --j. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285463,285480#msg-285480 From nginx-forum at forum.nginx.org Fri Aug 30 22:58:20 2019 From: nginx-forum at forum.nginx.org (j94305) Date: Fri, 30 Aug 2019 18:58:20 -0400 Subject: ssl client auth trouble In-Reply-To: <2e8b76e7-88e0-e561-759e-6dff98f9eeb0@comcast.net> References: <2e8b76e7-88e0-e561-759e-6dff98f9eeb0@comcast.net> Message-ID: I'm a big fan of throw-away certificates, i.e., self-signed certificates you may dispose of any time. It seems, the generation of proper certificates is still a mystery to some, so let me briefly include a recipe how to create them: Create a cert-client.conf of the following form: ---------------------snip-----snip------------------------ # Client certificate request [ req ] default_bits = 4096 # RSA key size encrypt_key = yes # Protect private key default_md = sha256 # MD to use utf8 = yes # Input is UTF-8 string_mask = utf8only # Emit UTF-8 strings prompt = no # Prompt for DN distinguished_name = email_dn # DN template req_extensions = email_reqext # Desired extensions [ email_dn ] 0.domainComponent = "com" 1.domainComponent = "example" 2.domainComponent = "project" organizationName = "{{ORG}}" organizationalUnitName = "{{CUSTOMER}}" commonName = "{{NAME}}" emailAddress = "{{EMAIL}}" [ email_reqext ] keyUsage = critical,digitalSignature,keyEncipherment extendedKeyUsage = emailProtection,clientAuth subjectKeyIdentifier = hash subjectAltName = email:{{EMAIL}} ---------------------snip-----snip------------------------ Replace project.example.com and the stuff in {{...}} accordingly. I keep the domain components usually fixed for a project, while the rest is dynamically replaced with sed. Assume, you have an instance of cert-client.conf replaced properly, and it's in wherever the value of $conf points to. Assume $name is the name you want to give to this certificate. 
Then you create it with ---------------------snip-----snip------------------------ openssl req -new -newkey rsa:4096 -x509 -days 365 -nodes -config $conf -sha256 \ -passout "pass:" -out "$name.pem" \ -keyout "$name.key" openssl pkcs12 -export \ -inkey "$name.key" -in "$name.pem" \ -passin "pass:" \ -out "$name.pkcs12" \ -passout "pass:" \ -name "$name" openssl x509 -text -noout -in "$name.pem" \ -passin "pass:" > "$name.info" ---------------------snip-----snip------------------------ This creates a PEM file (public key), key file (private key), and pkcs12 file (certificate). For Chrome and Firefox, use the respective configuration tool of the browser to introduce this as a new personal certificate. For IE, it is sufficient to import the certificate into the Windows certificate store. For the server side of TLS, I usually use something of this sort: ---------------------snip-----snip------------------------ # SAN certificates are defined once on the http context level # ssl_certificate /etc/nginx/certs/fullchain.pem; ssl_certificate_key /etc/nginx/certs/private.pem; ssl_protocols TLSv1.2 TLSv1.3; # TLSv1.3 requires openssl 1.1.1 or later. This tries to enable 0-RTT. #ssl_early_data on; # Prefer the faster Diffie Hellman Parameters over slower RSA algorithms # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; ssl_prefer_server_ciphers on; ssl_ecdh_curve secp384r1; # openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096 ssl_dhparam /etc/nginx/certs/dhparam.pem; # SSL session parameters # ssl_session_cache shared:SSL:36m; ssl_session_timeout 3m; ssl_session_tickets on; ssl_session_ticket_key /etc/nginx/certs/ticket.key; ssl_session_ticket_key /etc/nginx/certs/ticket.key.old; ---------------------snip-----snip------------------------ with fullchain.pem being the public keys of the certificate chain, starting with the most specific, i.e., the server's SAN certificate. I use a little awk script to re-arrange public keys in the proper order - from the output of a "openssl pkcs7 -print_certs ..." and with dhparam.pem generated by openssl dhparam -out /etc/nginx/certs/dhparam.pem 4096 and ticket.key generated by openssl rand 80 > /etc/nginx/ticket.key Next thing is that TLS can only be switched on per server, not per location. I generally don't like SNI, so I prefer to have SAN certificates for servers. ---------------------snip-----snip------------------------ server { listen 443 ssl; server_name ...{all the names you need}...; ssl_trusted_certificate /etc/nginx/certs/clients.pem; ssl_verify_client optional_no_ca; ssl_verify_depth 1; ... } ---------------------snip-----snip------------------------ will do the trick, if clients.pem is simply a concatenation of all .pem files you have produced earlier (the public keys of clients who are supposed to have access). Note that I use the directive "ssl_trusted_certificate", so the list of permitted CAs is not communicated, but rather the client has to present a certificate. There is no requirement to use any specific CA. As each certificate is self-signed, the "CA" (issuer) of each certificate is the certificate itself. You may now check in the nginx.conf or in a Javascript handler what the client status is: $ssl_client_verify will be NONE if no certificate was presented, SUCCESS if the client specified one of the permitted certificates, and FAILED... if there was some failure. Once this is done, you can check $ssl_client_s_dn to find out who was authenticated. 
You may use a map directive to map this variable to the e-mail address from the certificate's DN, or look at any one of the other attributes. What you can query you'll find here: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables This should be a complete recipe on how to set up working self-signed client certificates. If this works, you may play with other options :-) Cheers, --j. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285456,285481#msg-285481 From francis at daoine.org Fri Aug 30 23:21:40 2019 From: francis at daoine.org (Francis Daly) Date: Sat, 31 Aug 2019 00:21:40 +0100 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190830215936.mdqfwbz3ouna5ove@mink.imca.aps.anl.gov> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> <20190830205440.2ztlmmzfy3j57gmu@daoine.org> <20190830215936.mdqfwbz3ouna5ove@mink.imca.aps.anl.gov> Message-ID: <20190830232140.yqspqr6rzyrenpdb@daoine.org> On Fri, Aug 30, 2019 at 04:59:36PM -0500, J. Lewis Muir wrote: Hi there, > I was wishing for a way to specify a new root but with a modified > request URI. So, I tried the alias directive, and I assumed that > $document_root and $realpath_root would refer to the aliased document > root, but obviously that can't be since nginx has no way of knowing what > the aliased document root should be when all it has is a location alias > which is the full path to the resource. Sorry for the trouble. It sounds like your desires are for requests: * starts with /my-app/current/ -> reject * starts with /my-app/releases/ -> reject * matches /my-app/something.php, or /myapp/something.php/anything -> fastcgi-process the file /srv/www/my-app/current/something.php * matches /my-app/something -> just send the file /srv/www/my-app/current/something Is that correct? If so -- do exactly that. For example (but mostly untested): == location ^~ /my-app/current/ { return 200 "nothing to see at /current/\n"; } location ^~ /my-app/releases/ { return 200 "nothing to see at /releases/\n"; } location ^~ /my-app/ { location ~ \.php($|/) { fastcgi_split_path_info ^/my-app(/.*php)(.*); root /srv/www/my-app/current/; include fastcgi.conf; fastcgi_pass unix:php.sock; } alias /srv/www/my-app/current/; } == Change the "return"s to 404s or whatever; change the "fastcgi_pass" destination; and don't worry about internal rewrites unless you need them. fastcgi.conf presumably sets SCRIPT_FILENAME and PATH_INFO and whatever else is interesting to sensible values; if not, add suitable fastcgi_param values explicitly here. You might want an "index index.php" somewhere to handle the request for /my-app/. But hopefully, any parts that don't Just Work as-is will leave enough clues to allow you to find or ask for the solution. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Aug 31 07:27:31 2019 From: francis at daoine.org (Francis Daly) Date: Sat, 31 Aug 2019 08:27:31 +0100 Subject: Allow internal redirect to URI x, but deny external request for x? 
In-Reply-To: <20190830232140.yqspqr6rzyrenpdb@daoine.org> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> <20190830205440.2ztlmmzfy3j57gmu@daoine.org> <20190830215936.mdqfwbz3ouna5ove@mink.imca.aps.anl.gov> <20190830232140.yqspqr6rzyrenpdb@daoine.org> Message-ID: <20190831072731.5dew6ngns6glgwlp@daoine.org> On Sat, Aug 31, 2019 at 12:21:40AM +0100, Francis Daly wrote: Hi there, A few further thoughts here... > It sounds like your desires are for requests: > > * starts with /my-app/current/ -> reject > * starts with /my-app/releases/ -> reject > * matches /my-app/something.php, or /myapp/something.php/anything -> Typo there -- should be "/my-app/". But note that the "/my-app/" in the request and the "/my-app/" on the filesystem do not need to the same. (And also: the filesystem /my-app/ for the php files and the filesystem /my-app/ for other files do not need to be the same; if you want to keep your "static" and "processed" content separate.) > fastcgi-process the file /srv/www/my-app/current/something.php > * matches /my-app/something -> just send the file > /srv/www/my-app/current/something > > Is that correct? If so -- do exactly that. > > For example (but mostly untested): > > == > location ^~ /my-app/current/ { return 200 "nothing to see at /current/\n"; } > location ^~ /my-app/releases/ { return 200 "nothing to see at /releases/\n"; } > location ^~ /my-app/ { > location ~ \.php($|/) { > fastcgi_split_path_info ^/my-app(/.*php)(.*); If there might be more than one "php" in the request, that will split on "the last one". Perhaps you want to split on "the first one followed by slash". In that case, adjust the regex: fastcgi_split_path_info ^/my-app(/.*?php)($|/.*); > root /srv/www/my-app/current/; You did also show a "if (!-f" config, which is "404 if the matching php file is not present". That can be: try_files $fastcgi_script_name =404; because we have root and the variable set correctly here. > include fastcgi.conf; Possibly the only bits of that file that you care about are: fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; so you could just use those lines directly. > fastcgi_pass unix:php.sock; > } > alias /srv/www/my-app/current/; > } > == Cheers, f -- Francis Daly francis at daoine.org From jlmuir at imca-cat.org Sat Aug 31 14:10:09 2019 From: jlmuir at imca-cat.org (J. Lewis Muir) Date: Sat, 31 Aug 2019 09:10:09 -0500 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190830232140.yqspqr6rzyrenpdb@daoine.org> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> <20190830205440.2ztlmmzfy3j57gmu@daoine.org> <20190830215936.mdqfwbz3ouna5ove@mink.imca.aps.anl.gov> <20190830232140.yqspqr6rzyrenpdb@daoine.org> Message-ID: <20190831141009.ub2vibpfy63kd4fy@mink.imca.aps.anl.gov> On 08/31, Francis Daly wrote: > On Fri, Aug 30, 2019 at 04:59:36PM -0500, J. Lewis Muir wrote: > > Hi there, > > > I was wishing for a way to specify a new root but with a modified > > request URI. 
So, I tried the alias directive, and I assumed that > > $document_root and $realpath_root would refer to the aliased document > > root, but obviously that can't be since nginx has no way of knowing what > > the aliased document root should be when all it has is a location alias > > which is the full path to the resource. Sorry for the trouble. > > It sounds like your desires are for requests: > > * starts with /my-app/current/ -> reject > * starts with /my-app/releases/ -> reject > * matches /my-app/something.php, or /myapp/something.php/anything -> > fastcgi-process the file /srv/www/my-app/current/something.php > * matches /my-app/something -> just send the file > /srv/www/my-app/current/something > > Is that correct? If so -- do exactly that. Yes! > For example (but mostly untested): > > == > location ^~ /my-app/current/ { return 200 "nothing to see at /current/\n"; } > location ^~ /my-app/releases/ { return 200 "nothing to see at /releases/\n"; } > location ^~ /my-app/ { > location ~ \.php($|/) { > fastcgi_split_path_info ^/my-app(/.*php)(.*); > root /srv/www/my-app/current/; > include fastcgi.conf; > fastcgi_pass unix:php.sock; > } > alias /srv/www/my-app/current/; > } > == Wow! I had given up on getting this approach to work. Honestly, I don't think I would have thought of your solution. Brilliant! I tried it, and it worked after making a few tweaks! Here's what worked: location ^~ /my-app/current/ { return 404; } location ^~ /my-app/releases/ { return 404; } location ^~ /my-app/ { index index.php; location ~ [^/]\.php(?:/|$) { root /srv/www/my-app/current; fastcgi_split_path_info ^/my-app(/.+?\.php)(/.*)?$; include php-fpm-realpath.conf; } alias /srv/www/my-app/current/; } I changed the root directive to come before the fastcgi_split_path_info, but that was just aesthetic; it worked fine the other way too. Previously, I had the fastcgi_split_path_info call in php-fpm-realpath.conf along with the following file-exists check after it: if (!-f $realpath_root$fastcgi_script_name) { return 404; } But I moved the fastcgi_split_path_info call out of php-fpm-realpath.conf so that I could use the custom regex like in your suggestion. So, your solution solved my problem! Thank you! On a related note, while considering those return-404s for /my-app/current/ and /my-app/releases/, I realized that those could, in theory, conflict with the URIs of a web app if the web app actually used those same URIs. For example, maybe the web app is a GitLab-type app and a URI of /my-app/releases/ might very well be part of the app's URI set for displaying software releases or something. This has nothing to do with your solution and everything to do with my initial design choice that I was trying to achieve, and it's something I hadn't considered before. For my current app, it doesn't use those URIs, so it's not a problem, but as a general scheme, it's not perfect. I think one solution would be to move the app root directory to a different name so that it can't conflict. For example, have it live at /srv/www/my-app-deployment or something like that. Then I could just return a 404 for any request on that, e.g.: location ^~ /my-app-deployment/ { return 404; } Thanks again! Lewis From hobson42 at gmail.com Sat Aug 31 14:41:31 2019 From: hobson42 at gmail.com (Ian Hobson) Date: Sat, 31 Aug 2019 15:41:31 +0100 Subject: Allow internal redirect to URI x, but deny external request for x? 
In-Reply-To: References: Message-ID: <5c509bf0-452d-6abd-78f5-ddb95dee9351@gmail.com> Hi Mark, On 30/08/19 22:23, lists wrote: > I've been following this thread not really out of need but rather that it is really interesting. That said, I don't think for security you want to "escape" the web root. The risk is that might aid a traversal attack. > > I am curious to know how this might work. Nginx itself is safe, so it would have to be a script. And while those may indeed be vulnerable, is the vulnerability changed by symlinking the root elsewhere? I don't see any difference myself, but perhaps you know something I don't. Regards Ian > > > > > > ? Original Message > > > > From: hobson42 at gmail.com > Sent: August 30, 2019 12:01 PM > To: nginx at nginx.org > Reply-to: nginx at nginx.org > Subject: Re: Allow internal redirect to URI x, but deny external request for x? > > > Hi Lewis, > > On 30/08/19 18:33, J. Lewis Muir wrote: >> Hello! >> >> I'm using nginx 1.12.2 on RHEL 7, and I've got a FastCGI web app that >> uses a deployment structure which uses an atomic symlink change for an >> atomic app deploy, and I'm wishing to be able to do an internal redirect >> in nginx to URL x, but deny an external request to the same URL x so >> that I don't serve the same content at more than one URL.? Is there a >> way to do that? >> > You could place the different versions away from the root so they cannot > be obtained from the web. Then they can be served by setting up a > symlink to the desired version. > > This can be changed using "ln -sfn version/dir serving/root" and then > restarting nginx to pick up the new version. > > By not using redirects, this method should be more efficient. > > Regards > > Ian > > -- > Ian Hobson > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ian Hobson Tel (+351) 910 418 473 From jlmuir at imca-cat.org Sat Aug 31 15:04:23 2019 From: jlmuir at imca-cat.org (J. Lewis Muir) Date: Sat, 31 Aug 2019 10:04:23 -0500 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190831072731.5dew6ngns6glgwlp@daoine.org> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> <20190830205440.2ztlmmzfy3j57gmu@daoine.org> <20190830215936.mdqfwbz3ouna5ove@mink.imca.aps.anl.gov> <20190830232140.yqspqr6rzyrenpdb@daoine.org> <20190831072731.5dew6ngns6glgwlp@daoine.org> Message-ID: <20190831150423.3uisrly2iibuccqk@mink.imca.aps.anl.gov> On 08/31, Francis Daly wrote: > On Sat, Aug 31, 2019 at 12:21:40AM +0100, Francis Daly wrote: > > Hi there, > > A few further thoughts here... > > > It sounds like your desires are for requests: > > > > * starts with /my-app/current/ -> reject > > * starts with /my-app/releases/ -> reject > > * matches /my-app/something.php, or /myapp/something.php/anything -> > > Typo there -- should be "/my-app/". Yeah, no problem; I knew what you meant. > But note that the "/my-app/" in the request and the "/my-app/" on the > filesystem do not need to the same. 
(And also: the filesystem /my-app/ > for the php files and the filesystem /my-app/ for other files do not > need to be the same; if you want to keep your "static" and "processed" > content separate.) Got it. > > fastcgi-process the file /srv/www/my-app/current/something.php > > * matches /my-app/something -> just send the file > > /srv/www/my-app/current/something > > > > Is that correct? If so -- do exactly that. > > > > For example (but mostly untested): > > > > == > > location ^~ /my-app/current/ { return 200 "nothing to see at /current/\n"; } > > location ^~ /my-app/releases/ { return 200 "nothing to see at /releases/\n"; } > > location ^~ /my-app/ { > > location ~ \.php($|/) { > > fastcgi_split_path_info ^/my-app(/.*php)(.*); > > If there might be more than one "php" in the request, that will split on > "the last one". Perhaps you want to split on "the first one followed by > slash". In that case, adjust the regex: > > fastcgi_split_path_info ^/my-app(/.*?php)($|/.*); Yes, I ended up doing something like that: fastcgi_split_path_info ^/synchweb(/.+?\.php)(/.*)?$; I think you'd want the "\." in yours before the "php", and then I think the only meaningful difference between ours would be that yours would match /my-app/.php while mine would not because I used ".+?" to match one or more reluctantly where you used ".*?" to match zero or more reluctantly. I was aware of https://www.nginx.com/resources/wiki/start/topics/examples/phpfcgi/ which has a location of location ~ [^/]\.php(/|$) { and a fastcgi_split_path_info of fastcgi_split_path_info ^(.+?\.php)(/.*)$; I've always wondered exactly what the idea was with starting that location regex with "[^/]", though. Why require the ".php" to be preceded by a character other than "/"? That means it would match a request of /foo.php but not /.php Is it because that's considered an invalid PHP file name? Is it because some PHP web apps uses a directory named ".php" as a private directory that should not be served? I don't know. > > root /srv/www/my-app/current/; > > You did also show a "if (!-f" config, which is "404 if the matching php > file is not present". That can be: > > try_files $fastcgi_script_name =404; > > because we have root and the variable set correctly here. Ah, I see! Thank you for pointing that out! I like that better. > > include fastcgi.conf; > > Possibly the only bits of that file that you care about are: > > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > so you could just use those lines directly. Yes, I could, but I was trying to make my php-fpm-realpath.conf reusable for other apps which is why I was keeping those lines in it. And, I think I mentioned this earlier, but I changed them to use $realpath_root instead of $document_root because of the symlinks and my desire to support an atomic app deploy with no downtime and no nginx reload. Thanks again! Lewis From jlmuir at imca-cat.org Sat Aug 31 15:31:00 2019 From: jlmuir at imca-cat.org (J. Lewis Muir) Date: Sat, 31 Aug 2019 10:31:00 -0500 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <964ffd9d-b664-2425-9332-83e46244dde6@gmail.com> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <964ffd9d-b664-2425-9332-83e46244dde6@gmail.com> Message-ID: <20190831153100.7hbxjwss4g3ti7rm@mink.imca.aps.anl.gov> On 08/30, Ian Hobson wrote: > Hi Lewis, > > On 30/08/19 18:33, J. Lewis Muir wrote: > > Hello! 
> > > > I'm using nginx 1.12.2 on RHEL 7, and I've got a FastCGI web app that > > uses a deployment structure which uses an atomic symlink change for an > > atomic app deploy, and I'm wishing to be able to do an internal redirect > > in nginx to URL x, but deny an external request to the same URL x so > > that I don't serve the same content at more than one URL. Is there a > > way to do that? > > > You could place the different versions away from the root so they cannot be > obtained from the web. Then they can be served by setting up a symlink to > the desired version. > > This can be changed using "ln -sfn version/dir serving/root" and then > restarting nginx to pick up the new version. > > By not using redirects, this method should be more efficient. > > Regards > > Ian Hi, Ian! Thank you for the suggestion! That's an interesting idea, and that would avoid needing to exclude those URIs from being served and I think avoid needing to change the document root and maybe some other things. I toyed around with some designs and came up with four; I think I like the third one best, but I'm not sure, maybe the fourth one is better. = Design #1 Web app root: /srv/www/app/my-app current -> releases/1.0.2 releases 1.0.0 1.0.1 1.0.2 nginx server document root: /srv/www/host/localhost my-app -> /srv/www/app/my-app/current Comments: This tries to share apps between virtual hosts. It provides no way for different virtual hosts to serve the same app but with the app configured differently. Also, all virtual hosts have to serve the same version of the app. = Design #2 Web app root: /srv/www/app/my-app 1.0.0 1.0.1 1.0.2 nginx server document root: /srv/www/host/localhost my-app -> /srv/www/app/my-app/1.0.2 Comments: Similar to design #1 except now the virtual hosts can server different versions of the app, but for a given version, the app is still configured in the exact same way for all virtual hosts. = Design #3 Web app root: /srv/www/localhost/app my-app 1.0.0 1.0.1 1.0.2 nginx server document root: /srv/www/localhost/root my-app -> ../app/my-app/1.0.2 Comments: I like this because now each virtual host has its own set of deployed apps, so the apps can be configured specifically for that particular virtual host. = Design #4 Web app root: /srv/www/localhost/app my-app current -> releases/1.0.2 releases 1.0.0 1.0.1 1.0.2 nginx server document root: /srv/www/localhost/root my-app -> ../app/my-app/current Comments: Similar to design #3 but moves where the "current" symlink lives. So, the deployed version of the app is controlled under the web app root rather under the nginx server document root. Maybe this is a little better because the nginx config reference to "my-app" can be sure to be correct and can be changed if desired, and be unrelated to which version of the app is deployed via the "current" symlink in the web app root. The downside is two symlinks (i.e., a symlink to a symlink) as opposed to one. 
Regards, Lewis From nginx-forum at forum.nginx.org Sat Aug 31 19:18:00 2019 From: nginx-forum at forum.nginx.org (glareboa) Date: Sat, 31 Aug 2019 15:18:00 -0400 Subject: Is there a limitation in nginx on the number of simultaneous via proxy_pass In-Reply-To: <5B206D9D-18C9-4790-BC2C-403497CA2EFE@opensource.com.vn> References: <5B206D9D-18C9-4790-BC2C-403497CA2EFE@opensource.com.vn> Message-ID: <157924b9121dc03bbb18f35e9faf3b2c.NginxMailingListEnglish@forum.nginx.org> Hung Nguyen: No, it?s browser limitation You're right, this is a browser limitation Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285362,285494#msg-285494 From lists at lazygranch.com Sat Aug 31 19:19:54 2019 From: lists at lazygranch.com (lists) Date: Sat, 31 Aug 2019 12:19:54 -0700 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <5c509bf0-452d-6abd-78f5-ddb95dee9351@gmail.com> Message-ID: <918ud8b383qv6760v7kin63m.1567279194826@lazygranch.com> Nginx does detect these traversal attacks. They come up as a 400 error. I got two yesterday. But out of paranoia, I wouldn't leave the web root. There is always some zero day. That traversal attack was from some new to me Hong Kong hosting company and earned a place on my firewall block. Blocking just keeps the log file size down. There will be others. https://null-byte.wonderhowto.com/how-to/perform-directory-traversal-extract-sensitive-information-0185558/ I have run dotdotpwn. Lots of false positives. It takes forever. On nearly a daily basis, some entity gets hacked because of a misconfiguration. So I make sure I have secured the low hanging fruit. I watch file ownership and permissions. That is free. I don't have a WAF but I use Nginx maps and pattern match common hacks, given them the 444. Simple stuff like if you request some WordPress feature you get flagged because I don't run WordPress. I found a list of bad user agents on GitHub that I flag on. ? Original Message ? From: hobson42 at gmail.com Sent: August 31, 2019 7:41 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: Allow internal redirect to URI x, but deny external request for x? Hi Mark, On 30/08/19 22:23, lists wrote: > I've been following this thread not really out of need but rather that it is really interesting. That said, I don't think for security you want to "escape" the web root. The risk is that might aid a traversal attack. > > I am curious to know how this might work. Nginx itself is safe, so it would have to be a script. And while those may indeed be vulnerable, is the vulnerability changed by symlinking the root elsewhere? I don't see any difference myself, but perhaps you know something I don't. Regards Ian > > > > > > ? Original Message > > > > From: hobson42 at gmail.com > Sent: August 30, 2019 12:01 PM > To: nginx at nginx.org > Reply-to: nginx at nginx.org > Subject: Re: Allow internal redirect to URI x, but deny external request for x? > > > Hi Lewis, > > On 30/08/19 18:33, J. Lewis Muir wrote: >> Hello! >> >> I'm using nginx 1.12.2 on RHEL 7, and I've got a FastCGI web app that >> uses a deployment structure which uses an atomic symlink change for an >> atomic app deploy, and I'm wishing to be able to do an internal redirect >> in nginx to URL x, but deny an external request to the same URL x so >> that I don't serve the same content at more than one URL.? Is there a >> way to do that? >> > You could place the different versions away from the root so they cannot > be obtained from the web. 
Then they can be served by setting up a
> symlink to the desired version.
>
> This can be changed using "ln -sfn version/dir serving/root" and then
> restarting nginx to pick up the new version.
>
> By not using redirects, this method should be more efficient.
>
> Regards
>
> Ian
>
> --
> Ian Hobson
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org Sat Aug 31 20:50:45 2019
From: francis at daoine.org (Francis Daly)
Date: Sat, 31 Aug 2019 21:50:45 +0100
Subject: Allow internal redirect to URI x, but deny external request for x?
In-Reply-To: <20190831141009.ub2vibpfy63kd4fy@mink.imca.aps.anl.gov>
References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> <20190830205440.2ztlmmzfy3j57gmu@daoine.org> <20190830215936.mdqfwbz3ouna5ove@mink.imca.aps.anl.gov> <20190830232140.yqspqr6rzyrenpdb@daoine.org> <20190831141009.ub2vibpfy63kd4fy@mink.imca.aps.anl.gov>
Message-ID: <20190831205045.o647ydblhwau3fan@daoine.org>

On Sat, Aug 31, 2019 at 09:10:09AM -0500, J. Lewis Muir wrote:
> On 08/31, Francis Daly wrote:

Hi there,

> > * starts with /my-app/current/ -> reject
> > * starts with /my-app/releases/ -> reject

Actually -- those two "rejects" should not be needed.

The app probably should not be installed in the general nginx document root directory. The "alias" mentioning the "app/current" directory means that that is the only part that nginx will try to serve files from. The "root" mentioning the "app/current" directory means that that is the only part that nginx will look in (try_files) and mention to the fastcgi server (fastcgi_param).

So the "app/releases" directory will not be web-accessible; and the "app/current" directory will only be accessible by the explicit url that you define.

So the full config should be of the form

    location ^~ /app-url/ {
        alias /active-app-dir/;
        location ~ \.php(/|$) {
            root /active-app-dir;
            fastcgi_split_path_info ^/app-url(/.*?php)($|/.*);
            try_files $fastcgi_script_name =404;
            include fastcgi.conf;
            fastcgi_pass unix:php.sock;
        }
    }

Adjust regexes based on what you want.

"app-url" can be "my-app". "/active-app-dir" can be "/opt/app/releases/3" or "/opt/my-app/current" or anything else.

> I changed the root directive to come before the fastcgi_split_path_info,
> but that was just aesthetic; it worked fine the other way
> too.

Yes. For many directives in nginx, the order in the config file does not matter.

("rewrite" module directives use the order. And regex locations use their order. I think that most others do not. Your fastcgi server might care about the order that the fastcgi_param directives had, but nginx does not.)

> Previously, I had the fastcgi_split_path_info call in
> php-fpm-realpath.conf along with the following file-exists check after

Using "realpath" should not affect nginx at all. nginx invites the fastcgi server to use pathname2 instead of pathname1; so the fastcgi server is the only thing that should care.
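To put that in config terms, a minimal sketch only -- the "releases/1.0.2" path in the comment is an assumed example, not something from your setup:

    # pathname1: the configured path, with the "current" symlink left in place
    #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    # pathname2: the same path with symlinks already resolved by nginx, e.g.
    # /srv/www/my-app/releases/1.0.2/... instead of /srv/www/my-app/current/...
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;

Either way the file on disk is the same at the moment of the request; the only difference is which pathname string the fastcgi server is handed.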
> For my current app, it doesn't use those URIs, so it's not a problem, > but as a general scheme, it's not perfect. I think one solution would > be to move the app root directory to a different name so that it can't > conflict. For example, have it live at > > /srv/www/my-app-deployment As above -- that shouldn't matter. If the app is not deployed in the web server document root, only the specific alias/root directory is accessible, and the entire url-space below that is available. (And you can have one url prefix /my-app/, and a separate url prefix /my-app-prev/, which uses the next most recent version. Restrict access to that location{} to your source IP address, and you can do regression testing between the two.) > or something like that. Then I could just return a 404 for any request > on that, e.g.: > > location ^~ /my-app-deployment/ { > return 404; > } If you don't want nginx to serve content from the my-app-deployment directory, it is probably easier for it to be somewhere other than /srv/www. It is hard to misconfigure nginx in that case. Cheers, f -- Francis Daly francis at daoine.org From jlmuir at imca-cat.org Sat Aug 31 21:55:26 2019 From: jlmuir at imca-cat.org (J. Lewis Muir) Date: Sat, 31 Aug 2019 16:55:26 -0500 Subject: Allow internal redirect to URI x, but deny external request for x? In-Reply-To: <20190831205045.o647ydblhwau3fan@daoine.org> References: <20190830173317.klutzplfsurrpmrw@mink.imca.aps.anl.gov> <20190830182031.jrruhqrcyi5izmor@mink.imca.aps.anl.gov> <20190830183743.uzww2kwo2j4sutjb@mink.imca.aps.anl.gov> <20190830185823.6ohwsgpi6usmzsmz@mink.imca.aps.anl.gov> <20190830205440.2ztlmmzfy3j57gmu@daoine.org> <20190830215936.mdqfwbz3ouna5ove@mink.imca.aps.anl.gov> <20190830232140.yqspqr6rzyrenpdb@daoine.org> <20190831141009.ub2vibpfy63kd4fy@mink.imca.aps.anl.gov> <20190831205045.o647ydblhwau3fan@daoine.org> Message-ID: <20190831215526.mmxekmk7scfoertl@mink.imca.aps.anl.gov> On 08/31, Francis Daly wrote: > On Sat, Aug 31, 2019 at 09:10:09AM -0500, J. Lewis Muir wrote: > > On 08/31, Francis Daly wrote: > > Hi there, > > > > * starts with /my-app/current/ -> reject > > > * starts with /my-app/releases/ -> reject > > Actually -- those two "rejects" should not be needed. > > The app probably should not be installed in the general nginx document > root directory. The "alias" mentioning the "app/current" directory means > that that is the only part that nginx will try to serve files from. The > "root" mentioning the "app/current" directory means that that is the only > part that nginx will look in (try_files) and mention to the fastcgi server > (fastcgi_param). > > So the "app/releases" directory will not be web-accessible; and the > "app/current" directory will only be accessible by the explicit url that > you define. > > So the full config should be of the form > > location ^~ /app-url/ { > alias /active-app-dir/; > location ~ \.php(/|$) { > root /active-app-dir; > fastcgi_split_path_info ^/app-url(/.*?php)($|/.*); > try_files $fastcgi_script_name =404; > include fastcgi.conf; > fastcgi_pass unix:php.sock; > } > } I can't believe this! Another great insight! Thank you! I haven't tried it, but yes, that looks way better, and your observation about not needing the two rejects puts to rest my incorrect belief that there was a URI namespace problem with the app directory structure (i.e., with the "/current/" and "/releases/" URI components). And yes, I will move the app root out of the nginx document root to avoid the unnecessary risk of an nginx misconfiguration. 
> Adjust regexes based on what you want. > > "app-url" can be "my-app". "/active-app-dir" can be "/opt/app/releases/3" > or "/opt/my-app/current" or anything else. Got it. > > Previously, I had the fastcgi_split_path_info call in > > php-fpm-realpath.conf along with the following file-exists check after > > Using "realpath" should not affect nginx at all. nginx invites the > fastcgi server to use pathname2 instead of pathname1; so the fastcgi > server is the only thing that should care. Hmm, I might not be understanding this. The rationale of using $realpath_root instead of $document_root was to make it so that a new version of the web app could be deployed atomically at any time by changing the "current" symlink, meaning that, for example, if the "current" symlink were changed right in the middle of a request being handled in PHP, it wouldn't be possible for one part of the request to execute in PHP in the old version of the app and another part to execute in the new version. By using $realpath_root, the idea was to ensure that for any given request being handled in PHP, it would execute in its entirety in the same version of the web app. That's why I was doing in php-fpm-realpath.conf if (!-f $realpath_root$fastcgi_script_name) { return 404; } and fastcgi_param DOCUMENT_ROOT $realpath_root; fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; So, does that make sense, or am I still not understanding this? I don't know what you mean by "nginx invites the fastcgi server to use pathname2 instead of pathname1." What are pathname1 and pathname2? > > For my current app, it doesn't use those URIs, so it's not a problem, > > but as a general scheme, it's not perfect. I think one solution would > > be to move the app root directory to a different name so that it can't > > conflict. For example, have it live at > > > > /srv/www/my-app-deployment > > As above -- that shouldn't matter. If the app is not deployed in the web > server document root, only the specific alias/root directory is accessible, > and the entire url-space below that is available. Understood. > (And you can have one url prefix /my-app/, and a separate url prefix > /my-app-prev/, which uses the next most recent version. Restrict access > to that location{} to your source IP address, and you can do regression > testing between the two.) > > > or something like that. Then I could just return a 404 for any request > > on that, e.g.: > > > > location ^~ /my-app-deployment/ { > > return 404; > > } > > If you don't want nginx to serve content from the my-app-deployment > directory, it is probably easier for it to be somewhere other than > /srv/www. > > It is hard to misconfigure nginx in that case. Agreed; I will move it out of there. Thank you! Lewis From nginx at qzxj.net Sat Aug 31 23:46:12 2019 From: nginx at qzxj.net (Blake Williams) Date: Sun, 1 Sep 2019 09:46:12 +1000 Subject: Patch: slash_redirect_temporary directive Message-ID: <73B604EF-7C7A-4D30-9E82-1168001CA07F@qzxj.net> Hello! We ran into an issue where with the permanent redirects in ngx_http_static_module.c that occur when you omit a slash when requesting a folder, for example from "/foo" to the folder "/foo/". We changed some things around in our site so that "/foo" was actually a file, not a folder, but unfortunately, browsers aggressively cache 301 redirects so our clients were trying to hit the new URL, the browser used its permanent cache, and they'd be redirected to "/foo/" again, which no longer existed as it had been changed to a file. 
This patch adds an extra configuration directive that allows you to configure that redirect to issue a 302 instead:

# HG changeset patch
# User Blake Williams
# Date 1567294381 -36000
#      Sun Sep 01 09:33:01 2019 +1000
# Node ID 85c36c3f5c349a83b1b397a8aad2d11bf6a0875a
# Parent  9f1f9d6e056a4f85907957ef263f78a426ae4f9c
Add slash_redirect_temporary directive to core

diff -r 9f1f9d6e056a -r 85c36c3f5c34 contrib/vim/syntax/nginx.vim
--- a/contrib/vim/syntax/nginx.vim	Mon Aug 19 15:16:06 2019 +0300
+++ b/contrib/vim/syntax/nginx.vim	Sun Sep 01 09:33:01 2019 +1000
@@ -571,6 +571,7 @@
 syn keyword ngxDirective contained session_log_format
 syn keyword ngxDirective contained session_log_zone
 syn keyword ngxDirective contained set_real_ip_from
+syn keyword ngxDirective contained slash_redirect_temporary
 syn keyword ngxDirective contained slice
 syn keyword ngxDirective contained smtp_auth
 syn keyword ngxDirective contained smtp_capabilities
diff -r 9f1f9d6e056a -r 85c36c3f5c34 src/http/modules/ngx_http_static_module.c
--- a/src/http/modules/ngx_http_static_module.c	Mon Aug 19 15:16:06 2019 +0300
+++ b/src/http/modules/ngx_http_static_module.c	Sun Sep 01 09:33:01 2019 +1000
@@ -188,7 +188,11 @@
         r->headers_out.location->value.len = len;
         r->headers_out.location->value.data = location;
 
-        return NGX_HTTP_MOVED_PERMANENTLY;
+        if (!clcf->slash_redirect_temporary) {
+            return NGX_HTTP_MOVED_PERMANENTLY;
+        } else {
+            return NGX_HTTP_MOVED_TEMPORARILY;
+        }
     }
 
 #if !(NGX_WIN32) /* the not regular files are probably Unix specific */
diff -r 9f1f9d6e056a -r 85c36c3f5c34 src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c	Mon Aug 19 15:16:06 2019 +0300
+++ b/src/http/ngx_http_core_module.c	Sun Sep 01 09:33:01 2019 +1000
@@ -520,6 +520,13 @@
       offsetof(ngx_http_core_loc_conf_t, satisfy),
       &ngx_http_core_satisfy },
 
+    { ngx_string("slash_redirect_temporary"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
+      ngx_conf_set_flag_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_core_loc_conf_t, slash_redirect_temporary),
+      NULL },
+
     { ngx_string("internal"),
       NGX_HTTP_LOC_CONF|NGX_CONF_NOARGS,
       ngx_http_core_internal,
@@ -3443,6 +3450,8 @@
     clcf->open_file_cache_errors = NGX_CONF_UNSET;
     clcf->open_file_cache_events = NGX_CONF_UNSET;
 
+    clcf->slash_redirect_temporary = NGX_CONF_UNSET;
+
 #if (NGX_HTTP_GZIP)
     clcf->gzip_vary = NGX_CONF_UNSET;
     clcf->gzip_http_version = NGX_CONF_UNSET_UINT;
@@ -3727,6 +3736,9 @@
 
     ngx_conf_merge_sec_value(conf->open_file_cache_events,
                               prev->open_file_cache_events, 0);
+
+    ngx_conf_merge_value(conf->slash_redirect_temporary,
+                          prev->slash_redirect_temporary, 0);
 
 #if (NGX_HTTP_GZIP)
     ngx_conf_merge_value(conf->gzip_vary, prev->gzip_vary, 0);
diff -r 9f1f9d6e056a -r 85c36c3f5c34 src/http/ngx_http_core_module.h
--- a/src/http/ngx_http_core_module.h	Mon Aug 19 15:16:06 2019 +0300
+++ b/src/http/ngx_http_core_module.h	Sun Sep 01 09:33:01 2019 +1000
@@ -433,6 +433,8 @@
     ngx_uint_t    types_hash_max_size;
     ngx_uint_t    types_hash_bucket_size;
 
+    ngx_flag_t    slash_redirect_temporary;
+
     ngx_queue_t  *locations;
 
 #if 0

From nginx at qzxj.net Sat Aug 31 23:49:08 2019
From: nginx at qzxj.net (Blake Williams)
Date: Sun, 1 Sep 2019 09:49:08 +1000
Subject: Patch: tests for slash_redirect_temporary
Message-ID: <045555FC-427A-4E29-97DF-9C60BDCC0723@qzxj.net>

# HG changeset patch
# User Blake Williams
# Date 1567294312 -36000
#      Sun Sep 01 09:31:52 2019 +1000
# Node ID 9cdf1baf51d3b2ae8fb0d80d10148ba9605d1799
# Parent  44ce08f5259f034c102b7f99b37c423de848c75a
Tests: added slash_redirect_temporary
diff -r 44ce08f5259f -r 9cdf1baf51d3 http_slash_redirect_temporary.t
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/http_slash_redirect_temporary.t	Sun Sep 01 09:31:52 2019 +1000
@@ -0,0 +1,74 @@
+#!/usr/bin/perl
+
+# (C) Sergey Kandaurov
+# (C) Nginx, Inc.
+
+# Tests for slash_redirect_temporary directive.
+
+###############################################################################
+
+use warnings;
+use strict;
+
+use Test::More;
+
+BEGIN { use FindBin; chdir($FindBin::Bin); }
+
+use lib 'lib';
+use Test::Nginx;
+
+###############################################################################
+
+select STDERR; $| = 1;
+select STDOUT; $| = 1;
+
+my $t = Test::Nginx->new()->has(qw/http/)
+	->write_file_expand('nginx.conf', <<'EOF');
+
+%%TEST_GLOBALS%%
+
+daemon off;
+
+events {
+}
+
+http {
+    %%TEST_GLOBALS_HTTP%%
+
+    server {
+        listen       127.0.0.1:8080;
+        server_name  on;
+
+        location /on/ {
+            slash_redirect_temporary on;
+            root %%TESTDIR%%/data;
+        }
+        location /off/ {
+            slash_redirect_temporary off;
+            root %%TESTDIR%%/data;
+        }
+    }
+}
+
+EOF
+
+mkdir($t->testdir() . '/data');
+mkdir($t->testdir() . '/data/on');
+mkdir($t->testdir() . '/data/on/dir');
+$t->write_file('/data/on/dir/index.html', '');
+mkdir($t->testdir() . '/data/off');
+mkdir($t->testdir() . '/data/off/dir');
+$t->write_file('/data/off/dir/index.html', '');
+
+$t->run()->plan(4);
+
+###############################################################################
+
+my $p = port(8080);
+
+like(http_get('/off/dir'), qr!HTTP/1.1 301.*Location: .*/off/dir/!sm, 'slash off permanent');
+like(http_get('/off/dir/'), qr!HTTP/1.1 200!m, 'slash off direct');
+like(http_get('/on/dir'), qr!HTTP/1.1 302.*Location: .*/on/dir/!sm, 'slash on temporary');
+like(http_get('/on/dir/'), qr!HTTP/1.1 200!m, 'slash on direct');
+
+###############################################################################
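For reference, with the patch applied, usage mirrors the test config above; a sketch only, with the location prefix and root being placeholders:

    location /downloads/ {
        # directory redirects ("/downloads/dir" -> "/downloads/dir/") are
        # issued as 302 instead of the default 301, so browsers should not
        # cache them permanently
        slash_redirect_temporary on;
        root /srv/www;
    }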