From nginx-forum at forum.nginx.org Fri Mar 1 11:16:19 2019 From: nginx-forum at forum.nginx.org (kagemandandersen) Date: Fri, 01 Mar 2019 06:16:19 -0500 Subject: Missing period character in URL Message-ID: <50f1e4552655878146d2a2ed93c964bc.NginxMailingListEnglish@forum.nginx.org>

I have a service that sends requests through our Nginx gateway proxy. The URL contains two period characters (e.g. https://localhost:1234/servicename/some.name./serviceendpoint). When the URL is passed through the Nginx gateway, the last period character is missing from the URL, resulting in a failure.

Any idea what could cause this behaviour? A URL normaliser perhaps?

I am using Nginx version 1.15.7 for Windows.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283238,283238#msg-283238

From ian.stephens.ufo at gmail.com Fri Mar 1 12:24:38 2019 From: ian.stephens.ufo at gmail.com (Ian Stephens) Date: Fri, 1 Mar 2019 12:24:38 +0000 Subject: Ignoring proxy_cache_background_update on; Message-ID:

We are running NGINX in front of our backend server. We are attempting to enable the proxy_cache_background_update feature to allow NGINX to update the cache asynchronously and serve STALE content while it does so. However, we are noticing that it still delivers STALE content slowly, as if it's not serving from the cache. Serving after an item expires is very slow, and the response is clearly not coming from the cache - you can tell it's going to the backend server, getting an update and delivering it to the client.
Here is our configuration from NGINX:

proxy_cache_revalidate on;
proxy_ignore_headers Expires;
proxy_cache_background_update on;

Our backend server is delivering the following headers:

HTTP/1.1 200 OK
Date: Thu, 28 Feb 2019 21:07:09 GMT
Server: Apache
Cache-Control: max-age=1800, stale-while-revalidate=604800
Content-Type: text/html; charset=UTF-8

When attempting an expired page fetch, we do notice the following header: X-Cache: STALE. However, this response is delivered very slowly, as if NGINX has contacted the backend server and fetched the update in realtime.

NGINX version:
$ nginx -v
nginx version: nginx/1.15.9

Any suggestions, tips and config changes are greatly appreciated.

*UPDATE* It seems that the nginx server is honoring stale content delivery (as we have tested), but it also updates the cache from the backend on the same request/thread, thus causing the slow response time to the client. I.e. it seems to be totally ignoring the proxy_cache_background_update on; directive and not updating in the background in a separate subrequest (async).

From nginx-forum at forum.nginx.org Fri Mar 1 12:44:03 2019 From: nginx-forum at forum.nginx.org (kagemandandersen) Date: Fri, 01 Mar 2019 07:44:03 -0500 Subject: Missing period character in URL In-Reply-To: <50f1e4552655878146d2a2ed93c964bc.NginxMailingListEnglish@forum.nginx.org> References: <50f1e4552655878146d2a2ed93c964bc.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Seems to be because of the './'. If I add a character between the period and the slash character, then the URL does not change.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283238,283240#msg-283240

From mdounin at mdounin.ru Fri Mar 1 13:27:25 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Mar 2019 16:27:25 +0300 Subject: Missing period character in URL In-Reply-To: <50f1e4552655878146d2a2ed93c964bc.NginxMailingListEnglish@forum.nginx.org> References: <50f1e4552655878146d2a2ed93c964bc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190301132724.GN1877@mdounin.ru>

Hello!

On Fri, Mar 01, 2019 at 06:16:19AM -0500, kagemandandersen wrote:

> I have a service that sends request through our Nginx gateway proxy. The URL
> contains two period characters (e.g.
> https://localhost:1234/servicename/some.name./serviceendpoint). When the URL
> is passed through the Nginx gateway, the URL is missing the last period
> character in the URL, resulting in a failure.
>
> Any idea what could cause this behaviour? A URL normaliser perhaps?
>
> I am using Nginx version 1.15.7 for Windows.

On Windows, a trailing dot is not significant and is ignored by filesystem accesses, so it is normalized away. See here for more details:

http://hg.nginx.org/nginx/rev/5d86ab8f2340
http://mailman.nginx.org/pipermail/nginx-announce/2012/000086.html

If you want nginx to preserve the URI of a request exactly as it was provided by the client, make sure to use proxy_pass without a URI component, that is:

location / {
    proxy_pass http://backend;
}

Note no trailing "/" after "backend".

-- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Fri Mar 1 15:46:23 2019 From: nginx-forum at forum.nginx.org (hungnv) Date: Fri, 01 Mar 2019 10:46:23 -0500 Subject: aio thread - socket leaks? Message-ID: <60d43b3a7de5566236debd7a6bb910cd.NginxMailingListEnglish@forum.nginx.org>

Hello,

We ran into a strange issue today. We started setting aio threads on our storage (streaming) server.
There are 2 servers with different settings:

Server 1:
aio on;

Server 2:
thread_pool streaming threads=128 max_queue=65536;
aio threads=streaming;

Just after restarting the 2 servers, server 1 runs well without any problem. On server 2, the www user that the server runs as cannot do anything else (for example, execute bash as www): "resource temporarily unavailable". I tried to check what is going on.

On server 1:
lsof | grep -i www | wc -l
3188

Server 2:
lsof | grep -i www | wc -l
346716

There are many sockets open on server 2:

lsof | grep -i www
nginx 188346 190631 www 43u unix 0xffff91a357d1fc00 0t0 14505437 socket
nginx 188346 190631 www 45u unix 0xffff91a357d1d800 0t0 14505439 socket
nginx 188346 190631 www 47u unix 0xffff91a357d1dc00 0t0 14505441 socket
nginx 188346 190631 www 49u unix 0xffff91a357d1c800 0t0 14505443 socket
nginx 188346 190631 www 51u unix 0xffff91ab9ada5800 0t0 14505445 socket
nginx 188346 190631 www 53u unix 0xffff91a364637800 0t0 14505447 socket
nginx 188346 190631 www 55u unix 0xffff91a364633000 0t0 14505449 socket
nginx 188346 190631 www 57u unix 0xffff91a364636000 0t0 14505451 socket
nginx 188346 190631 www 59u unix 0xffff91a364631c00 0t0 14505453 socket
nginx 188346 190631 www 61u unix 0xffff91a364632000 0t0 14505455 socket
nginx 188346 190631 www 63u unix 0xffff91a364636800 0t0 14505457 socket
nginx 188346 190631 www 65u unix 0xffff91a364634000 0t0 14505459 socket
nginx 188346 190631 www 67u unix 0xffff91a364637400 0t0 14505461 socket
nginx 188346 190631 www 69u unix 0xffff91a364632400 0t0 14505463 socket
nginx 188346 190631 www 72u unix 0xffff91b278e95c00 0t0 14505466 socket

Can you tell me what is going on here?

- hungnv

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283242,283242#msg-283242

From vbart at nginx.com Fri Mar 1 17:04:36 2019 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 01 Mar 2019 20:04:36 +0300 Subject: Unit 1.8.0 release Message-ID: <9293607.OmKBFCdl3F@vbart-workstation>

Hi,

I'm glad to announce a new release of NGINX Unit.
This release contains two big features that we have been working on diligently during the last months. Some of you wonder why listener sockets are separated from applications in Unit configuration API. That was done intentionally to introduce advanced routing between sockets and applications in the future, and this future is finally happening. Now you will be able to specify quite handy rules that will direct your requests to a particular application depending on various parameters. Please take a glance at the routing documentation: - https://unit.nginx.org/configuration/#routes Currently, it only supports internal routing by Host, URI, and method request parameters. In the following releases, available options are going to be expanded to allow matching arbitrary headers, arguments, cookies, source and destination addresses. We will also add regular expression patterns. In future releases, these routing abilities will be handy for issuing redirects and changing configuration on a per route basis. As usual with Unit, all routing changes are fully dynamic and gracefully done through its control API. The second feature is even bigger. We've merged the code that Maxim Romanov developed in a separate branch last year to support running applications leveraging certain technology described in the Java(tm) Servlet 3.1 (JSR-340) specification. This module is a BETA release as the module is untested and presumed incompatible with the JSR-340 specification. Now everybody can easily install it from our packages, try it with their Java applications, and leave us feedback. 
If you're a Jira user, please use this HowTo: - https://unit.nginx.org/howto/jira/ More documentation is available in Installation and Configuration sections: - https://unit.nginx.org/installation/ - https://unit.nginx.org/configuration/#java-application We intend to use our open-development process to refine and improve the software and to eventually test and certify the software's compatibility with the JSR-340 specification. Unless and until the software has been tested and certified, you should not deploy the software in support of deploying or providing Java Servlet 3.1 applications. You should instead deploy production applications on pre-built binaries that have been tested and certified to meet the JSR-340 compatibility requirements such as certified binaries published for the JSR-340 reference implementation available at https://javaee.github.io/glassfish/. * Java is a registered trademark of Oracle and/or its affiliates. Changes with Unit 1.8.0 01 Mar 2019 *) Change: now three numbers are always used for versioning: major, minor, and patch versions. *) Change: now QUERY_STRING is always defined even if the request does not include the query component. *) Feature: basic internal request routing by Host, URI, and method. *) Feature: experimental support for Java Servlet Containers. *) Bugfix: segmentation fault might have occurred in the router process. *) Bugfix: various potential memory leaks. *) Bugfix: TLS connections might have stalled. *) Bugfix: some Perl applications might have failed to send the response body. *) Bugfix: some compilers with specific flags might have produced non-functioning builds; the bug had appeared in 1.5. *) Bugfix: Node.js package had wrong version number when installed from sources. Our versioning scheme is actually always supposed to have the third version number, but the ".0" patch version was hidden. In order to avoid any possible confusion, it was decided to always show ".0" in version numbers. 
For those who are interested in running Unit on CentOS, Fedora, and RHEL with latest versions of PHP, the corresponding packages are now available in Remi's RPM repository: - https://unit.nginx.org/installation/#remi-s-rpm-repo Many kudos to Remi Collet for collaboration. Note also that our technical writer Artem Konev has recently added more HowTos to the site about configuring various applications, including WordPress, Flask, and Django-based ones: - https://unit.nginx.org/howto/ He will continue discovering and writing instructions for other applications. If you're interested in some specific use cases and applications, please don't hesitate to leave a feature request on the documentation GitHub: - https://github.com/nginx/unit-docs/issues In the following releases, we will continue improving routing capabilities and support for Java applications. Among other big features we're working on are WebSockets support and serving static media assets. Stay tuned, give feedback, and help us to create the best software ever. wbr, Valentin V. Bartenev From francis at daoine.org Sun Mar 3 10:39:30 2019 From: francis at daoine.org (Francis Daly) Date: Sun, 3 Mar 2019 10:39:30 +0000 Subject: Advanced Rewrite request url to match the query string and normalization In-Reply-To: <9fd26f0797ae86539ee193b9a016fc70.NginxMailingListEnglish@forum.nginx.org> References: <9fd26f0797ae86539ee193b9a016fc70.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190303103930.46x3pa57ugadztmd@daoine.org> On Tue, Feb 26, 2019 at 11:07:34PM -0500, shadowgun1102 wrote: Hi there, > I have a simple nginx forward proxy, configured as: nginx is not a forward proxy, and therefore there will be "rough edges" when you try to use it as such. That's fine, so long as you know. 
> The client behind its isp firewall sends the request (per nginx log): GET
> http://www.clientisp.com/path/rewrite.do?url=http%3A%2F%2Fwww.example.com
> HTTP/1.1
>
> How do I transform the requested url to http://www.example.com before it is
> sent to the upstream?

I think that you cannot do this with just stock-nginx. You will probably need one of the embedded-language modules, or some other third-party modules, such as are in the "openresty" distribution.

> I looked up many posts online, but I am still confused at:
>
> 1. The online examples usually teach how you match the uri part, but my
> goal is to obtain the queried string only, i.e., everything after the
> equation mark "=", http%3A%2F%2Fwww.example.com.

You would use "location = /path/rewrite.do", because that is what "location" matches on. Then you would probably make use of the variable $arg_url.

> 2. I have no idea how to decode the percentage coded symbols into
> normalized one.

That's where you would need something non-stock, perhaps set-misc-nginx-module.

Note that you will also possibly want to mangle the response from upstream, so that any string that the browser might interpret as a url will expand to the url that you want it to be. By that, I mean that if the response from http://www.example.com includes an "img src" of "/image.png" that gets back to the browser, what request will the browser make for that image? And will that request work, in your setup?

Good luck with it. It looks difficult to me.
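A very rough, untested sketch of the pieces named above - set_unescape_uri comes from the third-party set-misc-nginx-module (bundled with openresty), not stock nginx, and the resolver address is a placeholder:

```nginx
# Untested sketch; requires the third-party set-misc-nginx-module.
location = /path/rewrite.do {
    # $arg_url holds "http%3A%2F%2Fwww.example.com"; decode it
    set_unescape_uri $target $arg_url;

    # a variable in proxy_pass means nginx resolves the host at run time
    resolver 127.0.0.1;
    proxy_pass $target;
}
```

Rewriting the response body so the browser's follow-up requests come back through the proxy remains a separate problem, as noted above.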
f -- Francis Daly francis at daoine.org

From francis at daoine.org Sun Mar 3 10:59:34 2019 From: francis at daoine.org (Francis Daly) Date: Sun, 3 Mar 2019 10:59:34 +0000 Subject: Nginx Proxy Buffer In-Reply-To: References: Message-ID: <20190303105934.uwkhz6cbb6lavdwq@daoine.org>

On Sat, Feb 23, 2019 at 04:15:53AM -0500, HasanAtizaz wrote:

Hi there,

> I am having some difficulty in understanding the following parameters for
> nginx,
> Scenario (nginx is used for serving static content )

If "static content" is "files from a file system", then proxy_* variables are not used during the serving of that request. Those variables only take effect when proxy_pass is used.

> proxy_buffer_size 4k
> proxy_buffers 16 32k
> proxy_busy_buffers_size 64k
> proxy_buffering off
>
> I'd like to know: if I increase proxy_buffer_size to, let's say, 128 and
> proxy_buffers to 4 256, does it affect nginx performance in any way?

The answer to every question like this is always "if you do not measure a difference, there is not an important difference in your use case".

> if proxy_buffering is set to off, does the proxy_buffers value have any
> effect on nginx?

My reading of http://nginx.org/r/proxy_buffering suggests that proxy_buffers will not be used for that request.

All the best, f -- Francis Daly francis at daoine.org

From nginx-forum at forum.nginx.org Sun Mar 3 15:54:02 2019 From: nginx-forum at forum.nginx.org (amirs) Date: Sun, 03 Mar 2019 10:54:02 -0500 Subject: Whitelist allow for specific requests Nginx Message-ID: <9b3408b692dcb7265a034f2fc9a9668c.NginxMailingListEnglish@forum.nginx.org>

I have a server and there is an Nginx in front. There are many requests, some of which contain a special word, for example: /posts/men/clouths. I also have a whitelist IP file. I want to write a rule in Nginx so that if a request contains "men", the request is only allowed if the requester's IP is in the whitelist file. If the request does not contain "men", allow it anyway.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283256,283256#msg-283256

From francis at daoine.org Sun Mar 3 22:54:51 2019 From: francis at daoine.org (Francis Daly) Date: Sun, 3 Mar 2019 22:54:51 +0000 Subject: how to setup as caching reverse proxy for/rewriting global URLs In-Reply-To: <847b4f68-0ea5-445a-e269-ba7cce357c55@desy.de> References: <847b4f68-0ea5-445a-e269-ba7cce357c55@desy.de> Message-ID: <20190303225451.wbmxa6kn3h6gi4kj@daoine.org>

On Wed, Feb 20, 2019 at 12:37:23PM +0100, Thomas Hartmann wrote:

Hi there,

> I would like to setup Nginx as a caching reverse proxy but with explicit
> requests in the URL and rewriting all subsequent requests

"caching reverse proxy" is what nginx is built for. "rewriting the body content" is not.

> Don't know, if it really counts as reverse proxy and if it is
> understandable, so an example ;)

The first part of what you want is probably reasonable; the second part is probably going to involve you writing your own code.

> For an original URL like
> https://org.url.baz/user/repo/foo
> I would like to be able to cache all requests through nginx running at
> my.domain.foo
> but with an explicit "cache request" like
>
> wget http://my.domain.foo/cache/http://org.url.baz/user/repo/foo

Your original "https" has become "http" in the middle here. That may or may not be intentional.

> and rewrite all subsequent requests and cache them.
>
> So, I am looking for something similar to Internet Archive's memento proxy
>
> https://web.archive.org/save/https://mailman.nginx.org/pipermail/nginx/2019-February/thread.html
>
> Since my idea is no 'true' reverse proxy, the example [1] needs probably
> a bit of extension and I am not sure how to do the rewrites.

You want to take the $request_uri you were given, remove the leading "/cache/", make sure the rest looks like a url, and proxy_pass to that.
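That first step might be sketched, untested and with placeholder names, roughly like this (note that nginx normally merges the "//" inside an embedded "http://...", so merge_slashes needs attention):

```nginx
# Untested sketch: serve /cache/<absolute-url> through a local cache.
# "mycache" and the resolver address are placeholders.
merge_slashes off;  # keep "//" intact inside the URI (http/server level)

location ~ ^/cache/(?<target>.+)$ {
    resolver 127.0.0.1;
    proxy_cache mycache;
    proxy_cache_valid 200 10m;
    proxy_pass $target;  # e.g. http://org.url.baz/user/repo/foo
}
```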
And then take any string in the response body that the browser might interpret as a url, and make sure that you rewrite it as necessary so that it expands to the url you need it to be.

Rather than start from scratch, you might have more luck seeing if you can find the code that Internet Archive's memento proxy uses. I suspect that it is not "merely" clever nginx config; but it may give you an idea of the kind of things you will need to do in your version.

Good luck with it, f -- Francis Daly francis at daoine.org

From r at roze.lv Mon Mar 4 11:57:20 2019 From: r at roze.lv (Reinis Rozitis) Date: Mon, 4 Mar 2019 13:57:20 +0200 Subject: how to setup as caching reverse proxy for/rewriting global URLs In-Reply-To: <20190303225451.wbmxa6kn3h6gi4kj@daoine.org> References: <847b4f68-0ea5-445a-e269-ba7cce357c55@desy.de> <20190303225451.wbmxa6kn3h6gi4kj@daoine.org> Message-ID: <000801d4d281$6c4bd430$44e37c90$@roze.lv>

> "caching reverse proxy" is what nginx is built for.
>
> "rewriting the body content" is not.

Well, you can rewrite the body with the sub module ( http://nginx.org/en/docs/http/ngx_http_sub_module.html ). The only caveat is that the module doesn't support compression (gzip), so you need to explicitly make nginx disable it for the upstream requests.

For example, a config which changes "//upstreamdomain" to "//mydomain" in the proxied requests:

location / {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://upstream.site/;
    sub_filter_types text/html text/css;
    sub_filter_once off;
    sub_filter //upstreamdomain //mydomain;
}

Also note that you can/need to configure sub_filter_types if there are different proxied objects like javascripts etc.
I use it for development environments. With dynamic/scripting languages it's easy to have a dev domain/url, but with static content - unless you use relative urls (which is sometimes harder or impossible if you use CDNs on a different domain) - it's easier for the developers to always use the static production urls without needing to mangle them afterwards in the deploy process, since nginx can dynamically rewrite everything (sub_filter also supports variables).

> and rewrite all subsequent request and cache them.

As Francis already wrote, this is "probably going to involve you writing your own code" or some external tools - while nginx can rewrite and cache the body/object, it won't make any subrequests unless the client asks. So if you want to actively mirror/cache a site, you need some sort of crawler to request every link via nginx to fill the cache; or, as an example, wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://somesite can theoretically store a static mirror.

rr

From brandonm at medent.com Mon Mar 4 14:49:11 2019 From: brandonm at medent.com (Brandon Mallory) Date: Mon, 4 Mar 2019 09:49:11 -0500 (EST) Subject: Advice in regards to configuration Message-ID: <1284263807.6788672.1551710951222.JavaMail.zimbra@medent.com>

I am new to NGINX and looking for advice on how to configure NGINX. Here is what I am trying to accomplish for my cloud infrastructure. Have NGINX configured with a public IP (65.x.x.x.). I need to use TCP for my application (not http). I would like to have a client hit the public IP with a location ( 65.x.x.x.x\12345) and have that connection forwarded to a LAN IP.
Example:

Client1 - application hits 65.x.x.x./12345 port 11001 and that connection is forwarded to a VM at 10.45.2.1:11001
Client2 - application hits 65.x.x.x./56789 port 11001 and that connection is forwarded to a VM at 10.45.2.2:11001
Client3 - application hits 65.x.x.x./24681 port 11001 and that connection is forwarded to a VM at 10.45.2.3:11001

Can anyone point me in the right direction from a configuration aspect? I think I need to use the stream option; I'm just not sure what other options I need to send a specific "location" to a specific server. I could also use subdomains if that is required (12345.65.X.X.X to 10.45.2.1).

Thanks

Best Regards, Brandon Mallory Network & Systems Engineer MEDENT EMR/EHR 15 Hulbert Street Auburn, NY 13021 Phone: (315)-255-0900 Fax: (315)-255-3539 Web: www.medent.com

This message and any attachments may contain information that is protected by law as privileged and confidential, and is transmitted for the sole use of the intended recipient(s). If you are not the intended recipient, you are hereby notified that any use, dissemination, copying or retention of this e-mail or the information contained herein is strictly prohibited. If you received this e-mail in error, please immediately notify the sender by e-mail, and permanently delete this e-mail.

From thomas.hartmann at desy.de Mon Mar 4 14:57:46 2019 From: thomas.hartmann at desy.de (Thomas Hartmann) Date: Mon, 4 Mar 2019 15:57:46 +0100 Subject: how to setup as caching reverse proxy for/rewriting global URLs In-Reply-To: <000801d4d281$6c4bd430$44e37c90$@roze.lv> References: <847b4f68-0ea5-445a-e269-ba7cce357c55@desy.de> <20190303225451.wbmxa6kn3h6gi4kj@daoine.org> <000801d4d281$6c4bd430$44e37c90$@roze.lv> Message-ID: <8e6ac067-c4eb-7e44-cf4b-705e13e6d8b7@desy.de>

Hi Francis and Reinis, many thanks for the ideas and hints!
At least, I feel assured, that my idea seems to be not completely unreasonable ;) Crawling would be no necessity, assuming that the first client will make 'all' the requests to populate the cache. I will try my luck. Cheers and thanks, Thomas On 04/03/2019 12.57, Reinis Rozitis wrote: >> "caching reverse proxy" is what nginx is built for. >> >> "rewriting the body content" is not. > > Well you can rewrite body with the sub module ( http://nginx.org/en/docs/http/ngx_http_sub_module.html ) > > The only caveat is that the module doesn't support compression (gzip) and you need to explicitly set that nginx disables it for the upstream requests. > > > For example a config which changes "//upstreamdomain" to "//mydomain" in the proxied requests: > > location / { > proxy_set_header Accept-Encoding ""; > proxy_pass http://upstream.site/; > sub_filter_types text/html text/css; > sub_filter_once off; > sub_filter //upstreamdomain //mydomain; > > } > > Also note that you can/need to configure sub_filter_types if there are different proxied objects like javascripts etc. > > > I use it for development environments - while with dynamic/scripting languages it's easy to have a dev domain/url with static content unless you use relative urls (which is sometimes harder/impossible if you use CDNs on different domain) it's easy for the developers to always use static production urls without the need to mangle them afterwards in the deploy process as nginx can dynamically rewrite everything > (sub_filter supports also variables). > > > >> and rewrite all subsequent request and cache them. > > As Francis already wrote "probably going to involve you writing your own code" or use some external tools - nginx while can rewrite and cache the body/object it won't make any sub requests unless the client asks. 
> > So if you want to actively mirror/cache a site you need some sort of a crawler to either request every link via nginx to fill the cache or as an example wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://somesite theoretically can store a static mirror. > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx >

From francis at daoine.org Mon Mar 4 21:05:03 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 4 Mar 2019 21:05:03 +0000 Subject: Advice in regards to configuration In-Reply-To: <1284263807.6788672.1551710951222.JavaMail.zimbra@medent.com> References: <1284263807.6788672.1551710951222.JavaMail.zimbra@medent.com> Message-ID: <20190304210503.eiss5j47givxtu4z@daoine.org>

On Mon, Mar 04, 2019 at 09:49:11AM -0500, Brandon Mallory wrote:

Hi there,

> Have NGINX configured with a public IP (65.x.x.x.) . I need to use TCP for my application ( not http ). I would like to have a client hit the public IP with a location ( 65.x.x.x.x\12345) and have that connection forwarded to a LAN IP.
>

"TCP" suggests "stream", which knows about an IP address and a TCP port. "location" and "subdomain" (host name) are features of http requests. Can you rephrase your request in the light of that? If so, perhaps the solution will become clearer.
f -- Francis Daly francis at daoine.org

From brandonm at medent.com Mon Mar 4 21:30:00 2019 From: brandonm at medent.com (Brandon Mallory) Date: Mon, 4 Mar 2019 16:30:00 -0500 (EST) Subject: Advice in regards to configuration In-Reply-To: <20190304210503.eiss5j47givxtu4z@daoine.org> References: <1284263807.6788672.1551710951222.JavaMail.zimbra@medent.com> <20190304210503.eiss5j47givxtu4z@daoine.org> Message-ID: <1657611071.6904291.1551735000731.JavaMail.zimbra@medent.com>

I can try. Basically we have an application that uses TCP to communicate between the client on a Windows PC and a Linux server. I would like a public-facing IP that redirects traffic from the Windows PC to the correct Linux server. I was thinking I could have the Windows client point to a public IP and then a location, maybe an account number. So it would look like:

Client 1: public ip/12345 forward to private IP 10.45.2.1 (linux server)
Client 2: Public ip/54321 forward to private IP 10.45.2.2 (linux server)

Similar to how http uses the location www.domain.com/test, you can forward to a location.

Thanks

Best Regards, Brandon Mallory Network & Systems Engineer MEDENT EMR/EHR 15 Hulbert Street Auburn, NY 13021 Phone: (315)-255-0900 Fax: (315)-255-3539 Web: www.medent.com
----- Original Message ----- From: Francis Daly To: nginx at nginx.org Sent: Mon, 04 Mar 2019 16:05:03 -0500 (EST) Subject: Re: Advice in regards to configuration On Mon, Mar 04, 2019 at 09:49:11AM -0500, Brandon Mallory wrote: Hi there, > Have NGINX configured with a public IP (65.x.x.x.) . I need to use TCP for my application ( not http ). I would like to have a client hit the public IP with a location ( 65.x.x.x.x\12345) and have that connection forwarded to a LAN IP. > "TCP" suggests "stream", which knows about an IP address and a TCP port. "location" and "subdomain" (host name) are features of http requests. Can you rephrase your request in the light of that? If so, perhaps the solution will become clearer. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Mar 5 11:50:54 2019 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Tue, 05 Mar 2019 06:50:54 -0500 Subject: Protect against php files being send as static files Message-ID: Hi, Is there a way to protect against php files being send as static files / source due to some php specific configuration being missed (by accident)? 
Another web server has this by default: static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283274,283274#msg-283274

From francis at daoine.org Tue Mar 5 23:52:12 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 5 Mar 2019 23:52:12 +0000 Subject: Advice in regards to configuration In-Reply-To: <1657611071.6904291.1551735000731.JavaMail.zimbra@medent.com> References: <1284263807.6788672.1551710951222.JavaMail.zimbra@medent.com> <20190304210503.eiss5j47givxtu4z@daoine.org> <1657611071.6904291.1551735000731.JavaMail.zimbra@medent.com> Message-ID: <20190305235212.ts65lc3stwjwoevk@daoine.org>

On Mon, Mar 04, 2019 at 04:30:00PM -0500, Brandon Mallory wrote:

Hi there,

> I was thinking I could have the windows client point to a public ip and then a location maybe account number. So it would look like
>
> Client 1
> public ip/12345 forward to private IP 10.45.2.1 (linux server)
> Client 2
> Public ip/54321 forward to private IP 10.45.2.2 (linux server)
>
> Similar to how http uses the location
> Www.domain.com/test you can forward to a location

If your application makes http requests, then you can use nginx's http system to proxy_pass each request to a suitable upstream. You can, for example, use different upstreams for different requests (location).

If your application does not make http requests, then you can use nginx's stream system to proxy_pass each connection to a suitable upstream. There is no http request, so there is no location block to use. You need to find some way of determining the correct upstream for each incoming connection. One way is to have nginx listen on multiple ports, so that anyone connecting to nginx:10001 has the connection proxied to 10.45.2.1:10101, and anyone connecting to nginx:10002 has the connection proxied to 10.45.2.2:10101.
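As a minimal sketch of that port-per-client idea (ports and addresses taken from the discussion above; untested):

```nginx
# one listening port per client, each forwarded to its own VM
stream {
    server {
        listen 10001;
        proxy_pass 10.45.2.1:10101;
    }
    server {
        listen 10002;
        proxy_pass 10.45.2.2:10101;
    }
}
```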
f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Mar 5 23:59:40 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 5 Mar 2019 23:59:40 +0000 Subject: Protect against php files being send as static files In-Reply-To: References: Message-ID: <20190305235940.i3uvrnsrd2jj5e2b@daoine.org> On Tue, Mar 05, 2019 at 06:50:54AM -0500, Olaf van der Spek wrote: Hi there, > Is there a way to protect against php files being send as static files / > source due to some php specific configuration being missed (by accident)? > Another web server has this by default: static-file.exclude-extensions = ( > ".php", ".pl", ".fcgi" ) I don't think that stock-nginx has a configuration directive for this. "Not putting files that you don't want sent, into a directory that nginx has been told to send files from", would probably be the safest way to avoid external misconfiguration. f -- Francis Daly francis at daoine.org From hobson42 at gmail.com Wed Mar 6 12:01:35 2019 From: hobson42 at gmail.com (Ian Hobson) Date: Wed, 6 Mar 2019 12:01:35 +0000 Subject: Protect against php files being send as static files In-Reply-To: References: Message-ID: <43871fb1-3686-f2f2-831a-be8af660f2ae@gmail.com> On 05/03/2019 11:50, Olaf van der Spek wrote: > Hi, > > Is there a way to protect against php files being send as static files / > source due to some php specific configuration being missed (by accident)? > Another web server has this by default: static-file.exclude-extensions = ( > ".php", ".pl", ".fcgi" ) Hi, I think you need the zero day exploit defence. If you place your php files outside the main root directory, and then do something like this server { ..... 
root /location/of/static/files;

location ~ \.php {
    root /location/of/php/files;
    # Zero-day exploit defence, see http://forum.nginx.org/read.php?2,88846,page 3
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}
}

Then you should be OK. With the try_files check in place, there is actually no need to move the php files to a new root. Regards Ian -- Ian Hobson Tel (+351) 910 418 473 From nginx-forum at forum.nginx.org Wed Mar 6 14:20:03 2019 From: nginx-forum at forum.nginx.org (jetchars) Date: Wed, 06 Mar 2019 09:20:03 -0500 Subject: gRPC reverse proxy closed tcp connection after 1000 rpc calls Message-ID: <06c884f98e8c040a1737dde733b39da3.NginxMailingListEnglish@forum.nginx.org> Hey, genius I've followed the official user guide to create a gRPC reverse proxy, configured as follows:
```
upstream grpcservers {
    server 10.90.62.50:60080 weight=3;
    server 10.90.62.51:60080 weight=3;
    server 10.90.62.52:60080 weight=3;
    keepalive 2000;
    keepalive_timeout 120;
    keepalive_requests 100000;
}

server {
    listen 8051 http2;
    error_log /home/nginx/log/s4_mongo_error.log;
    access_log /home/nginx/log/s4_mongo_access.log;
    grpc_socket_keepalive on;

    location / {
        grpc_pass grpc://grpcservers;
        error_page 502 = /error502grpc;
    }

    location = /error502grpc {
        internal;
        default_type application/grpc;
        add_header grpc-status 14;
        add_header grpc-message "unavailable";
        return 204;
    }
}
```
- nginx keeps its keepalive connections to the upstream servers alive successfully
- but for keepalive connections from the gRPC client, after 1000 requests have been processed, nginx closes the tcp connection; I can see `TIME_WAIT` sockets on the nginx side.
- then the gRPC client will report tens of thousands of `TransientFailure` errors at the same time Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283297,283297#msg-283297 From pluknet at nginx.com Wed Mar 6 15:23:48 2019 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 6 Mar 2019 18:23:48 +0300 Subject: gRPC reverse proxy closed tcp connection after 1000 rpc calls In-Reply-To: <06c884f98e8c040a1737dde733b39da3.NginxMailingListEnglish@forum.nginx.org> References: <06c884f98e8c040a1737dde733b39da3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <383D3037-42DC-44DC-9626-7AC718E60D4B@nginx.com> > On 6 Mar 2019, at 17:20, jetchars wrote: > > Hey, genius > > I've followed the official user guide to create a gRPC reverse proxy, config > as follow: > > [..] > - nginx keepalive with upstream servers successfully > - but when keepalive with gRPC client, after 1000 requests been processed, > nginx will close the tcp connection, because I can find `TIME_WAIT` on the > nginx side. > - then gRPC client will report tens of thousands of `TransientFailure` at > same time > See http://nginx.org/r/http2_max_requests -- Sergey Kandaurov From brandonm at medent.com Wed Mar 6 20:49:15 2019 From: brandonm at medent.com (Brandon Mallory) Date: Wed, 6 Mar 2019 15:49:15 -0500 (EST) Subject: Advice in regards to configuration In-Reply-To: <20190305235212.ts65lc3stwjwoevk@daoine.org> References: <1284263807.6788672.1551710951222.JavaMail.zimbra@medent.com> <20190304210503.eiss5j47givxtu4z@daoine.org> <1657611071.6904291.1551735000731.JavaMail.zimbra@medent.com> <20190305235212.ts65lc3stwjwoevk@daoine.org> Message-ID: <1787707750.7140101.1551905355410.JavaMail.zimbra@medent.com> Good advice. After doing some further research, can you give me your opinion on using ssl_preread_server_name? As long as I can get the SNI, I can then route each TCP connection to the proper server using that information. Does this sound doable?
My plan was to use TLS SNI to identify and route TCP traffic based on the SNI:

map $ssl_preread_server_name $name {
    X.X.X.X:11001/12345 12345;
    X.X.X.X:11001/56789 56789;
}

upstream 12345 {
    server 10.45.2.1:11001;
}

upstream 56789 {
    server 10.45.2.5:11001;
}

server {
    listen 11001;
    proxy_pass $ssl_preread_server_name;
    proxy_timeout 1440m;
    proxy_connect_timeout 1440m;
    ssl_preread on;
}

Best Regards, Brandon Mallory Network & Systems Engineer MEDENT EMR/EHR 15 Hulbert Street Auburn, NY 13021 Phone: [ callto:(315)-255-0900 | (315)-255-0900 ] Fax: [ callto:(315)-255-3539 | (315)-255-3539 ] Web: [ http://www.medent.com/ | www.medent.com ] This message and any attachments may contain information that is protected by law as privileged and confidential, and is transmitted for the sole use of the intended recipient(s). If you are not the intended recipient, you are hereby notified that any use, dissemination, copying or retention of this e-mail or the information contained herein is strictly prohibited. If you received this e-mail in error, please immediately notify the sender by e-mail, and permanently delete this e-mail. From: "Francis Daly" To: "nginx" Sent: Tuesday, March 5, 2019 6:52:12 PM Subject: Re: Advice in regards to configuration On Mon, Mar 04, 2019 at 04:30:00PM -0500, Brandon Mallory wrote: Hi there, > I was thinking I could have the windows client point to a public ip and then a location maybe account number. So it would look like > > Client 1 > public ip/12345 forward to private IP 10.45.2.1 (linux server) > Client 2 > Public ip/54321 forward to private IP 10.45.2.2 (linux server) > > Similar to how http uses the location > Www.domain.com/test you can forward to a location If your application makes http requests, then you can use nginx's http system to proxy_pass each request to a suitable upstream. You can, for example, use different upstreams for different requests (location).
If your application does not make http requests, then you can use nginx's stream system to proxy_pass each connection to a suitable upstream. There is no http request, so there is no location block to use. You need to find some way of determining the correct upstream for each incoming connection. One way is to have nginx listen on multiple ports, so that anyone connecting to nginx:10001 has the connection proxied to 10.45.2.1:10101, and anyone connecting to nginx:10002 has the connection proxied to 10.45.2.2:10101. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 7 08:29:53 2019 From: nginx-forum at forum.nginx.org (jetchars) Date: Thu, 07 Mar 2019 03:29:53 -0500 Subject: gRPC reverse proxy closed tcp connection after 1000 rpc calls In-Reply-To: <383D3037-42DC-44DC-9626-7AC718E60D4B@nginx.com> References: <383D3037-42DC-44DC-9626-7AC718E60D4B@nginx.com> Message-ID: <958ab726fcb94f7d890c527eb9cacb55.NginxMailingListEnglish@forum.nginx.org> Thanks! Indeed, `http2_max_requests` does close the tcp connection; BTW, I've confirmed that it is a bug in grpc-go. Thanks again! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283297,283304#msg-283304 From hsc at miracle.dk Thu Mar 7 13:57:40 2019 From: hsc at miracle.dk (Hans Schou) Date: Thu, 7 Mar 2019 14:57:40 +0100 Subject: location redirect always with trailing slash... sometimes Message-ID: Hi I have this buggy backend application where I try to let Nginx sort out the problems.
Example of required redirect:
http://ex.org/foo -> https://ex2.org/foo/ # Nx solves the bug here
http://ex.org/foo/ -> https://ex2.org/foo/
http://ex.org/foo/?id=7 -> https://ex2.org/?id=7

Here is my current configuration, which is in two parts, but I have the idea that I'm doing it wrong and it can be done in only one "location" with a regex:

location ~ /foo$ {
    return 301 https://ex2.org$request_uri/;
}
location ~ /foo/$ {
    return 301 https://ex2.org$request_uri;
}

I have several paths like /foo/ where I would like to have them all in one regex like location ~ /(foo|bar)/$ { -- Venlig hilsen - best regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 7 18:33:39 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Thu, 07 Mar 2019 13:33:39 -0500 Subject: Possible memory leak? In-Reply-To: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> References: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi all, I just wanted to share the details of what I've found about this issue. Also thanks to Maxim Dounin and Reinis Rozitis, who gave some really great answers! The more I look into this, the more I'm convinced this is an issue with Nginx itself. I've tested this with 3 different builds now and all have the exact same issue. The first 2 types of servers I tested were both running Nginx 1.15.8 on Centos 7 (with 1 of them being on 6). I tested about 10 of our over 100 servers. This time I tested on a default install of Debian 9 with Nginx version 1.10.3 and the issue exists there too. I just wanted to test on something completely different. For the test, I created 50k very simple vhosts which used about 1G of RAM. Here is the ps_mem output.
94.3 MiB + 1.0 GiB = 1.1 GiB nginx (3)

After a normal reload it then uses 2x the ram:
186.3 MiB + 1.9 GiB = 2.1 GiB nginx (3)

And if I reload it again it briefly jumps up to about 4G during the reload and then goes back down to 2G. If I instead use the "upgrade" option (in the case of Debian, service nginx upgrade), then it reloads gracefully and goes back to using 1G again.
100.8 MiB + 1.0 GiB = 1.1 GiB nginx (3)

The difference between the "reload" and "upgrade" process is basically only that reload sends a HUP signal to Nginx and upgrade sends a USR2 and then a QUIT signal. What happens with all of those signals is entirely up to Nginx. It could even ignore them if it chose to. Additionally, I ran the same test with Apache. Not because I want to compare Nginx to Apache, they are different for a reason. I just wanted to test whether this was a system issue. So I did the same thing on Debian 9, installed Apache and created 50k simple vhosts. It used about 800M of ram and reloading did not cause that to increase at all. All of that leads me to these questions. Why would anyone want to use the normal reload process to reload the Nginx configuration? Shouldn't we always be using the upgrade process instead? Are there any downsides to doing that? Has anyone else noticed these issues and have you found another fix? Look forward to hearing back and thanks in advance!
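[For reference, the two paths being compared here boil down to different signals sent to the nginx master process. A rough sketch of what the init scripts do under the hood, assuming the default pid file location /run/nginx.pid (which varies by distro); the sleep is only an illustration, real init scripts wait for the new pid file to appear:]

```shell
# "reload" path: a single HUP; the master re-reads the config and starts a
# new worker set while the old set finishes its in-flight connections.
kill -HUP "$(cat /run/nginx.pid)"

# "upgrade" path: USR2 starts a second master process (the old master's pid
# is saved to /run/nginx.pid.oldbin), then QUIT gracefully shuts down the
# old master and its workers.
kill -USR2 "$(cat /run/nginx.pid)"
sleep 5   # allow the new master and its workers to start
kill -QUIT "$(cat /run/nginx.pid.oldbin)"
```

[Both sequences are documented at http://nginx.org/en/docs/control.html.]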
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283309#msg-283309 From francis at daoine.org Fri Mar 8 00:11:05 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 8 Mar 2019 00:11:05 +0000 Subject: Advice in regards to configuration In-Reply-To: <1787707750.7140101.1551905355410.JavaMail.zimbra@medent.com> References: <1284263807.6788672.1551710951222.JavaMail.zimbra@medent.com> <20190304210503.eiss5j47givxtu4z@daoine.org> <1657611071.6904291.1551735000731.JavaMail.zimbra@medent.com> <20190305235212.ts65lc3stwjwoevk@daoine.org> <1787707750.7140101.1551905355410.JavaMail.zimbra@medent.com> Message-ID: <20190308001105.rwjdgpm57xbutpzs@daoine.org> On Wed, Mar 06, 2019 at 03:49:15PM -0500, Brandon Mallory wrote: Hi there, > Good Advice, After doing some further research. Can you give me your opinion in regards to using the ssl_preread_server_name. So as long as I can get a SNI and then filter TCP connection to the proper server with that information. Does this sound doable ? In principle, yes. I don't see why it should not work. Your clients will speak TLS, and will all connect to different hostnames (which will presumably all resolve to the nginx IP address). You may want to look closely at the example in the docs, and make sure you understand "map" and the content of the $ssl_preread_server_name variable. That will probably help you come to a working config more reliably. Good luck with it, f -- Francis Daly francis at daoine.org From anoopalias01 at gmail.com Fri Mar 8 01:04:58 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 8 Mar 2019 06:34:58 +0530 Subject: Possible memory leak? 
In-Reply-To: References: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> Message-ID: Nginx does use more ram for a given number of vhosts than Apache does. See http://nginx.org/en/docs/control.html. The USR2 signal is for in-place binary upgrades; normally you should just send a SIGHUP. I have sometimes seen USR2 lead to multiple master processes and other weird behaviour. You can probably use worker_shutdown_timeout 10s; or something similar to get the old workers to shut down in a more timebound manner. On Fri, Mar 8, 2019 at 12:03 AM wkbrad wrote: > Hi all, > > I just wanted to share the details of what I've found about this issue. > Also thanks to Maxim Dounin and Reinis Rozitis who gave some really great > answers! > > The more I look into this the more I'm convinced this is an issue with > Nginx > itself. I've tested this with 3 different builds now and all have the > exact > same issue. > > The first 2 types of servers I tested were both running Nginx 1.15.8 on > Centos 7 ( with 1 of them being on 6 ). I tested about 10 of our over 100 > servers. This time I tested in a default install of Debian 9 with Nginix > version 1.10.3 and the issue exists there too. I just wanted to test on > something completely different. > > For the test, I created 50k very simple vhosts which used about 1G of RAM. > Here is the ps_mem output. > 94.3 MiB + 1.0 GiB = 1.1 GiB nginx (3) > > After a normal reload it then uses 2x the ram: > 186.3 MiB + 1.9 GiB = 2.1 GiB nginx (3) > > And if I reload it again it briefly jumps up to about 4G during the reload > and then goes back down to 2G. > > If I instead use the "upgrade" option. In the case of Debian, service > nginx > upgrade, then it reloads gracefully and goes back to using 1G again. > 100.8 MiB + 1.0 GiB = 1.1 GiB nginx (3) > > The difference between the "reload" and "upgrade" process is basically only > that reload sends a HUP signal to Nginx and upgrade sends a USR2 and then > QUIT signal.
What happens with all of those signals is entirely up to > Nginx. It could even ignore them if chose too. > > Additionally, I ran the same test with Apache. Not because I want to > compare Nginx to Apache, they are different for a reason. I just wanted to > test if this was a system issue. So I did the same thing on Debian 9, > installed Apache and created 50k simple vhosts. It used about 800M of ram > and reloading did not cause that to increase at all. > > All of that leads me to these questions. > > Why would anyone want to use the normal reload process to reload the Nginx > configuration? > Shouldn't we always be using the upgrade process instead? > Are there any downsides to doing that? > Has anyone else noticed these issues and have you found another fix? > > Look forward to hearing back and thanks in advance! > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,283216,283309#msg-283309 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 8 02:35:15 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Thu, 07 Mar 2019 21:35:15 -0500 Subject: Possible memory leak? In-Reply-To: References: Message-ID: <8bfbc6bfe8005676d25790a8f86e2d1d.NginxMailingListEnglish@forum.nginx.org> Thanks, Anoop! But I don't think you understood the point I was trying to get across. I was definitely not trying to compare nginx and apache memory usage. Let's just ignore that part was ever said. :) I'm trying to understand why Nginx is using 2x the memory usage when the HUP signal is sent, i.e. the normal reload process. When you use the USR2/QUIT method, i.e. the binary upgrade process, it doesn't do this. It's a big problem on high vhost servers when you go from normally using 1G of ram to using 2G and then 4G during subsequent reloads. 
It's that brief 4G spike that initially caught my attention. But then I noticed that it was always using 2x more ram. Whoa! This is super easy to reproduce so I invite you to test it yourself. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283312#msg-283312 From anoopalias01 at gmail.com Fri Mar 8 03:08:32 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 8 Mar 2019 08:38:32 +0530 Subject: Possible memory leak? In-Reply-To: <8bfbc6bfe8005676d25790a8f86e2d1d.NginxMailingListEnglish@forum.nginx.org> References: <8bfbc6bfe8005676d25790a8f86e2d1d.NginxMailingListEnglish@forum.nginx.org> Message-ID: It's simple: Nginx has a master process and the number of worker processes you configure in nginx.conf. It's the workers that handle connections, and each one does so asynchronously. When you send a HUP, all the master process does is spawn n new workers, and all new connections to port 80/443 are handled by the new workers. But remember that the old workers may still be doing some work, and terminating them then and there means closing off some connections in a non-graceful way, so the master process keeps the old workers active for some time to let them gracefully finish what they are doing. So if the worker count is n, during a reload it becomes 2n, and then n workers are gracefully shut down. That means if n workers use x memory, then during a reload the memory usage becomes 2x. You can set workers to a low value, say 1 worker process, if the system is limited in memory, but the possibility of having 2n workers during a reload cannot be avoided, as it is more of a feature, and the 2x memory usage is an unwanted side effect of that feature. Having said that, the Nginx devs could still look into why defining many vhosts consumes a lot of memory while Apache doesn't have this problem.
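[The two knobs mentioned in this thread can be sketched as follows (illustrative values only; both go in the main context of nginx.conf, and worker_shutdown_timeout requires nginx 1.11.11 or later):]

```nginx
# Fewer workers bound the size of the n -> 2n spike during a reload,
# and worker_shutdown_timeout caps how long old workers linger while
# gracefully finishing their connections.
worker_processes 1;
worker_shutdown_timeout 10s;
```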
I develop an automation script for a popular web control panel, and most servers using the script have up to 10k vhosts defined; with that many vhosts, nginx's memory usage is about 4x that of Apache. With the ssl definitions etc. needed for each vhost, we cannot reduce the number of vhosts either. On Fri, Mar 8, 2019 at 8:05 AM wkbrad wrote: > Thanks, Anoop! But I don't think you understood the point I was trying to > get across. I was definitely not trying to compare nginx and apache memory > usage. Let's just ignore that part was ever said. :) > > I'm trying to understand why Nginx is using 2x the memory usage when the > HUP > signal is sent, i.e. the normal reload process. > > When you use the USR2/QUIT method, i.e. the binary upgrade process, it > doesn't do this. > > It's a big problem on high vhost servers when you go from normally using 1G > of ram to using 2G and then 4G during subsequent reloads. > > It's that brief 4G spike that initially caught my attention. But then I > noticed that it was always using 2x more ram. Whoa! > > This is super easy to reproduce so I invite you to test it yourself. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,283216,283312#msg-283312 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Fri Mar 8 06:09:33 2019 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 7 Mar 2019 22:09:33 -0800 Subject: Possible memory leak? In-Reply-To: References: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190307220933.1c20d7b7.lists@lazygranch.com> On Thu, 07 Mar 2019 13:33:39 -0500 "wkbrad" wrote: > Hi all, > > I just wanted to share the details of what I've found about this > issue.
Also thanks to Maxim Dounin and Reinis Rozitis who gave some > really great answers! > > The more I look into this the more I'm convinced this is an issue > with Nginx itself. I've tested this with 3 different builds now and > all have the exact same issue. > > The first 2 types of servers I tested were both running Nginx 1.15.8 > on Centos 7 ( with 1 of them being on 6 ). I tested about 10 of our > over 100 servers. This time I tested in a default install of Debian > 9 with Nginix version 1.10.3 and the issue exists there too. I just > wanted to test on something completely different. > > For the test, I created 50k very simple vhosts which used about 1G of > RAM. Here is the ps_mem output. > 94.3 MiB + 1.0 GiB = 1.1 GiB nginx (3) > > After a normal reload it then uses 2x the ram: > 186.3 MiB + 1.9 GiB = 2.1 GiB nginx (3) > > And if I reload it again it briefly jumps up to about 4G during the > reload and then goes back down to 2G. > > If I instead use the "upgrade" option. In the case of Debian, > service nginx upgrade, then it reloads gracefully and goes back to > using 1G again. 100.8 MiB + 1.0 GiB = 1.1 GiB nginx (3) > > The difference between the "reload" and "upgrade" process is > basically only that reload sends a HUP signal to Nginx and upgrade > sends a USR2 and then QUIT signal. What happens with all of those > signals is entirely up to Nginx. It could even ignore them if chose > too. > > Additionally, I ran the same test with Apache. Not because I want to > compare Nginx to Apache, they are different for a reason. I just > wanted to test if this was a system issue. So I did the same thing > on Debian 9, installed Apache and created 50k simple vhosts. It used > about 800M of ram and reloading did not cause that to increase at all. > > All of that leads me to these questions. > > Why would anyone want to use the normal reload process to reload the > Nginx configuration? > Shouldn't we always be using the upgrade process instead? 
> Are there any downsides to doing that? > Has anyone else noticed these issues and have you found another fix? > > Look forward to hearing back and thanks in advance! > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,283216,283309#msg-283309 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Well for what it's worth, here is my result.

centos 7 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

sh-4.2# nginx -v
nginx version: nginx/1.14.0
sh-4.2# ps_mem | grep nginx
4.7 MiB + 2.1 MiB = 6.7 MiB nginx (2)
sh-4.2# systemctl reload nginx
sh-4.2# ps_mem | grep nginx
1.7 MiB + 4.0 MiB = 5.7 MiB nginx (2)
sh-4.2# systemctl restart nginx
sh-4.2# ps_mem | grep nginx
804.0 KiB + 3.5 MiB = 4.2 MiB nginx (2)
sh-4.2# ps_mem | grep nginx
2.9 MiB + 2.9 MiB = 5.8 MiB nginx (2)
sh-4.2# ps_mem | grep nginx
2.9 MiB + 2.9 MiB = 5.8 MiB nginx (2)

From hsc at miracle.dk Fri Mar 8 08:58:19 2019 From: hsc at miracle.dk (Hans Schou) Date: Fri, 8 Mar 2019 09:58:19 +0100 Subject: location redirect always with trailing slash... sometimes In-Reply-To: References: Message-ID: I found a solution (after reading the manual): http://nginx.org/en/docs/http/ngx_http_rewrite_module.html On Thu, 7 Mar 2019 at 14:57, Hans Schou wrote: > Example of required redirect: > http://ex.org/foo -> https://ex2.org/foo/ # Nx solves the bug here > http://ex.org/foo/ -> https://ex2.org/foo/ > http://ex.org/foo/?id=7 -> https://ex2.org/?id=7 > "rewrite" is the way to go. To change /foo or /foo/ to /foo/ and leave the rest unchanged, this will do it:

location ~ /(foo|bar) {
    rewrite ^(/[^/]+)/? https://ex2.org$1/ permanent;
}

If any path should be handled this way:

location / {
    rewrite ^(/[^/]+)/? https://ex2.org$1/ permanent;
}

-- Venlig hilsen - best regards -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Fri Mar 8 15:39:10 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Fri, 08 Mar 2019 10:39:10 -0500 Subject: Possible memory leak? In-Reply-To: References: Message-ID: <6d06a459428d9cde3ffebcb72cc950c1.NginxMailingListEnglish@forum.nginx.org> Hi Anoop! I thought you might have been the nDeploy guy and I've been planning on bringing this up with you too. We actually have several servers licensed with you. :) And they do have the same issue but you're still misunderstanding what the problem is. I completely understand that when the reload happens it should use 2x the ram. That's expected. What is not expected is that the ram stays at that level AFTER the reload is complete. Let's look at an example from a live Xtendweb server. Here is the ram usage after a restart. 30.5 MiB + 1.4 GiB = 1.5 GiB nginx (4) And here is the ram usage after a reload. 28.4 MiB + 2.8 GiB = 2.9 GiB nginx (4) The reload is completely finished at that point with no workers in shutting down state and it's now using 2x the ram. Now if I use the binary reload process next it goes back down. 26.1 MiB + 1.5 GiB = 1.5 GiB nginx (4) Again, I'm not talking about what SHOULD be happening. It's totally normal and expected for it to use 2x the ram DURING the reload. It's not expected for it to continue using 2x the ram AFTER the reload is finished. Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283317#msg-283317 From nginx-forum at forum.nginx.org Fri Mar 8 15:42:28 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Fri, 08 Mar 2019 10:42:28 -0500 Subject: Possible memory leak? In-Reply-To: <20190307220933.1c20d7b7.lists@lazygranch.com> References: <20190307220933.1c20d7b7.lists@lazygranch.com> Message-ID: Thanks for that info. It's definitely harder to notice the issue on small servers like that. But you are still seeing about a 50% increase in ram usage there by your own tests. 
The smallest server I've tested this on uses about 20M during the first start and about 50M after a reload is completely finished. Not so much of a problem for small servers but definitely a big problem for large ones. That said the issue is still there on small servers like you've just seen. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283318#msg-283318 From anoopalias01 at gmail.com Fri Mar 8 23:52:50 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 9 Mar 2019 05:22:50 +0530 Subject: Possible memory leak? In-Reply-To: <6d06a459428d9cde3ffebcb72cc950c1.NginxMailingListEnglish@forum.nginx.org> References: <6d06a459428d9cde3ffebcb72cc950c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sorry OT - @wkbrad - Please contact me off-the-list and we can discuss this further On Fri, Mar 8, 2019 at 9:09 PM wkbrad wrote: > Hi Anoop! > > I thought you might have been the nDeploy guy and I've been planning on > bringing this up with you too. We actually have several servers licensed > with you. :) > > And they do have the same issue but you're still misunderstanding what the > problem is. > > I completely understand that when the reload happens it should use 2x the > ram. That's expected. What is not expected is that the ram stays at that > level AFTER the reload is complete. > > Let's look at an example from a live Xtendweb server. Here is the ram > usage > after a restart. > 30.5 MiB + 1.4 GiB = 1.5 GiB nginx (4) > > And here is the ram usage after a reload. > 28.4 MiB + 2.8 GiB = 2.9 GiB nginx (4) > > The reload is completely finished at that point with no workers in shutting > down state and it's now using 2x the ram. Now if I use the binary reload > process next it goes back down. > 26.1 MiB + 1.5 GiB = 1.5 GiB nginx (4) > > Again, I'm not talking about what SHOULD be happening. It's totally normal > and expected for it to use 2x the ram DURING the reload. 
It's not expected > for it to continue using 2x the ram AFTER the reload is finished. > > Thanks! > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,283216,283317#msg-283317 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Sat Mar 9 01:07:12 2019 From: lists at lazygranch.com (lists at lazygranch.com) Date: Fri, 8 Mar 2019 17:07:12 -0800 Subject: Possible memory leak? In-Reply-To: References: <20190307220933.1c20d7b7.lists@lazygranch.com> Message-ID: <20190308170712.2460f18f.lists@lazygranch.com> On Fri, 08 Mar 2019 10:42:28 -0500 "wkbrad" wrote: > Thanks for that info. It's definitely harder to notice the issue on > small servers like that. But you are still seeing about a 50% > increase in ram usage there by your own tests. > > The smallest server I've tested this on uses about 20M during the > first start and about 50M after a reload is completely finished. > > Not so much of a problem for small servers but definitely a big > problem for large ones. That said the issue is still there on small > servers like you've just seen. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,283216,283318#msg-283318 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Actually the total RAM went down after a reload for my ps_mem in the previous email. I repeated the test just using free, which could be a polluted test, but the RAM went down again. I also did the ps_mem test again and total RAM was reduced. I'm not caching in nginx, if that makes a difference. 
sh-4.2# free -m
              total        used        free      shared  buff/cache   available
Mem:           1838         276         175         104        1385        1259
Swap:             0           0           0
sh-4.2# systemctl reload nginx
sh-4.2# free -m
              total        used        free      shared  buff/cache   available
Mem:           1838         272         180         104        1385        1263
Swap:             0           0           0

And repeated ps_mem test:

sh-4.2# ps_mem | grep nginx
2.3 MiB + 3.5 MiB = 5.8 MiB nginx (2)
sh-4.2# systemctl reload nginx
sh-4.2# ps_mem | grep nginx
1.8 MiB + 3.1 MiB = 4.9 MiB nginx (2)

From careygister at outlook.com Sun Mar 10 17:19:35 2019 From: careygister at outlook.com (careygister) Date: Sun, 10 Mar 2019 10:19:35 -0700 (MST) Subject: Slice Look Ahead Message-ID: <1552238375220-0.post@n2.nabble.com> Hi, I'm interested in extending the slice module to look ahead one slice. I want to add the slice to the cache when it completes, and when it is requested by the user, look ahead one more slice. The intention is to improve client-side performance, because I will always have a copy of the next slice ready to deliver and I won't have to request it from the upstream server when the client requests it. I've looked over the code for the slice and caching modules and I think a possible implementation would be to issue a request from the slice module asynchronously. Only, I can't figure out how to deliver an asynchronous subrequest. It looks to me like the parent request waits for the subrequest to complete before returning to the caller. Also, in the caching module I don't see the hook for issuing a request when a cache hit occurs. Can anyone give me some references to similar code or to documentation in the manuals that solves a related problem? Thanks,
Carey -- Sent from: http://nginx.2469901.n2.nabble.com/ From nginx-forum at forum.nginx.org Mon Mar 11 07:38:05 2019 From: nginx-forum at forum.nginx.org (Irelia) Date: Mon, 11 Mar 2019 03:38:05 -0400 Subject: Enabling "Transfer-Encoding : chunked" In-Reply-To: <20180924161106.GX56558@mdounin.ru> References: <20180924161106.GX56558@mdounin.ru> Message-ID: <98e9e8b2d8ea17d5cc7bede30cc5b2b4.NginxMailingListEnglish@forum.nginx.org> Hello Maxim Dounin, We have a problem with NGINX and hope that you can help us. We want to transfer a file which is still being written to. In other words, we want to send one request to the NGINX server to download a file while it is being written to, and we want to end up with the complete file. We read the HTTP/1.1 spec and learned that "Transfer-Encoding: chunked" should be used. But we don't know how to let NGINX know the file is not complete and use "Transfer-Encoding: chunked". We are looking forward to your reply! Best wishes, Irelia Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281371,283331#msg-283331 From nginx-forum at forum.nginx.org Mon Mar 11 07:49:52 2019 From: nginx-forum at forum.nginx.org (Irelia) Date: Mon, 11 Mar 2019 03:49:52 -0400 Subject: Transform a file with dynamic size. Message-ID: We want to transfer a file which is still being written to. In other words, we want to send one request to the NGINX server to download a file while it is being written to, and we want to end up with the complete file. We read the HTTP/1.1 spec and learned that "Transfer-Encoding: chunked" should be used. But we don't know how to let NGINX know the file is not complete and use "Transfer-Encoding: chunked". Please suggest some ways to do this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283332,283332#msg-283332 From francis at daoine.org Mon Mar 11 09:03:59 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 11 Mar 2019 09:03:59 +0000 Subject: location redirect always with trailing slash...
sometimes In-Reply-To: References: Message-ID: <20190311090359.a7wloxeex22zxs4p@daoine.org> On Fri, Mar 08, 2019 at 09:58:19AM +0100, Hans Schou wrote: Hi there, > I found a solution (after reading the manual) > http://nginx.org/en/docs/http/ngx_http_rewrite_module.html Great that you found a solution that works for you. > > Example of required redirect: > > http://ex.org/foo -> https://ex2.org/foo/ # Nx solves the bug here > > http://ex.org/foo/ -> https://ex2.org/foo/ > > http://ex.org/foo/?id=7 -> https://ex2.org/?id=7 > > > > "rewrite" is the way to go. > To change /foo or /foo/ to /foo/ and don't change the rest, this will do it: > location ~ /(foo|bar) { > rewrite ^(/[^/]+)/? https://ex2.org$1/ permanent; > } Just as an aside: that location will also redirect /foox to /foox/, /foo/x to /foo/, and /x/foo to /x/. It will keep any ?k=v part in the original request, in the redirected one. If you want to limit it to just "/foo", "/foo/", "/bar", and "/bar/", (with optional ?k=v) then you will want to anchor some regexes using ^ and $. For example: location ~ ^/(foo|bar)/?$ { rewrite ^(/[^/]+)/? https://ex2.org$1/ permanent; } > If any path should be handled this way: > location / { > rewrite ^(/[^/]+)/? https://ex2.org$1/ permanent; > } That will do the same -- any request of the form /word or /word/x (where "word" does not include "/") will be redirected to /word/ rewrite ^(/[^/]+)/?$ https://ex2.org$1/ permanent; would only redirect requests of the form /word or /word/ Note in particular: a request for /foo/?id=7 will be redirected to /foo/?id=7, and not to /?id=7. So that does not match your third requirement as-stated. (I suspect that you want it to go to /foo/?id=7, and your requirement is incorrect; so what you have does do what you want.) 
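The difference between the anchored and unanchored patterns can be checked outside nginx with a quick sketch — here using grep -E as a stand-in for the location regex engine (POSIX ERE rather than nginx's PCRE, but equivalent for these patterns):

```shell
# Which URIs match the unanchored '/(foo|bar)' versus the anchored
# '^/(foo|bar)/?$' pattern; grep -E exits 0 on a match.
for uri in /foo /foo/ /foox /foo/x /x/foo; do
  if printf '%s' "$uri" | grep -Eq '/(foo|bar)'; then u=match; else u=no; fi
  if printf '%s' "$uri" | grep -Eq '^/(foo|bar)/?$'; then a=match; else a=no; fi
  printf '%-8s unanchored=%-5s anchored=%s\n' "$uri" "$u" "$a"
done
```

Only /foo and /foo/ (and the bar equivalents) match the anchored version; /foox, /foo/x, and /x/foo are caught only by the unanchored pattern, which is exactly why they get redirected by the broader location.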
Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Mar 11 09:12:22 2019 From: nginx-forum at forum.nginx.org (qwertymy) Date: Mon, 11 Mar 2019 05:12:22 -0400 Subject: Transform a file with dynamic size. In-Reply-To: References: Message-ID: <4b6a31ff4a61aaba9153e9829a53ac44.NginxMailingListEnglish@forum.nginx.org> It's a good question. I want to know too. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283332,283335#msg-283335 From nginx-forum at forum.nginx.org Mon Mar 11 19:09:45 2019 From: nginx-forum at forum.nginx.org (Koxx) Date: Mon, 11 Mar 2019 15:09:45 -0400 Subject: LDAP authentication using ngx_http_auth_request_module Message-ID: <5154264c419743b0fde28bb56be6e935.NginxMailingListEnglish@forum.nginx.org> Hello, I am currently using the LDAP auth request module for a small SSO portal. I am talking about this: https://github.com/nginxinc/nginx-ldap-auth/ I am annoyed by the fact that I need to store the login/pwd in a cookie in order to keep the auth valid. I encrypted the login/pwd with a much better algorithm, but still, it is vulnerable to cookie theft. What would be a better solution without breaking everything? By the way, I need the login/pwd in nginx so I can authenticate the user on the backend later. Regards, F. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283339,283339#msg-283339 From igor at sysoev.ru Mon Mar 11 20:16:39 2019 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 11 Mar 2019 23:16:39 +0300 Subject: NGINX to Join F5 Message-ID: <1C865391-46B4-4958-9BFC-15C03DF44C4E@sysoev.ru> Today is an important day for NGINX. We signed an agreement to join F5. The NGINX team and I believe this is a significant milestone for our open source technology, community, and the company. F5 is committed to our open source mission. There will be no changes to the name, projects, their licenses, development team, release cadence, or otherwise.
In fact, F5 will increase investment to ensure NGINX open source projects become even stronger. Our CEO, Gus Robertson, wrote a blog to explain more: https://www.nginx.com/blog/nginx-joins-f5/ -- Igor Sysoev http://nginx.com From nginx-forum at forum.nginx.org Mon Mar 11 20:37:50 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Mon, 11 Mar 2019 16:37:50 -0400 Subject: Possible memory leak? In-Reply-To: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> References: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> Message-ID: <299fed6e0d4ab6e1f5eed3d60315865b.NginxMailingListEnglish@forum.nginx.org> Hi All, I think I haven't been clear in what I'm seeing so let's start over. :) I set up a very simple test on Centos 7 with a default install of Nginx 1.12.2. Below is exactly what I did to produce the result, and it's clear to me that Nginx is using twice the RAM it should be using after the first reload. Can anyone explain why the RAM usage would double after doing a config reload?
yum update reboot yum install epel-release yum install nginx systemctl enable nginx systemctl start nginx yum install ps_mem vim cd /etc/nginx/ vim vhost.template -------------------------------------------------------------------------------- server { listen 80; listen [::]:80; server_name {{DOMAIN}}; root /var/www/html; index index.html; location / { try_files $uri $uri/ =404; } } -------------------------------------------------------------------------------- cd conf.d for i in $(seq -w 1 50000); do sed 's/{{DOMAIN}}/dom'${i}'.com/' ../vhost.template > dom${i}.conf; done systemctl restart nginx ps_mem|grep nginx -------------------------------------------------------------------------------- 13.8 MiB + 750.7 MiB = 764.5 MiB nginx (3) -------------------------------------------------------------------------------- systemctl reload nginx; sleep 60; ps_mem |grep nginx -------------------------------------------------------------------------------- 27.2 MiB + 1.4 GiB = 1.5 GiB nginx (3) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283344#msg-283344 From manuel.baesler at gmail.com Mon Mar 11 23:57:59 2019 From: manuel.baesler at gmail.com (Manuel) Date: Tue, 12 Mar 2019 00:57:59 +0100 Subject: Possible to modify response headers from a proxied request before the response is written do the cache? (modified headers should be written to disk) Message-ID: Hello, nginx writes the response from a proxy to disk, e.g. [...]
Server: nginx Date: Mon, 11 Mar 2019 23:23:28 GMT Content-Type: image/png Content-Length: 45360 Connection: close Expect-CT: max-age=0, report-uri=" https://openstreetmap.report-uri.com/r/d/ct/reportOnly" ETag: "314b65190a8968893c6c400f29b13369" Cache-Control: max-age=126195 Expires: Wed, 13 Mar 2019 10:26:43 GMT Access-Control-Allow-Origin: * X-Cache: MISS from trogdor.openstreetmap.org X-Cache-Lookup: HIT from trogdor.openstreetmap.org:3128 Via: 1.1 trogdor.openstreetmap.org:3128 (squid/2.7.STABLE9) Set-Cookie: qos_token=031042; Max-Age=3600; Domain=openstreetmap.org; Path=/ Strict-Transport-Security: max-age=31536000; includeSubDomains; preload [...] is it possible to modify the Cache-Control and Expires header before the response is written to disk? The config: location /tiles/ { proxy_http_version 1.1; proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie"; proxy_cache_valid any 30d; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X_FORWARDED_PROTO https; proxy_set_header Host $proxy_host; proxy_ssl_server_name on; proxy_ssl_name $proxy_host; proxy_ssl_certificate /etc/nginx/cert.pem; proxy_ssl_certificate_key /etc/nginx/key.pem; expires 30d; proxy_cache_lock on; proxy_cache_valid 200 302 30d; proxy_cache_valid 404 1m; proxy_cache_key "$request_uri"; proxy_redirect off; proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504; add_header X-Cache-Status $upstream_cache_status; # add_header Cache-Control public; add_header Last-Modified ""; add_header ETag ""; proxy_cache tiles; proxy_pass https://openstreetmap_backend/; } The problem is: the cached tiles on disk do not have "Cache-Control: max-age=2592000" but "Cache-Control: max-age=126195" regardless of setting proxy_ignore_headers "Cache-Control". I assumed that setting proxy_ignore_headers "Cache-Control"; and "expires 30d;" will remove the header from the response and write the corresponding "Cache-Control" and "Expires" with the 30d. 
Or do I have to do this: browser -> nginx (caches and, if necessary, requests a new tile) -> nginx (sets expires 30d, calls the tile server)? Kind regards, Manuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From hungnv at opensource.com.vn Tue Mar 12 01:03:55 2019 From: hungnv at opensource.com.vn (Hung Nguyen) Date: Tue, 12 Mar 2019 08:03:55 +0700 Subject: NGINX to Join F5 In-Reply-To: <1C865391-46B4-4958-9BFC-15C03DF44C4E@sysoev.ru> References: <1C865391-46B4-4958-9BFC-15C03DF44C4E@sysoev.ru> Message-ID: Congratulations to NGINX and the team. Hopefully this change won't change the open source way of nginx, and the team members will stay the same; Maxim Dounin, Arut... are the most incredible developers I've known. Keep doing your great things, -- Hùng > On Mar 12, 2019, at 03:16, Igor Sysoev wrote: > > Today is an important day for NGINX. We signed an agreement to join F5. > > The NGINX team and I believe this is a significant milestone for our > open source technology, community, and the company. > > F5 is committed to our open source mission. There will be no changes > to the name, projects, their licenses, development team, release > cadence, or otherwise.
> > Our CEO, Gus Robertson, wrote a blog to explain more: > https://www.nginx.com/blog/nginx-joins-f5/ > > > -- > Igor Sysoev > http://nginx.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Mar 12 08:45:34 2019 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Tue, 12 Mar 2019 04:45:34 -0400 Subject: Protect against php files being send as static files In-Reply-To: <20190305235940.i3uvrnsrd2jj5e2b@daoine.org> References: <20190305235940.i3uvrnsrd2jj5e2b@daoine.org> Message-ID: <8cb61462b9c27673b9055d2d8e776ee9.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: > I don't think that stock-nginx has a configuration directive for this. > > "Not putting files that you don't want sent, into a directory that > nginx > has been told to send files from", would probably be the safest way to > avoid external misconfiguration. Sure, but as that's bound to happen anyway sometime somewhere, some defense in depth would be nice. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283274,283349#msg-283349 From al-nginx at none.at Tue Mar 12 08:52:06 2019 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 12 Mar 2019 09:52:06 +0100 Subject: NGINX to Join F5 In-Reply-To: <1C865391-46B4-4958-9BFC-15C03DF44C4E@sysoev.ru> References: <1C865391-46B4-4958-9BFC-15C03DF44C4E@sysoev.ru> Message-ID: <484f6e4a-be04-cfd8-8eaf-4e7f23e97063@none.at> Hi Igor. Am 11.03.2019 um 21:16 schrieb Igor Sysoev: > Today is an important day for NGINX. We signed an agreement to join to F5. > > The NGINX team and I believe this is a significant milestone for our > open source technology, community, and the company. > > F5 is committed to our open source mission. There will be no changes > to the name, projects, their licenses, development team, release > cadence, or otherwise. 
In fact, F5 will increase investment to > ensure NGINX open source projects become even stronger. > > Our CEO, Gus Robertson, wrote a blog to explain more: > https://www.nginx.com/blog/nginx-joins-f5/ WOW that's great ;-) Congratulations on this big success ;-) Aleks From nginx-forum at forum.nginx.org Tue Mar 12 08:53:42 2019 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Tue, 12 Mar 2019 04:53:42 -0400 Subject: Protect against php files being send as static files In-Reply-To: <43871fb1-3686-f2f2-831a-be8af660f2ae@gmail.com> References: <43871fb1-3686-f2f2-831a-be8af660f2ae@gmail.com> Message-ID: <8d390ac756445c8212c5b0503f243bb9.NginxMailingListEnglish@forum.nginx.org> Ian Hobson Wrote: > http://forumm.nginx.org/read.php?2,88846,page 3 This link doesn't work. > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > include /etc/nginx/fastcgi_params; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > fastcgi_pass 127.0.0.1:9000; > } > } Unix sockets are recommended over TCP AFAIK. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283274,283350#msg-283350 From nginx-forum at forum.nginx.org Tue Mar 12 10:22:30 2019 From: nginx-forum at forum.nginx.org (gogan) Date: Tue, 12 Mar 2019 06:22:30 -0400 Subject: nginx directives geo and map behind proxy Message-ID: <8112a41dfd48ad667dfb5bb5b70ae958.NginxMailingListEnglish@forum.nginx.org> We have a problem with mapping IPs on our nginx loadbalancer behind myracloud (proxy). In the configuration file we have: set_real_ip_from x.x.x.x ... real_ip_header CF-Connecting-IP; real_ip_recursive on; In addition, to map IPs: geo $limited { default 0; x.x.x.x 1; } map $limited $botlimit { 0 $remote_addr; 1 ''; } We want to limit requests with limit_req_zone in nginx. Using it directly connected to the loadbalancer is fine. It works great, but connections coming from myracloud are not limited. I guess nginx is evaluating the IP address before extracting the real client IP from the proxy.
So, is there a way to solve the problem? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283352,283352#msg-283352 From mdounin at mdounin.ru Tue Mar 12 12:43:23 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Mar 2019 15:43:23 +0300 Subject: Possible to modify response headers from a proxied request before the response is written do the cache? (modified headers should be written to disk) In-Reply-To: References: Message-ID: <20190312124323.GM1877@mdounin.ru> Hello! On Tue, Mar 12, 2019 at 12:57:59AM +0100, Manuel wrote: > Hello, > > nginx writes the rsponse from a proxy to disk. eg. > [...] > Server: nginx > Date: Mon, 11 Mar 2019 23:23:28 GMT > Content-Type: image/png > Content-Length: 45360 > Connection: close > Expect-CT: max-age=0, report-uri=" > https://openstreetmap.report-uri.com/r/d/ct/reportOnly" > ETag: "314b65190a8968893c6c400f29b13369" > Cache-Control: max-age=126195 > Expires: Wed, 13 Mar 2019 10:26:43 GMT > Access-Control-Allow-Origin: * > X-Cache: MISS from trogdor.openstreetmap.org > X-Cache-Lookup: HIT from trogdor.openstreetmap.org:3128 > Via: 1.1 trogdor.openstreetmap.org:3128 (squid/2.7.STABLE9) > Set-Cookie: qos_token=031042; Max-Age=3600; Domain=openstreetmap.org; Path=/ > Strict-Transport-Security: max-age=31536000; includeSubDomains; preload > [...] > > is it possible to modify the Cache-Control and Expires header before the > response is written to disk? No. Cache stores the original response as got from the upstream server. You can, however, return different headers to clients, using the "proxy_hide_header", "add_header", and "expires" directives. In case of Cache-Control and Expires, just "expires" as already present in your configuration should be enough. [...] > The problem is: the cached tiles on disk do not have "Cache-Control: > max-age=2592000" but "Cache-Control: max-age=126195" regardless of setting > proxy_ignore_headers "Cache-Control". 
> I assumed that setting proxy_ignore_headers "Cache-Control"; and "expires > 30d;" will remove the header from the response and write the corresponding > "Cache-Control" and "Expires" with the 30d. Well, your assumption is not correct. The "proxy_ignore_headers" directive controls whether nginx itself will respect Cache-Control or not when caching a response. And the "expires" directive controls what will be returned to clients. From a practical point of view, however, these should be enough to return correct responses to clients. What is stored in the cache file is irrelevant. -- Maxim Dounin http://mdounin.ru/ From anoopalias01 at gmail.com Tue Mar 12 13:53:53 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 12 Mar 2019 19:23:53 +0530 Subject: Possible memory leak? In-Reply-To: <299fed6e0d4ab6e1f5eed3d60315865b.NginxMailingListEnglish@forum.nginx.org> References: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> <299fed6e0d4ab6e1f5eed3d60315865b.NginxMailingListEnglish@forum.nginx.org> Message-ID: I am able to reproduce the issue @wkbrad is reporting [root at server1 ~]# ps_mem|head -1 && ps_mem|grep nginx Private + Shared = RAM used Program 25.3 MiB + 119.5 MiB = 144.9 MiB nginx (3) [root at server1 ~]# systemctl restart nginx [root at server1 ~]# ps_mem|head -1 && ps_mem|grep nginx Private + Shared = RAM used Program 24.2 MiB + 58.1 MiB = 82.2 MiB nginx (4) --------------------------> notice the shared memory usage is half of what was used before the restart [root at server1 ~]# ps_mem|head -1 && ps_mem|grep nginx Private + Shared = RAM used Program 23.1 MiB + 57.9 MiB = 81.0 MiB nginx (3) ---------------------------> the cache loader process exits and the RAM usage remains the same [root at server1 ~]# nginx -s reload ---------------------------> A graceful reload is performed on Nginx [root at server1 ~]# ps_mem|head -1 && ps_mem|grep nginx Private + Shared = RAM used Program 15.8 MiB + 118.8 MiB = 134.5 MiB nginx (3)
----------------------------> The shared RAM size doubles and stay at this value till another restart is performed ############################################################################## I think this is because the pmap shows 2 heaps after reload whereas there is only one right after the restart , An additional heap appears after reload [root at server1 ~]# systemctl restart nginx [root at server1 ~]# ps aux|grep nginx root 22392 0.0 0.7 510316 62184 ? Ss 13:49 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf [root at server1 ~]# pmap -X 22392|head -2 && pmap -X 22392|grep heap 22392: nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf Address Perm Offset Device Inode Size Rss Pss Referenced Anonymous Swap Locked Mapping 01b10000 rw-p 00000000 00:00 0 61224 58688 17187 80 58688 0 0 [heap] Now after the reload [root at server1 ~]# pmap -X 20983|head -2 && pmap -X 20983|grep heap 20983: nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf Address Perm Offset Device Inode Size Rss Pss Referenced Anonymous Swap Locked Mapping 02780000 rw-p 00000000 00:00 0 61224 61220 23118 51540 61220 0 0 [heap] 0634a000 rw-p 00000000 00:00 0 57856 55360 19138 55360 55360 0 0 [heap] ########################################################################### On Tue, Mar 12, 2019 at 2:07 AM wkbrad wrote: > Hi All, > > I think I haven't been clear in what I'm seeing so let's start over. :) I > set up a very simple test on Centos 7 with a default install of Nginx > 1.12.2. Below is exactly what I did to produce the result and it's clear > to > me that Nginx is using 2x the ram than it should be using after the first > reload. Can anyone explain why the ram usage would double after doing a > config reload? 
> > yum update > reboot > yum install epel-release > yum install nginx > systemctl enable nginx > systemctl start nginx > yum install ps_mem vim > cd /etc/nginx/ > vim vhost.template > > -------------------------------------------------------------------------------- > server { > listen 80; > listen [::]:80; > > server_name {{DOMAIN}}; > > root /var/www/html; > index index.html; > > location / { > try_files $uri $uri/ =404; > } > } > > -------------------------------------------------------------------------------- > cd conf.d > for i in $(seq -w 1 50000); do sed 's/{{DOMAIN}}/dom'${i}'.com/' > ../vhost.template > dom${i}.conf; done > systemctl restart nginx > ps_mem|grep nginx > > -------------------------------------------------------------------------------- > 13.8 MiB + 750.7 MiB = 764.5 MiB nginx (3) > > -------------------------------------------------------------------------------- > systemctl reload nginx; sleep 60; ps_mem |grep nginx > > -------------------------------------------------------------------------------- > 27.2 MiB + 1.4 GiB = 1.5 GiB nginx (3) > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,283216,283344#msg-283344 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Mar 12 14:39:05 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Mar 2019 17:39:05 +0300 Subject: Possible memory leak? In-Reply-To: <299fed6e0d4ab6e1f5eed3d60315865b.NginxMailingListEnglish@forum.nginx.org> References: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> <299fed6e0d4ab6e1f5eed3d60315865b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190312143905.GO1877@mdounin.ru> Hello! 
On Mon, Mar 11, 2019 at 04:37:50PM -0400, wkbrad wrote: > I think I haven't been clear in what I'm seeing so let's start over. :) I > set up a very simple test on Centos 7 with a default install of Nginx > 1.12.2. Below is exactly what I did to produce the result and it's clear to > me that Nginx is using twice the RAM it should be using after the first > reload. Can anyone explain why the RAM usage would double after doing a > config reload? As I already tried to explain earlier in this thread, this is a result of two things: 1) How nginx allocates memory when doing a configuration reload: it creates a new configuration first, and then frees the old one. 2) How the system memory allocator works. Usually it cannot return memory to the system if there are any remaining allocations above the freed memory regions. In some cases you can configure the system allocator to use mmap(), so it will be possible to free such allocations, but it may be a bad idea for other reasons. As a result, if a large amount of memory is used solely for the configuration structures, the memory occupied by the nginx master process, from the system's point of view, is roughly doubled after a configuration reload. Note that the memory in question is not leaked. It is properly freed by nginx, and it is available for future allocations within nginx. In worker processes, this memory will be used for various run-time allocations, such as request buffers and so on. In the master process, this memory will be used on further configuration reloads, so the master process will not grow any further. If the amount of memory used for configuration structures is a problem, you may want to re-think your configuration approach. In particular, large virtual hosting providers are known to use nginx with a small number of server{} blocks serving many different domains.
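The consolidation Maxim describes can be sketched against the test setup from this thread: instead of 50,000 generated server{} blocks, one catch-all block with a map of hostnames to document roots (domains and paths below are illustrative, not a drop-in config):

```nginx
# One server{} for all domains; per-domain document roots are
# resolved at request time via map (paths are illustrative).
map $host $docroot {
    default          /var/www/html;
    dom00001.com     /var/www/dom00001;
    dom00002.com     /var/www/dom00002;
    # ...one line per domain, or include a generated map file
}

server {
    listen 80;
    listen [::]:80;
    server_name _;

    root $docroot;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

The map table still lives in memory, but it should be much cheaper than full per-server{} configuration structures, to which every compiled-in module contributes.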
Alternatively, you may want to build nginx with fewer modules compiled in, as each module usually allocates at least basic configuration structures in each server{} / location{} even if not used. -- Maxim Dounin http://mdounin.ru/ From anoopalias01 at gmail.com Tue Mar 12 14:58:54 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 12 Mar 2019 20:28:54 +0530 Subject: Possible memory leak? In-Reply-To: <20190312143905.GO1877@mdounin.ru> References: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> <299fed6e0d4ab6e1f5eed3d60315865b.NginxMailingListEnglish@forum.nginx.org> <20190312143905.GO1877@mdounin.ru> Message-ID: Limiting the server blocks may not be practical when each domain has a different TLS config, unless we use the Lua modules provided in OpenResty. Correct me if I am wrong. On Tue, Mar 12, 2019 at 8:09 PM Maxim Dounin wrote: > Hello! > > On Mon, Mar 11, 2019 at 04:37:50PM -0400, wkbrad wrote: > > > I think I haven't been clear in what I'm seeing so let's start over. > :) I > > set up a very simple test on Centos 7 with a default install of Nginx > > 1.12.2. Below is exactly what I did to produce the result and it's > clear to > > me that Nginx is using twice the RAM it should be using after the first > > reload. Can anyone explain why the RAM usage would double after doing a > > config reload? > > As I already tried to explain earlier in this thread, this is a > result of two things: > > 1) How nginx allocates memory when doing a configuration reload: > it creates a new configuration first, and then frees the old one. > > 2) How the system memory allocator works. Usually it cannot return > memory to the system if there are any remaining allocations above > the freed memory regions. In some cases you can configure the system > allocator to use mmap(), so it will be possible to free such > allocations, but it may be a bad idea for other reasons.
> > As a result, if large amount of memory is used solely for the > configuration structures, memory occupied by the nginx master > process from the system point of view is roughly doubled after a > configuration reload. > > Note that the memory in question is not leaked. It is properly > freed by nginx, and it is available for future allocations within > nginx. In worker processes, this memory will be used for various > run-time allocations, such as request buffers and so on. In the > master process, this memory will be used on further configuration > reloads, so the master process will not grow any further. > > If the amount of memory used for configuration structures is a > problem, you may want to re-think your configuration approach. In > particular, large virtual hosting providers are known to use nginx > with small number of server{} blocks serving many different > domains. Alternatively, you may want to build nginx with less > modules compiled in, as each module usually allocates at least > basic configuration structures in each server{} / location{} even > if not used. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Mar 12 15:47:42 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Mar 2019 18:47:42 +0300 Subject: Possible memory leak? In-Reply-To: References: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org> <299fed6e0d4ab6e1f5eed3d60315865b.NginxMailingListEnglish@forum.nginx.org> <20190312143905.GO1877@mdounin.ru> Message-ID: <20190312154742.GP1877@mdounin.ru> Hello! 
On Tue, Mar 12, 2019 at 08:28:54PM +0530, Anoop Alias wrote: > limiting the server blocks may not be practical when each domain has a > different TLS config > > unless we use lua modules provided in the openresty > > Correct me if I am wrong There are at least several ways to reduce number of server blocks even when using SSL, including using certificates with multiple alternative names, using wildcard certificates, or using dynamic certificate loading with variables as introduced in nginx 1.15.9. But either way it should be understood that SSL requires resources, even if certificates are available for free. Note well that reducing the number of server{} blocks is not the only approach to reduce memory footprint I've outlined. If you are using tens of thousands of server{} blocks, the amount of compiled in modules may make a significant difference. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Mar 12 18:09:06 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Tue, 12 Mar 2019 14:09:06 -0400 Subject: Possible memory leak? In-Reply-To: <20190312143905.GO1877@mdounin.ru> References: <20190312143905.GO1877@mdounin.ru> Message-ID: <48f8b477a118d9b214aca0f9ce9e5621.NginxMailingListEnglish@forum.nginx.org> Hey Maxim, First of all, thanks so much for your insights into this and being patient with me. :) I'm just trying to understand the issue and what can be done about it. Can you explain to me what you mean by this? > you can configure system allocator to use mmap() I'm not a C programmer so correct me if I'm wrong, but doesn't the Nginx code determine which memory allocator it uses? If not can you point me to an article that describes how to do that as I would like to test it? Also, you seem to be saying that Nginx IS attempting to free the memory but is not able to due to the way the OS is allocating memory or refusing to release the memory. 
I've tested this in several Linux distros, kernels, and Nginx versions and I see the same behavior in all of them. Do you know of an OS or specific distro where Nginx can release the old memory allocations correctly? I would like to test that too. :) Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283362#msg-283362 From moseleymark at gmail.com Tue Mar 12 18:23:06 2019 From: moseleymark at gmail.com (Mark Moseley) Date: Tue, 12 Mar 2019 11:23:06 -0700 Subject: NGINX to Join F5 In-Reply-To: <1C865391-46B4-4958-9BFC-15C03DF44C4E@sysoev.ru> References: <1C865391-46B4-4958-9BFC-15C03DF44C4E@sysoev.ru> Message-ID: On Mon, Mar 11, 2019 at 1:16 PM Igor Sysoev wrote: > Today is an important day for NGINX. We signed an agreement to join to F5. > > The NGINX team and I believe this is a significant milestone for our > open source technology, community, and the company. > > F5 is committed to our open source mission. There will be no changes > to the name, projects, their licenses, development team, release > cadence, or otherwise. In fact, F5 will increase investment to > ensure NGINX open source projects become even stronger. > > Our CEO, Gus Robertson, wrote a blog to explain more: > https://www.nginx.com/blog/nginx-joins-f5/ > > Two of the favorite things in my toolbox, nginx and BigIP, under the same roof :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.baesler at gmail.com Tue Mar 12 20:42:22 2019 From: manuel.baesler at gmail.com (Manuel) Date: Tue, 12 Mar 2019 21:42:22 +0100 Subject: Possible to modify response headers from a proxied request before the response is written do the cache? (modified headers should be written to disk) In-Reply-To: <20190312124323.GM1877@mdounin.ru> References: <20190312124323.GM1877@mdounin.ru> Message-ID: Hi Maxim, thanks for taking the time to answer my question. > From practical point of view, however, these should be enough to > return correct responses to clients. 
What is stored in the cache > file is irrelevant. Well, the expires header is in the cached file, and that was the problem. The expires was not 30d but some 1.x days, and so the cache will request upstream too early, because upstream returned Cache-Control: max-age=126195. I want to cache the upstream resource for 30d regardless of the returned cache headers from upstream. My solution now is a two-step approach. Step one: check the cache; if the resource is expired or not cached, nginx calls itself to get the resource. Step two: call upstream and modify the expires header to 30d, then return the response to the cache. The cache is now happy with an expires 30d header :-) Kind regards, Manuel From hobson42 at gmail.com Tue Mar 12 20:54:52 2019 From: hobson42 at gmail.com (Ian Hobson) Date: Tue, 12 Mar 2019 20:54:52 +0000 Subject: Protect against php files being send as static files In-Reply-To: <8d390ac756445c8212c5b0503f243bb9.NginxMailingListEnglish@forum.nginx.org> References: <43871fb1-3686-f2f2-831a-be8af660f2ae@gmail.com> <8d390ac756445c8212c5b0503f243bb9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, On 12/03/2019 08:53, Olaf van der Spek wrote: > Ian Hobson Wrote: >> http://forumm.nginx.org/read.php?2,88846,page 3 > > This link doesn't work.. Sorry - typo (made months ago) https://forum.nginx.org/read.php?2,88845,page=3 > >> try_files $uri =404; >> fastcgi_split_path_info ^(.+\.php)(/.+)$; >> include /etc/nginx/fastcgi_params; >> fastcgi_param SCRIPT_FILENAME >> $document_root$fastcgi_script_name; >> fastcgi_pass 127.0.0.1:9000; >> } >> } > > Unix sockets are recommended over TCP AFAIK. Yup - this config was a copy/paste job. I have never understood why. -- Ian Hobson From mdounin at mdounin.ru Wed Mar 13 01:57:29 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Mar 2019 04:57:29 +0300 Subject: Possible memory leak?
In-Reply-To: <48f8b477a118d9b214aca0f9ce9e5621.NginxMailingListEnglish@forum.nginx.org> References: <20190312143905.GO1877@mdounin.ru> <48f8b477a118d9b214aca0f9ce9e5621.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190313015729.GR1877@mdounin.ru> Hello! On Tue, Mar 12, 2019 at 02:09:06PM -0400, wkbrad wrote: > First of all, thanks so much for your insights into this and being patient > with me. :) I'm just trying to understand the issue and what can be done > about it. > > Can you explain to me what you mean by this? > > you can configure system allocator to use mmap() > > I'm not a C programmer so correct me if I'm wrong, but doesn't the Nginx > code determine which memory allocator it uses? Normally C programs use malloc() / free() functions as provided by system libc library to allocate memory. While it is possible for an application to provide its own implementation of these functions, this is something rarely used in practice. > If not can you point me to an article that describes how to do that as I > would like to test it? For details on how to control system allocator on Linux, please refer to the mallopt(3) manpage, notably the MALLOC_MMAP_THRESHOLD_ environment variable. A web version is available here: http://man7.org/linux/man-pages/man3/mallopt.3.html Please refer to the M_MMAP_THRESHOLD description in the same man page for details on what it does and various implications. Using values less than NGX_CYCLE_POOL_SIZE (16k by default) should help to move all configuration-related allocations into mmap(), so these can be freed independently. Alternatively, recompiling nginx with NGX_CYCLE_POOL_SIZE set to a value larger than 128k (the default mmap() threshold) should have a similar effect. Note though that there may be other limiting factors, such as MALLOC_MMAP_MAX_, which limits the maximum number of mmap() allocations to 65536 by default.
You can also play with different allocators by using the LD_PRELOAD environment variable, see for example jemalloc's wiki here: https://github.com/jemalloc/jemalloc/wiki/Getting-Started > Also, you seem to be saying that Nginx IS attempting to free the memory but > is not able to due to the way the OS is allocating memory or refusing to > release the memory. I've tested this in several Linux distros, kernels, and > Nginx versions and I see the same behavior in all of them. Do you know of > an OS or specific distro where Nginx can release the old memory allocations > correctly? I would like to test that too. :) Any Linux distro can be tuned so freed memory will be returned to the system, see above. And for example on FreeBSD, which uses jemalloc as a system allocator, unused memory is properly returned to the system out of the box (though it can still be seen in the virtual address space occupied by the process, since the allocator uses madvise() to mark the memory as unused instead of unmapping the mapping). -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Mar 13 02:10:32 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Mar 2019 05:10:32 +0300 Subject: Possible to modify response headers from a proxied request before the response is written do the cache? (modified headers should be written to disk) In-Reply-To: References: <20190312124323.GM1877@mdounin.ru> Message-ID: <20190313021032.GS1877@mdounin.ru> Hello! On Tue, Mar 12, 2019 at 09:42:22PM +0100, Manuel wrote: > > From practical point of view, however, these should be enough to > > return correct responses to clients. What is stored in the cache > > file is irrelevant. > > Well, the expires header is in the cached file, and that was the problem. > The expires was not 30d but some 1.x days.
> And so the cache will request upstream to early, because upstream > returned Cache-Control: max-age=126195 > > I want to cache the upstream resource for 30d > regardless of the returned cache headers from upstream. As long as a particular header is ignored using the "proxy_ignore_headers" directive, it doesn't matter what it contains in the response returned by the upstream server and/or in the cache file; nginx will not use the header to determine response validity time. Instead, it will use other headers which are not ignored (if any), or will determine cache validity based on proxy_cache_valid directives. The resulting validity time is stored in the binary cache header which comes before the response headers in the cache file. Note well that cache validity time is determined when a response is stored into the cache, so changing proxy_cache_valid in the configuration won't change the validity times of previously cached responses. A response needs to be re-cached for proxy_cache_valid changes to be applied. > My solution now is a two step approach: > step one: check cache, if the resource is expired > or not cached, nginx calls itself to get the resource. > Step two: call upstream and modify the expires > header to 30d. Return response to the cache. > Cache is now happy with an expires 30d header :-) Well, using double proxying will certainly work too, but in this particular case it is not needed. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed Mar 13 03:41:44 2019 From: nginx-forum at forum.nginx.org (Irelia) Date: Tue, 12 Mar 2019 23:41:44 -0400 Subject: Transform a file with dynamic size. In-Reply-To: References: Message-ID: Regarding "Transfer-Encoding: chunked", we're reading the NGINX open source code to find some useful information. If you have suggestions or questions, we can talk about it.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283332,283368#msg-283368 From peter_booth at me.com Wed Mar 13 05:43:53 2019 From: peter_booth at me.com (Peter Booth) Date: Wed, 13 Mar 2019 01:43:53 -0400 Subject: Possible memory leak? In-Reply-To: <20190313015729.GR1877@mdounin.ru> References: <20190312143905.GO1877@mdounin.ru> <48f8b477a118d9b214aca0f9ce9e5621.NginxMailingListEnglish@forum.nginx.org> <20190313015729.GR1877@mdounin.ru> Message-ID: Perhaps I'm naive or just lucky, but I have used nginx on many contracts and permanent jobs for over ten years and have never attempted to reload configurations. I have always stopped then restarted nginx instances one at a time. Am I not recognizing a constraint that affects other people? Curious, Peter Sent from my iPhone > On Mar 12, 2019, at 9:57 PM, Maxim Dounin wrote: > > Hello! > >> On Tue, Mar 12, 2019 at 02:09:06PM -0400, wkbrad wrote: >> >> First of all, thanks so much for your insights into this and being patient >> with me. :) I'm just trying to understand the issue and what can be done >> about it. >> >> Can you explain to me what you mean by this? >>> you can configure system allocator to use mmap() >> >> I'm not a C programmer so correct me if I'm wrong, but doesn't the Nginx >> code determine which memory allocator it uses? > > Normally C programs use malloc() / free() functions as provided by > system libc library to allocate memory. While it is possible for > an application to provide its own implementation of these > functions, this is something rarely used in practice. > >> If not can you point me to an article that describes how to do that as I >> would like to test it? > > For details on how to control system allocator on Linux, please > refer to the mallopt(3) manpage, notably the > MALLOC_MMAP_THRESHOLD_ environment variable.
Web version is > available here: > > http://man7.org/linux/man-pages/man3/mallopt.3.html > > Please refer to the M_MMAP_THRESHOLD description in the same man > page for details on what it does and various implications. > > Using a values less than NGX_CYCLE_POOL_SIZE (16k by default) > should help to move all configuration-related allocations into > mmap(), so these can be freed independently. Alternatively, > recompiling nginx with NGX_CYCLE_POOL_SIZE set to a value larger > than 128k (default mmap() threshold) should have similar > effect. > > Note though that there may be other limiting factors, > such as MALLOC_MMAP_MAX_, which limits maximum number of mmap() > allocations to 65536 by default. > > You can also play with different allocators by using the > LD_PRELOAD environment variable, see for example jemalloc's wiki > here: > > https://github.com/jemalloc/jemalloc/wiki/Getting-Started > >> Also, you seem to be saying that Nginx IS attempting to free the memory but >> is not able to due to the way the OS is allocating memory or refusing to >> release the memory. I've tested this in several Linux distros, kernels, and >> Nginx versions and I see the same behavior in all of them. Do you know of >> an OS or specific distro where Nginx can release the old memory allocations >> correctly? I would like to test that too. :) > > Any Linux distro can be tuned so freed memory will be returned to > the system, see above. And for example on FreeBSD, which uses > jemalloc as a system allocator, unused memory is properly returned > to the system out of the box (though can be seen in virtual > address space occupied by the process, since the allocator uses > madvise() to make the memory as unused instead of unmapping a > mapping). 
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From anoopalias01 at gmail.com Wed Mar 13 05:53:01 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 13 Mar 2019 11:23:01 +0530 Subject: Possible memory leak? In-Reply-To: References: <20190312143905.GO1877@mdounin.ru> <48f8b477a118d9b214aca0f9ce9e5621.NginxMailingListEnglish@forum.nginx.org> <20190313015729.GR1877@mdounin.ru> Message-ID: An nginx restart can take the web server offline for more than 30 seconds or so depending upon the number of server{} blocks and configuration. It may be fine for a few vhost though On Wed, Mar 13, 2019 at 11:14 AM Peter Booth via nginx wrote: > Perhaps I?m naive or just lucky, but I have used nginx on many contracts > and permanent jobs for over ten years and have never attempted to reload > canfigurations. I have always stopped then restarted nginx instances one at > a time. Am I not recognizing a constraint that affects other people? > > Curious , > > Peter > > Sent from my iPhone > > > On Mar 12, 2019, at 9:57 PM, Maxim Dounin wrote: > > > > Hello! > > > >> On Tue, Mar 12, 2019 at 02:09:06PM -0400, wkbrad wrote: > >> > >> First of all, thanks so much for your insights into this and being > patient > >> with me. :) I'm just trying to understand the issue and what can be > done > >> about it. > >> > >> Can you explain to me what you mean by this? > >>> you can configure system allocator to use mmap() > >> > >> I'm not a C programmer so correct me if I'm wrong, but doesn't the Nginx > >> code determine which memory allocator it uses? > > > > Normally C programs use malloc() / free() functions as provided by > > system libc library to allocate memory. While it is possible for > > an application to provide its own implementation of these > > functions, this is something rarely used in practice. 
> > > >> If not can you point me to an article that describes how to do that as I > >> would like to test it? > > > > For details on how to control system allocator on Linux, please > > refer to the mallopt(3) manpage, notably the > > MALLOC_MMAP_THRESHOLD_ environment variable. Web version is > > available here: > > > > http://man7.org/linux/man-pages/man3/mallopt.3.html > > > > Please refer to the M_MMAP_THRESHOLD description in the same man > > page for details on what it does and various implications. > > > > Using a values less than NGX_CYCLE_POOL_SIZE (16k by default) > > should help to move all configuration-related allocations into > > mmap(), so these can be freed independently. Alternatively, > > recompiling nginx with NGX_CYCLE_POOL_SIZE set to a value larger > > than 128k (default mmap() threshold) should have similar > > effect. > > > > Note though that there may be other limiting factors, > > such as MALLOC_MMAP_MAX_, which limits maximum number of mmap() > > allocations to 65536 by default. > > > > You can also play with different allocators by using the > > LD_PRELOAD environment variable, see for example jemalloc's wiki > > here: > > > > https://github.com/jemalloc/jemalloc/wiki/Getting-Started > > > >> Also, you seem to be saying that Nginx IS attempting to free the memory > but > >> is not able to due to the way the OS is allocating memory or refusing to > >> release the memory. I've tested this in several Linux distros, > kernels, and > >> Nginx versions and I see the same behavior in all of them. Do you know > of > >> an OS or specific distro where Nginx can release the old memory > allocations > >> correctly? I would like to test that too. :) > > > > Any Linux distro can be tuned so freed memory will be returned to > > the system, see above. 
And for example on FreeBSD, which uses > > jemalloc as a system allocator, unused memory is properly returned > > to the system out of the box (though can be seen in virtual > > address space occupied by the process, since the allocator uses > > madvise() to make the memory as unused instead of unmapping a > > mapping). > > > > -- > > Maxim Dounin > > http://mdounin.ru/ > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Wed Mar 13 05:57:01 2019 From: lists at lazygranch.com (Gary) Date: Tue, 12 Mar 2019 22:57:01 -0700 Subject: Possible memory leak? In-Reply-To: Message-ID: I use three maps to kick out the usual clowns trying to misuse the web server. (I detect odd urls, bad user agents, and references [Links] from shady websites.) Any change to a map requires a reload. Or am I wrong? ? Original Message ? From: nginx at nginx.org Sent: March 12, 2019 10:44 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Cc: peter_booth at me.com Subject: Re: Possible memory leak? Perhaps I?m naive or just lucky, but I have used nginx on many contracts and permanent jobs for over ten years and have never attempted to reload canfigurations. I have always stopped then restarted nginx instances one at a time. Am I not recognizing a constraint that affects other people? Curious , Peter Sent from my iPhone > On Mar 12, 2019, at 9:57 PM, Maxim Dounin wrote: > > Hello! > >> On Tue, Mar 12, 2019 at 02:09:06PM -0400, wkbrad wrote: >> >> First of all, thanks so much for your insights into this and being patient >> with me.? :)? I'm just trying to understand the issue and what can be done >> about it. 
>> >> Can you explain to me what you mean by this? >>> you can configure system allocator to use mmap() >> >> I'm not a C programmer so correct me if I'm wrong, but doesn't the Nginx >> code determine which memory allocator it uses? > > Normally C programs use malloc() / free() functions as provided by > system libc library to allocate memory.? While it is possible for > an application to provide its own implementation of these > functions, this is something rarely used in practice. > >> If not can you point me to an article that describes how to do that as I >> would like to test it? > > For details on how to control system allocator on Linux, please > refer to the mallopt(3) manpage, notably the > MALLOC_MMAP_THRESHOLD_ environment variable.? Web version is > available here: > > http://man7.org/linux/man-pages/man3/mallopt.3.html > > Please refer to the M_MMAP_THRESHOLD description in the same man > page for details on what it does and various implications. > > Using a values less than NGX_CYCLE_POOL_SIZE (16k by default) > should help to move all configuration-related allocations into > mmap(), so these can be freed independently.? Alternatively, > recompiling nginx with NGX_CYCLE_POOL_SIZE set to a value larger > than 128k (default mmap() threshold) should have similar > effect. > > Note though that there may be other limiting factors, > such as MALLOC_MMAP_MAX_, which limits maximum number of mmap() > allocations to 65536 by default. > > You can also play with different allocators by using the > LD_PRELOAD environment variable, see for example jemalloc's wiki > here: > > https://github.com/jemalloc/jemalloc/wiki/Getting-Started > >> Also, you seem to be saying that Nginx IS attempting to free the memory but >> is not able to due to the way the OS is allocating memory or refusing to >> release the memory.? I've tested this in several Linux distros, kernels, and >> Nginx versions and I see the same behavior in all of them.? 
Do you know of >> an OS or specific distro where Nginx can release the old memory allocations >> correctly?? I would like to test that too.? :) > > Any Linux distro can be tuned so freed memory will be returned to > the system, see above.? And for example on FreeBSD, which uses > jemalloc as a system allocator, unused memory is properly returned > to the system out of the box (though can be seen in virtual > address space occupied by the process, since the allocator uses > madvise() to make the memory as unused instead of unmapping a > mapping). > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Mar 14 11:20:44 2019 From: nginx-forum at forum.nginx.org (pb.rakesh90) Date: Thu, 14 Mar 2019 07:20:44 -0400 Subject: Nginx + LUA, how to read the file and set environment variable? Message-ID: Hi Team, I would like to know if a way in Nginx to use Lua module to read a value from a file and set enviornment variable. Any help will be appreciated Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283376,283376#msg-283376 From satcse88 at gmail.com Thu Mar 14 11:32:49 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Thu, 14 Mar 2019 19:32:49 +0800 Subject: Cookie HTTP Only & Secure Message-ID: Hi All, To fix Cross site scripting (XSS), I am trying to add below config but I am not seeing cookie in the response headers. Cookie in the browser still showing as not secure and not http. We are using Nginx as reverse proxy to Jetty and running a java application on it. 
Below is the nginx config: location /abc/ { proxy_pass http://127.0.0.1:8080; proxy_set_header X-Real-IP $remote_addr; proxy_cookie_path / "/; secure; HttpOnly; SameSite=strict"; } # nginx -V nginx version: nginx/1.10.3 (Ubuntu) built with OpenSSL 1.0.2g 1 Mar 2016 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads Can you let me know, if I am missing any. -------------- next part -------------- An HTML attachment was scrubbed... URL: From meier.stefan at gmail.com Thu Mar 14 16:37:49 2019 From: meier.stefan at gmail.com (Stefan Meier) Date: Thu, 14 Mar 2019 17:37:49 +0100 Subject: Cookie path with reverse proxy Message-ID: I have three different applications running behind a NGINX reverse proxy. They all have a login GUI, which uses the same authentication API. Authentication is based on cookies. 
My problem is that the paths of the cookies are set differently depending on which GUI is used to log in. The authentication API actually sets the cookie path to /, but I assume it is the NGINX proxy which is overwriting that, depending on the location, to /app or /admin. Is there a way to set the path of the cookies to /, regardless of which GUI is used? I appreciate any help. This is what my configuration looks like: http { server { listen 80; server_name localhost; location = / { rewrite / /admin; } location /admin/ { proxy_pass http://localhost:9001/; } location /app/ { proxy_pass http://localhost:3100/; } location / { proxy_pass http://localhost:3000/; } }} -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu Mar 14 17:49:32 2019 From: r at roze.lv (Reinis Rozitis) Date: Thu, 14 Mar 2019 19:49:32 +0200 Subject: Cookie path with reverse proxy In-Reply-To: References: Message-ID: <012901d4da8e$482b1060$d8813120$@roze.lv> > Is there a way to set the path of the cookies to /, regardless which GUI is used? Yes, you can make nginx change the cookie: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path If the path from the backend app is unknown you can probably use a regex to match everything: proxy_cookie_path ~*^/.* /; rr From francis at daoine.org Thu Mar 14 18:33:37 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 14 Mar 2019 18:33:37 +0000 Subject: nginx directives geo and map behind proxy In-Reply-To: <8112a41dfd48ad667dfb5bb5b70ae958.NginxMailingListEnglish@forum.nginx.org> References: <8112a41dfd48ad667dfb5bb5b70ae958.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190314183337.mt4gfgyyfpdritd5@daoine.org> On Tue, Mar 12, 2019 at 06:22:30AM -0400, gogan wrote: Hi there, > We want to limit requests with limit_req_zone in nginx. Using it directly > connected to the loadbalancer is fine. It works great, but connections > coming from myracloud are not limited.
Guess nginx is evaluating ip address > before extracting real client ip from proxy. So, is there a way to solve the > problem? My testing suggests that the realip side sets $remote_addr correctly, and that the geo side uses the correct $remote_addr. Can you show config / example / logs of the problem that you are reporting? If you repeat the test below, do you see something different? == http { geo $geo { default unknown; 127.0.0.1 one; 127.0.0.3 three; 127.0.0.10 ten; } server { listen 8000; set_real_ip_from 127.0.0.10; real_ip_header CF-Connecting-IP; location = /ip { return 200 "\nCF-Connecting-IP: $http_cf_connecting_ip;\nremote: $remote_addr;\nreal: $realip_remote_addr;\ngeo $geo;\n"; } } } == # Send the header, and connect from a trusted address; remote and geo are based on the address from the header: $ curl -H CF-Connecting-IP:127.0.0.3 http://127.0.0.10:8000/ip CF-Connecting-IP: 127.0.0.3; remote: 127.0.0.3; real: 127.0.0.10; geo three; # Send the header, but connect from an untrusted address; remote and geo are based on the untrusted address: $ curl -H CF-Connecting-IP:127.0.0.3 http://127.0.0.1:8000/ip CF-Connecting-IP: 127.0.0.3; remote: 127.0.0.1; real: 127.0.0.1; geo one; Have I misunderstood what you are doing? f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Mar 14 18:44:18 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 14 Mar 2019 18:44:18 +0000 Subject: Cookie HTTP Only & Secure In-Reply-To: References: Message-ID: <20190314184418.m4pvbu5ybom5sxf5@daoine.org> On Thu, Mar 14, 2019 at 07:32:49PM +0800, Sathish Kumar wrote: Hi there, > To fix Cross site scripting (XSS), I am trying to add below config but I am > not seeing cookie in the response headers. Cookie in the browser still > showing as not secure and not http. Do you see a Set-Cookie: header in the response from upstream to nginx? If you do not, your nginx config will not make a difference. 
If you do see it in the response from upstream to nginx, and do not see it in the response from nginx to the client, then there is something interesting going on. f -- Francis Daly francis at daoine.org From zn1314 at 126.com Fri Mar 15 08:36:22 2019 From: zn1314 at 126.com (David Ni) Date: Fri, 15 Mar 2019 16:36:22 +0800 (CST) Subject: nginx servers share session or cookies Message-ID: <3fd2ca45.6896.169807ebaaf.Coremail.zn1314@126.com> Hi Nginx Experts, I have one requirement right now,we are using nginx with ldap auth(I compiled nginx with auth_ldap module),and I created many servers like datanode02.bddev.test.net datanode03.bddev.test.net,I have successfully conigured nginx with ldap auth,so when I access these servers ,we need to input the correct username and password which stored in ldap,my requirement is that whether datanode02.bddev.test.net datanode03.bddev.test.net can share cookies or session with each other,so that if I have accessed datanode02.bddev.test.net successfully,I don't need to input username and password when I access datanode03.bddev.test.net , is this possible? If possible,how to achieve this?Thanks very much! server { listen 80; server_name datanode02.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; location / { proxy_pass http://dev-datanode02:8042/; more_clear_headers "X-Frame-options"; } } server { listen 80; server_name datanode03.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; location / { proxy_pass http://dev-datanode03:8042/; more_clear_headers "X-Frame-options"; } } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From meier.stefan at gmail.com Fri Mar 15 09:39:26 2019 From: meier.stefan at gmail.com (Stefan Meier) Date: Fri, 15 Mar 2019 10:39:26 +0100 Subject: Cookie path with reverse proxy In-Reply-To: <012901d4da8e$482b1060$d8813120$@roze.lv> References: <012901d4da8e$482b1060$d8813120$@roze.lv> Message-ID: Thanks for the answer Reinis. The directive proxy_cookie_path is surely the solution to the problem I described in this mail. (But I just found out that my cookie wasn't set by the server-side API but on the client side. So the problem wasn't the NGINX config. Thank you anyway!) On Thu, 14 Mar 2019 at 18:49, Reinis Rozitis wrote: > > Is there a way to set the path of the cookies to /, regardless which GUI > is used? > > Yes you can make nginx to change the cookie > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path > > > If the path from the backend app is unknown you can probably use a regex > to match everything: > > proxy_cookie_path ~*^/.* /; > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Stefan Meier Kusken 21 DK-7500 Holstebro -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.callum at gmail.com Fri Mar 15 14:17:01 2019 From: scott.callum at gmail.com (Callum Scott) Date: Fri, 15 Mar 2019 14:17:01 +0000 Subject: File Downloads with reverse proxy Message-ID: Hi All, I am having difficulty forcing downloads of mp4 files with a ?download query in the url.
I am proxying files from an S3 bucket like this location ~* ^/myvideo/content/(.*) { set $bucket 'mys3buket.domain.com'; set $aws_access 'my_aws_access_key'; set $aws_secret 'my_aws_secret_key'; set $url_full "$1"; set_by_lua $now "return ngx.cookie_time(ngx.time())"; set $string_to_sign "$request_method\n\n\n\nx-amz-date:${now}\n/$bucket/$url_full"; set_hmac_sha1 $aws_signature $aws_secret $string_to_sign; set_encode_base64 $aws_signature $aws_signature; resolver 172.31.0.2 valid=300s; resolver_timeout 10s; proxy_http_version 1.1; proxy_set_header Host $bucket.s3.amazonaws.com; proxy_set_header x-amz-date $now; proxy_set_header Authorization "AWS $aws_access:$aws_signature"; proxy_buffering off; proxy_intercept_errors on; rewrite .* /$url_full break; more_set_headers 'Access-Control-Allow-Origin: $cors_header' 'Vary: Origin'; proxy_pass http://s3.amazonaws.com; } and have another location section like this location ~* (.*\.mp4\?download) { autoindex off; expires 365d; add_header Pragma public; add_header Cache-Control "public"; if ($arg_dl = "1") { add_header Content-disposition "attachment; filename=$1"; } } The equivalent Apache config that works is # Add headers to force download if required RewriteCond %{REQUEST_URI} \.mp4$ RewriteCond %{QUERY_STRING} ^download$ RewriteRule ^ "-" [E=dwn:1] I was expecting the video to download in this case; however, it is just streamed instead, as it would be without the ?download query. Can someone please suggest where I am going wrong? Regards -- Callum -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Fri Mar 15 14:34:40 2019 From: nginx-forum at forum.nginx.org (gogan) Date: Fri, 15 Mar 2019 10:34:40 -0400 Subject: nginx directives geo and map behind proxy In-Reply-To: <20190314183337.mt4gfgyyfpdritd5@daoine.org> References: <20190314183337.mt4gfgyyfpdritd5@daoine.org> Message-ID: <47132426caa425a701b489f7b398d7df.NginxMailingListEnglish@forum.nginx.org> Hi, thanks for the response. I try it with a short view. Situation 1) Proxy (external, myracloud) <--- Connect official way | LB/Proxy (internal) | w-1 w-2 w-3 .. w10 Situation 2) LB/Proxy (internal) <--- directly connect | w-1 w-2 w-3 .. w10 In both situations I see real client IP addresses in server log on webservers and proxy/loadbalancer. In situation 1 traffic is general limited without exceptions. In situation 2 traffic is limited as expected, all is fine. ====== server.conf===== limit_req_zone $botlimit zone=req_limit_per_login:10m rate=4r/s; ... location ~ ^(/userzentrum/login).*$ { limit_req zone=req_limit_per_login; proxy_pass xxx_application; include /etc/nginx/proxy_params; } ====nginx conf==== geo $limited { default 0; x.x.x.x 1; } map $limited $botlimit { 1 ''; 0 $remote_addr; } ... # get x-real-ip from myracloud set_real_ip_from x.x.x.x; real_ip_header CF-Connecting-IP; real_ip_recursive on; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283352,283396#msg-283396 From nginx-forum at forum.nginx.org Fri Mar 15 14:38:25 2019 From: nginx-forum at forum.nginx.org (WoMa) Date: Fri, 15 Mar 2019 10:38:25 -0400 Subject: =?UTF-8?Q?Nginx_can=E2=80=99t_proxy_client_certificate_authentication?= Message-ID: Hi, all I have path: request https -> nginx -> haproxy -> http application It works fine until I add client certificate authentication on haproxy. 
When I add client certificate authentication on haproxy I am getting an error on nginx: 2019/03/14 17:39:39 [error] 1090#0: *6254 SSL_do_handshake() failed (SSL: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:SSL alert number 40) while SSL handshaking to upstream, When I test it without nginx (https -> haproxy -> http application ) I can authenticate with a client certificate and everything works fine. (On nginx I proxy to haproxy only for location /contextroot1 and location /contextroot2) Any help or suggestions are appreciated. Thanks! My nginx version: 1.10.2 My nginx config: upstream backend_www { server 172.16.1.4:443; } upstream backend_lbxaproxy { server 172.16.1.5:443; } server { listen 443 ssl; server_name www.sampledomain.com; ssl on; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; ssl_certificate /etc/pki/tls/certs/www.sampledomain.com/sampledomain.crt; ssl_certificate_key /etc/pki/tls/certs/www.sampledomain.com/sampledomain.key; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/pki/tls/certs/www.eskok.pl/CA_root.crt; ssl_session_cache shared:SSL:10m; ssl_session_timeout 1h; ssl_dhparam /etc/pki/tls/certs/dhparam.pem; location / { proxy_pass https://backend_www; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwared-For $proxy_add_x_forwarded_for; } location /contextroot1 { proxy_pass https://backend_lbxaproxy/contextroot1; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwared-For $proxy_add_x_forwarded_for; } location /contextroot2 { proxy_pass https://backend_lbxaproxy/contextroot2; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwared-For $proxy_add_x_forwarded_for; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283393,283393#msg-283393 From francis
at daoine.org (Francis Daly) Date: Sat, 16 Mar 2019 09:33:12 +0000 Subject: nginx directives geo and map behind proxy In-Reply-To: <47132426caa425a701b489f7b398d7df.NginxMailingListEnglish@forum.nginx.org> References: <20190314183337.mt4gfgyyfpdritd5@daoine.org> <47132426caa425a701b489f7b398d7df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190316093312.uhsh52wac6pmnaxp@daoine.org> On Fri, Mar 15, 2019 at 10:34:40AM -0400, gogan wrote: Hi there, > In both situations I see real client IP addresses in server log on > webservers and proxy/loadbalancer. > > In situation 1 traffic is general limited without exceptions. > In situation 2 traffic is limited as expected, all is fine. > geo $limited { > default 0; > x.x.x.x 1; > } > > map $limited $botlimit { > 1 ''; > 0 $remote_addr; > } That config says that requests with $remote_addr set to x.x.x.x should not be limited, and everything else should be limited. > # get x-real-ip from myracloud > set_real_ip_from x.x.x.x; > > real_ip_header CF-Connecting-IP; The comment mentions x-real-ip, but the code says CF-Connecting-IP. Does myracloud set the Cloudflare header? If you temporarily add the config stanza to nginx at server level: location = /iptest { return 200 "CF-Connecting-IP: $http_cf_connecting_ip;\nX-Real-IP: $http_x_real_ip;\nremote_addr: $remote_addr;\nreal remote: $realip_remote_addr;\ngeo: $limited;\nbotlimit: $botlimit\n"; } and make some requests for /iptest, which lines show the address x.x.x.x and which lines show the real client IP address? That might help show what the actual incoming requests look like.
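For reference, the pieces being discussed fit together roughly like this (a sketch only; x.x.x.x stands for the proxy and exempt addresses as in the messages above, and the limit_req_zone line is an assumption about how $botlimit is eventually used, since that part of the config is not shown in the thread):

```nginx
# Trust the edge proxy and take the client address from its header
set_real_ip_from x.x.x.x;            # proxy address (placeholder)
real_ip_header   CF-Connecting-IP;   # header the proxy actually sets

# Mark the one client address that should be exempt from limiting
geo $limited {
    default 0;
    x.x.x.x 1;   # exempt address (placeholder)
}

# An empty key disables limiting; everyone else is keyed by address
map $limited $botlimit {
    1 '';
    0 $remote_addr;
}

# Assumed usage: a request-rate zone keyed on $botlimit
limit_req_zone $botlimit zone=botzone:10m rate=10r/s;
```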
f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Mar 16 09:56:15 2019 From: francis at daoine.org (Francis Daly) Date: Sat, 16 Mar 2019 09:56:15 +0000 Subject: File Downloads with reverse proxy In-Reply-To: References: Message-ID: <20190316095615.gwjmrypl5ckzgtxt@daoine.org> On Fri, Mar 15, 2019 at 02:17:01PM +0000, Callum Scott wrote: Hi there, > I am having difficulty forcing downloads of mp4 files with a ?download > query in the url. > > I am proxying files from an s3 bucket like this > > location ~* ^/myvideo/content/(.*) { ... > proxy_pass http://s3.amazonaws.com; > } > > > and have another location section like this > > location ~* (.*\.mp4\?download) { > autoindex off; > expires 365d; > add_header Pragma public; > add_header Cache-Control "public"; > > if ($arg_dl = "1") { > add_header Content-disposition "attachment; filename=$1"; > } > } > I was expecting the video to download in this case, however it is just > streamed instead as it would be without the ?download query. > > Can someone please suggest where I am going wrong? * one request is handled in one location, so if your second location is used, the proxy_pass from the first is not used. See http://nginx.org/r/location * $arg_dl is the value of the "dl" argument. Possibly you want $arg_download, or 'if ($args = "download")'. See http://nginx.org/r/$arg_ * nginx does not use the query string (? part) in choosing which location to use. So your second location will not match any "normal" requests. I'm not certain what your overall requirements are; but possibly the simplest config would be not to configure things specially, and just let the client decide what they want to download by using their "save link as" feature. I hope this points you in the right direction.
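Following those three points, one way to restructure the config (a sketch, untested; it keeps the S3 proxying from the question and uses the 'if ($args = "download")' form suggested above) is to handle everything in the single proxied location, since nginx never matches locations on the query string:

```nginx
location ~* ^/myvideo/content/(?<vidname>.*) {
    # The download case must be handled inside this same location;
    # a second location would bypass the proxy_pass entirely.
    if ($args = "download") {
        add_header Content-Disposition "attachment; filename=$vidname";
    }
    proxy_pass http://s3.amazonaws.com;
}
```

The named capture (?<vidname>...) avoids any ambiguity about what $1 refers to once other regex matches are in play.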
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Mar 16 10:08:38 2019 From: francis at daoine.org (Francis Daly) Date: Sat, 16 Mar 2019 10:08:38 +0000 Subject: =?UTF-8?Q?Re=3A_Nginx_can=E2=80=99t_proxy_client_certificate_authenticatio?= =?UTF-8?Q?n?= In-Reply-To: References: Message-ID: <20190316100838.lowyvrnnou4pj7cq@daoine.org> On Fri, Mar 15, 2019 at 10:38:25AM -0400, WoMa wrote: Hi there, > I have path: request https -> nginx -> haproxy -> http application > It works fine until I add client certificate authentication on haproxy. > When I add client certificate authentication on haproxy I getting error on > nginx: Nothing can proxy (at an application level) client certificate authentication. That is the point of certificates. > When I test it without nginx (https -> haproxy -> http application ) I can > authenticate with a client certificate > and all work fine. You could try a tcp-level proxy, which in nginx is spelled "stream". But... > (On nginx proxy to haproxy only location /contextroot1 and location > /contextroot2) ...then you lose the http-level facilities, like handling locations. > Any help or suggestions are appreciated. In nginx, you could include a header that includes an indication of the client certificate and the fact that nginx has confirmed that the client does have the certificate. Then in haproxy, you would have to add something so that it trusts, without verifying, that the client has the indicated certificate. (If that header comes in a request from nginx.) I do not know if the suggested haproxy config is possible. 
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Mar 16 18:30:16 2019 From: nginx-forum at forum.nginx.org (WoMa) Date: Sat, 16 Mar 2019 14:30:16 -0400 Subject: =?UTF-8?Q?Re=3A_Nginx_can=E2=80=99t_proxy_client_certificate_authenticatio?= =?UTF-8?Q?n?= In-Reply-To: <20190316100838.lowyvrnnou4pj7cq@daoine.org> References: <20190316100838.lowyvrnnou4pj7cq@daoine.org> Message-ID: <6c7aa3809f9938fed2b2cdbd24747c5b.NginxMailingListEnglish@forum.nginx.org> Hi Francis, I solved this problem, maybe not elegantly, but it works. 1) Client certificate authentication is set on the nginx side and not on haproxy: ssl_client_certificate /etc/pki/tls/certs/CA_COPE_SZAFIR_TEST.cer; 2) Authentication is optional and not required: ssl_verify_client optional; 3) In locations that require a certificate (/polishapi and /identityserver), it is verified whether authentication with the client's certificate was successful; if not, error 403 (access denied) is returned: if ($ssl_client_verify != SUCCESS) { return 403; } I tested on IE 11, FF 65 and Chrome 72; the behavior was correct. Good luck, M.W. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283393,283401#msg-283401 From nginx-forum at forum.nginx.org Sun Mar 17 05:21:32 2019 From: nginx-forum at forum.nginx.org (yuy) Date: Sun, 17 Mar 2019 01:21:32 -0400 Subject: ngx_stream_log_module log_format not working Message-ID: <65aaf8805b498e221a2ea9c4a62327cb.NginxMailingListEnglish@forum.nginx.org> I am trying to enable access log for stream on 1.15.9-1~xenial but the log_format definition doesn't seem to work. Config syntax check fails as if the stream log_format was not defined > $ cat nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; ...
include /etc/nginx/stream.conf; > /etc/nginx$ cat stream.conf stream{ access_log /var/log/nginx/stream.access.log proxy; log_format proxy '$remote_addr [$time_local] ' '$protocol $status $bytes_sent $bytes_received ' '$session_time "$upstream_addr" ' '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"'; ... } > $ sudo nginx -t nginx: [emerg] unknown log format "proxy" in /etc/nginx/stream.conf:3 nginx: configuration file /etc/nginx/nginx.conf test failed Escape issue? Does nginx -t actually test the content of log_format string? I used the string from https://nginx.org/en/docs/stream/ngx_stream_log_module.html#log_format Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283403,283403#msg-283403 From francis at daoine.org Sun Mar 17 08:46:51 2019 From: francis at daoine.org (Francis Daly) Date: Sun, 17 Mar 2019 08:46:51 +0000 Subject: ngx_stream_log_module log_format not working In-Reply-To: <65aaf8805b498e221a2ea9c4a62327cb.NginxMailingListEnglish@forum.nginx.org> References: <65aaf8805b498e221a2ea9c4a62327cb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190317084651.tt6myxd6m4szlanb@daoine.org> On Sun, Mar 17, 2019 at 01:21:32AM -0400, yuy wrote: Hi there, > I am trying to enable access log for stream on 1.15.9-1~xenial but > log_format definition doesn't seem to work. Config syntax check fails as if > stream log_format was not defined The error reports that "proxy" is unknown on line 3. You use "proxy" on line 3, and define it from line 4. > stream{ > > access_log /var/log/nginx/stream.access.log proxy; > log_format proxy '$remote_addr [$time_local] ' > '$protocol $status $bytes_sent $bytes_received ' > '$session_time "$upstream_addr" ' > '"$upstream_bytes_sent" "$upstream_bytes_received" > "$upstream_connect_time"'; > ... > } Switch the order of those directives and it should work. 
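Following that advice, the corrected stream.conf would define the format before the access_log directive that references it (same content as above, only reordered; the server blocks elided in the question stay elided):

```nginx
stream {
    log_format proxy '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/stream.access.log proxy;

    # server { ... } blocks follow
}
```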
f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Mar 17 17:34:17 2019 From: francis at daoine.org (Francis Daly) Date: Sun, 17 Mar 2019 17:34:17 +0000 Subject: =?UTF-8?Q?Re=3A_Nginx_can=E2=80=99t_proxy_client_certificate_authenticatio?= =?UTF-8?Q?n?= In-Reply-To: <6c7aa3809f9938fed2b2cdbd24747c5b.NginxMailingListEnglish@forum.nginx.org> References: <20190316100838.lowyvrnnou4pj7cq@daoine.org> <6c7aa3809f9938fed2b2cdbd24747c5b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190317173417.yosph7lf72lktynw@daoine.org> On Sat, Mar 16, 2019 at 02:30:16PM -0400, WoMa wrote: Hi there, > I solved this problem maybe not elegantly but it works. Good that you found a solution. I think that what you describe is the way to do it -- nginx does the client certificate authentication, and does not try to proxy that aspect. > 3 ) In locations that require a certificate (/ polishapi and / > identityserver), it is verified if the authentication was successful > client's certificate, if not, error 403 is returned - access denied > > if ($ssl_client_verify != SUCCESS) { > return 403; > } The only extra piece you could add, if the haproxy side wanted to know which specific client certificate was used, would be to use some of the variables listed around http://nginx.org/r/$ssl_client_i_dn in headers sent to the upstream. That's probably just an extra "nice-to-have", rather than a requirement, of course. Cheers, f -- Francis Daly francis at daoine.org From asierguti at gmail.com Mon Mar 18 07:58:30 2019 From: asierguti at gmail.com (Asier Gutierrez) Date: Mon, 18 Mar 2019 10:58:30 +0300 Subject: 3rd party module contribution Message-ID: Hi there, At Megalabs we have created a couple of nginx modules to generate and validate AWS signatures. We certainly believe that these modules can be very useful to other people, hence we would like to include them in the 3rd party nginx module list. How can we contribute to nginx with our modules?
Thanks in advance, Asier Gutierrez -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Mar 18 08:31:20 2019 From: nginx-forum at forum.nginx.org (WoMa) Date: Mon, 18 Mar 2019 04:31:20 -0400 Subject: =?UTF-8?Q?Re=3A_Nginx_can=E2=80=99t_proxy_client_certificate_authenticatio?= =?UTF-8?Q?n?= In-Reply-To: <20190317173417.yosph7lf72lktynw@daoine.org> References: <20190317173417.yosph7lf72lktynw@daoine.org> Message-ID: <8239b03222274c768ce77d1f1410963c.NginxMailingListEnglish@forum.nginx.org> Hi Francis, >The only extra piece you could add, if the haproxy side wanted to know >which specific client certificate was used, would be to use some of the >variables listed around http://nginx.org/r/$ssl_client_i_dn in headers >sent to the upstream. Thanks, I will probably need to pass this information to haproxy. Cheers, M.W. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283393,283411#msg-283411 From nginx-forum at forum.nginx.org Mon Mar 18 11:25:54 2019 From: nginx-forum at forum.nginx.org (CCS) Date: Mon, 18 Mar 2019 07:25:54 -0400 Subject: ngx_stream_log_module log_format not working In-Reply-To: <20190317084651.tt6myxd6m4szlanb@daoine.org> References: <20190317084651.tt6myxd6m4szlanb@daoine.org> Message-ID: <488811480c91949a586d9dff94f42c4a.NginxMailingListEnglish@forum.nginx.org> This worked for me without an issue. Try and copy. log_format proxy '$remote_addr [$time_local]' 'with SSL Server name "$ssl_server_name" ' 'proxying to "$selected_upstream" ' '$protocol $status $bytes_sent $bytes_received ' '$session_time'; access_log /var/log/nginx/nginx-access.log proxy; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283403,283414#msg-283414 From osa at freebsd.org.ru Mon Mar 18 13:24:18 2019 From: osa at freebsd.org.ru (Sergey A.
Osokin) Date: Mon, 18 Mar 2019 16:24:18 +0300 Subject: 3rd party module contribution In-Reply-To: References: Message-ID: <20190318132418.GB10348@FreeBSD.org.ru> Hi Asier, the FreeBSD ports tree contains two ports of nginx - stable and mainline versions. Both ports support 60+ third-party modules: clojure, passenger, ldap_auth and so on. Usually, third-party module contributors for nginx use github.com and other web-based hosting services for version control to develop and distribute their modules. Also, nginx' wiki contains information about third-party modules for nginx: please see https://www.nginx.com/resources/wiki/modules/ for details. It can be forked on https://github.com/nginxinc/nginx-wiki to add information about a third-party module. Hope it helps. -- Sergey Osokin On Mon, Mar 18, 2019 at 10:58:30AM +0300, Asier Gutierrez wrote: > Hi there, > > At Megalabs we have created a couple of nginx modules to generate and > validate AWS signatures. We certainly believe that these modules can be > very useful to other people, hence we would like to include them in the 3rd > party nginx module list. > > How can we contribute to nginx with our modules? > > Thanks in advance, > Asier Gutierrez > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Mar 18 13:49:05 2019 From: nginx-forum at forum.nginx.org (gogan) Date: Mon, 18 Mar 2019 09:49:05 -0400 Subject: nginx directives geo and map behind proxy In-Reply-To: <20190316093312.uhsh52wac6pmnaxp@daoine.org> References: <20190316093312.uhsh52wac6pmnaxp@daoine.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > geo $limited { > default 0; > x.x.x.x 1; > } > > map $limited $botlimit { > 1 ''; > 0 $remote_addr; > } > That config says that requests with $remote_addr set to x.x.x.x should > not be limited, and everything else should be limited.
Yes, this is how it should work. > Does myracloud set the Cloudflare header? Yes. > and make some requests for /iptest, which lines show the address > x.x.x.x > and which lines show the real client IP address? > With myracloud, CF-Connecting-IP and remote_addr are set to the client IP, and real remote is the myracloud IP. botlimit is set to remote_addr in general, and to x.x.x.x when the request hits the IP address given in geo. So far it is as I expected, and now it works. I don't know why, but I have a better understanding because of your help, thanks! One small thing still: I have set limit_req_status 429; limit_conn_status 429; but in access.log I get 503. Why? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283352,283416#msg-283416 From srkadamb at cisco.com Tue Mar 19 12:49:05 2019 From: srkadamb at cisco.com (SriSagar Kadambi (srkadamb)) Date: Tue, 19 Mar 2019 12:49:05 +0000 Subject: Possible bug with Nginx Message-ID: Hi, So I'm guessing I might have run into a bug with nginx while I was perf testing one of my applications. Hope I'm wrong and you guys have a solution for me! Setup: 1 Nginx instance running as a tcp load balancer with streams enabled (I have attached the config output with this mail). 4 backend instances running the application 1 locust instance to simulate load (tcp requests) Behavior: When load is pumped into the nginx load balancer, the requests are evenly distributed between the backend app instances. When the nginx configuration is modified to remove one instance and nginx is restarted, the new config takes effect. A new nginx worker process is spawned and the older one goes into "nginx: worker process is shutting down" mode. However, we see that on the node that was removed from the list of backend instances in the nginx config, the requests still keep coming in for quite a while (~10secs-2mins depending on the load). Point to note here is that the application processes every request within 10-20ms and terminates the tcp connection.
In essence, the worker process that is scheduled to die still keeps getting new requests, when ideally all new requests should go to the new worker process. Please let me know if this is expected behavior or not. If it is, how do I ensure the worker process that is scheduled to die will get no new requests? Thanks, Sagar. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: runnning_config Type: application/octet-stream Size: 7049 bytes Desc: runnning_config URL: From nginx-forum at forum.nginx.org Wed Mar 20 06:18:29 2019 From: nginx-forum at forum.nginx.org (waleedkh) Date: Wed, 20 Mar 2019 02:18:29 -0400 Subject: Where does the content get stored? Message-ID: Hi There, I have a question I am hoping someone can assist with. I have a setup with a front end Nginx server on a public IP and two back end servers on private IPs. I am using PHP-FPM fastcgi to do my upstream load balancing on NGinx. Each server is a clone of the original NGinx server with all the web content sitting on all three of them. My question is where does that content actually get served from? I suspect it is only on the front end NGinx server. Does this mean the web content files can then be removed on the back end servers (on private IP)? If so, is it just fastcgi processing that would then happen on the back end servers, so there is no need to have any of the web application files running on it? There is also a Ruby web service running on all of them, so would that server only be running on the front end Nginx server?
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283437,283437#msg-283437 From nginx-forum at forum.nginx.org Wed Mar 20 07:57:57 2019 From: nginx-forum at forum.nginx.org (angelochen960) Date: Wed, 20 Mar 2019 03:57:57 -0400 Subject: proxy_pass and path with period Message-ID: Hi, I use proxy_pass to forward everything to a backend: location / { client_max_body_size 4M; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://local_app:8080; proxy_redirect off; } this works: /admin/utilities log: "GET /admin/utilities HTTP/1.1" 200 this does not work: /admin/utilities.my_form log: "POST /admin/utilities.my_form HTTP/1.1" 302 0 What should I do to make it work? Thanks, A.C. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283439,283439#msg-283439 From hobson42 at gmail.com Wed Mar 20 11:48:58 2019 From: hobson42 at gmail.com (Ian Hobson) Date: Wed, 20 Mar 2019 11:48:58 +0000 Subject: Where does the content get stored? In-Reply-To: References: Message-ID: <11b87de0-ca3e-981f-58ac-20799d5f11dc@gmail.com> Hi, On 20/03/2019 06:18, waleedkh wrote: > Hi There, I have a question I am hoping someone can assist with. > > I have a setup with a front end Nginx server on a public IP and two end > servers on private IP's. > > I am using PHP-FPM fastcgi to do my upstream load balancing on NGinx. I don't think that statement makes much sense. The front end Nginx receives all the requests and has to load balance as it passes to PHP-FPM. > > Each server is a clone of the original NGinx server with all the web content > sitting on all three of them. > > My question is where does that content actually get served from? I suspect > it is only on the front end NGinx server. It is impossible to tell without your front-end nginx config.
What I think you *should* be doing is this: serve the static files (.htm, .html, .js, .css, images) using the front end Nginx. That is the most efficient way to handle those. Serving .php files that exist using PHP-FPM. If I understand the set up correctly, you don't need Nginx on the backend, and you don't want PHP-FPM to have a public IP for security reasons. The reason for having all the files in each location might be that it makes the release process simple, and it also enables the front end process to test for the existence of .php files to stop "zero day" exploits. > > Does this mean the web content files can then be removed on the back end > servers (on private IP)? > > If so, is it just fastcgi processing that would then happen on the back end > servers, so there is no need to have any of the web application files > running on it? > That is how I would set it up. Rgds Ian -- Ian Hobson From nginx-forum at forum.nginx.org Wed Mar 20 13:00:25 2019 From: nginx-forum at forum.nginx.org (angelochen960) Date: Wed, 20 Mar 2019 09:00:25 -0400 Subject: (Solved) Re: proxy_pass and path with period In-Reply-To: References: Message-ID: 302 is redirect Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283439,283445#msg-283445 From nginx-forum at forum.nginx.org Wed Mar 20 22:41:01 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Wed, 20 Mar 2019 18:41:01 -0400 Subject: Possible memory leak? In-Reply-To: <20190313015729.GR1877@mdounin.ru> References: <20190313015729.GR1877@mdounin.ru> Message-ID: Hey Maxim, Thanks so much for the very thoughtful and detailed response! Sorry for the delay in getting back to you. I wanted to test all of this and then I got busy at work and haven't been able to get back to it. I've run a lot of tests on this and I'm seeing some success. :) The first test I ran was in FreeBSD just because I was curious. Lol. But I actually saw the exact same problem on it. 
I can send you some tests if you like but they look the same as the others do. Now on to the partial success! But first, I'll include the results without any modifications to malloc. -------------------------------------------------------------------------------- [root at localhost ~]# systemctl restart nginx; sleep 10; ps_mem |egrep 'RAM|nginx' Private + Shared = RAM used Program 13.8 MiB + 750.7 MiB = 764.5 MiB nginx (3) [root at localhost ~]# systemctl reload nginx; sleep 10; ps_mem |egrep 'RAM|nginx' Private + Shared = RAM used Program 27.2 MiB + 1.4 GiB = 1.5 GiB nginx (3) -------------------------------------------------------------------------------- Instead of passing the environment variables in the command line I'm going to run the tests by modifying the systemd script to use the following. -------------------------------------------------------------------------------- Environment=MALLOC_MMAP_THRESHOLD_=16384 Environment=MALLOC_MMAP_MAX_=524288 -------------------------------------------------------------------------------- Now for the test: -------------------------------------------------------------------------------- [root at localhost ~]# systemctl restart nginx; sleep 10; ps_mem |egrep 'RAM|nginx' Private + Shared = RAM used Program 7.6 MiB + 905.8 MiB = 913.4 MiB nginx (3) [root at localhost ~]# systemctl reload nginx; sleep 10; ps_mem |egrep 'RAM|nginx' Private + Shared = RAM used Program 9.7 MiB + 905.4 MiB = 915.1 MiB nginx (3) -------------------------------------------------------------------------------- Notice it starts up using more memory than before but it now does not double its RAM usage after a reload. I can deal with that but I'm still curious if you know why it's now using more RAM. Any thoughts on that? But when I bring this setup to one of my live servers the effect isn't quite the same. Here is a test from it with the same settings in systemd.
-------------------------------------------------------------------------------- [root at live ~]# systemctl restart nginx; sleep 60; ps_mem |egrep 'RAM|nginx' Private + Shared = RAM used Program 21.6 MiB + 1.4 GiB = 1.5 GiB nginx (3) [root at live ~]# systemctl reload nginx; sleep 60; ps_mem |egrep 'RAM|nginx' Private + Shared = RAM used Program 21.9 MiB + 2.3 GiB = 2.4 GiB nginx (3) -------------------------------------------------------------------------------- So that's definitely better but still not like the test environment. I'm sure part of the difference is that this server has real traffic and is somewhat busy. And it's using caching and probably has different buffer settings and such. So that may explain why there is an increase. But it's still very perplexing. What are your thoughts on all of that? And thank you again for all of the help! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283448#msg-283448 From careygister at outlook.com Thu Mar 21 00:24:19 2019 From: careygister at outlook.com (careygister) Date: Wed, 20 Mar 2019 17:24:19 -0700 (MST) Subject: Caching with SSL Enabled Runs Slowly Message-ID: <1553127859483-0.post@n2.nabble.com> I have a server with SSL enabled. It has caching enabled and fetches data from an upstream server. With SSL disabled, the downstream server reads and caches the data from the upstream server very quickly -- as quickly as the upstream server can return it. With SSL enabled, the downstream server fetches much more slowly. In fact, it appears that the downstream server is waiting for the client to request the data before reading and caching the data from the upstream server. What is going on and what can I do to increase throughput to the upstream server, and ultimately, my clients? 
Thanks, Carey -- Sent from: http://nginx.2469901.n2.nabble.com/ From mdounin at mdounin.ru Thu Mar 21 13:45:26 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Mar 2019 16:45:26 +0300 Subject: Possible memory leak? In-Reply-To: References: <20190313015729.GR1877@mdounin.ru> Message-ID: <20190321134526.GO1877@mdounin.ru> Hello! On Wed, Mar 20, 2019 at 06:41:01PM -0400, wkbrad wrote: [...] > The first test I ran was in FreeBSD just because I was curious. Lol. But I > actually saw the exact same problem on it. I can send you some tests if you > like but they look the same as the others do. On FreeBSD you'll see that _virtual_ memory usage grows on reload, but the actual memory is returned to the system and can be used by other processes. [...] > Notice it starts up using more memory than before but it now does not double > it's ram usage after a reload. I can deal with that but I'm still curious > if you know why it's now using more ram. Any thoughts on that? This is because mmap()-based individual allocations imply additional overhead. Using mmapAnd this is are costly. > > But when I bring this setup to one of my live servers the effect isn't quite > the same. Here is a test from it with the same settings in systemd. > -------------------------------------------------------------------------------- > [root at live ~]# systemctl restart nginx; sleep 60; ps_mem |egrep 'RAM|nginx' > Private + Shared = RAM used Program > 21.6 MiB + 1.4 GiB = 1.5 GiB nginx (3) > [root at live ~]# systemctl reload nginx; sleep 60; ps_mem |egrep 'RAM|nginx' > Private + Shared = RAM used Program > 21.9 MiB + 2.3 GiB = 2.4 GiB nginx (3) > -------------------------------------------------------------------------------- > > So that's definitely better but still not like the test environment. I'm > sure part of the difference is that this server has real traffic and is > somewhat busy. And it's using caching and probably has different buffer > settings and such. 
So that may explain why there is an increase. But it's > still very perplexing. > > What are your thoughts on all of that? > > And thank you again for all of the help! > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283448#msg-283448 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Mar 21 13:47:50 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Mar 2019 16:47:50 +0300 Subject: Possible memory leak? In-Reply-To: <20190321134526.GO1877@mdounin.ru> References: <20190313015729.GR1877@mdounin.ru> <20190321134526.GO1877@mdounin.ru> Message-ID: <20190321134750.GP1877@mdounin.ru> Hello! On Thu, Mar 21, 2019 at 04:45:26PM +0300, Maxim Dounin wrote: > On Wed, Mar 20, 2019 at 06:41:01PM -0400, wkbrad wrote: > > [...] > > > The first test I ran was in FreeBSD just because I was curious. Lol. But I > > actually saw the exact same problem on it. I can send you some tests if you > > like but they look the same as the others do. > > On FreeBSD you'll see that _virtual_ memory usage grows on reload, > but the actual memory is returned to the system and can be used by > other processes. > > [...] > > > Notice it starts up using more memory than before but it now does not double > > it's ram usage after a reload. I can deal with that but I'm still curious > > if you know why it's now using more ram. Any thoughts on that? > > This is because mmap()-based individual allocations imply additional > overhead. Using mmapAnd this is are costly. Err, sorry about this, this was a draft not meant to be sent. Either way, it says most of what I was going to write here. [...] -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Mar 21 15:29:04 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Thu, 21 Mar 2019 11:29:04 -0400 Subject: Possible memory leak? 
In-Reply-To: <20190321134526.GO1877@mdounin.ru> References: <20190321134526.GO1877@mdounin.ru> Message-ID: <5ad221b8db22dece09b83c5e0a248dd7.NginxMailingListEnglish@forum.nginx.org> Thanks again Maxim! You're really providing some valuable insights for me. > This is because mmap()-based individual allocations imply additional > overhead. Using mmapAnd this is are costly. That's what I figured might be going on. I assume there are also some negative impacts on performance with accessing memory that is allocated via mmap. Is that right? > On FreeBSD you'll see that _virtual_ memory usage grows on reload, > but the actual memory is returned to the system and can be used by > other processes. Actually, I see an increase in virtual and resident. Here are my tests from FreeBSD. -------------------------------------------------------------------------------- service nginx restart; sleep 10; echo; ps aux|grep -v grep|egrep 'RSS|nginx' USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND root 923 0.0 5.4 121840 110368 - Ss 08:10 0:00.00 nginx: master process /usr/local/ www 924 0.0 5.4 121840 110372 - S 08:10 0:00.01 nginx: worker process (nginx) -------------------------------------------------------------------------------- service nginx reload; sleep 10; echo; ps aux|grep -v grep|egrep 'RSS|nginx' USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND root 923 0.0 10.2 219828 209388 - Ss 08:10 0:00.44 nginx: master process /usr/local/ www 940 0.0 10.2 219828 209384 - S 08:10 0:00.01 nginx: worker process (nginx) -------------------------------------------------------------------------------- Those tests look exactly like the Linux tests and it's definitely not released back to the system. I think a big part of this is the 2 heaps that Anoop found. Nginx seems to be using those 2 heaps in a round robin way when it reloads. It looks like it's doing this. 
Startup: 1st heap is created 1st Reload: 2nd heap is created 2nd Reload: 1st heap is cleared and used but 2nd heap stays in memory. So on the 2nd reload it does indeed clear that memory for the heap it is using. Is there any way for us to manually clear the unused heap? Even in the case of my test VM using the adjusted malloc, it still creates the second heap and does not clear it. It's just not putting much in the heap in my test VM. You may be asking why I'm even looking into this and I've had a lot of push back and questions as to why I'm doing this. My point is this. Even if Nginx on my server is only using 50M of RAM, wouldn't it be better for it to use only 25M when that's all it needs? That's clearly a problem in my mind. Can you address why that is not a problem? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283465#msg-283465 From mdounin at mdounin.ru Thu Mar 21 18:16:19 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Mar 2019 21:16:19 +0300 Subject: Possible memory leak? In-Reply-To: <5ad221b8db22dece09b83c5e0a248dd7.NginxMailingListEnglish@forum.nginx.org> References: <20190321134526.GO1877@mdounin.ru> <5ad221b8db22dece09b83c5e0a248dd7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190321181619.GR1877@mdounin.ru> Hello! On Thu, Mar 21, 2019 at 11:29:04AM -0400, wkbrad wrote: > Thanks again Maxim! You're really providing some valuable insights for me. > > > This is because mmap()-based individual allocations imply additional > > overhead. Using mmapAnd this is are costly. > > That's what I figured might be going on. I assume there are also some > negative impacts on performance with accessing memory that is allocated via > mmap. Is that right? Accessing the memory shouldn't depend on the way it is allocated.
But there may be negative effects on the memory allocator performance due to the number of individual mmap() allocations it has to maintain, and there may be negative effects on the whole system performance with such a big number of mmap()s. > > On FreeBSD you'll see that _virtual_ memory usage grows on reload, > > but the actual memory is returned to the system and can be used by > > other processes. > > Actually, I see an increase in virtual and resident. Here are my tests from > FreeBSD. > -------------------------------------------------------------------------------- > service nginx restart; sleep 10; echo; ps aux|grep -v grep|egrep > 'RSS|nginx' > > USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND > root 923 0.0 5.4 121840 110368 - Ss 08:10 0:00.00 nginx: master > process /usr/local/ > www 924 0.0 5.4 121840 110372 - S 08:10 0:00.01 nginx: worker > process (nginx) > -------------------------------------------------------------------------------- > service nginx reload; sleep 10; echo; ps aux|grep -v grep|egrep 'RSS|nginx' > > USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND > root 923 0.0 10.2 219828 209388 - Ss 08:10 0:00.44 nginx: master > process /usr/local/ > www 940 0.0 10.2 219828 209384 - S 08:10 0:00.01 nginx: worker > process (nginx) > -------------------------------------------------------------------------------- > > Those tests look exactly like the Linux tests and it's definitely not > released back to the system. The problem is that "ps" doesn't know if memory counted in RSS can be reclaimed by the system or not. And reclaiming the memory might not happen till the system needs this memory for something else. 
You may have a better picture by looking at memory stats reported by top: $ ps aux | grep nginx | grep -v grep; top | grep Mem mdounin 55612 0.0 18.0 55192 41868 0 S+ 18:48 0:00.29 nginx: master mdounin 55613 0.0 18.0 55192 41868 0 S+ 18:48 0:00.00 nginx: worker Mem: 44M Active, 8208K Inact, 63M Wired, 29M Buf, 108M Free $ kill -HUP 55612 $ ps aux | grep nginx | grep -v grep; top | grep Mem mdounin 55612 1.0 32.6 87960 75808 0 S+ 18:48 0:00.68 nginx: master mdounin 55619 1.0 32.6 87960 75808 0 S+ 18:49 0:00.00 nginx: worker Mem: 52M Active, 34M Inact, 63M Wired, 29M Buf, 74M Free Note that most of the additional memory is now counted as "Inact", i.e., it can be reclaimed by the system at any time. The other factor is that jemalloc maintains various allocation caches, and some of the allocations are preserved there and not returned to the system. [...] > You may be asking why I'm even looking into this and I've had a lot of push > back and questions as to why I'm doing this. My point is this. Even if > Nginx on my server is only using 50M of ram, wouldn't it be better for it to > use only 25M when that's all it needs? That's clearly a problem in my mind. > Can you address why that is not a problem? There are two things to keep in mind here: - Every task needs memory. Depending on the amount of memory used and how common the task is, it may or may not be worth optimizing. In most setups, nginx uses far less memory for the configuration data than for other things. And optimizing configuration storage is hardly worth the effort, as it doesn't matter whether your configuration uses 25M or 50M compared to a 2G shared memory zone used for cache keys, or several gigabytes of proxy buffers. - The particular effect of "doubling" memory usage on reload you are trying to understand in this thread is not really about nginx, but rather about how your system allocator works. You can tune or improve your system allocator to handle this better, but see above regarding whether it is worth the effort.
If you still think this is a problem for you, and you want to save 25M of memory, several ways to improve things have already been suggested - including re-writing your configuration, re-compiling nginx without unused modules, or tuning/replacing your system allocator. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Mar 21 21:55:23 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Thu, 21 Mar 2019 17:55:23 -0400 Subject: Possible memory leak? In-Reply-To: <20190321181619.GR1877@mdounin.ru> References: <20190321181619.GR1877@mdounin.ru> Message-ID: <08f506b0c9acee091b227deb47230a3e.NginxMailingListEnglish@forum.nginx.org> Hey Maxim, Thanks again! I was a little confused at first because your tests in FreeBSD were so different from mine, but then I found what you did wrong. You were testing the 2nd reload, but the issue can only be seen on the first reload. Here is my test to show what I mean. -------------------------------------------------------------------------------- root at freebsd:~ # service nginx restart; sleep 10; ps aux | grep -v grep | grep nginx ; top | grep Mem root 864 0.0 5.4 121796 110344 - Ss 10:02 0:00.00 nginx: master process /usr/local/s www 865 0.0 5.4 121796 110348 - S 10:02 0:00.00 nginx: worker process (nginx) Mem: 116M Active, 2808K Inact, 87M Wired, 42M Buf, 1750M Free root at freebsd:~ # service nginx reload ; sleep 10 ; ps aux | grep -v grep | grep nginx ; top | grep Mem root 864 0.0 10.2 219844 209392 - Ss 10:02 0:00.45 nginx: master process /usr/local/s www 881 0.0 10.2 219844 209388 - S 10:03 0:00.01 nginx: worker process (nginx) Mem: 213M Active, 7652K Inact, 88M Wired, 42M Buf, 1646M Free root at freebsd:~ # service nginx reload ; sleep 10 ; ps aux | grep -v grep | grep nginx ; top | grep Mem root 864 0.0 11.4 242920 234400 - Ss 10:02 0:00.91 nginx: master process /usr/local/ www 898 0.0 11.4 242920 234400 - S 10:07 0:00.01 nginx: worker process (nginx) Mem: 239M Active, 82M Inact, 89M Wired, 42M
Buf, 1546M Free -------------------------------------------------------------------------------- Notice that my 2nd reload looks just like your test. But compare the restart test to the 1st reload and you'll see what I mean. Active ram doubles, it's not in inactive, and the free ram also went down by the same amount that active ram increased. I bet if you run that same test on your server you'll see the same thing that I do. Can you run that test and send me the results? I'm sure you know this but make sure that the reload is complete before you check the ram usage or you'll get misleading results. :) Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283469#msg-283469 From francis at daoine.org Thu Mar 21 23:11:12 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 21 Mar 2019 23:11:12 +0000 Subject: nginx directives geo and map behind proxy In-Reply-To: References: <20190316093312.uhsh52wac6pmnaxp@daoine.org> Message-ID: <20190321231112.gmjpwox4yi6x2kfs@daoine.org> On Mon, Mar 18, 2019 at 09:49:05AM -0400, gogan wrote: Hi there, > With myracloud CF-Connecting-IP and remote_addr are set to client ip and > real remote is myracloud ip. botlimit is set to remote_addr in general and > to x.x.x.x if hits in geo given ip address. So far as I expected and now it > works. I don't know why, but I have a better understanding because of your > help, thanks! Good that you have a working setup. > A trifle still ... is set > > limit_req_status 429; > limit_conn_status 429; > > But in access.log I get 503, why? It works for me -- I get 429 in access.log. Can you show a small but complete config file and test requests that demonstrate the problem? 
f -- Francis Daly francis at daoine.org From zn1314 at 126.com Fri Mar 22 06:47:36 2019 From: zn1314 at 126.com (David Ni) Date: Fri, 22 Mar 2019 14:47:36 +0800 (CST) Subject: nginx servers share session or cookies In-Reply-To: <3fd2ca45.6896.169807ebaaf.Coremail.zn1314@126.com> References: <3fd2ca45.6896.169807ebaaf.Coremail.zn1314@126.com> Message-ID: <165822bc.50dd.169a427a89c.Coremail.zn1314@126.com> Hi Experts, Who can help with this? Thanks very much. At 2019-03-15 16:36:22, "David Ni" wrote: Hi Nginx Experts, I have one requirement right now: we are using nginx with LDAP auth (I compiled nginx with the auth_ldap module), and I created many servers like datanode02.bddev.test.net and datanode03.bddev.test.net. I have successfully configured nginx with LDAP auth, so when I access these servers, we need to input the correct username and password stored in LDAP. My requirement is whether datanode02.bddev.test.net and datanode03.bddev.test.net can share cookies or sessions with each other, so that if I have accessed datanode02.bddev.test.net successfully, I don't need to input the username and password when I access datanode03.bddev.test.net. Is this possible? If possible, how can this be achieved? Thanks very much! server { listen 80; server_name datanode02.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; location / { proxy_pass http://dev-datanode02:8042/; more_clear_headers "X-Frame-options"; } } server { listen 80; server_name datanode03.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; location / { proxy_pass http://dev-datanode03:8042/; more_clear_headers "X-Frame-options"; } } -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hungnv at opensource.com.vn Fri Mar 22 06:49:36 2019 From: hungnv at opensource.com.vn (Hung Nguyen) Date: Fri, 22 Mar 2019 13:49:36 +0700 Subject: nginx servers share session or cookies In-Reply-To: <165822bc.50dd.169a427a89c.Coremail.zn1314@126.com> References: <3fd2ca45.6896.169807ebaaf.Coremail.zn1314@126.com> <165822bc.50dd.169a427a89c.Coremail.zn1314@126.com> Message-ID: <286C47FC-4B35-4C88-BDA2-EEAF53EF7440@opensource.com.vn> Just for your information: cookies are stored on the client side, so there's no need to share them between servers. > On Mar 22, 2019, at 1:47 PM, David Ni wrote: > > Hi Experts, > Who can help with this?Thanks very much. > > > > > > At 2019-03-15 16:36:22, "David Ni" wrote: > Hi Nginx Experts, > I have one requirement right now,we are using nginx with ldap auth(I compiled nginx with auth_ldap module),and I created many servers like datanode02.bddev.test.net datanode03.bddev.test.net,I have successfully conigured nginx with ldap auth,so when I access these servers ,we need to input the correct username and password which stored in ldap,my requirement is that whether datanode02.bddev.test.net datanode03.bddev.test.net can share cookies or session with each other,so that if I have accessed datanode02.bddev.test.net successfully,I don't need to input username and password when I access datanode03.bddev.test.net , is this possible? If possible,how to achieve this?Thanks very much!
> > > server { > listen 80; > server_name datanode02.bddev.test.net; > error_log /var/log/nginx/error_for_bigdata.log info; > access_log /var/log/nginx/http_access_for_bigdata.log main; > auth_ldap "Restricted Space"; > auth_ldap_servers bigdataldap; > > location / { > proxy_pass http://dev-datanode02:8042/; > more_clear_headers "X-Frame-options"; > } > } > server { > listen 80; > server_name datanode03.bddev.test.net; > error_log /var/log/nginx/error_for_bigdata.log info; > access_log /var/log/nginx/http_access_for_bigdata.log main; > auth_ldap "Restricted Space"; > auth_ldap_servers bigdataldap; > > location / { > proxy_pass http://dev-datanode03:8042/; > more_clear_headers "X-Frame-options"; > } > } > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 22 06:59:29 2019 From: nginx-forum at forum.nginx.org (waleedkh) Date: Fri, 22 Mar 2019 02:59:29 -0400 Subject: Where does the content get stored? In-Reply-To: <11b87de0-ca3e-981f-58ac-20799d5f11dc@gmail.com> References: <11b87de0-ca3e-981f-58ac-20799d5f11dc@gmail.com> Message-ID: <8a14c0aa9d18c8b7d8e83050ee97c893.NginxMailingListEnglish@forum.nginx.org> Ok thanks that is interesting. 
The way I configured Nginx on the front end is as such: [code] upstream ruby_application { ip_hash; server 10.0.0.21:9000 max_fails=1 fail_timeout=10s; server 10.0.0.22:9000 max_fails=1 fail_timeout=10s; } [/code] and then [code] include fastcgi_params; fastcgi_param APP_ENV prod; fastcgi_read_timeout 300; fastcgi_pass ruby_application; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; [/code] The entire web application sits in /var/www/html on the front end server (config above), as well as on both back end servers, 10.0.0.21 and 10.0.0.22, in /var/www/html. If I remove the directory /var/www/html on the back end servers, the site doesn't load (page not found). I thought the content was only being served on the front end, but it looks like the content needs to be on all the servers. That makes sense if it is pushing the requests to the other servers and they need to serve it. My question is: if the web application needs to be updated, should the code be updated on all the servers that are hosting the content? Seems a bit much, so maybe I am missing something. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283437,283475#msg-283475 From francis at daoine.org Fri Mar 22 08:03:23 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 22 Mar 2019 08:03:23 +0000 Subject: nginx servers share session or cookies In-Reply-To: <165822bc.50dd.169a427a89c.Coremail.zn1314@126.com> References: <3fd2ca45.6896.169807ebaaf.Coremail.zn1314@126.com> <165822bc.50dd.169a427a89c.Coremail.zn1314@126.com> Message-ID: <20190322080323.wornsfp75v4lelx4@daoine.org> On Fri, Mar 22, 2019 at 02:47:36PM +0800, David Ni wrote: Hi there, > Who can help with this?Thanks very much. The question looks very like the one at https://forum.nginx.org/read.php?2,282620,282620 I presume that the answer remains the same.
Which is: if "Domain" works for you, add it to the place that creates (or causes to be sent to the client) the "Set-Cookie" http response header. f -- Francis Daly francis at daoine.org From u_can at 163.com Fri Mar 22 16:20:26 2019 From: u_can at 163.com (=?GBK?B?tOWzpA==?=) Date: Sat, 23 Mar 2019 00:20:26 +0800 (CST) Subject: a proxy_module question ,maybe a bug? Message-ID: <6b5d9024.15c.169a6341bd5.Coremail.u_can@163.com> nginx version: nginx/1.10.3 uname: Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux VPS: Linode $5 I set about 900 subsite like this: proxy_cache_path /var/cache/nginx/abc.com/aaa levels=1:2 use_temp_path=off keys_zone=aaa.abc.com:64k inactive=8h max_size=128m; proxy_cache_path /var/cache/nginx/abc.com/bbb levels=1:2 use_temp_path=off keys_zone=bbb.abc.com:64k inactive=8h max_size=128m; ... the list is 900+ server { listen 80; server_name ~^([^.]+)\.abc\.com$; set $sub $1; location / { proxy_pass https://172.22.207.56/; proxy_redirect https://172.22.207.56/ /; proxy_set_header Host $sub.abc.com; proxy_cache $sub.abc.com; } } the nginx.conf is: user www-data; worker_processes auto; pid /run/nginx.pid; include /etc/nginx/modules-enabled/*.conf; events { worker_connections 4096; multi_accept on; use epoll; worker_aio_requests 256; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 15; types_hash_max_size 2048; server_tokens off; server_names_hash_bucket_size 128; server_names_hash_max_size 512; server_name_in_redirect on; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Proxy Settings ## proxy_buffering on; proxy_buffer_size 512k; proxy_buffers 32 512k; proxy_busy_buffers_size 512k; proxy_request_buffering on; proxy_cache_valid 200 6h; proxy_cache_lock on; proxy_cache_lock_timeout 60s; proxy_cache_lock_age 300s; proxy_cache_use_stale updating error timeout invalid_header http_404 http_500 http_502 http_503 http_504; proxy_cache_revalidate on; 
proxy_connect_timeout 15; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Accept-Encoding ""; proxy_intercept_errors off; proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie Vary; proxy_hide_header Cache-Control; proxy_hide_header Set-Cookie; proxy_hide_header Expires; proxy_hide_header X-Accel-Expires; include /etc/nginx/sites-enabled/*; } I used wget to download every subsite's index page, in order to refresh the cache (every 6 hours, via crontab), just like this: * */6 * * * /usr/bin/wget -t 1 -qi /root/.script/linklist-80.wget -O /dev/null When I manually run the command the first time: wget -t 1 -qi /root/.script/linklist-80.wget -O /dev/null everything is good; it takes about 130 minutes. But when I run it again immediately, nginx goes wrong within minutes: 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive
cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 ... until the entire disk is full. What's wrong with my config files? Or what's wrong with nginx? ??? ??: 13166322138 ??: u_can at 163.com ??: http://www.panswork.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hemantbist at gmail.com Sun Mar 24 00:17:43 2019 From: hemantbist at gmail.com (Hemant Bist) Date: Sat, 23 Mar 2019 17:17:43 -0700 Subject: Mapping url to physical urls using lua script or something else. Message-ID: Hi, I want to know if this is the right way to make the change (or if there is a better/recommended method). So far we have only tweaked the configuration of nginx, which scales very nicely for us. The change I need to do looks like a common case to me. Currently our urls map directly to the local dir structure, e.g. the url /foo/10000/1234/9999/my.jpg is the local file /var/www/html/foo/10000/1234/9999/my.jpg. Now the url /foo/first/second/third/my.jpg will map to /newfoo/new_first/new_second/new_third/my.jpg, where the newfoo folder is found by a lookup in a static hash map/table of about 10000 to 20000 entries. new_first (new_second and new_third) are calculated by some arithmetic operation on first (second and third). My plan is: a) pass all handling to a lua script that will do an internal redirect to the correct physical url, and do a load test to make sure that it is not too much of a performance hit.
[ I haven't implemented it, but it looks possible from the examples I have looked at so far] Best, HB -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Sun Mar 24 02:39:36 2019 From: peter_booth at me.com (Peter Booth) Date: Sat, 23 Mar 2019 22:39:36 -0400 Subject: Mapping url to physical urls using lua script or something else. In-Reply-To: References: Message-ID: <7865DF80-D876-4B2B-A73B-62F418F0BD2A@me.com> Here's my opinion: You can do this however you want. It's your website. Most of my work has been for other people. When I was working on my own startup it made me honest. Nothing was dynamic. The rationale was "do everything ahead of time so users never wait for anything and the site has 100% uptime". So for your use case - why make your users pay the price of your hashmap lookup? Why not publish/rewrite your content to the "right" site structure ahead of time? Sure, nginx with openresty / lua can be a super fast appserver. But boring solutions beat clever every time. My two cents, Peter Sent from my iPhone > On Mar 23, 2019, at 8:17 PM, Hemant Bist wrote: > > Hi, > I want to know if this a right way to make the change ( or if there is a better /recommended method). So far we have only tweaked the configuration of nginx which scales very nicely for us The change I need to do looks like a common case to me. > > Currently our urls map directly to the local dir structure > e.g. the url /foo/10000/1234/9999/my.jpg is local file /var/www/html/foo/10000/1234/9999/my.jpg so > > Now the url /foo/first/second/third/my.jpg will map to /newfoo/new_first/new_second/new_third/my.jpg > where newfoo folder is done by lookup of a static Hash_map/table of about 10000 to 20000 entries. > new_first (new_second and new_third) are calculated by some arithmatic operation on first(second and new third). > > My plan is: a) pass all handling to a lua script that will do internal_redirect to the correct physical url...
And do a load test to make sure that is not too much performance hit. [ I haven't implemented it, but it looks possible from the examples I have looked at so far] > > Best, > HB > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From zn1314 at 126.com Mon Mar 25 03:05:04 2019 From: zn1314 at 126.com (David Ni) Date: Mon, 25 Mar 2019 11:05:04 +0800 (CST) Subject: nginx servers share session or cookies In-Reply-To: <20190322080323.wornsfp75v4lelx4@daoine.org> References: <3fd2ca45.6896.169807ebaaf.Coremail.zn1314@126.com> <165822bc.50dd.169a427a89c.Coremail.zn1314@126.com> <20190322080323.wornsfp75v4lelx4@daoine.org> Message-ID: <67dab2fc.2e94.169b2cf0260.Coremail.zn1314@126.com> Hi Francis I tried to set cookies like this server { listen 80; server_name datanode02.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; ##here to check whether logged_in cookie was set if ($cookie_logged_in != "1") { auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; } location / { proxy_pass http://dev-datanode02:8042/; more_clear_headers "X-Frame-options"; add_header Set-Cookie "logged_in=1;Domain=.bddev.test.net;Path=/;Max-Age=315360000"; } } server { listen 80; server_name datanode03.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; ##here to check whether logged_in cookie was set if ($cookie_logged_in != "1") { auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; } location / { proxy_pass http://dev-datanode03:8042/; more_clear_headers "X-Frame-options"; add_header Set-Cookie "logged_in=1;Domain=.bddev.test.net;Path=/;Max-Age=315360000"; } } but nginx failed to start; it seems it is not possible to use an "if" block like this. Do you know how to check the cookie value?
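As an editorial aside on the question above: nginx does not allow authentication directives inside "if" blocks, and the usual way to test a cookie's value without "if" is a "map" in the http context. The sketch below only derives a flag from the cookie; the variable name $need_auth is hypothetical, and whether the third-party auth_ldap module can be switched off per request by such a flag depends on that module (the stock auth_basic directive, for comparison, accepts a realm containing variables, and a realm that evaluates to "off" disables authentication).

```nginx
# Sketch: "map" is evaluated lazily per request and can inspect
# $cookie_* variables; it must be declared in the http{} context.
map $cookie_logged_in $need_auth {
    default 1;   # cookie absent or any other value: auth required
    "1"     0;   # cookie logged_in=1 present: auth can be skipped
}
```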
At 2019-03-22 16:03:23, "Francis Daly" wrote: >On Fri, Mar 22, 2019 at 02:47:36PM +0800, David Ni wrote: > >Hi there, > >> Who can help with this?Thanks very much. > >The question looks very like the one at >https://forum.nginx.org/read.php?2,282620,282620 > >I presume that the answer remains the same. > >Which is: if "Domain" works for you, add it to the place that creates (or >causes to be sent to the client) the "Set-Cookie" http response header. > > f >-- >Francis Daly francis at daoine.org >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ??.png Type: image/png Size: 44881 bytes Desc: not available URL: From zn1314 at 126.com Mon Mar 25 08:09:04 2019 From: zn1314 at 126.com (David Ni) Date: Mon, 25 Mar 2019 16:09:04 +0800 (CST) Subject: nginx servers share session or cookies In-Reply-To: <20190322080323.wornsfp75v4lelx4@daoine.org> References: <3fd2ca45.6896.169807ebaaf.Coremail.zn1314@126.com> <165822bc.50dd.169a427a89c.Coremail.zn1314@126.com> <20190322080323.wornsfp75v4lelx4@daoine.org> Message-ID: <76614865.6a6d.169b3e55276.Coremail.zn1314@126.com> Hi Francis I tried to set cookies like this server { listen 80; server_name datanode02.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; ##here to check whether logged_in cookie was set if ($cookie_logged_in != "1") { auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; } location / { proxy_pass http://dev-datanode02:8042/; more_clear_headers "X-Frame-options"; add_header Set-Cookie "logged_in=1;Domain=.bddev.test.net;Path=/;Max-Age=315360000"; } } server { listen 80; server_name datanode03.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log 
/var/log/nginx/http_access_for_bigdata.log main; ##here to check whether logged_in cookie was set if ($cookie_logged_in != "1") { auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; } location / { proxy_pass http://dev-datanode03:8042/; more_clear_headers "X-Frame-options"; add_header Set-Cookie "logged_in=1;Domain=.bddev.test.net;Path=/;Max-Age=315360000"; } } but nginx failed to start; it seems it is not possible to set auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; inside an "if" block like this. Do you know how to skip the auth_ldap_servers setting when the logged_in cookie is set? Thanks At 2019-03-22 16:03:23, "Francis Daly" wrote: >On Fri, Mar 22, 2019 at 02:47:36PM +0800, David Ni wrote: > >Hi there, > >> Who can help with this?Thanks very much. > >The question looks very like the one at >https://forum.nginx.org/read.php?2,282620,282620 > >I presume that the answer remains the same. > >Which is: if "Domain" works for you, add it to the place that creates (or >causes to be sent to the client) the "Set-Cookie" http response header. > > f >-- >Francis Daly francis at daoine.org >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Mon Mar 25 12:39:49 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Mar 2019 15:39:49 +0300 Subject: Possible memory leak? In-Reply-To: <08f506b0c9acee091b227deb47230a3e.NginxMailingListEnglish@forum.nginx.org> References: <20190321181619.GR1877@mdounin.ru> <08f506b0c9acee091b227deb47230a3e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190325123949.GU1877@mdounin.ru> Hello! On Thu, Mar 21, 2019 at 05:55:23PM -0400, wkbrad wrote: > Thanks again! I was a little confused at first because your tests in > freebsd were so much different than mine but then I found what you did > wrong. > > You were testing the 2nd reload but the issue can only be seen on the first > reload. Here is my test to show what I mean. As can be seen from the pid numbers provided, first ps/top output in my previous message is from the initial nginx start, and the reload shown is the first one. What looks like an important factor is "junk:true" in my malloc.conf. Without at least "junk:free" I indeed see similar results to yours - most likely because kernel fails to free pages which are referenced from multiple processes when madvise() is called. [...] -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Mar 25 13:26:23 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 25 Mar 2019 16:26:23 +0300 Subject: a proxy_module question ,maybe a bug?
In-Reply-To: <6b5d9024.15c.169a6341bd5.Coremail.u_can@163.com> References: <6b5d9024.15c.169a6341bd5.Coremail.u_can@163.com> Message-ID: <20190325132622.GV1877@mdounin.ru> Hello! On Sat, Mar 23, 2019 at 12:20:26AM +0800, ?? wrote: > nginx version: nginx/1.10.3 > uname: Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux > VPS: Linode $5 > > > > I set about 900 subsite like this: > > > proxy_cache_path /var/cache/nginx/abc.com/aaa levels=1:2 use_temp_path=off keys_zone=aaa.abc.com:64k inactive=8h max_size=128m; > proxy_cache_path /var/cache/nginx/abc.com/bbb levels=1:2 use_temp_path=off keys_zone=bbb.abc.com:64k inactive=8h max_size=128m; > ... > the list is 900+ Just a side note: it is usually a bad idea to configure individual cache paths for each site, as it consumes much more resources than using a single cache for all sites. Instead, consider using a single proxy_cache_path and adding a site identifier to proxy_cache_key. [...] > but when I type run it again at once, the nginx will go wrong in minutes: > > > 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 > 2019/03/22 10:30:03 [alert] 6356#6356: ignore long locked inactive cache entry 3391b383577454e8dfb6337e060c1d22, count:1 The "ignore long locked inactive cache entry" alert indicates that nginx was trying to remove a cache entry, but failed to do so because the entry was locked. This may be either the result of a bug, or may happen if a worker process is killed or crashes. That is, this may happen, e.g., if your server ran out of memory at some point and the OOM killer killed an nginx worker process. To find out what actually happened, you may want to investigate the error log before the first "ignore long locked" alert. On the other hand, the first thing you may want to do is to upgrade - nginx 1.10.3 is rather old and no longer supported. Investigating anything with nginx 1.10.3 hardly makes sense. [...]
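Maxim's side note about using a single cache could look something like the sketch below; the zone name, sizes, and key format are illustrative assumptions, not taken from the original configuration.

```nginx
# One shared cache zone for all subsites instead of 900+ proxy_cache_path
# entries; the subsite name becomes part of the cache key instead.
proxy_cache_path /var/cache/nginx/abc.com levels=1:2 use_temp_path=off
                 keys_zone=abc_all:64m inactive=8h max_size=100g;

server {
    listen 80;
    server_name ~^([^.]+)\.abc\.com$;
    set $sub $1;

    location / {
        proxy_pass https://172.22.207.56/;
        proxy_redirect https://172.22.207.56/ /;
        proxy_set_header Host $sub.abc.com;
        proxy_cache abc_all;
        # Include the subsite identifier in the key so entries from
        # different subsites cannot collide in the shared zone.
        proxy_cache_key "$sub$request_uri";
    }
}
```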
-- Maxim Dounin http://mdounin.ru/ From hemantbist at gmail.com Mon Mar 25 16:39:15 2019 From: hemantbist at gmail.com (Hemant Bist) Date: Mon, 25 Mar 2019 09:39:15 -0700 Subject: Mapping url to physical urls using lua script or something else. In-Reply-To: <7865DF80-D876-4B2B-A73B-62F418F0BD2A@me.com> References: <7865DF80-D876-4B2B-A73B-62F418F0BD2A@me.com> Message-ID: Thanks Peter, In this case, both the new set of urls and the old set of urls are going to be exposed to different sets of users. (And there are some DRM related details I omitted in the email) (similar to what you are suggesting): One of the changes I am testing is simply copying the files/linking the files (which means serving tiles via nginx would be trivial, at the cost of increasing memory / complications in the backend scripts that sync the data). 1) Computationally the mapping is not that hard, and the production server has a lot of CPU. So I am hoping that we can get away with some sort of simple lua script. 2) Additionally it looks like a small percentage of the urls (both number and traffic wise) are served by lookup on a small redis db, and lua. So no major changes would be required in installation etc: as long as it scales decently... HB On Sat, Mar 23, 2019 at 7:39 PM Peter Booth via nginx wrote: > Here's my opinion: > > You can do this however you want. It's your website. Most of my work has > been for other people. When I was working on my own startup it made me > honest. Nothing was dynamic. The rationale was "do everything ahead of > time so users never wait for anything and the site has 100% uptime". > > So for your usecase - why make your users pay the price of your hashmap > lookup? Why not publish/rewrite your content to the "right" site structure > ahead of time? Sure, nginx with openresty / lua can be a super fast > appserver. But boring solutions beat clever every time.
> > My two cents, > > Peter > > Sent from my iPhone > > > On Mar 23, 2019, at 8:17 PM, Hemant Bist wrote: > > > > Hi, > > I want to know if this is a right way to make the change (or if there is a > better/recommended method). So far we have only tweaked the configuration > of nginx, which scales very nicely for us. The change I need to do looks > like a common case to me. > > > > Currently our urls map directly to the local dir structure > > e.g. the url /foo/10000/1234/9999/my.jpg is local file > /var/www/html/foo/10000/1234/9999/my.jpg so > > > > Now the url /foo/first/second/third/my.jpg will map to > /newfoo/new_first/new_second/new_third/my.jpg > > where the newfoo folder is determined by lookup of a static Hash_map/table of > about 10000 to 20000 entries. > > new_first (new_second and new_third) are calculated by some arithmetic > operation on first (second and third). > > > > My plan is: a) pass all handling to a lua script that will do > internal_redirect to the correct physical url... And do a load test to make > sure that it is not too much of a performance hit. [ I haven't implemented it, but > it looks possible from the examples I have looked at so far] > > > > Best, > > HB > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajkumaradass at avaya.com Tue Mar 26 09:13:44 2019 From: rajkumaradass at avaya.com (R, Rajkumar (Raj)) Date: Tue, 26 Mar 2019 09:13:44 +0000 Subject: TCP connection limit on dynamic backend Message-ID: Hi, Using nginx in TCP/Stream mode and would like to limit the number of active connections to my backend server whereas the backend is resolved dynamically based on the SNI header ($ssl_preread_server_name).
But this does not allow any connections to the backend with the below config. I see examples of limiting backend connections if the backend server block is preconfigured. Could you please confirm if this is achievable or supported currently with Stream mode? Below is the related config part. map $ssl_preread_server_name $backend_svr { ~^(\w+).test.com $1-tcp.default.svc.cluster.local; } limit_conn_zone $ssl_preread_server_name zone=perserver:10m; server { listen 443 reuseport so_keepalive=30s:30s:3 backlog=64999; proxy_pass $backend_svr:443; limit_conn perserver 255; ssl_preread on; } thanks, raj +918067153382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Mar 26 09:47:51 2019 From: nginx-forum at forum.nginx.org (sivak) Date: Tue, 26 Mar 2019 05:47:51 -0400 Subject: Is it possible to add milliseconds in error.log and also timestamps Message-ID: <15d1757b0ed2bcc2de430d9f3a7b3a67.NginxMailingListEnglish@forum.nginx.org> Is it possible to add milliseconds in error.log and also to include timestamps in the output after executing the below commands $NGINX_EXECUTABLE_FILE -I $NGINX_EXECUTABLE_FILE -P Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283506,283506#msg-283506 From mdounin at mdounin.ru Tue Mar 26 11:08:11 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Mar 2019 14:08:11 +0300 Subject: Is it possible to add milliseconds in error.log and also timestamps In-Reply-To: <15d1757b0ed2bcc2de430d9f3a7b3a67.NginxMailingListEnglish@forum.nginx.org> References: <15d1757b0ed2bcc2de430d9f3a7b3a67.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190326110811.GB1877@mdounin.ru> Hello! On Tue, Mar 26, 2019 at 05:47:51AM -0400, sivak wrote: > Is it possible to add milliseconds in error.log No. > and also to include > timestamps in the output after executing the below commands > > $NGINX_EXECUTABLE_FILE -I > $NGINX_EXECUTABLE_FILE -P There are no such commands.
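[As an aside for the timing question above: while the error log's timestamp format is fixed, the access log can record sub-second times via the built-in $msec and $request_time variables. A minimal sketch; the log format name and file path are illustrative.]

```nginx
# $msec: time of log writing, in seconds with millisecond resolution.
# $request_time: request processing time in the same units.
log_format timed '$remote_addr [$time_local] "$request" '
                 '$status $msec $request_time';

access_log /var/log/nginx/access_timed.log timed;
```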
-- Maxim Dounin http://mdounin.ru/ From arut at nginx.com Tue Mar 26 11:28:30 2019 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 26 Mar 2019 14:28:30 +0300 Subject: TCP connection limit on dynamic backend In-Reply-To: References: Message-ID: <20190326112830.GH1661@Romans-MacBook-Air.local> Hi, On Tue, Mar 26, 2019 at 09:13:44AM +0000, R, Rajkumar (Raj) wrote: > Hi, > > Using nginx in TCP/Stream mode and would like to limit the number of active connection to my backend server whereas the backend is resolved dynamically based on the SNI header ($ssl_preread_server_name). But this does not allow any connections to the backend with below config. I see examples of limiting backend connections if the backend server block is pre configured. > > Could you please confirm if this achievable or supported currently with Stream mode? > > Below is the related config part. > > map $ssl_preread_server_name $backend_svr { > ~^(\w+).test.com $1-tcp.default.svc.cluster.local; > } > > limit_conn_zone $ssl_preread_server_name zone=perserver:10m; > > server { > listen 443 reuseport so_keepalive=30s:30s:3 backlog=64999; > proxy_pass $backend_svr:443; > limit_conn perserver 255; > ssl_preread on; > } The problem is limit_conn is executed at an earlier phase than ssl_preread. The $ssl_preread_server_name variable is just empty at that moment. You basically limit client connections by an empty variable. -- Roman Arutyunyan From 15555513217 at 163.com Tue Mar 26 12:22:19 2019 From: 15555513217 at 163.com (=?GBK?B?vaqyrtHz?=) Date: Tue, 26 Mar 2019 20:22:19 +0800 (CST) Subject: Routing problems through cookies Message-ID: <709d4aee.11da2.169b9f38abd.Coremail.15555513217@163.com> map $cookie_wpt_debug $forward_to_gray { # When default is not specified, the default resulting value will be an empty string. 
default ""; 9cb88042edc55bf85c22e89cf880c63a 10.105.195.11; } if ( $forward_to_gray != '' ) { proxy_pass http://$forward_to_gray$request_uri; break; } When I configure this, he can be routed normally. map $cookie_wpt_debug $to_gray { # When default is not specified, the default resulting value will be an empty string. default ""; 9cb88042edc55bf85c22e89cf880c63a 10.105.195.11; } if ( $to_gray != '' ) { proxy_pass http://$to_gray$request_uri; break; } When I configure this, he said that it can't be routed, I don't know why. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajkumaradass at avaya.com Tue Mar 26 12:27:06 2019 From: rajkumaradass at avaya.com (R, Rajkumar (Raj)) Date: Tue, 26 Mar 2019 12:27:06 +0000 Subject: TCP connection limit on dynamic backend In-Reply-To: <20190326112830.GH1661@Romans-MacBook-Air.local> References: <20190326112830.GH1661@Romans-MacBook-Air.local> Message-ID: Thanks for your quick response. Is there a way to delay the execution of limit_conn. Please suggest if there's a way forward on this. thanks, raj -----Original Message----- From: nginx On Behalf Of Roman Arutyunyan Sent: Tuesday, March 26, 2019 4:59 PM To: nginx at nginx.org Subject: Re: TCP connection limit on dynamic backend Hi, On Tue, Mar 26, 2019 at 09:13:44AM +0000, R, Rajkumar (Raj) wrote: > Hi, > > Using nginx in TCP/Stream mode and would like to limit the number of active connection to my backend server whereas the backend is resolved dynamically based on the SNI header ($ssl_preread_server_name). But this does not allow any connections to the backend with below config. I see examples of limiting backend connections if the backend server block is pre configured. > > Could you please confirm if this achievable or supported currently with Stream mode? > > Below is the related config part. 
> > map $ssl_preread_server_name $backend_svr { > ~^(\w+).test.com $1-tcp.default.svc.cluster.local; > } > > limit_conn_zone $ssl_preread_server_name zone=perserver:10m; > > server { > listen 443 reuseport so_keepalive=30s:30s:3 backlog=64999; > proxy_pass $backend_svr:443; > limit_conn perserver 255; > ssl_preread on; > } The problem is limit_conn is executed at an earlier phase than ssl_preread. The $ssl_preread_server_name variable is just empty at that moment. You basically limit client connections by an empty variable. -- Roman Arutyunyan _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From u_can at 163.com Tue Mar 26 13:09:27 2019 From: u_can at 163.com (=?GBK?B?tOWzpA==?=) Date: Tue, 26 Mar 2019 21:09:27 +0800 (CST) Subject: nginx Digest, Vol 113, Issue 33 In-Reply-To: References: Message-ID: <3e371d58.160.169ba1eb31a.Coremail.u_can@163.com> Thanks! I builded the nginx-1.14.2 from the source instead of apt package(Debian9), then it goes well(with the same config file). And on FreeBSD, it has no problem with same version of nginx than Debian9. And I migrated the path to a single directory. ??? ??: 13166322138 ??: u_can at 163.com ??: http://www.panswork.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Mar 26 14:25:48 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Mar 2019 17:25:48 +0300 Subject: nginx-1.15.10 Message-ID: <20190326142548.GE1877@mdounin.ru> Changes with nginx 1.15.10 26 Mar 2019 *) Change: when using a hostname in the "listen" directive nginx now creates listening sockets for all addresses the hostname resolves to (previously, only the first address was used). *) Feature: port ranges in the "listen" directive. *) Feature: loading of SSL certificates and secret keys from variables.
*) Workaround: the $ssl_server_name variable might be empty when using OpenSSL 1.1.1. *) Bugfix: nginx/Windows could not be built with Visual Studio 2015 or newer; the bug had appeared in 1.15.9. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Mar 26 14:50:12 2019 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Tue, 26 Mar 2019 10:50:12 -0400 Subject: Protect against php files being send as static files In-Reply-To: <43871fb1-3686-f2f2-831a-be8af660f2ae@gmail.com> References: <43871fb1-3686-f2f2-831a-be8af660f2ae@gmail.com> Message-ID: <84ecbc6d6ae3fee1c03e6f67a3a6e64b.NginxMailingListEnglish@forum.nginx.org> Ian Hobson Wrote: ------------------------------------------------------- > If you place your php files outside the main root directory, and > then do something like this That'd good but unfortunately not common practice. It'd be nice to have better safety by default. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283274,283526#msg-283526 From kworthington at gmail.com Tue Mar 26 16:01:20 2019 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 26 Mar 2019 12:01:20 -0400 Subject: [nginx-announce] nginx-1.15.10 In-Reply-To: <20190326142554.GF1877@mdounin.ru> References: <20190326142554.GF1877@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.15.10 for Windows https://kevinworthington.com/nginxwin11510 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Mar 26, 2019 at 10:26 AM Maxim Dounin wrote: > Changes with nginx 1.15.10 26 Mar > 2019 > > *) Change: when using a hostname in the "listen" directive nginx now > creates listening sockets for all addresses the hostname resolves to > (previously, only the first address was used). > > *) Feature: port ranges in the "listen" directive. > > *) Feature: loading of SSL certificates and secret keys from variables. > > *) Workaround: the $ssl_server_name variable might be empty when using > OpenSSL 1.1.1. > > *) Bugfix: nginx/Windows could not be built with Visual Studio 2015 or > newer; the bug had appeared in 1.15.9. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Tue Mar 26 16:19:40 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 26 Mar 2019 21:49:40 +0530 Subject: nginx-1.15.10 In-Reply-To: <20190326142548.GE1877@mdounin.ru> References: <20190326142548.GE1877@mdounin.ru> Message-ID: On Tue, Mar 26, 2019 at 7:55 PM Maxim Dounin wrote: > Changes with nginx 1.15.10 26 Mar > 2019 > > > *) Feature: loading of SSL certificates and secret keys from variables > The doc says: Since version 1.15.9, variables can be used in the file name when using OpenSSL 1.0.2 or higher: So what's new in 1.15.10? 
> -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Tue Mar 26 16:29:25 2019 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 26 Mar 2019 19:29:25 +0300 Subject: njs-0.3.0 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release proceeds to extend the coverage of ECMAScript specifications and modules functionality. - Added initial ES6 modules support: : // module.js : function sum(a, b) {return a + b} : export default {sum}; : // shell : > import m from 'module.js' : undefined : > m.sum(3, 4) : 7 You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.3.0 26 Mar 2019 nginx modules: *) Feature: added js_path directive. *) Change: returning undefined value instead of empty strings for absent properties in the following objects: r.args, r.headersIn, r.headersOut, r.variables, s.variables. *) Change: returning undefined value instead of throwing an exception for r.requestBody when request body is unavailable. *) Bugfix: fixed crash while iterating over r.args when a value is absent in a key-value pair. Core: *) Feature: added initial ES6 modules support. Default import and default export statements are supported. Thanks to ??? (Hong Zhi Dao). *) Feature: added Object.prototype.propertyIsEnumerable(). *) Feature: reporting file name and function name in disassembler output. *) Bugfix: fixed function redeclarations in interactive shell. Thanks to ??? (Hong Zhi Dao). *) Bugfix: fixed RegExp literals parsing. 
*) Bugfix: fixed setting length of UTF8 string in fs.readFileSync(). *) Bugfix: fixed nxt_file_dirname() for paths with no dir component. From mdounin at mdounin.ru Tue Mar 26 16:50:04 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Mar 2019 19:50:04 +0300 Subject: nginx-1.15.10 In-Reply-To: References: <20190326142548.GE1877@mdounin.ru> Message-ID: <20190326165003.GI1877@mdounin.ru> Hello! On Tue, Mar 26, 2019 at 09:49:40PM +0530, Anoop Alias wrote: > On Tue, Mar 26, 2019 at 7:55 PM Maxim Dounin wrote: > > > Changes with nginx 1.15.10 26 Mar > > 2019 > > > > > > *) Feature: loading of SSL certificates and secret keys from variables > > > > The doc says: > > Since version 1.15.9, variables can be used in the file name when using > OpenSSL 1.0.2 or higher: > > So what's new in 1.15.10? The difference is that in 1.15.10 you can put a certificate itself into a variable. Quoting docs: : The value data:$variable can be specified instead of the file : (1.15.10), which loads a certificate from a variable without using : intermediate files. Note that inappropriate use of this syntax may : have its security implications, such as writing secret key data to : error log. -- Maxim Dounin http://mdounin.ru/ From sca at andreasschulze.de Tue Mar 26 17:16:39 2019 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 26 Mar 2019 18:16:39 +0100 Subject: nginx-1.15.10 In-Reply-To: <20190326165003.GI1877@mdounin.ru> References: <20190326142548.GE1877@mdounin.ru> <20190326165003.GI1877@mdounin.ru> Message-ID: Am 26.03.19 um 17:50 schrieb Maxim Dounin: > The difference is that in 1.15.10 you can put a certificate itself > into a variable. Quoting docs: > > : The value data:$variable can be specified instead of the file > : (1.15.10), which loads a certificate from a variable without using > : intermediate files. Note that inappropriate use of this syntax may > : have its security implications, such as writing secret key data to > : error log. 
Hello Maxim, could you more verbose about the intended use-case? that would make the feature more transparent (at least to me) Thanks, Andreas From mdounin at mdounin.ru Tue Mar 26 17:45:21 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Mar 2019 20:45:21 +0300 Subject: nginx-1.15.10 In-Reply-To: References: <20190326142548.GE1877@mdounin.ru> <20190326165003.GI1877@mdounin.ru> Message-ID: <20190326174521.GJ1877@mdounin.ru> Hello! On Tue, Mar 26, 2019 at 06:16:39PM +0100, A. Schulze wrote: > Am 26.03.19 um 17:50 schrieb Maxim Dounin: > > The difference is that in 1.15.10 you can put a certificate itself > > into a variable. Quoting docs: > > > > : The value data:$variable can be specified instead of the file > > : (1.15.10), which loads a certificate from a variable without using > > : intermediate files. Note that inappropriate use of this syntax may > > : have its security implications, such as writing secret key data to > > : error log. > > Hello Maxim, > > could you more verbose about the intended use-case? > that would make the feature more transparent (at least to me) This is intended to be used with some external means of providing certificates and keys, such as perl or njs code, or a keyval database (http://nginx.org/r/keyval). -- Maxim Dounin http://mdounin.ru/ From 15555513217 at 163.com Wed Mar 27 02:41:07 2019 From: 15555513217 at 163.com (=?GBK?B?vaqyrtHz?=) Date: Wed, 27 Mar 2019 10:41:07 +0800 (CST) Subject: Fw:Routing problems through cookies Message-ID: <294b6cb2.5b85.169bd05cb32.Coremail.15555513217@163.com> map $cookie_wpt_debug $forward_to_gray { # When default is not specified, the default resulting value will be an empty string. default ""; 9cb88042edc55bf85c22e89cf880c63a 10.105.195.11; } if ( $forward_to_gray != '' ) { proxy_pass http://$forward_to_gray$request_uri; break; } When I configure this, he can be routed normally. 
map $cookie_wpt_debug $to_gray { # When default is not specified, the default resulting value will be an empty string. default ""; 9cb88042edc55bf85c22e89cf880c63a 10.105.195.11; } if ( $to_gray != '' ) { proxy_pass http://$to_gray$request_uri; break; } When I configure this, he said that it can't be routed, I don't know why. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Mar 27 06:16:04 2019 From: nginx-forum at forum.nginx.org (BharathShankar) Date: Wed, 27 Mar 2019 02:16:04 -0400 Subject: Uneven load distribution inside container Message-ID: I am trying out the sample gRPC streaming application at https://github.com/grpc/grpc/tree/v1.19.0/examples/python/route_guide. I have modified the server to start 4 grpc server processes and correspondingly modified the client to spawn 4 different processes to hit the server. Client and server are running on different VMs on google cloud. Nginx.conf ---------------- user www-data; worker_processes 1; pid /run/nginx.pid; include /etc/nginx/modules-enabled/*.conf; events { worker_connections 768; # multi_accept on; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent"'; map $http_upgrade $connection_upgrade { default upgrade; '' close; } upstream backend{ server localhost:9000 weight=1; server localhost:9001 weight=1; server localhost:9002 weight=1; server localhost:9003 weight=1; } server { listen 6666 http2; <<<<<<<<< On container I have set it to 9999 access_log /tmp/access.log main; error_log /tmp/error.log error; proxy_buffering off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_set_header Host $http_host; location / { grpc_pass grpc://backend; } } } Problem Statement -------------------------- Nginx load balancing (round robin) is uneven when run inside a container where as it is uniform when nginx is run directly inside a 
VM. Scenario 1 : Server running in VM ---------------------------------------------- $ python route_guide_server.py pid 14287's current affinity list: 0-5 pid 14287's new affinity list: 2 pid 14288's current affinity list: 0-5 pid 14288's new affinity list: 3 pid 14286's current affinity list: 0-5 pid 14286's new affinity list: 1 pid 14285's current affinity list: 0-5 pid 14285's new affinity list: 0 Server starting in port 9003 with cpu 3 Server starting in port 9002 with cpu 2 Server starting in port 9001 with cpu 1 Server starting in port 9000 with cpu 0 Now I run the client on a different VM. $ python3 route_guide_client.py ........... ....... On the server we see that the requests are uniformly distributed between all 4 server processes running on different ports. For example, the output on the server for the above client invocation is Serving route chat request using 14285 << These are PIDs of processes that are bound to different server ports. Serving route chat request using 14286 Serving route chat request using 14287 Serving route chat request using 14288 Scenario 2 : Server running in Container ------------------------------------------------------ I now spin up a container on the server VM, install and configure nginx the same way inside the container, and use the same nginx config file except for the nginx server listen port. $ sudo docker run -p 9999:9999 --cpus=4 grpcnginx:latest ...............
root at b81bb72fcab2:/# ls README.md docker_entry.sh media route_guide_client.py route_guide_pb2_grpc.py route_guide_server.py.bak sys __pycache__ etc mnt route_guide_client.py.bak route_guide_pb2_grpc.pyc run tmp bin home opt route_guide_db.json route_guide_resources.py run_codegen.py usr boot lib proc route_guide_pb2.py route_guide_resources.pyc sbin var dev lib64 root route_guide_pb2.pyc route_guide_server.py srv root at b81bb72fcab2:/# python3 route_guide_server.py pid 71's current affinity list: 0-5 pid 71's new affinity list: 0 Server starting in port 9000 with cpu 0 pid 74's current affinity list: 0-5 pid 74's new affinity list: 3 Server starting in port 9003 with cpu 3 pid 72's current affinity list: 0-5 pid 72's new affinity list: 1 pid 73's current affinity list: 0-5 pid 73's new affinity list: 2 Server starting in port 9001 with cpu 1 Server starting in port 9002 with cpu 2 On the client VM $ python3 route_guide_client.py ............ .............. Now on the server we see that the requests are only served by 2 ports/processes. Serving route chat request using 71 Serving route chat request using 72 Serving route chat request using 71 Serving route chat request using 72 Requesting help to resolve the load distribution problem inside the container. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283542,283542#msg-283542 From nginx-forum at forum.nginx.org Wed Mar 27 06:55:11 2019 From: nginx-forum at forum.nginx.org (ops42) Date: Wed, 27 Mar 2019 02:55:11 -0400 Subject: Slow startup, reload and config test with 40k server blocks Message-ID: I'm testing an nginx config with 40k server blocks acting as proxies (proxy_pass). Each server block has its own proxy_cache_path. Running "nginx -t" takes about 5 minutes to complete, and the config test duration grows exponentially as I add more server blocks and proxy_cache_path entries. Changing any parameter of the proxy_cache_path doesn't change the config test duration.
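A minimal sketch of the layout being tested, with one proxy_cache_path per server block (names, paths, sizes, and the backend address are illustrative, not from the original post):

```nginx
# Repeated 40,000 times with different names -- each proxy_cache_path
# declares its own shared memory zone, which the configuration parser
# and cache loader/manager processes must each set up and track.
proxy_cache_path /var/cache/nginx/site00001 levels=1:2 keys_zone=site00001:10m;

server {
    listen 80;
    server_name site00001.example.com;

    location / {
        proxy_cache site00001;
        proxy_pass http://127.0.0.1:8001;
    }
}
```

If the sites do not need separate eviction limits, many server blocks can instead reference one shared zone; the default proxy_cache_key ($scheme$proxy_host$request_uri) includes the proxied host, which usually keeps entries for different backends distinct.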
It only takes 30s for nginx -t if I use a single proxy_cache_path for all 40k server blocks. Do you know why a lot of proxy_cache_path entries slow down nginx startup, reload and nginx -t? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283544,283544#msg-283544 From mdounin at mdounin.ru Wed Mar 27 11:32:32 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Mar 2019 14:32:32 +0300 Subject: Fw:Routing problems through cookies In-Reply-To: <294b6cb2.5b85.169bd05cb32.Coremail.15555513217@163.com> References: <294b6cb2.5b85.169bd05cb32.Coremail.15555513217@163.com> Message-ID: <20190327113232.GK1877@mdounin.ru> Hello! On Wed, Mar 27, 2019 at 10:41:07AM +0800, ??? wrote: [...] You may want to avoid posting the same question multiple times. Thank you. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed Mar 27 18:54:41 2019 From: nginx-forum at forum.nginx.org (wkbrad) Date: Wed, 27 Mar 2019 14:54:41 -0400 Subject: Possible memory leak? In-Reply-To: <20190325123949.GU1877@mdounin.ru> References: <20190325123949.GU1877@mdounin.ru> Message-ID: Hey Maxim, I want to thank you again for all of your time and effort looking into this. It's been extremely helpful. Really... thank you! According to the man page, changing the junk setting is for debugging and would have a negative impact on performance. https://www.freebsd.org/cgi/man.cgi?query=malloc And the same is basically true for the mmap threshold settings. I'd really like to find a way to deal with this in a way that doesn't have a negative impact on performance. Combine all of that with the fact that this issue happens on all systems and that the direct competitor doesn't have this issue, and it makes me think this is something that Nginx needs to address directly. After all, how can it be a system level issue if it happens on ALL systems? It has to be a problem in Nginx itself in my view. What are your thoughts on that? Thanks!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283553#msg-283553 From mdounin at mdounin.ru Wed Mar 27 20:26:00 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Mar 2019 23:26:00 +0300 Subject: Possible memory leak? In-Reply-To: References: <20190325123949.GU1877@mdounin.ru> Message-ID: <20190327202600.GM1877@mdounin.ru> Hello! On Wed, Mar 27, 2019 at 02:54:41PM -0400, wkbrad wrote: > Combine all of that with the fact that this issue happens on all systems and > that the direct competitor doesn't have this issue, that makes me think this > is something that Nginx needs to address directly. If by "direct competitor" you mean Apache, which you've previously mentioned in this thread, note that: - Apache doesn't try to provide a safe configuration reload, and will be in an unusable state if something goes wrong while loading the new configuration (not to mention that Apache can't do a configuration reload while processing ongoing requests at the same time, and instead pauses all processing till the reload is complete). This is not something nginx can afford, and hence we do it differently. While our approach may need more memory, this is a price we are ready to pay to make sure no requests will be lost. - And, really, nginx and Apache are not direct competitors. If Apache works for you - use it, it is a reputable server we all love and use or used in the past. Though it doesn't scale well enough to thousands of connections, and that's why nginx was written in the first place. And it also cannot handle upgrades and configuration reloads seamlessly, so it cannot be used as a frontend server on loaded sites unless you have a way to direct traffic to a different server. Nevertheless, even on loaded sites Apache can be (and often is) used as a backend server behind nginx, and it does this just fine. What nginx does to keep configuration reloads seamless certainly requires additional resources, but this is something we do for a good reason.
And this particular problem is relatively easy to mitigate by keeping the configuration small (the other side of the problem is that you need about 2x memory on the server anyway, including various client-related buffers, so nginx will be able to start two sets of worker processes during the configuration reload, and this is something you can't mitigate at all; the good part is that this memory is not wasted but used for the page cache between reloads). Also, if you really think this is a major problem, you may want to start working on improving system allocators to handle this better, which is certainly possible. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Wed Mar 27 22:48:58 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 27 Mar 2019 22:48:58 +0000 Subject: nginx servers share session or cookies In-Reply-To: <652a4281.6b15.169b3e7daa0.Coremail.zn1314@126.com> References: <3fd2ca45.6896.169807ebaaf.Coremail.zn1314@126.com> <165822bc.50dd.169a427a89c.Coremail.zn1314@126.com> <20190322080323.wornsfp75v4lelx4@daoine.org> <652a4281.6b15.169b3e7daa0.Coremail.zn1314@126.com> Message-ID: <20190327224858.fkimslbvy5apmbea@daoine.org> On Mon, Mar 25, 2019 at 04:11:50PM +0800, David Ni wrote: Hi there, > I tried to set cookies like this You should not be setting cookies, in this case. If your "session" information is handled by cookies, you must modify the cookies before they go to the client, in order to convince the client to send the cookies back to both of your servers. If your session information is not handled by cookies, adding cookies (or any "Single Sign-On" mechanism) is a more involved job. From reading the docs for one auth_ldap module, it looks like it uses HTTP Basic Authentication, and not cookies. Maybe the one you use does, too. In that case, the client will not send the credentials for one web server to another web server.
So the way to get the client to send the same credentials to both is to present them as one web server -- for example by using nginx as a reverse proxy in front of both. With that, you would need some way of knowing which requests correspond to which back-end server, while using the same server name. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Mar 28 10:59:50 2019 From: nginx-forum at forum.nginx.org (sivak) Date: Thu, 28 Mar 2019 06:59:50 -0400 Subject: Is it possible to add milliseconds in error.log and also timestamps In-Reply-To: <20190326110811.GB1877@mdounin.ru> References: <20190326110811.GB1877@mdounin.ru> Message-ID: For the command below, is there any way to append timestamps to the output? sbin/nginx -p (-p - setting the nginx path prefix, i.e. the directory in which the server files will be located (by default, the directory /usr/local/nginx).) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283506,283557#msg-283557 From nginx-forum at forum.nginx.org Fri Mar 29 00:49:48 2019 From: nginx-forum at forum.nginx.org (darthhexx) Date: Thu, 28 Mar 2019 20:49:48 -0400 Subject: Keepalived Connections Reset after reloading the configuration (HUP Signal) In-Reply-To: References: Message-ID: <56774ba55034a4075b78189191a29a62.NginxMailingListEnglish@forum.nginx.org> Hi, We are seeing some fallout from this behaviour on keep-alive connections when proxying traffic from remote POPs back to an Origin DC that, due to latency, brings about a race condition in the socket shutdown sequence. The result is the fateful "upstream prematurely closed connection while reading response header from upstream" in the remote POP. A walk-through of what we are seeing: 1. Config reload happens on the Origin DC. 2. Socket shutdowns are sent to all open, but not transacting, keep-alive connections. 3. The remote POP sends data on a cached connection at around the same time as #2, because at this point it has not received the disconnect yet. 4.
The remote POP then receives the disconnect and errors with "upstream prematurely..". Ideally we should be able to have the Origin honour the `worker_shutdown_timeout` (or some other setting) for keep-alive connections. That way we would be able to use the `keepalive_timeout` setting for upstreams to ensure the upstream's cached connections always time out before a worker is shut down. Would that be possible or is there another way to mitigate this scenario? /David Posted at Nginx Forum: https://forum.nginx.org/read.php?2,197927,283564#msg-283564 From nginx-forum at forum.nginx.org Fri Mar 29 11:24:44 2019 From: nginx-forum at forum.nginx.org (sivak) Date: Fri, 29 Mar 2019 07:24:44 -0400 Subject: Is it possible to add milliseconds in error.log and also timestamps In-Reply-To: <15d1757b0ed2bcc2de430d9f3a7b3a67.NginxMailingListEnglish@forum.nginx.org> References: <15d1757b0ed2bcc2de430d9f3a7b3a67.NginxMailingListEnglish@forum.nginx.org> Message-ID: Also, for some of the statements in error.log, the date and time are not added. Is there any way to add them for the missed ones? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283506,283570#msg-283570 From nginx-forum at forum.nginx.org Fri Mar 29 12:18:34 2019 From: nginx-forum at forum.nginx.org (gaoyan09) Date: Fri, 29 Mar 2019 08:18:34 -0400 Subject: Will nginx support http3 Message-ID: <7d4c7c6665656c1fedd9f01c4f766578.NginxMailingListEnglish@forum.nginx.org> Hello, Nginx 1.15 brought UDP stream sessions, so it seems that nginx is ready for HTTP/3. Does nginx have any plan to support HTTP/3?
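[Editorial note: nginx had no HTTP/3 support at the time of this thread; experimental QUIC/HTTP/3 support later landed in nginx 1.25.0. For readers arriving from a search, a minimal HTTP/3 server in those later versions looks roughly like the sketch below; the certificate paths and port are placeholders.]

```nginx
server {
    # QUIC (HTTP/3) and TCP (HTTP/1.1 and HTTP/2) on the same port
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    # QUIC requires TLS 1.3
    ssl_protocols TLSv1.3;

    # advertise HTTP/3 to clients that connected over TCP
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```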
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283571,283571#msg-283571 From francis at daoine.org Sat Mar 30 16:08:23 2019 From: francis at daoine.org (Francis Daly) Date: Sat, 30 Mar 2019 16:08:23 +0000 Subject: Is it possible to add milliseconds in error.log and also timestamps In-Reply-To: References: <15d1757b0ed2bcc2de430d9f3a7b3a67.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190330160823.mpi55wojctrzqing@daoine.org> On Fri, Mar 29, 2019 at 07:24:44AM -0400, sivak wrote: Hi there, > Also, for some of the statements in error.log, the date and time are not > added. Is there any way to add them for the missed ones? Can you give some examples of what you do to generate these lines, what the lines look like, and what you want them to look like instead? As I understand things: in general in nginx, things written to error_log have a timestamp included; and things written to stderr do not. And most things written to stderr are there because they were asked for, or because the error log does not exist. If you can show specific things that can be improved, then perhaps someone will be inspired to change them. I do not think that all stderr output should have a timestamp -- for example: $ sbin/nginx -t nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful looks correct to me as-is; as does $ sbin/nginx -p nginx: option "-p" requires directory name Thanks, f -- Francis Daly francis at daoine.org
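[Editorial note: for readers who do want timestamps on nginx's stderr output, a small shell wrapper can prepend one to every line on the way through; the function name add_ts is made up for this sketch, and the date format mimics the error_log style.]

```shell
#!/bin/sh
# add_ts: prepend an error_log-style timestamp to every line read from stdin
add_ts() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%Y/%m/%d %H:%M:%S')" "$line"
    done
}

# usage: pipe nginx's stderr through the wrapper, e.g.
#   sbin/nginx -t 2>&1 | add_ts
```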