From jeffreyg at buyseasons.com Fri Feb 1 00:00:37 2019
From: jeffreyg at buyseasons.com (Jeffrey Gilgenbach)
Date: Fri, 1 Feb 2019 00:00:37 +0000
Subject: Using Public IP for NGINX server
Message-ID: 

I am currently evaluating migrating my current BIG-IP configuration to NGINX. I'm completely aware that not all of the configuration will translate 1:1, but I am looking to see if anyone has suggestions for where I am stuck, and whether what I want is possible.

I currently have a routable network set up between my edge router and my NGINX server; we'll say it's 192.168.1.0/24. My edge router has a static route for one of my public IPs to my NGINX server IP, 192.168.1.239. On my NGINX server, I want to assign this public IP to a listen directive under a server context, for load balancing a few web servers upstream. On F5, I could SNAT this IP using the routing network I created between the edge router and the F5. I'm not sure whether using iptables is necessary, or whether I need to add the public IP as a virtual interface on the routing network. Just a bit lost, and I could use some direction if anyone has any suggestions.

Thanks!

Jeffrey
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Fri Feb 1 13:29:59 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 1 Feb 2019 16:29:59 +0300
Subject: Matching & Acting upon Headers received from Upstream with proxy_pass
In-Reply-To: 
References: 
Message-ID: <20190201132959.GS1877@mdounin.ru>

Hello!

On Thu, Jan 31, 2019 at 09:06:10PM +0000, Alec Muffett wrote:

> I'm running a reverse proxy and I want to trap when upstream is sending me:
>
>     Content-Encoding: gzip
>
> ...and on those occasions return (probably) 406 downstream to the client;
> the reason for this is that I am always using:
>
>     proxy_set_header Accept-Encoding "identity";
>
> ...so the upstream should *never* send me gzip/etc; but sometimes it does
> so because of errors with CDN configuration and "Vary:" headers, and that
> kind of thing. I would like to make the situation more obvious and easier
> to detect.

The 406 looks wrong to me. It tells the client that the response is not acceptable as per the accept headers in the request. In your case it is more like 500, as the problem is that your backend returned a response it shouldn't have, and your frontend considers this a fatal error.

Note well that HTTP servers are allowed to return gzipped responses even to requests with "Accept-Encoding: identity". Quoting RFC 2616 (https://tools.ietf.org/html/rfc2616#section-10.4.7):

   Note: HTTP/1.1 servers are allowed to return responses which are
   not acceptable according to the accept headers sent in the
   request. In some cases, this may even be preferable to sending a
   406 response. User agents are encouraged to inspect the headers of
   an incoming response to determine if it is acceptable.

For example, this is something that can happen when only gzipped variants of files are available on the server. Apache can be configured to add the encoding based on the extension (so ".gz" files will be returned with "Content-Encoding: gz", see https://httpd.apache.org/docs/2.4/mod/mod_mime.html#addencoding), and nginx can return gzipped variants regardless of the client's support with "gzip_static always;" (see http://nginx.org/r/gzip_static).

> I have been trying solutions like:
>
>     if ( $upstream_http_content_encoding ~ /gzip/ ) { return 406; }
>
> and:
>
>     map $upstream_http_content_encoding $badness {
>         br 1;
>         compress 1;
>         deflate 1;
>         gzip 1;
>         identity 0;
>         default 0;
>     }
>     ...
>     server {
>     ...
>     if ($badness) { return 406; }
>
> ...but nothing is working like I had hoped, I suspect because I do not
> know if/where to place the if-statement such that the
> upstream_http_content_encoding is both set and valid during an
> appropriate processing phase.

This is not going to work, as rewrite module directives are processed while selecting a configuration to work with (see http://nginx.org/en/docs/http/ngx_http_rewrite_module.html). Obviously enough, this happens before the request is passed to the upstream, and before the response is received.

> The most annoying thing is that I can see that the
> upstream_http_content_encoding variable is set to "gzip", because if I do:
>
>     more_set_headers "Foo: /$upstream_http_content_encoding/";
>
> ...then I can see the "Foo: /gzip/" value on the client; but that does
> not help me do what I want.
>
> Can anyone suggest a route forward, please?

You should be able to do what you want by writing a header filter which will check the headers returned and will return an error if these headers don't match your expectations.

-- 
Maxim Dounin
http://mdounin.ru/

From alec.muffett at gmail.com Fri Feb 1 14:32:18 2019
From: alec.muffett at gmail.com (Alec Muffett)
Date: Fri, 1 Feb 2019 14:32:18 +0000
Subject: Matching & Acting upon Headers received from Upstream with proxy_pass
In-Reply-To: <20190201132959.GS1877@mdounin.ru>
References: <20190201132959.GS1877@mdounin.ru>
Message-ID: 

On Fri, 1 Feb 2019 at 13:30, Maxim Dounin wrote:

Hi Maxim!

> The 406 looks wrong to me. It tells the client that the response
> is not acceptable as per the accept headers in the request. In your
> case it is more like 500,

Concur. I did some research on Wikipedia and followed Cloudflare's example by using 520.

> Note well that HTTP servers are allowed to return gzipped
> responses even to requests with "Accept-Encoding: identity".
> Quoting RFC 2616 (https://tools.ietf.org/html/rfc2616#section-10.4.7):
>
>    Note: HTTP/1.1 servers are allowed to return responses which are
>    not acceptable according to the accept headers sent in the
>    request. In some cases, this may even be preferable to sending a
>    406 response. User agents are encouraged to inspect the headers of
>    an incoming response to determine if it is acceptable.

Being a reverse proxy, in a sense it is exactly that sort of inspection which I am attempting :-) They are certainly allowed to do it in this circumstance, but it's not helpful for the proxy in question.

> This is not going to work, as rewrite module directives are
> processed while selecting a configuration to work with (see
> http://nginx.org/en/docs/http/ngx_http_rewrite_module.html).
> Obviously enough, this happens before the request is passed to the
> upstream, and before the response is received.

Yep. NGINX is a marvel, but working out which bits of configuration are executed in which order is still a challenge for me without coffee.

> You should be able to do what you want by writing a header filter
> which will check the headers returned and will return an error if
> these headers don't match your expectations.

Eventually I went with this, using Lua. I am still trying to work out how to make it return an informative message AS WELL AS the 520 error code, perhaps by using an internal redirect? I am wondering whether a header filter can cause an internal redirect to a more useful "520 page".

Thank you for getting back to me!
:-)

- alec

Extract from: https://github.com/alecmuffett/eotk/blob/master/templates.d/nginx.conf.txt

init_by_lua_block {
    Dictionary = function (list)
        local set = {}
        for _, l in ipairs(list) do
            set[l] = true
        end
        return set
    end
    is_compression = Dictionary{
        "br",
        "compress",
        "deflate",
        "gzip",
    }
}

header_filter_by_lua_block {
    local ce = ngx.var.upstream_http_content_encoding or ""
    if is_compression[ce] then
        -- I'd prefer to do something nice like this:
        --   ngx.status = 520
        --   ngx.say("upstream content was compressed and therefore not rewritable")
        --   ngx.exit(ngx.OK)
        -- ...but say() needs an API that is not available in this phase:
        --   https://github.com/openresty/lua-nginx-module#header_filter_by_lua
        -- therefore:
        ngx.exit(520) -- en.wikipedia.org/wiki/List_of_HTTP_status_codes
    end
}

-- 
http://dropsafe.crypticide.com/aboutalecm
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cyflhn at 163.com Sat Feb 2 15:49:16 2019
From: cyflhn at 163.com (yf chu)
Date: Sat, 2 Feb 2019 23:49:16 +0800 (CST)
Subject: client timed out (110: Connection timed out) while SSL handshaking for web browser on mobile device
Message-ID: <7a5bc9ce.9080.168aee65346.Coremail.cyflhn@163.com>

We have found that occasionally a web page cannot be opened with a browser on a mobile device. We sometimes wait a very long time before we get the error message that the web page could not be opened, but when we then reopen the page, the normal page shows without any problems. We have checked the nginx log files and found the message "client timed out (110: Connection timed out) while SSL handshaking" many times. But it is an info message, not an error or warning message. What could be the reason for this issue? Does it have something to do with HTTP/2 or TLS? We have enabled HTTP/2. Here are some key configuration directives:

    listen 443 http2 ssl;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Tue Feb 5 12:29:19 2019
From: nginx-forum at forum.nginx.org (abmackenzie)
Date: Tue, 05 Feb 2019 07:29:19 -0500
Subject: Modifying Headers by Analyzing Body
Message-ID: <639b529c8cc6d90776cd250a40ca3c84.NginxMailingListEnglish@forum.nginx.org>

Hi all,

I am attempting to develop an NGINX module which is required to modify the response headers based on the content of the response body, whether that body comes from an upstream proxy or is fulfilled by the server itself.

My current approach, a body filter, does not seem fruitful. By the time the body filter is invoked the headers have already been sent, and there is no way to generate the correct header without the response body. I cannot find any mechanism in the NGX API to postpone the sending of the headers, or to access the response body in a header filter.

I was hoping somebody could either point me in the direction of a solution (or confirm that what I am trying to do is impossible without modification of the NGINX source).
Thanks,
Alex

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282914,282914#msg-282914

From arut at nginx.com Tue Feb 5 13:57:51 2019
From: arut at nginx.com (Roman Arutyunyan)
Date: Tue, 5 Feb 2019 16:57:51 +0300
Subject: Modifying Headers by Analyzing Body
In-Reply-To: <639b529c8cc6d90776cd250a40ca3c84.NginxMailingListEnglish@forum.nginx.org>
References: <639b529c8cc6d90776cd250a40ca3c84.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190205135751.GZ960@Romans-MacBook-Air.local>

Hi Alex,

On Tue, Feb 05, 2019 at 07:29:19AM -0500, abmackenzie wrote:
> Hi all,
>
> I am attempting to develop an NGINX module which is required to modify the
> response headers based on the content of the response body [...]
>
> I was hoping somebody could either point me in the direction of a solution
> (or confirm that what I am trying to do is impossible without modification
> of the NGINX source).

This is possible. There are two examples of this in the nginx source: ngx_http_xslt_filter_module and ngx_http_image_filter_module.

In a nutshell, you need to register both a header filter and a body filter. In the header filter you don't call the next filter in the chain; you only call it from the body filter once you're done with receiving and caching the body. After calling the next header filter, you should output all the body cached by your module by calling the next body filter.

Also notice the ngx_http_filter_finalize_request() call in the body filter. Use this call to finalize the request in case of an error, if the header has not yet been sent.

PS: it is better to send development questions to the nginx-devel at nginx.org mailing list

-- 
Roman Arutyunyan

From nginx-forum at forum.nginx.org Tue Feb 5 14:11:10 2019
From: nginx-forum at forum.nginx.org (abmackenzie)
Date: Tue, 05 Feb 2019 09:11:10 -0500
Subject: Modifying Headers by Analyzing Body
In-Reply-To: <20190205135751.GZ960@Romans-MacBook-Air.local>
References: <20190205135751.GZ960@Romans-MacBook-Air.local>
Message-ID: 

Hi Roman,

Thank you very much. I will study the source of the modules you mention; I had not realised there were examples of this being done in core NGINX (and in future I will direct development questions to the correct mailing list!).

Alex

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282914,282918#msg-282918

From francis at daoine.org Tue Feb 5 19:54:57 2019
From: francis at daoine.org (Francis Daly)
Date: Tue, 5 Feb 2019 19:54:57 +0000
Subject: Nginx access log query string params per line.
In-Reply-To: 
References: 
Message-ID: <20190205195457.xzdcyk7yyanpyx3m@daoine.org>

On Thu, Jan 31, 2019 at 05:25:00PM -0500, c0nw0nk wrote:

Hi there,

> The access.log file shows
>
>     query1=param1&query2=param2
>
> all on the same line. Is it possible to split these up onto different
> lines?
>
> Example:
>
>     query1=param1
>
>     query2=param2

I think "no", using just stock nginx config.

However -- is it acceptable to post-process the log file?

    tr '&' '\n' < access.log

is one way to see what you want, without changing what nginx logs.
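If you only care about a handful of known parameter names, another option (an untested sketch; "query1" and "query2" are just the names from your example) is to let nginx write one line per parameter itself, since several access_log directives on the same configuration level each write their own entry:

    # in the http{} block:
    log_format q1 'query1=$arg_query1';
    log_format q2 'query2=$arg_query2';

    # in the relevant server{} or location{}:
    access_log /var/log/nginx/params.log q1;
    access_log /var/log/nginx/params.log q2;

That only works for parameter names you know in advance, though; for arbitrary query strings, post-processing as above is simpler.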
f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Wed Feb 6 00:32:20 2019
From: francis at daoine.org (Francis Daly)
Date: Wed, 6 Feb 2019 00:32:20 +0000
Subject: bypassing and replacing cache if origin changed - or purging cache when origin changed?
In-Reply-To: 
References: 
Message-ID: <20190206003220.4nixfj3zzxfps3xm@daoine.org>

On Fri, Jan 25, 2019 at 03:28:42PM -0500, nenaB wrote:

Hi there,

None of the suggestions below are fully tested by me, so they are not drop-in answers to your questions.

> I have nginx configured to cache certain requests mapped by
> tenant/host/uri/args - these files wouldn't change often, but if they do, i
> want nginx to fetch them new.
>
> I don't have NginxPlus - and I'm running Nginx docker. Some things that I
> have investigated, but not sure how I could configure to make it work is:
> the proxy_cache_bypass and the expires settings.

"Normal" caching involves nginx accepting what the upstream says the expiry time of the response is. If you want nginx to check upstream for new content before that expiry time, you either need to purge nginx's cache for that request, or to bypass the cache for the request.

I think that there is not an explicit "purge" facility in the stock nginx. But there is the proxy_cache_bypass directive.

You can perhaps try a third-party module for purging elements from the cache. Or set up your system such that big or expensive content never expires, and instead have small or lightweight content that links to the "current" big content, and the link is updated when new big content is available. Or you can just use proxy_cache_bypass.

If you know by some external means that the content that claimed to be valid for the next month is now in fact stale, then you can make a request of nginx for the content while including some "please bypass the cache" flags in the request, and nginx should make the request of upstream, and store the response such that the next "normal" request will get the new response.

> What exactly do I need to configure if
> I want to test whether for a given cache-key on the upstream server, is
> there new content? (I can implement an api that could test whether new
> content is available.)

I am assuming that you know when content has been changed, and you want to tell nginx to refresh its cache even though nginx was previously told that the content would not have changed yet.

In that case: ignore the cache key. Just make the normal request; but make it from a specific source IP address and/or include a specific header in the request, that causes your proxy_cache_bypass variable to be non-zero.

If you do not know whether the upstream content has changed or not, perhaps you could set a short "expiry" time within nginx, and use proxy_cache_revalidate to "refresh" the validity without re-fetching the content?

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From vbart at nginx.com Thu Feb 7 16:50:13 2019
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 07 Feb 2019 19:50:13 +0300
Subject: Unit 1.7.1 release
Message-ID: <1879441.00TDSJ8TLp@vbart-workstation>

Hi,

This is a bugfix release of NGINX Unit that eliminates a security flaw. All versions of Unit from 0.3 to 1.7 are affected. Everybody is strongly advised to update to a new version.

Changes with Unit 1.7.1                                      07 Feb 2019

    *) Security: a heap memory buffer overflow might have been caused in
       the router process by a specially crafted request, potentially
       resulting in a segmentation fault or other unspecified behavior
       (CVE-2019-7401).
    *) Bugfix: install of Go module failed without prior building of Unit
       daemon; the bug had appeared in 1.7.

Release of Unit 1.8, with support for internal request routing and an experimental Java module, is planned for the end of February.

wbr, Valentin V. Bartenev

From vbart at nginx.com Thu Feb 7 16:51:47 2019
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 07 Feb 2019 19:51:47 +0300
Subject: Unit security advisory (CVE-2019-7401)
Message-ID: <2960635.QidAOF5Xu5@vbart-workstation>

Hi,

A security issue was identified in NGINX Unit, which might allow an attacker to cause a heap memory buffer overflow in the router process with a specially crafted request. This may result in a denial of service (router process crash) or other unspecified behavior (CVE-2019-7401).

The issue affects Unit 0.3 - 1.7. The issue is fixed in Unit 1.7.1.

wbr, Valentin V. Bartenev

From reiner at buehl.net Sat Feb 9 10:50:22 2019
From: reiner at buehl.net (Reiner Bühl)
Date: Sat, 09 Feb 2019 11:50:22 +0100
Subject: Rewrite doesn't work if location has no trailing /
Message-ID: <45f81a832c0c6bd87b47385fb23d52f6@buehl.net>

Hi all,

I currently use the following location to redirect every request for a resource under /webmail to a separate server:

    location /webmail {
        rewrite ^/webmail(.*) /$1 break;
        proxy_pass http://127.0.0.1:8081;
        proxy_redirect off;
        proxy_set_header Host $host;
    }

This works fine for URLs like /webmail/ or /webmail/<*>. However, if the URL is just /webmail - no trailing / - then the page itself is served from the other web server, but all the resources like pictures that are referenced in that page do not get rewritten. If I call the same page with /webmail/, all resources in the page are properly rewritten and load correctly.

Can you give me a hint what to change in the rule?

Best regards,
Reiner

From me at nanaya.pro Sat Feb 9 10:55:22 2019
From: me at nanaya.pro (nanaya)
Date: Sat, 09 Feb 2019 19:55:22 +0900
Subject: Rewrite doesn't work if location has no trailing /
In-Reply-To: <45f81a832c0c6bd87b47385fb23d52f6@buehl.net>
References: <45f81a832c0c6bd87b47385fb23d52f6@buehl.net>
Message-ID: <1549709722.3137704.1654290320.1BAB2B30@webmail.messagingengine.com>

Hi,

On Sat, Feb 9, 2019, at 19:50, Reiner Bühl wrote:
> [...]
> Can you give me a hint what to change in the rule?

The resources are probably referenced by relative paths, and /webmail is different from /webmail/ - the first has a base path of / and the second of /webmail/. A request for css/app.css therefore resolves to /css/app.css in the first case and to /webmail/css/app.css in the second.

I'd do a 301/302 redirect of /webmail to /webmail/ instead.
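Untested, but an exact-match location in front of your existing prefix location should be all that takes:

    # redirect the bare /webmail to /webmail/; everything else still
    # falls through to the existing "location /webmail" block
    location = /webmail {
        return 301 /webmail/;
    }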
From nginx-forum at forum.nginx.org Sat Feb 9 13:06:45 2019
From: nginx-forum at forum.nginx.org (reibuehl)
Date: Sat, 09 Feb 2019 08:06:45 -0500
Subject: Rewrite doesn't work if location has no trailing /
In-Reply-To: <1549709722.3137704.1654290320.1BAB2B30@webmail.messagingengine.com>
References: <1549709722.3137704.1654290320.1BAB2B30@webmail.messagingengine.com>
Message-ID: 

I added a rewrite rule to permanently redirect /webmail to /webmail/ and it works fine! Many thanks for the quick help!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282962,282964#msg-282964

From adami at seeitonthenet.com Sun Feb 10 15:49:45 2019
From: adami at seeitonthenet.com (Adam)
Date: Sun, 10 Feb 2019 15:49:45 +0000
Subject: Server fails to start with unknown variable in log
Message-ID: <48a0be62-4744-943b-3879-b3911e72b012@seeitonthenet.com>

Hi,

Since the GeoIP community database was removed by MaxMind, I wanted to remove GeoIP from my nginx.conf file. On running a start I get the following error:

    Performing sanity check on nginx configuration:
    nginx: [emerg] unknown "allow_visit" variable
    nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed
    Starting nginx.
    nginx: [emerg] unknown "allow_visit" variable
    /usr/local/etc/rc.d/nginx: WARNING: failed to start nginx

So I ran:

    cat /usr/local/etc/nginx/nginx.conf | grep "allow_visit"

with no result at all. A web search produced nothing.

My system is 12.0-RELEASE-p3 FreeBSD 12.0-RELEASE-p3 GENERIC amd64
nginx: nginx-devel-1.15.8_4, FreeBSD port install

Any pointers would help.

Regards,
Adam

From francis at daoine.org Sun Feb 10 16:23:30 2019
From: francis at daoine.org (Francis Daly)
Date: Sun, 10 Feb 2019 16:23:30 +0000
Subject: Server fails to start with unknown variable in log
In-Reply-To: <48a0be62-4744-943b-3879-b3911e72b012@seeitonthenet.com>
References: <48a0be62-4744-943b-3879-b3911e72b012@seeitonthenet.com>
Message-ID: <20190210162330.sl5vvesjwzzzwhhq@daoine.org>

On Sun, Feb 10, 2019 at 03:49:45PM +0000, Adam wrote:

Hi there,

> Performing sanity check on nginx configuration:
> nginx: [emerg] unknown "allow_visit" variable
> nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed

> So I ran:
>
>     cat /usr/local/etc/nginx/nginx.conf | grep "allow_visit"
>
> with no result at all.

There may be other files "include"d from your nginx.conf.

> Any pointers would help.

    nginx -T | grep -e 'configuration file\|allow_visit'

should show the mention of allow_visit; the previous "configuration file" name in that output is the file that uses it.

f
-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Sun Feb 10 17:00:28 2019
From: nginx-forum at forum.nginx.org (Adami)
Date: Sun, 10 Feb 2019 12:00:28 -0500
Subject: Server fails to start with unknown variable in log
In-Reply-To: <20190210162330.sl5vvesjwzzzwhhq@daoine.org>
References: <20190210162330.sl5vvesjwzzzwhhq@daoine.org>
Message-ID: <5306edd1afc474555e1058ada7640c48.NginxMailingListEnglish@forum.nginx.org>

Thanks for that thought - it led me to check for any reference to "GeoIP" in the nginx.conf, and I discovered a link tucked away. Simple removal solved it. Thanks for the lead.
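For anyone else who hits this: the "unknown variable" error just means something in the loaded configuration still uses the variable after the map that defined it is gone. The usual GeoIP pattern looks something like this (an illustration, not my exact config):

    # leftover check in an included file, still referencing the variable:
    if ($allow_visit = no) {
        return 403;
    }

    # ...while the GeoIP-based map that used to define it was removed:
    # map $geoip_country_code $allow_visit {
    #     default yes;
    #     CN      no;
    # }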
Regards,
Adam

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282966,282968#msg-282968

From nginx-forum at forum.nginx.org Mon Feb 11 03:21:22 2019
From: nginx-forum at forum.nginx.org (nevereturn01)
Date: Sun, 10 Feb 2019 22:21:22 -0500
Subject: Use sub-url to identify the different server
In-Reply-To: <20190124085832.cntioa53ks3cxycr@daoine.org>
References: <20190124085832.cntioa53ks3cxycr@daoine.org>
Message-ID: <3aa6d4339c4f02f755c0452e5a47bcba.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

Thanks for your suggestions. The rule seems to work.

Thanks very much!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282615,282975#msg-282975

From nginx-forum at forum.nginx.org Mon Feb 11 14:59:06 2019
From: nginx-forum at forum.nginx.org (joao.pereira)
Date: Mon, 11 Feb 2019 09:59:06 -0500
Subject: STALE responses taking as much as MISS responses
Message-ID: 

Hi all,

I'm trying to set up nginx with a big amount of disk to serve as a cache server. I have the following configuration:

    proxy_cache_path /mnt/cache levels=2:2:2 keys_zone=my-cache:10000m max_size=700000m inactive=30d;
    proxy_temp_path /mnt/cache/tmp;

In my logs I can see that HITs are very fast, but STALEs take as long as MISSes, while I believe they should take as long as HITs.

Is there something I can do to improve this? Are the stale responses a true "stale-while-revalidate" response, or are they waiting for the response from the origin server?

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282984,282984#msg-282984

From nginx-forum at forum.nginx.org Mon Feb 11 15:06:15 2019
From: nginx-forum at forum.nginx.org (joao.pereira)
Date: Mon, 11 Feb 2019 10:06:15 -0500
Subject: STALE responses taking as much as MISS responses
In-Reply-To: 
References: 
Message-ID: <15e8d22ff886406b9d83aadccdeb55ce.NginxMailingListEnglish@forum.nginx.org>

Just to add more information, I also have:

    proxy_cache_use_stale error
                          timeout
                          invalid_header
                          updating
                          http_500
                          http_502
                          http_503
                          http_504
                          http_404;
    proxy_cache_background_update on;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282984,282985#msg-282985

From nginx-forum at forum.nginx.org Mon Feb 11 15:16:07 2019
From: nginx-forum at forum.nginx.org (rick_pri)
Date: Mon, 11 Feb 2019 10:16:07 -0500
Subject: I'm about to embark on creating 12000 vhosts
Message-ID: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org>

Our current setup is pretty simple: we have a regex capture to ensure that the incoming request is a valid ASCII domain name, and we serve all our traffic from that. Great ... for us.

However, our customers, with about 12000 domain names at present, have started to become quite vocal about having HTTPS on their websites, to which we provide a custom CMS and website package. This means we're about to create a new Nginx layer in front of our current servers to terminate TLS, which will require us to set up vhosts for each certificate issued, with server names which match what's in the certificate's SAN.

To keep this simple we're currently thinking about just having each domain, and its www subdomain, on its own certificate (LetsEncrypt) and vhost - each vhost generated from a template along the lines sketched below - but that is going to lead, approximately, to the number of vhosts mentioned in the subject line. As such I wanted to put the feelers out to see if anyone else had tried to work with large numbers of vhosts, and any issues which they may have come across.
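For a sense of scale, the generated vhosts would each be roughly this shape (a sketch: example.com stands in for a customer domain, and "backend" for our existing HTTP-only layer):

    server {
        listen 443 ssl http2;
        server_name example.com www.example.com;

        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
        }
    }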
Kind regards,

Richard

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282986,282986#msg-282986

From crackhd2 at gmail.com Mon Feb 11 15:35:18 2019
From: crackhd2 at gmail.com (Ben Schmidt)
Date: Mon, 11 Feb 2019 16:35:18 +0100
Subject: I'm about to embark on creating 12000 vhosts
In-Reply-To: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org>
References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi Richard,

we have experience with around a quarter of that number of vhosts on a single server - no issues at all. Reloading can take up to a minute, but the hardware isn't what I would call recent.

The only thing that you'll have to watch out for are the Let's Encrypt rate limits > https://letsencrypt.org/docs/rate-limits/

#####
/etc/letsencrypt/renewal $ ls | wc -l
1647
#####

We switched to using SAN certs whenever possible.

Around 8 years ago I managed an 8000-vhost web farm with Apache. No issues either.

Cheers,
Ben

On Mon, Feb 11, 2019 at 4:16 PM rick_pri wrote:
> Our current setup is pretty simple: we have a regex capture to ensure
> that the incoming request is a valid ASCII domain name, and we serve
> all our traffic from that. [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Richard at primarysite.net Mon Feb 11 15:58:54 2019
From: Richard at primarysite.net (Richard Paul)
Date: Mon, 11 Feb 2019 15:58:54 +0000
Subject: I'm about to embark on creating 12000 vhosts
In-Reply-To: 
References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi Ben,

Thanks for the quick response. That's great to hear, as we'd only get to find this out after putting rather a lot of effort into the process. We'll be hosting these on cloud instances, but since those aren't the fastest machines around I'll take the reloading as a word of caution. (We're probably going to have to build another bit of application functionality to handle this, so that we're only reloading when we have domain changes rather than on a regular schedule, which I'd thought would be the simplest method.)

I have a plan for the rate limits, but thank you for mentioning it. SANs would reduce the number of vhosts, but I'm not sure about the added complexity of managing the vhost templates and the key/cert naming.
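(For concreteness, a SAN-based vhost would presumably end up carrying several unrelated customer domains in one block, something like

    server_name customer-one.example www.customer-one.example
                customer-two.example www.customer-two.example;

with a single certificate covering all four names - which is exactly the naming and templating complexity I'm wary of.)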
Kind regards, Richard On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote: Hi Richard, we have experience with around 1/4th the vhosts on a single Server, no Issues at all. Reloading can take up to a minute but the Hardware isn't what I would call recent. The only thing that you'll have to watch out are Letsencrypt rate Limits > https://letsencrypt.org/docs/rate-limits/ ##### /etc/letsencrypt/renewal $ ls | wc -l 1647 ##### We switched to using SAN Certs whenever possible. Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No Issues ether. Cheers, Ben On Mon, Feb 11, 2019 at 4:16 PM rick_pri > wrote: Our current setup is pretty simple, we have a regex capture to ensure that the incoming request is a valid ascii domain name and we serve all our traffic from that. Great ... for us. However, our customers, with about 12000 domain names at present have started to become quite vocal about having HTTPS on their websites, to which we provide a custom CMS and website package, which means we're about to create a new Nginx layer in front of our current servers to terminate TLS. This will require us to set up vhosts for each certificate issued with server names which match what's in the certificate's SAN. To keep this simple we're currently thinking about just having each domain, and www subdomain, on its own certificate (LetsEncrypt) and vhost but that is going to lead, approximately, to the number of vhosts mentioned in the subject line. As such I wanted to put the feelers out to see if anyone else had tried to work with large numbers of vhosts and any issues which they may have come across. Kind regards, Richard Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282986,282986#msg-282986 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Mon Feb 11 18:34:05 2019 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 11 Feb 2019 10:34:05 -0800 Subject: I'm about to embark on creating 12000 vhosts In-Reply-To: References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> Message-ID: FWIW, this kind of large installation is why solutions like OpenResty exist (providing for dynamic config/cert service/hostname registration without having to worry about the time/expense of re-parsing the Nginx config). On Mon, Feb 11, 2019 at 7:59 AM Richard Paul wrote: > Hi Ben, > > Thanks for the quick response. That's great to hear, as we'd only get to > find this out after putting rather a lot of effort into the process. > We'll be hosting these on cloud instances but since those aren't the > fastest machines around I'll take the reloading as a word of caution (we're > probably going to have to make another bit of application functionality > which will handle this so that we're only reloading when we have domain > changes rather than on a regular schedule that'd I'd thought would be the > simplest method.) > > I have a plan for the rate limits, but thank you for mentioning it. SANs > would reduce the number of vhosts, but I'm not sure about the added > complexity of managing the vhost templates and the key/cert naming. 
> > Kind regards, > Richard > > > On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote: > > Hi Richard, > > we have experience with around 1/4th the vhosts on a single Server, no > Issues at all. > Reloading can take up to a minute but the Hardware isn't what I would call > recent. > > The only thing that you'll have to watch out are Letsencrypt rate Limits > > https://letsencrypt.org/docs/rate-limits/ > ##### > /etc/letsencrypt/renewal $ ls | wc -l > 1647 > ##### > We switched to using SAN Certs whenever possible. > > Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No > Issues ether. > > Cheers, > Ben > > On Mon, Feb 11, 2019 at 4:16 PM rick_pri > wrote: > > Our current setup is pretty simple, we have a regex capture to ensure that > the incoming request is a valid ascii domain name and we serve all our > traffic from that. Great ... for us. > > However, our customers, with about 12000 domain names at present have > started to become quite vocal about having HTTPS on their websites, to > which > we provide a custom CMS and website package, which means we're about to > create a new Nginx layer in front of our current servers to terminate TLS. > This will require us to set up vhosts for each certificate issued with > server names which match what's in the certificate's SAN. > > To keep this simple we're currently thinking about just having each domain, > and www subdomain, on its own certificate (LetsEncrypt) and vhost but that > is going to lead, approximately, to the number of vhosts mentioned in the > subject line. As such I wanted to put the feelers out to see if anyone > else > had tried to work with large numbers of vhosts and any issues which they > may > have come across. > > Kind regards, > > Richard > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,282986,282986#msg-282986 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Mon Feb 11 18:57:57 2019 From: rainer at ultra-secure.de (Rainer Duffner) Date: Mon, 11 Feb 2019 19:57:57 +0100 Subject: I'm about to embark on creating 12000 vhosts In-Reply-To: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <97F94933-CB18-498D-81C4-12BA3DAF2F7C@ultra-secure.de> > Am 11.02.2019 um 16:16 schrieb rick_pri : > > However, our customers, with about 12000 domain names at present have Let?s Encrypt rate limits will likely make these very difficult to obtain and also to renew. If you own the DNS, maybe using Wildcard DNS entries is more practical. Then, HAProxy allows to just drop all the certificates in a directory and let itself figure out the domain-names it has to answer. At least, that?s what my co-worker told me. Also, there?s the fabio LB with similar goal-posts. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peter_booth at me.com Mon Feb 11 19:00:21 2019 From: peter_booth at me.com (Peter Booth) Date: Mon, 11 Feb 2019 14:00:21 -0500 Subject: STALE responses taking as much as MISS responses In-Reply-To: <15e8d22ff886406b9d83aadccdeb55ce.NginxMailingListEnglish@forum.nginx.org> References: <15e8d22ff886406b9d83aadccdeb55ce.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7730CDD9-07BD-4886-A37B-AAE894A1A01D@me.com> You should be able to answer this by tailing the log of your nginx and orig server at the same time. It would be helpful if you shared an (anonymized) section of both logs. When I say fast or slow I might mean something very different to what you hear. > On 11 Feb 2019, at 10:06 AM, joao.pereira wrote: > > Just to add more information, I also have: > > proxy_cache_use_stale error > timeout > invalid_header > updating > http_500 > http_502 > http_503 > http_504 > http_404; > proxy_cache_background_update on; > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282984,282985#msg-282985 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Mon Feb 11 19:27:05 2019 From: peter_booth at me.com (Peter Booth) Date: Mon, 11 Feb 2019 14:27:05 -0500 Subject: STALE responses taking as much as MISS responses In-Reply-To: <7730CDD9-07BD-4886-A37B-AAE894A1A01D@me.com> References: <15e8d22ff886406b9d83aadccdeb55ce.NginxMailingListEnglish@forum.nginx.org> <7730CDD9-07BD-4886-A37B-AAE894A1A01D@me.com> Message-ID: <2B5F67FA-9244-424C-8B4B-BCB16817B65B@me.com> You are specifying a key zone that can hold about 80 million keys, and three level cache. Do you really have that many cached files? Unless you are serving petabytes of content, I?d suggest reverting your settings to default values and running some test cases to validate correct caching behavior. Peter > On 11 Feb 2019, at 2:00 PM, Peter Booth wrote: > > You should be able to answer this by tailing the log of your nginx and orig server at the same time. > > It would be helpful if you shared an (anonymized) section of both logs. When I say fast or slow > I might mean something very different to what you hear. > > >> On 11 Feb 2019, at 10:06 AM, joao.pereira > wrote: >> >> Just to add more information, I also have: >> >> proxy_cache_use_stale error >> timeout >> invalid_header >> updating >> http_500 >> http_502 >> http_503 >> http_504 >> http_404; >> proxy_cache_background_update on; >> >> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282984,282985#msg-282985 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeff.dyke at gmail.com Mon Feb 11 19:33:27 2019 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Mon, 11 Feb 2019 14:33:27 -0500 Subject: I'm about to embark on creating 12000 vhosts In-Reply-To: <97F94933-CB18-498D-81C4-12BA3DAF2F7C@ultra-secure.de> References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> <97F94933-CB18-498D-81C4-12BA3DAF2F7C@ultra-secure.de> Message-ID: I use haproxy in a similar way as stated by Rainer, rather than having hundreds and hundreds of config files (yes there are other ways), i have 1 for haproxy and 2(on multiple machines defined in HAProxy). 
One for my main domain that listens to an "real" server_name and another that listens to `server_name _;` All of the nginx servers simply listen on 80 and 81 to handle non H2 clients and the application does the correct thing with the domain. Which is where YMMV as all applications differ. I found this much simpler and easier to maintain over time. I got around the LE limits by a staggered migration, so i was only requesting what was in the limit each day, then have a custom script that calls LE (which is also on the same machine as HAProxy) when certs are about 10 days out, so the staggering stays within the limits. When i was using custom configuration, i was build them via python using a yaml file and nginx would effectively be a jinja2 template. But even that became onerous. When going down the nginx path ensure you pay attention to the variables that control domain hash sizes. http://nginx.org/en/docs/hash.html HTH, good luck! Jeff On Mon, Feb 11, 2019 at 1:58 PM Rainer Duffner wrote: > > > Am 11.02.2019 um 16:16 schrieb rick_pri : > > However, our customers, with about 12000 domain names at present have > > > > Let?s Encrypt rate limits will likely make these very difficult to obtain > and also to renew. > > If you own the DNS, maybe using Wildcard DNS entries is more practical. > > Then, HAProxy allows to just drop all the certificates in a directory and > let itself figure out the domain-names it has to answer. > At least, that?s what my co-worker told me. > > Also, there?s the fabio LB with similar goal-posts. > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Mon Feb 11 19:50:19 2019 From: r at roze.lv (Reinis Rozitis) Date: Mon, 11 Feb 2019 21:50:19 +0200 Subject: STALE responses taking as much as MISS responses In-Reply-To: References: Message-ID: <001901d4c243$04e08f50$0ea1adf0$@roze.lv> > On my logs I can see that HIT's are very fast but STALEs take as much as MISS > while I believe they should take as much as HITs. > > Is there something I can do to improve this ? Are the stale responses a true > "stale-while-revalidate" response ?or are they waiting for the response from the > origin server ? IMO the stale responses can't be as fast as HIT even theoretically (except maybe in 'updating' state with proxy_cache_background_update enabled) since they happen when the cache object has expired and nginx tries to get a new version from backend but for some reason can't get an updated response from the upstream (all the proxy_cache_use_stale states). I would try to identify why the stales actually happen - is it because the backend fails or there are multiple parallel requests to the same url and something like proxy_cache_lock is also defined. To speed up the response you might want to check the proxy_*_timeout directives as the defaults are quite high (60sec) and for example proxy_read_timeout is between reads so depending on the object size in case of a slow backend might take even more time. rr From sca at andreasschulze.de Mon Feb 11 19:53:24 2019 From: sca at andreasschulze.de (A. 
Schulze) Date: Mon, 11 Feb 2019 20:53:24 +0100 Subject: I'm about to embark on creating 12000 vhosts In-Reply-To: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0eb0d369-8f9f-4912-2d91-affb15ad60a6@andreasschulze.de> Am 11.02.19 um 16:16 schrieb rick_pri: > As such I wanted to put the feelers out to see if anyone else > had tried to work with large numbers of vhosts and any issues which they may > have come across. Hello we're running nginx (latest) with ~5k domains + 5k www.domain without issues. Configuration file is created by configuration management system. Currently nginx only serve https and proxy to a apache at localhost. Funfact: nging reload that number of vhost + certificates faster then apache simply handling only plain http :-) Andreas From peter_booth at me.com Mon Feb 11 19:54:40 2019 From: peter_booth at me.com (Peter Booth) Date: Mon, 11 Feb 2019 14:54:40 -0500 Subject: I'm about to embark on creating 12000 vhosts In-Reply-To: References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <73C41379-6883-4FAC-BD2D-36EB3CE3AD5F@me.com> +1 to the openresty suggestion I?ve found that whenever I want to do something gnarly or perverse with nginx, openresty helps me do it in a way that?s maintainable and with any ugliness minimized. It?s like nginx with super-powers! Sent from my iPhone > On Feb 11, 2019, at 1:34 PM, Robert Paprocki wrote: > > FWIW, this kind of large installation is why solutions like OpenResty exist (providing for dynamic config/cert service/hostname registration without having to worry about the time/expense of re-parsing the Nginx config). > >> On Mon, Feb 11, 2019 at 7:59 AM Richard Paul wrote: >> Hi Ben, >> >> Thanks for the quick response. That's great to hear, as we'd only get to find this out after putting rather a lot of effort into the process. >> We'll be hosting these on cloud instances but since those aren't the fastest machines around I'll take the reloading as a word of caution (we're probably going to have to make another bit of application functionality which will handle this so that we're only reloading when we have domain changes rather than on a regular schedule that'd I'd thought would be the simplest method.) >> >> I have a plan for the rate limits, but thank you for mentioning it. SANs would reduce the number of vhosts, but I'm not sure about the added complexity of managing the vhost templates and the key/cert naming. >> >> Kind regards, >> Richard >> >> >>> On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote: >>> Hi Richard, >>> >>> we have experience with around 1/4th the vhosts on a single Server, no Issues at all. >>> Reloading can take up to a minute but the Hardware isn't what I would call recent. >>> >>> The only thing that you'll have to watch out are Letsencrypt rate Limits > https://letsencrypt.org/docs/rate-limits/ >>> ##### >>> /etc/letsencrypt/renewal $ ls | wc -l >>> 1647 >>> ##### >>> We switched to using SAN Certs whenever possible. >>> >>> Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No Issues ether. >>> >>> Cheers, >>> Ben >>> >>>> On Mon, Feb 11, 2019 at 4:16 PM rick_pri wrote: >>>> Our current setup is pretty simple, we have a regex capture to ensure that >>>> the incoming request is a valid ascii domain name and we serve all our >>>> traffic from that. Great ... for us. 
>>>> >>>> However, our customers, with about 12000 domain names at present have >>>> started to become quite vocal about having HTTPS on their websites, to which >>>> we provide a custom CMS and website package, which means we're about to >>>> create a new Nginx layer in front of our current servers to terminate TLS. >>>> This will require us to set up vhosts for each certificate issued with >>>> server names which match what's in the certificate's SAN. >>>> >>>> To keep this simple we're currently thinking about just having each domain, >>>> and www subdomain, on its own certificate (LetsEncrypt) and vhost but that >>>> is going to lead, approximately, to the number of vhosts mentioned in the >>>> subject line. As such I wanted to put the feelers out to see if anyone else >>>> had tried to work with large numbers of vhosts and any issues which they may >>>> have come across. >>>> >>>> Kind regards, >>>> >>>> Richard >>>> >>>> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282986,282986#msg-282986 >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> >>> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Feb 11 22:36:06 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 11 Feb 2019 22:36:06 +0000 Subject: Use sub-url to identify the different server In-Reply-To: <3aa6d4339c4f02f755c0452e5a47bcba.NginxMailingListEnglish@forum.nginx.org> References: <20190124085832.cntioa53ks3cxycr@daoine.org> <3aa6d4339c4f02f755c0452e5a47bcba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190211223606.jpcei6lgzgaklodz@daoine.org> On Sun, Feb 10, 2019 at 10:21:22PM -0500, nevereturn01 wrote: Hi there, > Thanks for your suggestions. > The rule seems to work. Good to hear that it is working for you :-) Cheers, f -- Francis Daly francis at daoine.org From anoopalias01 at gmail.com Tue Feb 12 02:01:35 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 12 Feb 2019 07:31:35 +0530 Subject: I'm about to embark on creating 12000 vhosts In-Reply-To: <73C41379-6883-4FAC-BD2D-36EB3CE3AD5F@me.com> References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> <73C41379-6883-4FAC-BD2D-36EB3CE3AD5F@me.com> Message-ID: I maintain an Nginx config generation plugin for a web hosting control panel, where people put on such high number of domains on a server normally and things I notice are 1. Memory consumption by worker process go up when vhost count go up , so we may need to reduce worker count 2. As already mentioned the reload might take a lot of time ,so do nginx -t 3. 
Even startup will take time as most package maintainers put a nginx -t on ExecPre(similar in non-systemd) which take a lot of time on startup I have read somewhere, Nginx is not good at handling this many vhost defs ,so they use a dynamic setup (like the one in OpenResty) at CloudFlare edge servers for SSL On Tue, Feb 12, 2019 at 1:25 AM Peter Booth via nginx wrote: > +1 to the openresty suggestion > > I?ve found that whenever I want to do something gnarly or perverse with > nginx, openresty helps me do it in a way that?s maintainable and with any > ugliness minimized. > > It?s like nginx with super-powers! > > Sent from my iPhone > > On Feb 11, 2019, at 1:34 PM, Robert Paprocki < > rpaprocki at fearnothingproductions.net> wrote: > > FWIW, this kind of large installation is why solutions like OpenResty > exist (providing for dynamic config/cert service/hostname registration > without having to worry about the time/expense of re-parsing the Nginx > config). > > On Mon, Feb 11, 2019 at 7:59 AM Richard Paul > wrote: > >> Hi Ben, >> >> Thanks for the quick response. That's great to hear, as we'd only get to >> find this out after putting rather a lot of effort into the process. >> We'll be hosting these on cloud instances but since those aren't the >> fastest machines around I'll take the reloading as a word of caution (we're >> probably going to have to make another bit of application functionality >> which will handle this so that we're only reloading when we have domain >> changes rather than on a regular schedule that'd I'd thought would be the >> simplest method.) >> >> I have a plan for the rate limits, but thank you for mentioning it. SANs >> would reduce the number of vhosts, but I'm not sure about the added >> complexity of managing the vhost templates and the key/cert naming. >> >> Kind regards, >> Richard >> >> >> On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote: >> >> Hi Richard, >> >> we have experience with around 1/4th the vhosts on a single Server, no >> Issues at all. >> Reloading can take up to a minute but the Hardware isn't what I would >> call recent. >> >> The only thing that you'll have to watch out are Letsencrypt rate >> Limits > https://letsencrypt.org/docs/rate-limits/ >> ##### >> /etc/letsencrypt/renewal $ ls | wc -l >> 1647 >> ##### >> We switched to using SAN Certs whenever possible. >> >> Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No >> Issues ether. >> >> Cheers, >> Ben >> >> On Mon, Feb 11, 2019 at 4:16 PM rick_pri >> wrote: >> >> Our current setup is pretty simple, we have a regex capture to ensure that >> the incoming request is a valid ascii domain name and we serve all our >> traffic from that. Great ... for us. >> >> However, our customers, with about 12000 domain names at present have >> started to become quite vocal about having HTTPS on their websites, to >> which >> we provide a custom CMS and website package, which means we're about to >> create a new Nginx layer in front of our current servers to terminate >> TLS. >> This will require us to set up vhosts for each certificate issued with >> server names which match what's in the certificate's SAN. >> >> To keep this simple we're currently thinking about just having each >> domain, >> and www subdomain, on its own certificate (LetsEncrypt) and vhost but that >> is going to lead, approximately, to the number of vhosts mentioned in the >> subject line. 
As such I wanted to put the feelers out to see if anyone >> else >> had tried to work with large numbers of vhosts and any issues which they >> may >> have come across. >> >> Kind regards, >> >> Richard >> >> Posted at Nginx Forum: >> https://forum.nginx.org/read.php?2,282986,282986#msg-282986 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> >> nginx mailing list >> >> nginx at nginx.org >> >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From Richard at primarysite.net Tue Feb 12 08:44:39 2019 From: Richard at primarysite.net (Richard Paul) Date: Tue, 12 Feb 2019 08:44:39 +0000 Subject: I'm about to embark on creating 12000 vhosts In-Reply-To: References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <568480d0bcb44db1961479f33d0490f5cf552eba.camel@primarysite.net> Hi Robert, I've not looked in a while but I think that there where some large assumptions in openresty that you are running on Linux. I'll have a look again but it might not quite be a good fit for us. Kind regards, Richard On Mon, 2019-02-11 at 10:34 -0800, Robert Paprocki wrote: FWIW, this kind of large installation is why solutions like OpenResty exist (providing for dynamic config/cert service/hostname registration without having to worry about the time/expense of re-parsing the Nginx config). On Mon, Feb 11, 2019 at 7:59 AM Richard Paul > wrote: Hi Ben, Thanks for the quick response. That's great to hear, as we'd only get to find this out after putting rather a lot of effort into the process. We'll be hosting these on cloud instances but since those aren't the fastest machines around I'll take the reloading as a word of caution (we're probably going to have to make another bit of application functionality which will handle this so that we're only reloading when we have domain changes rather than on a regular schedule that'd I'd thought would be the simplest method.) I have a plan for the rate limits, but thank you for mentioning it. SANs would reduce the number of vhosts, but I'm not sure about the added complexity of managing the vhost templates and the key/cert naming. Kind regards, Richard On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote: Hi Richard, we have experience with around 1/4th the vhosts on a single Server, no Issues at all. Reloading can take up to a minute but the Hardware isn't what I would call recent. The only thing that you'll have to watch out are Letsencrypt rate Limits > https://letsencrypt.org/docs/rate-limits/ ##### /etc/letsencrypt/renewal $ ls | wc -l 1647 ##### We switched to using SAN Certs whenever possible. Around 8 years ago I managed a 8000 vHosts Webfarm with a apache. No Issues ether. 
From Richard at primarysite.net Tue Feb 12 08:54:16 2019
From: Richard at primarysite.net (Richard Paul)
Date: Tue, 12 Feb 2019 08:54:16 +0000
Subject: I'm about to embark on creating 12000 vhosts
In-Reply-To: <97F94933-CB18-498D-81C4-12BA3DAF2F7C@ultra-secure.de>
References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> <97F94933-CB18-498D-81C4-12BA3DAF2F7C@ultra-secure.de>

Hi Rainer,

We don't control all the DNS; some of our customers prefer to keep control
of that stuff in-house. Also, wildcards don't work for us in this case:
they have individual vanity domains, sometimes more than one, which are
not wildcardable unless I could get something like *.*.co.uk.

Kind regards,
Richard

On Mon, 2019-02-11 at 19:57 +0100, Rainer Duffner wrote:

On 11.02.2019 at 16:16, rick_pri wrote:
> However, our customers, with about 12000 domain names at present have

Let's Encrypt rate limits will likely make these very difficult to obtain
and also to renew.

If you own the DNS, maybe using wildcard DNS entries is more practical.

Then, HAProxy allows you to just drop all the certificates in a directory
and let it figure out the domain names it has to answer. At least, that's
what my co-worker told me.

Also, there's the fabio LB with similar goal-posts.
From Richard at primarysite.net Tue Feb 12 09:04:03 2019
From: Richard at primarysite.net (Richard Paul)
Date: Tue, 12 Feb 2019 09:04:03 +0000
Subject: I'm about to embark on creating 12000 vhosts
Message-ID: <57de577c3cc7683292061efca6bfd267279ae062.camel@primarysite.net>

Hi Jeff,

That's interesting; how do you manage the programming to load the right
certificate for the domain coming in as the server name? We need to load
the right certificate for the incoming domain, and the 12000 figure is the
number of unique vanity domains without the www. subdomains.

We're planning to follow the same path as you, though: we're essentially
putting these Nginx TLS terminators (fronted by GCP load balancers) in
front of our existing Varnish caching and Nginx backend infrastructure,
which currently only listens on port 80.

I couldn't work out what the limits are at LE, as it's not clear with
regards to the limits on adding new unique domains. I'm going to have to
ask in their forums at some point so that I can work out what our daily
batches are going to be.

Kind regards,
Richard

On Mon, 2019-02-11 at 14:33 -0500, Jeff Dyke wrote:

I use haproxy in a similar way as stated by Rainer. Rather than having
hundreds and hundreds of config files (yes, there are other ways), I have
one config for haproxy and two (on multiple machines, defined in HAProxy):
one for my main domain that listens on a "real" server_name, and another
that listens with `server_name _;`. All of the nginx servers simply listen
on ports 80 and 81 to handle non-H2 clients, and the application does the
correct thing with the domain, which is where YMMV, as all applications
differ. I found this much simpler and easier to maintain over time.

I got around the LE limits with a staggered migration, so I was only
requesting what was within the limit each day, and I have a custom script
that calls LE (which is also on the same machine as HAProxy) when certs
are about 10 days out, so the staggering stays within the limits. When I
was using custom configuration, I built it via Python from a YAML file,
and the nginx config would effectively be a jinja2 template. But even that
became onerous. When going down the nginx path, ensure you pay attention
to the variables that control domain hash sizes:
http://nginx.org/en/docs/hash.html

HTH, good luck!
Jeff
From Richard at primarysite.net Tue Feb 12 09:07:11 2019
From: Richard at primarysite.net (Richard Paul)
Date: Tue, 12 Feb 2019 09:07:11 +0000
Subject: I'm about to embark on creating 12000 vhosts
In-Reply-To: <0eb0d369-8f9f-4912-2d91-affb15ad60a6@andreasschulze.de>
References: <15117d4ab68ac3bc542bc01b571ff07d.NginxMailingListEnglish@forum.nginx.org> <0eb0d369-8f9f-4912-2d91-affb15ad60a6@andreasschulze.de>

Hi Andreas,

Good to hear that this is scaling well for you at this level. With regards
to reloads, you mean a reload rather than a restart, I take it? We'll be
load balanced and building these machines from config and deployment
management systems, so a long reload/restart is not the end of the world:
we can build a patched box and take an old unpatched machine out.

Kind regards,
Richard

On Mon, 2019-02-11 at 20:53 +0100, A. Schulze wrote:

On 11.02.19 at 16:16, rick_pri wrote:
> As such I wanted to put the feelers out to see if anyone else had tried
> to work with large numbers of vhosts and any issues which they may have
> come across.

Hello,

we're running nginx (latest) with ~5k domains + 5k www.domain without
issues. The configuration file is created by a configuration management
system. Currently nginx only serves https and proxies to an Apache at
localhost.

Fun fact: nginx reloads that number of vhosts + certificates faster than
Apache takes simply to handle plain http :-)

Andreas

From rainer at ultra-secure.de Tue Feb 12 09:31:09 2019
From: rainer at ultra-secure.de (rainer at ultra-secure.de)
Date: Tue, 12 Feb 2019 10:31:09 +0100
Subject: I'm about to embark on creating 12000 vhosts
In-Reply-To: <568480d0bcb44db1961479f33d0490f5cf552eba.camel@primarysite.net>

On 2019-02-12 09:44, Richard Paul wrote:
> I've not looked in a while but I think there were some large assumptions
> in openresty that you are running on Linux.

Another problem with SAN certificates is that if you don't control the
domains, you have to make sure they are not moved away. If one hostname
fails to validate, the whole SAN cert is not renewed.

What's your target platform?

From lucas at lucasrolff.com Tue Feb 12 09:32:32 2019
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Tue, 12 Feb 2019 09:32:32 +0000
Subject: I'm about to embark on creating 12000 vhosts
Message-ID: <05CD6497-0AB1-45C3-B52F-C112AE8D1FBC@lucasrolff.com>

In haproxy, you simply specify a path where you have all your
certificates:

  frontend https_frontend
    bind *:443 ssl crt /etc/haproxy/certs/default-cert.pem crt /etc/haproxy/certs alpn h2,http/1.1

This way, haproxy will read all the certs, and when a request comes in it
uses the SNI hostname from the TLS handshake to determine which
certificate to serve.
There was a thread on the haproxy mailing list not long ago about managing
more than 100k certificates per haproxy instance, and they're working on
further optimizations for those kinds of deployments (if it's not already
done... I haven't checked, to be honest).

Best regards,

On Tuesday, 12 February 2019 at 10.04, Richard Paul wrote:
> That's interesting; how do you manage the programming to load the right
> certificate for the domain coming in as the server name?
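On the nginx side, newer releases (1.15.9+) can get close to HAProxy's certificate-directory behaviour by allowing variables in ssl_certificate, at the cost of loading the files at handshake time rather than once at startup. A sketch, assuming certificates are stored under the SNI name (file layout and backend are placeholders):

  server {
      listen 443 ssl;
      server_name _;

      # $ssl_server_name is the SNI host sent by the client; the naming
      # scheme below is an assumption, adjust to your own layout
      ssl_certificate     /etc/nginx/certs/$ssl_server_name.crt;
      ssl_certificate_key /etc/nginx/certs/$ssl_server_name.key;

      location / {
          proxy_pass http://127.0.0.1:8080;  # hypothetical backend
      }
  }

This avoids 12000 server blocks entirely, but per-handshake file loads are measurably slower than statically loaded certificates, so it is a trade-off rather than a free win.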
From Richard at primarysite.net Tue Feb 12 09:40:57 2019
From: Richard at primarysite.net (Richard Paul)
Date: Tue, 12 Feb 2019 09:40:57 +0000
Subject: I'm about to embark on creating 12000 vhosts
In-Reply-To: <73C41379-6883-4FAC-BD2D-36EB3CE3AD5F@me.com>

Hi Peter,

I'm sure that it's great and all, but I've just been to look at the
https://openresty.org/en/installation.html page again, and it's very much
not friendly for configuration management unless you're on a supported
platform with packages available to you. I'm sure we could put together a
poudriere server to do the package building from source/ports for FreeBSD,
but if I can avoid that I will for the time being.

Kind regards,
Richard

On Mon, 2019-02-11 at 14:54 -0500, Peter Booth via nginx wrote:
> +1 to the openresty suggestion
>
> I've found that whenever I want to do something gnarly or perverse with
> nginx, openresty helps me do it in a way that's maintainable and with
> any ugliness minimized.
From Richard at primarysite.net Tue Feb 12 09:43:57 2019
From: Richard at primarysite.net (Richard Paul)
Date: Tue, 12 Feb 2019 09:43:57 +0000
Subject: I'm about to embark on creating 12000 vhosts
Message-ID: <64812d5a0155a679cfd5184f9685a12fa2497852.camel@primarysite.net>

Hi Anoop,

This is great and really valuable information, thank you. I'd heard that
CloudFlare use a variant of Nginx for SSL termination, which was why I was
hopeful that it would be able to manage our use case.

Kind regards,
Richard

On Tue, 2019-02-12 at 07:31 +0530, Anoop Alias wrote:

I maintain an Nginx config generation plugin for a web hosting control
panel, where people routinely put this many domains on a server, and the
things I notice are:

1. Memory consumption by the worker processes goes up as the vhost count
goes up, so we may need to reduce the worker count.

2. As already mentioned, a reload can take a lot of time, and so does
nginx -t.

3. Even startup will take time, as most package maintainers put an
"nginx -t" in ExecStartPre (or the equivalent on non-systemd systems),
which takes a long time on startup.

I have read somewhere that Nginx is not good at handling this many vhost
definitions, so CloudFlare use a dynamic setup (like the one in OpenResty)
on their edge servers for SSL.
From Richard at primarysite.net Tue Feb 12 09:56:35 2019
From: Richard at primarysite.net (Richard Paul)
Date: Tue, 12 Feb 2019 09:56:35 +0000
Subject: I'm about to embark on creating 12000 vhosts
In-Reply-To: <05CD6497-0AB1-45C3-B52F-C112AE8D1FBC@lucasrolff.com>
Message-ID: <691fa4469b1c256c4e007aebe6afc79b59d39bc9.camel@primarysite.net>

Hi Lucas,

Well, that looks great. I've not looked at HAProxy much; I'd not used it
before, other than during a switch-over just prior to Christmas last year,
where rinetd couldn't cope with the incoming traffic load and we had to
cobble together a quick HAProxy layer-4 configuration to redirect traffic
from AWS to GCP.

I'll start digging into this a bit more, as it looks like a better
solution: I can maybe just use LE's webroot plugin without having to
generate and sync Nginx configuration, as well as the certs, over to the
TLS terminator instances.

Kind regards,
Richard

On Tue, 2019-02-12 at 09:32 +0000, Lucas Rolff wrote:
> In haproxy, you simply specify a path where you have all your
> certificates. This way, haproxy will read all the certs, and when a
> request comes in it uses the SNI hostname to determine which certificate
> to serve.
From Richard at primarysite.net Tue Feb 12 10:06:13 2019
From: Richard at primarysite.net (Richard Paul)
Date: Tue, 12 Feb 2019 10:06:13 +0000
Subject: I'm about to embark on creating 12000 vhosts
Message-ID: <8bfa46503ea589189b905c4e01eb89dc557d80f4.camel@primarysite.net>

And having looked at this further: we would have to append the key to the
end of the certificate bundle after it was issued from LE, as an extra
step in the processing, for this to work. It still seems the best way
forward, even if it requires that extra step.

Kind regards,
Richard

On Tue, 2019-02-12 at 09:56 +0000, Richard Paul wrote:
> Well, that looks great. I'll start digging into this a bit more, as it
> looks like a better solution.
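For reference, HAProxy expects the certificate chain and the private key concatenated in a single PEM per domain, so the extra step amounts to something like this after each issue/renew. A sketch using certbot's default live paths (the domain and target directory are placeholders):

  # hypothetical post-renew hook: build the combined PEM that HAProxy wants
  DOMAIN=example.com
  cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem \
      /etc/letsencrypt/live/$DOMAIN/privkey.pem \
      > /etc/haproxy/certs/$DOMAIN.pem

Followed by a `systemctl reload haproxy` (or equivalent) so the new bundle is picked up.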
From nginx-forum at forum.nginx.org Tue Feb 12 10:54:27 2019
From: nginx-forum at forum.nginx.org (joao.pereira)
Date: Tue, 12 Feb 2019 05:54:27 -0500
Subject: STALE responses taking as much as MISS responses
In-Reply-To: <2B5F67FA-9244-424C-8B4B-BCB16817B65B@me.com>
References: <2B5F67FA-9244-424C-8B4B-BCB16817B65B@me.com>
Message-ID: <9db1fa77e16f503b8d856bcb7632cc1b.NginxMailingListEnglish@forum.nginx.org>

Hi Peter and Reinis,

I do have a lot of cache: currently I have ~45 million keys, and this is
the beginning of our tests, which I believe will get close to the 80
million you mention.

Here are some tests I have done. I set up Flask (a Python framework) to
delay a response for 5 seconds, then I make the request using my nginx as
a proxy. The max-age is set to 1 second to force nginx to go to the
backend; this, plus the 5-second delay, lets me see both STALE and
UPDATING responses. What I can see is that the first request after max-age
expiration is a STALE, and it takes much more time than the next request,
which is an UPDATING. See below:

root at ip-127.0.0.1:~# curl -w "%{time_connect}:%{time_appconnect}:%{time_pretransfer}:%{time_redirect}:%{time_starttransfer}:%{time_total}\n" -D - -s -o /dev/null -H "host: flask-instance.com" -H "origin: flask-instance.com" http://localhost:6081/flask-instance.com/flask
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 12 Feb 2019 10:29:36 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 12
Connection: keep-alive
Device: Mobile
CDNOrigin: Mobile
Cache-Control: max-age=1, stale-while-revalidate=60
X-MShield-Cache-Status: STALE

0.004329:0.000000:0.004364:0.000000:0.212526:0.212644

root at ip-127.0.0.1:~# curl -w "%{time_connect}:%{time_appconnect}:%{time_pretransfer}:%{time_redirect}:%{time_starttransfer}:%{time_total}\n" -D - -s -o /dev/null -H "host: flask-instance.com" -H "origin: flask-instance.com" http://localhost:6081/flask-instance.com/flask
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 12 Feb 2019 10:29:38 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 12
Connection: keep-alive
Device: Mobile
CDNOrigin: Mobile
Cache-Control: max-age=1, stale-while-revalidate=60
X-MShield-Cache-Status: UPDATING

0.004289:0.000000:0.004315:0.000000:0.004574:0.004691

The request always returns a '200' with a 5-second delay, which means that
both responses should (in theory) have the same response time.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,282984,283021#msg-283021
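For context, a minimal sketch of the kind of proxy_cache setup this test implies (the zone name, paths, and origin are assumptions; the X-MShield-Cache-Status header above is clearly a renamed $upstream_cache_status). nginx honours the stale-while-revalidate extension of Cache-Control since 1.11.10:

  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:512m inactive=7d;

  server {
      location / {
          proxy_cache edge;
          # serve stale entries while a single request refreshes them
          proxy_cache_use_stale updating error timeout;
          proxy_cache_background_update on;
          add_header X-Cache-Status $upstream_cache_status always;
          proxy_pass http://127.0.0.1:8080;  # hypothetical origin
      }
  }

With proxy_cache_background_update on, the first request after expiry should answer STALE immediately while the refresh happens in the background, which is why the 200ms gap in the timings above points at something other than the upstream fetch.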
From r at roze.lv Tue Feb 12 12:07:35 2019
From: r at roze.lv (Reinis Rozitis)
Date: Tue, 12 Feb 2019 14:07:35 +0200
Subject: STALE responses taking as much as MISS responses
In-Reply-To: <9db1fa77e16f503b8d856bcb7632cc1b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <000701d4c2cb$8a9f43b0$9fddcb10$@roze.lv>

> X-MShield-Cache-Status: STALE
> 0.004329:0.000000:0.004364:0.000000:0.212526:0.212644

Judging by the timings, you are hitting the 200ms tcp_nopush delay.

Try setting:

  tcp_nopush off;

For more explanation you can read up on
https://forum.nginx.org/read.php?2,280434,280462#msg-280462

rr

From jeff.dyke at gmail.com Tue Feb 12 15:37:29 2019
From: jeff.dyke at gmail.com (Jeff Dyke)
Date: Tue, 12 Feb 2019 10:37:29 -0500
Subject: I'm about to embark on creating 12000 vhosts
In-Reply-To: <57de577c3cc7683292061efca6bfd267279ae062.camel@primarysite.net>

Hi Richard,

HAProxy defaults to reading all certs in a directory and matching host
names via SNI. Here is the top of my haproxy config; you can see how I
redirect LE requests to another server, which solely serves up responses
to acme-challenges:

  frontend http
    mode http
    bind 0.0.0.0:80
    # if this is a LE request, send it to a server on this host for renewals
    acl letsencrypt-request path_beg -i /.well-known/acme-challenge/
    redirect scheme https code 301 unless letsencrypt-request
    use_backend letsencrypt-backend if letsencrypt-request

  frontend https
    mode tcp
    bind 0.0.0.0:443 ssl crt /etc/haproxy/certs alpn h2,http/1.1 ecdhe secp384r1
    timeout http-request 10s
    log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts \
    %ac/%fc/%bc/%sc/%rc %sq/%bq SSL_version:%sslv SSL_cypher:%sslc SNI:%[ssl_fc_has_sni]"
    # send all HTTP/2 traffic to a specific backend
    use_backend http2-nodes if { ssl_fc_alpn -i h2 }
    # send HTTP/1.1 and HTTP/1.0 to the default, which doesn't speak HTTP/2
    default_backend http1-nodes

I'm not sure exactly how this would work with GCP, but if you use AWS ELBs
they will give you certs (you have to prove you own the domain); however,
you have to be able to use an ELB, which could change IPs at any time.
Unfortunately that didn't work for us, because a few of our larger
customers whitelist IPs rather than domain names, which is why I have
stayed with HAProxy.

Jeff

On Tue, Feb 12, 2019 at 4:04 AM Richard Paul wrote:
> That's interesting; how do you manage the programming to load the right
> certificate for the domain coming in as the server name?
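For reference, the hash-size knobs Jeff points to live in the http block. A sketch with illustrative values only; the right numbers depend on the name lengths, and the usual approach is to raise them until "nginx -t" stops complaining:

  http {
      # sized for tens of thousands of server_name entries (illustrative)
      server_names_hash_max_size    32768;
      server_names_hash_bucket_size 128;
  }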
From dusty at campbell-web.com Tue Feb 12 17:03:08 2019
From: dusty at campbell-web.com (Dusty Campbell)
Date: Tue, 12 Feb 2019 11:03:08 -0600
Subject: HTTP 1.0

Hello,

Is there a way to force HTTP 1.0 for a location?

I need to proxy a feature that depends on HTTP 1.0, not just between Nginx
and the backend server, but also between the client and Nginx.

Thanks,
Dusty Campbell

From joan.tomas at marfeel.com Tue Feb 12 18:31:28 2019
From: joan.tomas at marfeel.com (Joan Tomàs i Buliart)
Date: Tue, 12 Feb 2019 19:31:28 +0100
Subject: STALE responses taking as much as MISS responses
In-Reply-To: <000701d4c2cb$8a9f43b0$9fddcb10$@roze.lv>
References: <2B5F67FA-9244-424C-8B4B-BCB16817B65B@me.com> <9db1fa77e16f503b8d856bcb7632cc1b.NginxMailingListEnglish@forum.nginx.org> <000701d4c2cb$8a9f43b0$9fddcb10$@roze.lv>

Hi,

After applying tcp_nopush off, the test that we have in place is working
as expected. The problem is that this improvement is not happening in
production.
Our production environment is mainly CDN -> nginx -> origin. We want to
use nginx to control the eviction time of the content (our use case needs
a long stale-while-revalidate time, and the CDN prioritizes fresh content
over stale). Our CDN gives us the latency of our nginx, and after applying
the change we are not able to see any improvement. We have decided to put
an ELB in front of nginx, just to have another way to measure, and we see
the same behaviour.

Could you give us any clue to discover what is happening?

On the other hand, we saw that $request_time, when STALE, is the time
taken to refresh the cache, not the time to return the STALE content.
Could somebody confirm this? Which would be the right metric to measure
the real "latency" from the user's point of view (in our case, the CDN)?

Many thanks for your help!

On Tue, 12 Feb 2019 at 13:07, Reinis Rozitis wrote:
> I see according to the timings you hit the 200ms tcp_nopush delay.
> Try setting tcp_nopush off;

--
Joan Tomàs-Buliart
ES: (34) 93 178 59 50
US: (1) 917-341-2540

From mdounin at mdounin.ru Tue Feb 12 19:37:25 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 12 Feb 2019 22:37:25 +0300
Subject: HTTP 1.0
Message-ID: <20190212193725.GH1877@mdounin.ru>

Hello!

On Tue, Feb 12, 2019 at 11:03:08AM -0600, Dusty Campbell wrote:
> Is there a way to force HTTP 1.0 for a location?
>
> I need to proxy a feature that depends on HTTP 1.0, not just between
> Nginx and the backend server, but also between the client and Nginx.

There is no way to force HTTP/1.0. You can, however, disable various
HTTP/1.1-specific mechanisms, including keepalive and chunked transfer
encoding, see here:

http://nginx.org/r/keepalive_timeout
http://nginx.org/r/chunked_transfer_encoding

And this is what actually happens when nginx talks to an HTTP/1.0 client.

Depending on what you are trying to achieve, some of these options might
help. Note, though, that if "a feature depends on HTTP 1.0", this likely
means it is something that actually breaks the protocol, including the
HTTP/1.0 protocol. It might not work at all, regardless of settings and
protocols used, or it may require various non-standard quirks. You may
want to be more specific about what you are trying to do.

--
Maxim Dounin
http://mdounin.ru/
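A sketch of what those two directives look like together in a location. As said above, this only removes the HTTP/1.1-specific behaviour visible to the client; it does not literally force HTTP/1.0 (path and backend are placeholders):

  location /legacy/ {
      keepalive_timeout 0;               # close the client connection after each response
      chunked_transfer_encoding off;     # never send chunked responses to the client
      proxy_pass http://127.0.0.1:8080;  # hypothetical backend
  }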
From r at roze.lv Tue Feb 12 20:09:34 2019
From: r at roze.lv (Reinis Rozitis)
Date: Tue, 12 Feb 2019 22:09:34 +0200
Subject: STALE responses taking as much as MISS responses
Message-ID: <001601d4c30e$df78c760$9e6a5620$@roze.lv>

> After applying tcp_nopush off, the test that we have in place is working
> as expected. The problem is that this improvement is not happening in
> production.

In the case of CDN -> nginx -> origin, does the latency appear also for
HIT queries, or only for STALE? If you take nginx out and use CDN ->
origin, what latency do you see then? Obviously, if your CDN doesn't cache
anything then all those will be like MISS requests, but maybe you can
identify/measure whether the CDN itself adds noticeable extra time
(depending on the setup, for instance whether there is SSL offloading,
and on which end it happens).

Some CDNs also use nginx as edge servers (for example, CloudFlare used to
have a modified version), so maybe in some configurations the usage of
TCP_CORK is still in effect.

> On the other hand, we saw that $request_time, when STALE, is the time to
> refresh the cache, not the time to return the STALE content.
> Could somebody confirm this? Which could be the metric to measure the
> real "latency" from the user point of view (in our case the CDN)?

$request_time represents the "time elapsed between the first bytes were
read from the client and the log write after the last bytes were sent to
the client", so it shows the time between the request coming from the CDN
edge server and the response being served back to it. If the object is
fetched from the backend / updated synchronously, it will include the time
spent on that.

For a more detailed picture you could also log $upstream_connect_time and
$upstream_response_time, to see how long it takes nginx to get the
response from the backend, and then compare that with the timings you get
on the client (curl) when requesting via the CDN or directly.

rr
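A sketch of a log_format capturing the timings Reinis lists, so STALE, UPDATING, and MISS latencies can be compared per request (the format name and log path are illustrative; it goes in the http block):

  log_format cache_timing '$remote_addr "$request" status=$status '
                          'cache=$upstream_cache_status '
                          'rt=$request_time '
                          'uct=$upstream_connect_time '
                          'urt=$upstream_response_time';

  access_log /var/log/nginx/cache_timing.log cache_timing;

Comparing rt against urt per cache status shows whether the extra time is spent talking to the upstream or inside nginx/the network path to the client.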
From Richard at primarysite.net Wed Feb 13 08:38:01 2019
From: Richard at primarysite.net (Richard Paul)
Date: Wed, 13 Feb 2019 08:38:01 +0000
Subject: I'm about to embark on creating 12000 vhosts

Hi Jeff,

This is pretty much what I'm now looking at doing: some HAProxy servers
behind the external Google LB, with their backend being an internal Google
LB, which then balances across our Varnish caching layer and eventually
the Nginx app servers. Thank you for sharing your config; it'll be a good
base for us to start from.

We moved from AWS for cost/performance reasons, but also because Google
LBs give us a static public-facing IP address. Currently our customers
point the base of their domain at a server which just redirects requests
to the www subdomain, and the www subdomain is pointed at a friendly CNAME
which used to be pointed at an AWS ELB CNAME. It now points at an IP
address, and we can slowly get our customers to update their DNS to point
the root of the domain at the Google LB IP before this work is ready.

Once again, many thanks Jeff, and thanks to everyone else for their
replies.

Kind regards,
Richard

On Tue, 2019-02-12 at 10:37 -0500, Jeff Dyke wrote:
> HAProxy defaults to reading all certs in a directory and matching host
> names via SNI.
Jeff On Mon, Feb 11, 2019 at 1:58 PM Rainer Duffner > wrote: Am 11.02.2019 um 16:16 schrieb rick_pri >: However, our customers, with about 12000 domain names at present have Let?s Encrypt rate limits will likely make these very difficult to obtain and also to renew. If you own the DNS, maybe using Wildcard DNS entries is more practical. Then, HAProxy allows to just drop all the certificates in a directory and let itself figure out the domain-names it has to answer. At least, that?s what my co-worker told me. Also, there?s the fabio LB with similar goal-posts. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schmiedl at web.de Wed Feb 13 15:14:17 2019 From: thomas.schmiedl at web.de (Thomas Schmiedl) Date: Wed, 13 Feb 2019 16:14:17 +0100 Subject: nginx reverse proxy question Message-ID: <1ff4e537-9042-77ae-6624-5839a737212e@web.de> Hello, I use the xupnpd2 mediaserver (https://github.com/clark15b/xupnpd2) on my router to transfer some HLS-streams to my TV. xupnpd2 doesn't support https (the author doesn't want add https support and I'm not a developer). I try to receive HLS-streams from this site: https://www.mall.tv/zive, which only supports https. My idea is to use nginx as reverse proxy in this scenario on the router: xupnpd2 (client) <---> http-traffic <---> nginx reverse proxy <---> https-traffic <---> https://www.mall.tv/zive (server) xupnpd2 should receive the playlist (.m3u8) and the media-chunks (.ts) locally via nginx over http. The websites (eg. https://www.mall.tv/hlavni-nadrazi-praha) contain the playlist-url in the "source"-html-element. The url must be extended by "360/index.m3u8" or "720/index.m3u8", e.g. https://zeus.gjirafa.com/live/N4Ffx9rOlOzyDTfrok7L9SvE8NKDU6tI/t0g0qq720/index.m3u8. Thanks for your help and best regards, Thomas From satcse88 at gmail.com Wed Feb 13 16:56:51 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Thu, 14 Feb 2019 00:56:51 +0800 Subject: Nginx Reverse Proxy Caching Message-ID: Hi All, We have Nginx in front of our Application server. We would like to disable caching for html files. Sample config file: location /abc/ { proxy_pass http://127.0.0.1:8080; } We noticed few html files get stored in Chrome local disk cache and would like to fix this issue. Can anybody help, thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Wed Feb 13 18:02:01 2019 From: peter_booth at me.com (Peter Booth) Date: Wed, 13 Feb 2019 13:02:01 -0500 Subject: Nginx Reverse Proxy Caching In-Reply-To: References: Message-ID: <06F50867-DDDF-4564-97A0-825DBD349D17@me.com> Satish, The browser (client-side) cache isn?t related to the nginx reverse proxy cache. You can tell Chrome to not cache html by adding the following to your location definition: add_header Cache-Control 'no-store'; You can use Developer Tool in Chrome to check that it is working. 
Peter

Sent from my iPhone

> On Feb 13, 2019, at 11:56 AM, Sathish Kumar wrote:
>
> Hi All,
>
> We have Nginx in front of our application server. We would like to disable
> caching for html files.
>
> Sample config file:
>
> location /abc/ {
>     proxy_pass http://127.0.0.1:8080;
> }
>
> We noticed a few html files get stored in Chrome's local disk cache and
> would like to fix this issue. Can anybody help? Thanks.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dusty at campbell-web.com Wed Feb 13 18:12:15 2019
From: dusty at campbell-web.com (Dusty Campbell)
Date: Wed, 13 Feb 2019 12:12:15 -0600
Subject: HTTP 1.0
In-Reply-To:
References:
Message-ID:

Thanks for the help.

> There is no way to force HTTP/1.0. You can, however, disable
> various HTTP/1.1-specific mechanisms, including keepalive and
> chunked transfer encoding, see here:
>
> http://nginx.org/r/keepalive_timeout
> http://nginx.org/r/chunked_transfer_encoding

It appears that these were the directives I needed. Pending more testing, it seems to be working now.

> You may want to be more specific on what you are trying to do.

What I was trying to achieve was RTSP-over-HTTP tunneling:
https://opensource.apple.com/source/QuickTimeStreamingServer/QuickTimeStreamingServer-412.42/Documentation/RTSP_Over_HTTP.pdf

Thanks again,
Dusty Campbell

From mdounin at mdounin.ru Wed Feb 13 19:05:23 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 13 Feb 2019 22:05:23 +0300
Subject: HTTP 1.0
In-Reply-To:
References:
Message-ID: <20190213190522.GK1877@mdounin.ru>

Hello!

On Wed, Feb 13, 2019 at 12:12:15PM -0600, Dusty Campbell wrote:

> Thanks for the help.
>
> > There is no way to force HTTP/1.0. You can, however, disable
> > various HTTP/1.1-specific mechanisms, including keepalive and
> > chunked transfer encoding, see here:
> >
> > http://nginx.org/r/keepalive_timeout
> > http://nginx.org/r/chunked_transfer_encoding
>
> It appears that these were the directives I needed. Pending more
> testing, it seems to be working now.
>
> > You may want to be more specific on what you are trying to do.
>
> What I was trying to achieve was RTSP-over-HTTP tunneling:
> https://opensource.apple.com/source/QuickTimeStreamingServer/QuickTimeStreamingServer-412.42/Documentation/RTSP_Over_HTTP.pdf

Well, I wouldn't expect this to depend on HTTP/1.0 being used, nor on keepalive connections and/or chunked transfer encoding (unless the client is also buggy and announces HTTP/1.1 support without actually implementing HTTP/1.1). But clearly there will be problems with this tunneling, as it goes far beyond what is guaranteed by the HTTP standard.

The most serious problem is the assumption that request and response bodies are streams. They are not. For example, it is not guaranteed that a proxy will start sending a POST request to the upstream server before it receives the full request body (and by default it will not). With nginx, this is certainly not going to work by default, but it may work with proxy_buffering and proxy_request_buffering disabled, see here:

http://nginx.org/r/proxy_buffering
http://nginx.org/r/proxy_request_buffering

Also, the pdf in question suggests that Content-Length in requests is expected to be ignored by proxies. It is not in the modern world, and if the client relies on this, it is not going to work at all.
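For reference, the buffering changes mentioned above would look something like this (a sketch only; the location and backend address are placeholders):

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_buffering off;
        proxy_request_buffering off;
    }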
If the client does rely on this behaviour, the only option I can recommend would be to use a stream proxy instead, see here:

http://nginx.org/en/docs/stream/ngx_stream_core_module.html
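A bare-bones stream proxy for such a tunnel might look like the following (the listen port and backend address are again placeholders); it forwards the TCP connection as-is, so none of the HTTP-level limitations above apply:

    stream {
        server {
            listen 8554;
            proxy_pass 127.0.0.1:8080;
        }
    }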
-- Maxim Dounin
http://mdounin.ru/

From satcse88 at gmail.com Wed Feb 13 23:26:10 2019
From: satcse88 at gmail.com (Sathish Kumar)
Date: Thu, 14 Feb 2019 07:26:10 +0800
Subject: Nginx Reverse Proxy Caching
In-Reply-To: <06F50867-DDDF-4564-97A0-825DBD349D17@me.com>
References: <06F50867-DDDF-4564-97A0-825DBD349D17@me.com>
Message-ID:

Hi Peter,

Thanks. I am looking for the same solution, but want to enable it only for html files.

On Thu, Feb 14, 2019, 2:02 AM Peter Booth via nginx wrote:

> Sathish,
>
> The browser (client-side) cache isn't related to the nginx reverse proxy
> cache. You can tell Chrome not to cache html by adding the following to
> your location definition:
>
> add_header Cache-Control 'no-store';
>
> You can use Developer Tools in Chrome to check that it is working.
>
> Peter

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satcse88 at gmail.com Thu Feb 14 02:00:41 2019
From: satcse88 at gmail.com (Sathish Kumar)
Date: Thu, 14 Feb 2019 10:00:41 +0800
Subject: Nginx Reverse Proxy Caching
In-Reply-To:
References: <06F50867-DDDF-4564-97A0-825DBD349D17@me.com>
Message-ID:

Hi All,

How can I apply this to html files only for the location context /abc/* and not for other context paths?

On Thu, Feb 14, 2019, 7:26 AM Sathish Kumar wrote:

> Hi Peter,
>
> Thanks. I am looking for the same solution, but want to enable it only
> for html files.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Thu Feb 14 14:42:27 2019
From: nginx-forum at forum.nginx.org (mevans336)
Date: Thu, 14 Feb 2019 09:42:27 -0500
Subject: intercept and modify upgrade-insecure-request header?
Message-ID: <7b35fe268456395bb1df6d11256b0c60.NginxMailingListEnglish@forum.nginx.org>

Apparently our web application server is sending an older version of the upgrade-insecure-request header, which causes a brief "page cannot be displayed" in Chrome, but not in Firefox or Safari. We use Nginx as a reverse proxy to our application servers; can I intercept this header and just remove it? Specifically, it looks like I can fix this by just stripping the "HTTPS:1" header that Chrome is sending to the application server.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283059,283059#msg-283059

From r at roze.lv Thu Feb 14 17:00:56 2019
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 14 Feb 2019 19:00:56 +0200
Subject: intercept and modify upgrade-insecure-request header?
In-Reply-To: <7b35fe268456395bb1df6d11256b0c60.NginxMailingListEnglish@forum.nginx.org>
References: <7b35fe268456395bb1df6d11256b0c60.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <002101d4c486$da49d300$8edd7900$@roze.lv>

> We use Nginx as a reverse proxy to our application servers; can I intercept
> this header and just remove it?

Sure, to the proxy_pass block add:

proxy_hide_header upgrade-insecure-request;

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header

rr
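Note that proxy_hide_header affects response headers coming back from the upstream. To also strip the request header that Chrome sends before it reaches the application server, proxy_set_header with an empty value can be used. A rough sketch combining both (the upstream address is a placeholder, and the header is spelled with its standard name):

    location / {
        proxy_pass http://127.0.0.1:8080;
        # hide the header the application server sends in responses
        proxy_hide_header Upgrade-Insecure-Requests;
        # drop the header Chrome sends in requests
        proxy_set_header Upgrade-Insecure-Requests "";
    }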
From satcse88 at gmail.com Fri Feb 15 12:25:23 2019
From: satcse88 at gmail.com (Sathish Kumar)
Date: Fri, 15 Feb 2019 20:25:23 +0800
Subject: Nginx Reverse Proxy Caching
In-Reply-To:
References: <06F50867-DDDF-4564-97A0-825DBD349D17@me.com>
Message-ID:

Hi All,

Is it possible to enable gzip and etag to solve the caching problem?

On Thu, Feb 14, 2019, 10:00 AM Sathish Kumar wrote:

> Hi All,
>
> How can I apply this to html files only for the location context /abc/*
> and not for other context paths?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Fri Feb 15 17:54:37 2019
From: nginx-forum at forum.nginx.org (joao.pereira)
Date: Fri, 15 Feb 2019 12:54:37 -0500
Subject: STALE responses taking as much as MISS responses
In-Reply-To: <001601d4c30e$df78c760$9e6a5620$@roze.lv>
References: <001601d4c30e$df78c760$9e6a5620$@roze.lv>
Message-ID:

We are trying to measure times using the variables referred to in the following article and in the emails above:
https://www.nginx.com/blog/using-nginx-logging-for-application-performance-monitoring/#var_request_time

But it came to our attention that those are not accurate. Our log file:

log_format main '"$request_time" - "$upstream_response_time" - "$upstream_connect_time" - "$upstream_header_time"';

STALE entries only show $request_time but no $upstream_* values. Hits are taking 0.000 to 0.001, which makes no sense, because we are doing the request from the EU to the US. The MISS entries seem OK, as they show both $request_time and $upstream_*. Is there any way to measure these times properly?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282984,283065#msg-283065

From thomas.schmiedl at web.de Sat Feb 16 15:09:26 2019
From: thomas.schmiedl at web.de (Thomas Schmiedl)
Date: Sat, 16 Feb 2019 16:09:26 +0100
Subject: nginx reverse proxy to https streaming backend
Message-ID: <6b28b29f-fdc5-02e7-c175-2220491de0ec@web.de>

Hello,

I use the xupnpd2 mediaserver (https://github.com/clark15b/xupnpd2) on my router to display some hls-streams on my TV. xupnpd2 doesn't support https, and the author doesn't want to add https support. My idea is to use nginx in this scenario on the router:

xupnpd2 (client) <---> http-traffic <---> nginx <---> https-traffic <---> https://www.mall.tv/zive (server)

xupnpd2 should receive the playlist (.m3u8) and the media-chunks (.ts) locally via nginx over http. When using the vlc-player with xupnpd2 and via nginx, the displayed stream (from this site: https://www.mall.tv/planespotting) is 4 hours behind the actual time. I hope someone can help me.

Best regards,
Thomas

From mdounin at mdounin.ru Mon Feb 18 12:45:15 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 18 Feb 2019 15:45:15 +0300
Subject: STALE responses taking as much as MISS responses
In-Reply-To:
References: <001601d4c30e$df78c760$9e6a5620$@roze.lv>
Message-ID: <20190218124515.GR1877@mdounin.ru>

Hello!

On Fri, Feb 15, 2019 at 12:54:37PM -0500, joao.pereira wrote:

> We are trying to measure times using the variables referred to in the
> following article and in the emails above:
> https://www.nginx.com/blog/using-nginx-logging-for-application-performance-monitoring/#var_request_time
>
> But it came to our attention that those are not accurate. Our log file:
>
> log_format main '"$request_time" - "$upstream_response_time" -
> "$upstream_connect_time" - "$upstream_header_time"';
>
> STALE entries only show $request_time but no $upstream_* values. Hits are
> taking 0.000 to 0.001, which makes no sense, because we are doing the
> request from the EU to the US. The MISS entries seem OK, as they show
> both $request_time and $upstream_*. Is there any way to measure these
> times properly?

The "HIT" status means that a response was returned from cache. Since there is no request to the upstream server, no upstream times are expected to be available.

The $request_time variable counts time between reading the first byte of a request from the socket and writing the last byte of the response to the socket. As long as socket buffers are large enough, this can happen in one event loop iteration, so times like "0.000" are pretty normal for small responses.
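To make the cache status visible next to the timing variables, $upstream_cache_status can be added to the log format; a sketch (the field layout is arbitrary):

    log_format timing '$upstream_cache_status "$request_time" '
                      '"$upstream_response_time" "$upstream_connect_time" '
                      '"$upstream_header_time"';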
The "STALE" status means that the response was returned from cache, as per "proxy_cache_use_stale". And this implies that there will be no upstream times, much like with "HIT". Note well that if you are using "proxy_cache_background_update", which uses subrequests to update cache, the main request will have the "STALE" status, and there will be no upstream times as explained above, but $request_time will still include subrequest execution time. If you want to see subrequest details in log, including upstream times, consider the "log_subrequest" configuration directive (http://nginx.org/r/log_subrequest). -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed Feb 20 10:33:17 2019 From: nginx-forum at forum.nginx.org (akashverma) Date: Wed, 20 Feb 2019 05:33:17 -0500 Subject: How can I remove backslash when log format use escape=json In-Reply-To: <51c0caa893cb26844bf5fcc417985f15.NginxMailingListEnglish@forum.nginx.org> References: <51c0caa893cb26844bf5fcc417985f15.NginxMailingListEnglish@forum.nginx.org> Message-ID: <867ef8bbba0b65f7a8bf1cc0d6f357c7.NginxMailingListEnglish@forum.nginx.org> you can try escape=none Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281634,283081#msg-283081 From thomas.hartmann at desy.de Wed Feb 20 11:37:23 2019 From: thomas.hartmann at desy.de (Thomas Hartmann) Date: Wed, 20 Feb 2019 12:37:23 +0100 Subject: how to setup as caching reverse proxy for/rewritign global URLs Message-ID: <847b4f68-0ea5-445a-e269-ba7cce357c55@desy.de> Hi all, I would like to setup Nginx as a caching reverse proxy but with explicit requests in the URL and rewriting all subsequent requests Don;t know, if it really counts as reverse proxy and if it is understandable, so an example ;) For an original URL like https://org.url.baz/user/repo/foo I would like to be able to cache all request through nginx running at my.domain.foo but with an explicit "cache request" like wget http://my.domain.foo/cache/http://org.url.baz/user/repo/foo and rewrite all subsequent request and cache them. So, I am looking for something similar to Internet Archive's memento proxy https://web.archive.org/save/https://mailman.nginx.org/pipermail/nginx/2019-February/thread.html Since my idea is no 'true' reverse proxy, the example [1] needs probably a bit of extension and I am not sure, how to do the rewrites. So far my attempts [2] were not really successful - but then I have not much experiences in that direction and would be grateful, if someone has an idea for me? Cheers and thanks, Thomas [1] https://www.nginx.com/resources/wiki/start/topics/examples/reverseproxycachingexample/ [2] http { proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=1d max_size=10g; server { # server_name *; location /cache { rewrite ^(.*)$ /VirtualHostBase/cache$1 break; proxy_pass http://127.0.0.1:8080; proxy_cache_revalidate on; proxy_buffering on; proxy_cache STATIC; proxy_ignore_headers Cache-Control; proxy_cache_valid any 1d; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_set_header X-Forwarded-Port 443; proxy_set_header Host $host; } } -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5334 bytes
Desc: S/MIME Cryptographic Signature
URL:

From nginx-forum at forum.nginx.org Wed Feb 20 16:01:45 2019
From: nginx-forum at forum.nginx.org (emil.vikstrom)
Date: Wed, 20 Feb 2019 11:01:45 -0500
Subject: Load balancing with least_conn strategy does round-robin
Message-ID:

Hi.

I want to use nginx as a load balancer for tcp streams, and because I want to share the load evenly over all machines, I thought that least_conn would be the right strategy to use in this case. But I have stumbled upon a problem I don't really understand.

In my lab environment I have three machines running server software.

I start my testing by establishing 12 connections to nginx and can see that they are spread evenly across all hosts: 4 connections per host. I then stop one of the machines; the client re-establishes the lost connections and they are spread evenly across the remaining hosts. Now we have 6 connections on each of the 2 hosts. I then start up the stopped machine again and proceed to close 2 connections each on the servers that have been up all this time. The client tries to set up new connections, and this is where I notice that the load is not being shared equally anymore. The results I get are host1: 6 connections, host2: 5 connections, host3: 1 connection.

As far as I can see, it seems like nginx uses round-robin to balance the connections instead of setting up connections to the host with the least connections. Am I misinterpreting what least connections means?

From the documentation:

Syntax: least_conn;
Default: —
Context: upstream

Specifies that a group should use a load balancing method where a connection is passed to the server with the least number of active connections, taking into account weights of servers. If there are several such servers, they are tried in turn using a weighted round-robin balancing method.

How I interpret this is that it will only use weighted round-robin in the case where there are 2 or more servers with the same number of connections that satisfy the condition of having the least number of connections, which should not be the case here, where host3 should be the one with the least number of connections.

I had the hypothesis that the shared memory zone I used in the upstream might be the cause of the problem, but removing it only made the results even more weird. After removing the zone, the initial step of setting up 12 connections got me the following results: host1: 6 connections, host2: 5 connections, host3: 1 connection.

I suspect that it is the core config that might be faulty, but tweaking the parameters doesn't seem to get me further in solving this.

My configuration:

user nginx;
worker_processes 16;
error_log /opt/nginx/logs/errors.log;
worker_rlimit_nofile 10000;

events {
    worker_connections 2048;
    worker_aio_requests 8;
}

stream {
    upstream stream1 {
        least_conn;
        zone stream1 128k;
        server host3:19953;
        server host2:19953;
        server host1:19953;
    }

    server {
        listen 9953;
        proxy_pass stream1;
    }

    access_log off;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283093,283093#msg-283093

From mdounin at mdounin.ru Wed Feb 20 16:18:05 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 20 Feb 2019 19:18:05 +0300
Subject: Load balancing with least_conn strategy does round-robin
In-Reply-To:
References:
Message-ID: <20190220161805.GY1877@mdounin.ru>

Hello!
On Wed, Feb 20, 2019 at 11:01:45AM -0500, emil.vikstrom wrote:

> I want to use nginx as a load balancer for tcp streams, and because I
> want to share the load evenly over all machines, I thought that
> least_conn would be the right strategy to use in this case. But I have
> stumbled upon a problem I don't really understand.
>
> In my lab environment I have three machines running server software.
>
> I start my testing by establishing 12 connections to nginx and can see
> that they are spread evenly across all hosts: 4 connections per host.
> I then stop one of the machines; the client re-establishes the lost
> connections and they are spread evenly across the remaining hosts. Now
> we have 6 connections on each of the 2 hosts.
> I then start up the stopped machine again and proceed to close 2
> connections each on the servers that have been up all this time. The
> client tries to set up new connections, and this is where I notice that
> the load is not being shared equally anymore.
> The results I get are host1: 6 connections, host2: 5 connections,
> host3: 1 connection.
>
> As far as I can see, it seems like nginx uses round-robin to balance the
> connections instead of setting up connections to the host with the
> least connections. Am I misinterpreting what least connections means?

The problem is that host3 is considered to be failed as per max_fails[1], so it only gets one "test" connection once fail_timeout expires. The host won't be considered fully operational till this connection is closed without an error.

[1] http://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#max_fails

-- Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Thu Feb 21 00:57:57 2019
From: nginx-forum at forum.nginx.org (joncard)
Date: Wed, 20 Feb 2019 19:57:57 -0500
Subject: Sporadic long response times with upstream server
Message-ID:

This is similar to a previous question, but my log data shows the opposite problem. I am seeing rare requests that take perhaps 3s or more, while typical response times are 100ms or less. This is the log entry for one of the problematic responses:

LOAD_BALANCER_IP - - [20/Feb/2019:13:36:12 +0000] "POST /DevicePost HTTP/1.1" 200 16 "-" "-" "client-ip-redacted" 3.052 0.002 .

Here is the log format:

log_format timed_main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      '$request_time $upstream_response_time $pipe';

The 0.002 matches what the Node.js logs are reporting from the upstream server, and 3.052 matches the response time the Elastic Load Balancer records. This was set up by AWS Elastic Beanstalk as the (mostly) default configuration for a Node.js application. The only changes were for this logging entry and manually setting the Content-Type header (the client is an embedded device whose HTTP library doesn't include this header, which creates problems for Node.js Express).

I think this indicates a delay in Node.js, but it is sporadic and I cannot tell if it is some kind of garbage collection, a caching problem (there should be no caching), or something else. Any help would be appreciated.

(I'm sorry I don't know how to format the quotes. I can't find any instructions on what the admin called "quoting conventions typically used on mailing lists" in the post "Please Read Before Posting in this Forum".)
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283099,283099#msg-283099

From nginx-forum at forum.nginx.org Thu Feb 21 07:51:04 2019
From: nginx-forum at forum.nginx.org (joncard)
Date: Thu, 21 Feb 2019 02:51:04 -0500
Subject: Sporadic long response times with upstream server
In-Reply-To:
References:
Message-ID:

That's meant to read, "I think this indicates a delay in Nginx," which was the whole point. Sorry.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283099,283101#msg-283101

From peter_booth at me.com Thu Feb 21 08:51:35 2019
From: peter_booth at me.com (Peter Booth)
Date: Thu, 21 Feb 2019 12:51:35 +0400
Subject: Sporadic long response times with upstream server
In-Reply-To:
References:
Message-ID:

Jon,

You need to find out what is "true". From the perspective of nginx, this POST request took 3.02 secs, but where was the time actually spent? Do you have root access on both your nginx host and the upstream host that is behind your elastic load balancer? If so, you can run a filtered tcpdump on both to see what is occurring at the tcp level as you drive traffic through your web application. Then try to find out "what's different about the 3 second scenario and the < 100ms scenario?" and "are there any persistent connections?"

Is it this issue?
https://labs.ripe.net/Members/gih/the-curious-case-of-the-crooked-tcp-handshake

You can adjust net.inet.tcp.finwait2_timeout and similar settings and see if that changes the length of your three second effect to something else.

Hope this helps,

Peter

> On 21 Feb 2019, at 11:51 AM, joncard wrote:
>
> That's meant to read, "I think this indicates a delay in Nginx," which was
> the whole point. Sorry.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From iippolitov at nginx.com Thu Feb 21 10:05:01 2019
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Thu, 21 Feb 2019 13:05:01 +0300
Subject: Sporadic long response times with upstream server
In-Reply-To:
References:
Message-ID:

Assume you have a really slow client. Nginx will get the upstream response in milliseconds and will start feeding data to the client. In 3 seconds nginx completes the transfer and issues a log entry, and you see what you see.

If this issue involves a single client, it is most likely a client issue (possibly even a subnet issue). If suddenly a whole bunch of different requests start getting long request times, check your network load and connectivity.

On 21.02.2019 3:57, joncard wrote:
> This is similar to a previous question, but my log data shows the opposite
> problem. I am seeing rare requests that take perhaps 3s or more, while
> typical response times are 100ms or less. This is the log entry for one of
> the problematic responses:
>
> LOAD_BALANCER_IP - - [20/Feb/2019:13:36:12 +0000] "POST /DevicePost
> HTTP/1.1" 200 16 "-" "-" "client-ip-redacted" 3.052 0.002 .
>
> Here is the log format:
>
> log_format timed_main '$remote_addr - $remote_user [$time_local] "$request" '
>                       '$status $body_bytes_sent "$http_referer" '
>                       '"$http_user_agent" "$http_x_forwarded_for" '
>                       '$request_time $upstream_response_time $pipe';
>
> The 0.002 matches what the Node.js logs are reporting from the upstream
> server, and 3.052 matches the response time the Elastic Load Balancer
> records.
> This was set up by AWS Elastic Beanstalk as the (mostly) default
> configuration for a Node.js application. The only changes were for this
> logging entry and manually setting the Content-Type header (the client is
> an embedded device whose HTTP library doesn't include this header, which
> creates problems for Node.js Express).
>
> I think this indicates a delay in Node.js, but it is sporadic and I cannot
> tell if it is some kind of garbage collection, a caching problem (there
> should be no caching), or something else. Any help would be appreciated.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283099,283099#msg-283099

From matthias_mueller at tu-dresden.de Thu Feb 21 12:50:22 2019
From: matthias_mueller at tu-dresden.de (Matthias Müller)
Date: Thu, 21 Feb 2019 13:50:22 +0100
Subject: Fully transparent gzip/deflate compression in a reverse proxy setup
Message-ID:

When running NGINX as a reverse proxy (e.g. in front of an application server) I know how to switch on gzip/deflate from the documentation. What I am looking for is *transparent* compression by NGINX, i.e. the proxied application server should be unaware that the original client request was asking for a compressed response [1]. That way I want to ensure that compression is always performed by NGINX rather than by the proxied application server or a microservice framework. What I expect is that NGINX applies compression more efficiently than the Java backend.

So I have 2 questions:

(1) Is my assumption correct that the NGINX reverse proxy compresses more efficiently (in terms of speed and CPU load) than the proxied Java-based backend service?

(2) How do I enable transparent compression, i.e. mask [1] from the proxied request so that the application server never attempts response compression and always lets NGINX perform that task?

Thanks,
Matthias

[1]: Accept-Encoding: gzip, deflate

From nginx-forum at forum.nginx.org Thu Feb 21 16:53:15 2019
From: nginx-forum at forum.nginx.org (exadra)
Date: Thu, 21 Feb 2019 11:53:15 -0500
Subject: Nginx - cannot read icons in PHP
Message-ID: <2b419f40799d463e6ca3239901b64645.NginxMailingListEnglish@forum.nginx.org>

Hello all,

I have a server using Debian 9.3.
I have migrated from Lighttpd to nginx to be able to use its reverse proxy capabilities with OpenHAB. My default file is:

server {
    listen 80;
    server_name 192.168.1.246;
    error_log /etc/nginx/error.log;
    root /var/www/html;
    index index.html index.htm index.php phpliteadmin.php;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        # also check this next line in fpm .d
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;
    }

    location / {
        proxy_pass http://localhost:8080/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    satisfy any;
    allow 192.168.1.1/24;
    allow 127.0.0.1;
    deny all;
}

auth_basic "Username and Password Required";
auth_basic_user_file /etc/nginx/.htpasswd;

I also did:

ln -s /usr/share/phpmyadmin/ /var/www/html/phpmyadmin

Everything works, including the PHP programs, but I have no icons in the PHP programs.

Thanks in advance

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283114,283114#msg-283114

From nginx-forum at forum.nginx.org Thu Feb 21 18:14:58 2019
From: nginx-forum at forum.nginx.org (rafaelm)
Date: Thu, 21 Feb 2019 13:14:58 -0500
Subject: Trouble with stream directive
In-Reply-To: <515896ab55846e140c67630b066abe87.NginxMailingListEnglish@forum.nginx.org>
References: <515896ab55846e140c67630b066abe87.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <7500b57b1e1bd73485e4a2707e965ac0.NginxMailingListEnglish@forum.nginx.org>

Hi, was there any solution for this issue?

Thanks a lot

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,259748,283116#msg-283116

From arekm at maven.pl Thu Feb 21 21:15:18 2019
From: arekm at maven.pl (Arkadiusz Miśkiewicz)
Date: Thu, 21 Feb 2019 22:15:18 +0100
Subject: Fully transparent gzip/deflate compression in a reverse proxy setup
In-Reply-To:
References:
Message-ID:

On 21/02/2019 13:50, Matthias Müller wrote:

> (2) How do I enable transparent compression, i.e. mask [1] from the
> proxied request so that the application server never attempts response
> compression and always lets NGINX perform that task?

Try:

proxy_set_header Accept-Encoding "";

-- Arkadiusz Miśkiewicz, arekm / ( maven.pl | pld-linux.org )
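Spelled out, the combination would look roughly like this (untested; the upstream address is a placeholder, and text/html is compressed by default so it does not need to be listed in gzip_types):

    location / {
        proxy_pass http://127.0.0.1:8080;
        # hide the client's Accept-Encoding so the backend always answers uncompressed
        proxy_set_header Accept-Encoding "";
        # let nginx compress the response itself
        gzip on;
        gzip_types text/css application/json;
    }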
From francis at daoine.org Fri Feb 22 13:15:02 2019
From: francis at daoine.org (Francis Daly)
Date: Fri, 22 Feb 2019 13:15:02 +0000
Subject: Nginx - cannot read icons in PHP
In-Reply-To: <2b419f40799d463e6ca3239901b64645.NginxMailingListEnglish@forum.nginx.org>
References: <2b419f40799d463e6ca3239901b64645.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190222131502.a63wwuhuwu32k4s3@daoine.org>

On Thu, Feb 21, 2019 at 11:53:15AM -0500, exadra wrote:

Hi there,

> root /var/www/html;
> index index.html index.htm index.php phpliteadmin.php;
>
> location ~ \.php$ {
>     fastcgi_pass unix:/run/php/php7.0-fpm.sock;
>
> location / {
>     proxy_pass http://localhost:8080/;
>
> Everything works, including the PHP programs, but I have no icons in the
> PHP programs.

What is an icon? As in: in the html source, what is the url (img src=) associated with an icon that you do not see?

If it corresponds to your nginx server -- what does the nginx access_log say about the matching request?

If it does not correspond to your nginx server -- there is probably another thing that needs fixing first.

Good luck with it,

f

-- Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Fri Feb 22 21:08:14 2019
From: nginx-forum at forum.nginx.org (joncard)
Date: Fri, 22 Feb 2019 16:08:14 -0500
Subject: Sporadic long response times with upstream server
In-Reply-To:
References:
Message-ID: <5b517cf11e3e4f54b0efc03707ea4977.NginxMailingListEnglish@forum.nginx.org>

Thank you for your response, and for your patience with my delays. I have set up a tcpdump log to see if this may be the problem, but it'll have to wait until the next time the glitch happens to get a reading. I have admin access to all servers and the host box. I had added timing logs consistent with https://www.lunchbadger.com/blog/tracking-the-performance-of-express-js-routes-and-middleware to the Node.js scripts. So the timeline for the request seems to be:

2019-02-20T13:36:09.276959Z - Timestamp in the load balancer access log. (Presumably this timestamp is on receipt of the request, even though it logs the timing of the response. The docs here (https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html#access-log-entry-format) seem to imply it is the end of the response, but this doesn't make much sense with the values provided.)

[13:36:12.328] [13:36:12.329] [13:36:12.330] [13:36:12.359] - Timestamps in the Node.js logs. There are several log entries during the fulfillment of the request, written live, so there is no confusion about whether the timestamp reflects the receipt of the request or the end of the response. (For weird reasons I don't think are relevant, the third one is when the response is sent and the fourth one is when additional processing for the request is finished after the response is sent. The response back to Nginx is about 2ms; the additional 28 or 29 ms are not relevant to Nginx.)

20/Feb/2019:13:36:12 +0000 - Timestamp in Nginx describing the 3.052 second fulfillment time. It does not have millisecond resolution, but it is consistent with the 3 second reported duration, the timestamp from the load balancer, and the timing information from Node.js.

The conclusion I draw is that the delay is somewhere between the load balancer receiving the request and Nginx dispatching the request to Node.js, and that Nginx is returning the response from Node.js immediately. I do not have an explanation for why the timestamp on the load balancer seems to be for the time of receipt of the request and not for the time of finished fulfillment.

All clients are actually the same kind of device. This application is being hosted by the manufacturer of an RTOS embedded system, in support of features for clients running "in the wild". This endpoint is connected to every 15 minutes by clients running the new release of their OS, and the same client connected with no problems at 2019-02-20T13:21:08.136315Z, 15 minutes before. That is also why a single response of 3 seconds is being treated as a big deal, and why we have done so much to ensure fast response times. The feature can be deactivated, of course, but if it is running, fast response is very important.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283099,283125#msg-283125

From nginx-forum at forum.nginx.org Sat Feb 23 09:15:53 2019
From: nginx-forum at forum.nginx.org (HasanAtizaz)
Date: Sat, 23 Feb 2019 04:15:53 -0500
Subject: Nginx Proxy Buffer
Message-ID:

I am having some difficulty in understanding the following parameters for nginx. Scenario (nginx is used for serving static content):

proxy_buffer_size 4k
proxy_buffers 16 32k
proxy_busy_buffers_size 64k
proxy_buffering off

I would like to know: if I raise proxy_buffer_size to, let's say, 128 and proxy_buffers to 4 256, does it affect nginx performance in any way? And if proxy_buffering is set to off, does the proxy_buffers value have any effect on nginx at all?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283126,283126#msg-283126

From cyflhn at 163.com Mon Feb 25 07:45:27 2019
From: cyflhn at 163.com (yf chu)
Date: Mon, 25 Feb 2019 15:45:27 +0800 (CST)
Subject: How to control keepalive connections for upstream before the version of 1.15.3
Message-ID: <45f7d5d4.f49c.169239da40c.Coremail.cyflhn@163.com>

The Nginx documentation says that there are two upstream-related directives introduced in version 1.15.3: "keepalive_requests" and "keepalive_timeout". But the directive "keepalive" was already introduced in version 1.1.4. So I want to know how Nginx handled the keepalive connections for upstreams before version 1.15.3 was released. When would the keepalive connections for an upstream be closed?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Mon Feb 25 12:22:56 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 25 Feb 2019 15:22:56 +0300
Subject: How to control keepalive connections for upstream before the version of 1.15.3
In-Reply-To: <45f7d5d4.f49c.169239da40c.Coremail.cyflhn@163.com>
References: <45f7d5d4.f49c.169239da40c.Coremail.cyflhn@163.com>
Message-ID: <20190225122256.GH1877@mdounin.ru>

Hello!

On Mon, Feb 25, 2019 at 03:45:27PM +0800, yf chu wrote:

> The Nginx documentation says that there are two upstream-related
> directives introduced in version 1.15.3: "keepalive_requests" and
> "keepalive_timeout". But the directive "keepalive" was already
> introduced in version 1.1.4. So I want to know how Nginx handled
> the keepalive connections for upstreams before version 1.15.3 was
> released. When would the keepalive connections for an upstream be
> closed?

Before the introduction of the "keepalive_requests" and "keepalive_timeout" directives in 1.15.3, upstream connections were simply kept open by nginx, regardless of the number of requests made in these connections or the time these connections were idle. Connections were closed when the upstream server decided to close them, or when a connection was evicted from the cache by other connections.

-- Maxim Dounin
http://mdounin.ru/

From cyflhn at 163.com Mon Feb 25 13:46:56 2019
From: cyflhn at 163.com (yf chu)
Date: Mon, 25 Feb 2019 21:46:56 +0800 (CST)
Subject: How to control keepalive connections for upstream before the version of 1.15.3
In-Reply-To: <20190225122256.GH1877@mdounin.ru>
References: <45f7d5d4.f49c.169239da40c.Coremail.cyflhn@163.com> <20190225122256.GH1877@mdounin.ru>
Message-ID: <5867aefc.15fe1.16924e894bc.Coremail.cyflhn@163.com>

But if a connection to an upstream is dead, or there are some other network problems with this connection, how does Nginx handle it? Will the HTTP requests on this connection be affected?
For example, is it possible that the HTTP requests on this connection hang until a timeout?

At 2019-02-25 20:22:56, "Maxim Dounin" wrote:

>Hello!
>
>On Mon, Feb 25, 2019 at 03:45:27PM +0800, yf chu wrote:
>
>> The Nginx documentation says that there are two upstream-related
>> directives introduced in version 1.15.3: "keepalive_requests" and
>> "keepalive_timeout". But the directive "keepalive" was already
>> introduced in version 1.1.4. So I want to know how Nginx handled
>> the keepalive connections for upstreams before version 1.15.3 was
>> released. When would the keepalive connections for an upstream be
>> closed?
>
>Before the introduction of the "keepalive_requests" and
>"keepalive_timeout" directives in 1.15.3, upstream connections
>were simply kept open by nginx, regardless of the number of
>requests made in these connections or the time these connections
>were idle. Connections were closed when the upstream server
>decided to close them, or when a connection was evicted from the
>cache by other connections.
>
>-- Maxim Dounin
>http://mdounin.ru/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Mon Feb 25 14:43:14 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 25 Feb 2019 17:43:14 +0300
Subject: How to control keepalive connections for upstream before the version of 1.15.3
In-Reply-To: <5867aefc.15fe1.16924e894bc.Coremail.cyflhn@163.com>
References: <45f7d5d4.f49c.169239da40c.Coremail.cyflhn@163.com> <20190225122256.GH1877@mdounin.ru> <5867aefc.15fe1.16924e894bc.Coremail.cyflhn@163.com>
Message-ID: <20190225144314.GM1877@mdounin.ru>

Hello!

On Mon, Feb 25, 2019 at 09:46:56PM +0800, yf chu wrote:

> But if a connection to an upstream is dead, or there are some other
> network problems with this connection, how does Nginx handle it?
> Will the HTTP requests on this connection be affected? For
> example, is it possible that the HTTP requests on this connection
> hang until a timeout?

If a connection is silently dead, nginx will have to wait till a relevant timeout expires. If a network error occurs, nginx will be able to detect this and will act accordingly.

If a network error or timeout occurs when re-using a keepalive connection, nginx will retry the request as per proxy_next_upstream (and will allow an additional retry attempt to make sure even requests to a single upstream server are retried). Note this doesn't really depend on using keepalive connections, nor on the keepalive_requests and keepalive_timeout directives. Though some network problems may become more obvious when using keepalive connections. In particular, a stateful firewall between nginx and a backend can be a problem if states are dropped after some inactivity timeout, and using keepalive_timeout may help to mitigate such problems (though using proxy_socket_keepalive or removing the firewall might be a better way to go).

The main goal of the keepalive_timeout directive is to avoid the race between the closing of a connection by the upstream server and the use of this connection for another request, most importantly in the case of non-idempotent requests, which cannot be retried. The main goal of the keepalive_requests directive is to make sure connections are closed periodically and connection-specific memory allocations are freed.
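As an illustration, a typical upstream keepalive setup using these directives might look like the following (all values are examples only; proxy_http_version 1.1 and an empty Connection header are needed for upstream keepalive to work at all):

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 16;
        keepalive_requests 1000;
        keepalive_timeout 60s;
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }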
-- Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Mon Feb 25 14:55:40 2019
From: nginx-forum at forum.nginx.org (reecis)
Date: Mon, 25 Feb 2019 09:55:40 -0500
Subject: version upgrade
Message-ID: <9fe87287e5648195dc3baa29bbd3e384.NginxMailingListEnglish@forum.nginx.org>

I use nginx 1.14.0 now and want to upgrade to the latest stable version. Could you advise how to do that? Do I need to remove the existing version and install the new version on the server, or just execute the upgrade patch? Is there any suggestion for doing the upgrade?

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283156,283156#msg-283156

From nginx-forum at forum.nginx.org Mon Feb 25 19:28:31 2019
From: nginx-forum at forum.nginx.org (fmarch)
Date: Mon, 25 Feb 2019 14:28:31 -0500
Subject: SignatureDoesNotMatch - S3 upload with KMS encryption fails through Nginx proxy end point
In-Reply-To:
References:
Message-ID: <3af3c61b538853ebccbe25aae0b9a0ae.NginxMailingListEnglish@forum.nginx.org>

I am having the same issue. Did you ever figure this out?

Thanks,
Frank

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282707,283161#msg-283161

From 959895043 at qq.com Tue Feb 26 06:24:06 2019
From: 959895043 at qq.com (=?gb18030?B?tq3N8b79?=)
Date: Tue, 26 Feb 2019 14:24:06 +0800
Subject: njs question
Message-ID:

Hello! I would like to ask some questions about the development of njs.

First: when will njs improve its coverage of the ES6 standard?

Second: can njs be made similar to Node.js, i.e. able to import third-party js files through "require", and thus realize the transformation from js to AST?

Third: we want to add js-to-AST and AST-to-js functions in njs. Do you have any good plans and suggestions?

Thank you very much!

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue Feb 26 11:46:02 2019
From: nginx-forum at forum.nginx.org (prasad.walke@freshgravity.com)
Date: Tue, 26 Feb 2019 06:46:02 -0500
Subject: SMTP Forward Nginx Proxy
Message-ID: <6612c899a613c47ded269b299c9fcbaa.NginxMailingListEnglish@forum.nginx.org>

I would like to set up an nginx configuration which accepts SMTP connections and then proxies them to another IP address (the third-party SMTP service), so that the requests to the mail server always appear to the third-party SMTP service as if they came from the same server.

Is it possible to solve this with the nginx SMTP proxy?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283168,283168#msg-283168

From igor at sysoev.ru Tue Feb 26 11:52:27 2019
From: igor at sysoev.ru (Igor Sysoev)
Date: Tue, 26 Feb 2019 14:52:27 +0300
Subject: SMTP Forward Nginx Proxy
In-Reply-To: <6612c899a613c47ded269b299c9fcbaa.NginxMailingListEnglish@forum.nginx.org>
References: <6612c899a613c47ded269b299c9fcbaa.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <772D30AE-0704-4344-9E95-2377919570F1@sysoev.ru>

> On 26 Feb 2019, at 14:46, prasad.walke at freshgravity.com wrote:
>
> I would like to set up an nginx configuration which accepts SMTP
> connections and then proxies them to another IP address (the third-party
> SMTP service), so that the requests to the mail server always appear to
> the third-party SMTP service as if they came from the same server.
>
> Is it possible to solve this with the nginx SMTP proxy?

The nginx SMTP proxy is intended to be used for authenticated SMTP only.
-- Igor Sysoev
http://nginx.com

From nginx-forum at forum.nginx.org Tue Feb 26 11:54:36 2019
From: nginx-forum at forum.nginx.org (ashish.thorat@freshgravity.com)
Date: Tue, 26 Feb 2019 06:54:36 -0500
Subject: Proxy to a local SMTP server via NGINX
Message-ID:

I have a requirement to route my SMTP traffic to a local SMTP server via NGINX on a public-facing server. I am also using NGINX to route HTTP requests on the same server. Can someone provide some insights on how this can be achieved?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283170,283170#msg-283170

From maxim at nginx.com Tue Feb 26 12:13:23 2019
From: maxim at nginx.com (Maxim Konovalov)
Date: Tue, 26 Feb 2019 15:13:23 +0300
Subject: SMTP Forward Nginx Proxy
In-Reply-To: <6612c899a613c47ded269b299c9fcbaa.NginxMailingListEnglish@forum.nginx.org>
References: <6612c899a613c47ded269b299c9fcbaa.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0df045bd-6a43-ddae-00ca-55ad9d5ede03@nginx.com>

On 26/02/2019 14:46, prasad.walke at freshgravity.com wrote:

> I would like to set up an nginx configuration which accepts SMTP
> connections and then proxies them to another IP address (the third-party
> SMTP service), so that the requests to the mail server always appear to
> the third-party SMTP service as if they came from the same server.
>
> Is it possible to solve this with the nginx SMTP proxy?

It is possible with the stream module and proxy_bind transparent. The configuration could be non-trivial though:

https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/

-- Maxim Konovalov
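In outline, such a TCP relay could look like the following (a sketch only; the addresses are placeholders, and the transparent binding additionally requires the routing and privilege setup described in the blog post above):

    stream {
        server {
            listen 25;
            proxy_pass smtp.example.com:25;
            # optional: make connections appear to come from the original client
            proxy_bind $remote_addr transparent;
        }
    }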
From mdounin at mdounin.ru Tue Feb 26 15:52:45 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 26 Feb 2019 18:52:45 +0300
Subject: nginx-1.15.9
Message-ID: <20190226155245.GW1877@mdounin.ru>

Changes with nginx 1.15.9                                        26 Feb 2019

    *) Feature: variables support in the "ssl_certificate" and
       "ssl_certificate_key" directives.

    *) Feature: the "poll" method is now available on Windows when using
       Windows Vista or newer.

    *) Bugfix: if the "select" method was used on Windows and an error
       occurred while establishing a backend connection, nginx waited for
       the connection establishment timeout to expire.

    *) Bugfix: the "proxy_upload_rate" and "proxy_download_rate" directives
       in the stream module worked incorrectly when proxying UDP datagrams.

-- Maxim Dounin
http://nginx.org/

From xeioex at nginx.com Tue Feb 26 16:12:31 2019
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 26 Feb 2019 19:12:31 +0300
Subject: njs-0.2.8
Message-ID: <533afd74-ac15-87d6-4a40-8f8972413498@nginx.com>

Hello,

I'm glad to announce a new release of the NGINX JavaScript module (njs). This release proceeds to extend the coverage of ECMAScript specifications and module functionality.

- Added support for setting nginx variables.
- Added support for the delete operation in r.headersOut.
- Properties of the HTTP request deprecated in 0.2.2 were removed.
- Added labels support.
- Added support for shorthand property names for Object literals.

  : > var a = 1, b = 2
  : undefined
  : > ({a, b})
  : {
  :  a: 1,
  :  b: 2
  : }

You can learn more about njs:

- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs

Feel free to try it and give us feedback on:

- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel

Changes with njs 0.2.8                                           26 Feb 2019

  nginx modules:

    *) Change: properties of HTTP request deprecated in 0.2.2 are removed.

    *) Feature: added support for delete operation in r.headersOut.

    *) Feature: added support for setting nginx variables.

    *) Bugfix: fixed r.subrequest() for empty body value.

    *) Improvement: setting special response headers in r.headersOut.

  Core:

    *) Feature: added labels support.

    *) Feature: added setImmediate() method.

    *) Feature: added support for shorthand property names for Object
       literals.

    *) Bugfix: fixed Function.prototype.bind().

    *) Bugfix: fixed parsing of string literals containing newline
       characters.

    *) Bugfix: fixed line number in reporting variable reference errors.

    *) Bugfix: fixed creation of long UTF8 strings.

    *) Bugfix: fixed String.prototype.split() for unicode strings.

    *) Bugfix: fixed heap-buffer-overflow in String.prototype.split().

    *) Bugfix: fixed Array.prototype.fill().
       Thanks to Artem S. Povalyukhin.

    *) Improvement: code related to function invocation is refactored.
       Thanks to 洪志道 (Hong Zhi Dao).

    *) Improvement: code related to variables is refactored.
       Thanks to 洪志道 (Hong Zhi Dao).

    *) Improvement: parser is refactored.
       Thanks to 洪志道 (Hong Zhi Dao).

    *) Improvement: reporting filenames in exceptions.

From xeioex at nginx.com Tue Feb 26 16:53:02 2019
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 26 Feb 2019 19:53:02 +0300
Subject: njs question
In-Reply-To:
References:
Message-ID: <1eed762d-8256-86ff-bc72-29848339b199@nginx.com>

On 26.02.2019 9:24, ??? wrote:

> Hello! I would like to ask some questions about the development of njs.

Hi ???

> First: when will njs improve its coverage of the ES6 standard?

According to http://nginx.org/en/docs/njs/index.html we plan to extend coverage of ES6 and later specifications. This is an ongoing process. You can see what is currently under development here: https://github.com/nginx/njs

> Second: can njs be made similar to Node.js, i.e. able to import
> third-party js files through "require", and thus realize the
> transformation from js to AST?

No, require() in njs works only with built-in modules. But we plan to support loading external files using ES6 import statements.

> Third: we want to add js-to-AST and AST-to-js functions in njs. Do you
> have any good plans and suggestions?

I am sorry, I didn't get it. Can you elaborate?

> Thank you very much!

From kworthington at gmail.com Tue Feb 26 17:55:52 2019
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 26 Feb 2019 12:55:52 -0500
Subject: [nginx-announce] nginx-1.15.9
In-Reply-To: <20190226155251.GX1877@mdounin.ru>
References: <20190226155251.GX1877@mdounin.ru>
Message-ID:

Hello Nginx users,

Now available: Nginx 1.15.9 for Windows
https://kevinworthington.com/nginxwin1159 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin-based builds of Nginx. Officially supported native Windows binaries are at nginx.org.

Announcements are also available here:

Twitter: http://twitter.com/kworthington
Google+: https://plus.google.com/+KevinWorthington/

Thank you,
Kevin

-- Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Feb 26, 2019 at 10:53 AM Maxim Dounin wrote:

> Changes with nginx 1.15.9                                        26 Feb 2019
>
> *) Feature: variables support in the "ssl_certificate" and
>    "ssl_certificate_key" directives.
> > *) Feature: the "poll" method is now available on Windows when using > Windows Vista or newer. > > *) Bugfix: if the "select" method was used on Windows and an error > occurred while establishing a backend connection, nginx waited for > the connection establishment timeout to expire. > > *) Bugfix: the "proxy_upload_rate" and "proxy_download_rate" directives > in the stream module worked incorrectly when proxying UDP datagrams. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Feb 26 20:09:35 2019 From: nginx-forum at forum.nginx.org (loopback_proxy) Date: Tue, 26 Feb 2019 15:09:35 -0500 Subject: proxy_cache without proxy_buffers Message-ID: I am wondering if Nginx will ever support caching without buffering responses? Buffering the full response before sending the data out to client increases the first byte latency (aka TTFB). In a perfect world if nginx can stream the data to the cache file and to the client simultaneously that would solve the TTFB issues. From experience i know that squid follows this methodology. I am curious why Nginx went with the buffering approach. You can make it even more efficient by using splice. splice upstream fd to disk and splice from disk to downstream fd. Thanks for all the replies. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283191,283191#msg-283191 From nginx-forum at forum.nginx.org Wed Feb 27 04:07:34 2019 From: nginx-forum at forum.nginx.org (shadowgun1102) Date: Tue, 26 Feb 2019 23:07:34 -0500 Subject: Advanced Rewrite request url to match the query string and normalization Message-ID: <9fd26f0797ae86539ee193b9a016fc70.NginxMailingListEnglish@forum.nginx.org> I have a simple nginx forward proxy, configured as: server { listen 8000; resolver 8.8.8.8; location / { proxy_pass http://$host; proxy_set_header Host $host; } } The client behind its isp firewall sends the request (per nginx log): GET http://www.clientisp.com/path/rewrite.do?url=http%3A%2F%2Fwww.example.com HTTP/1.1 How do I transform the requested url to http://www.example.com before it is sent to the upstream? I looked up many posts online, but I am still confused at: 1. The online examples usually teach how you match the uri part, but my goal is to obtain the queried string only, i.e., everything after the equation mark"=", http%3A%2F%2Fwww.example.com. 2. I have no idea how to decode the percentage coded symbols into normalized one. Thanks for your input! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283192,283192#msg-283192 From lahiruprasad at gmail.com Thu Feb 28 10:04:36 2019 From: lahiruprasad at gmail.com (Lahiru Prasad) Date: Thu, 28 Feb 2019 15:34:36 +0530 Subject: Nginx front-end and back-end socket details Message-ID: Hi, Is there a way to get back-end service socket details matching to each front-end? My requirement is; User(78.12.34.12) connects to Nginx(23.34.12.53:80) Then Nginx(172.16.2.2) connects to back-end service(172.16.2.3:8080) How can I get above socket details, by filtering front-end socket, I need to get the matching back-end socket. So what I expect is something like this; 78.12.34.12:2312 > 23.34.12.53:80 172.16.2.2:3243 > 172.16.2.3:8080 Regards, Lahiru Prasad. -------------- next part -------------- An HTML attachment was scrubbed... 
From nginx-forum at forum.nginx.org  Wed Feb 27 04:07:34 2019
From: nginx-forum at forum.nginx.org (shadowgun1102)
Date: Tue, 26 Feb 2019 23:07:34 -0500
Subject: Advanced Rewrite request url to match the query string and normalization
Message-ID: <9fd26f0797ae86539ee193b9a016fc70.NginxMailingListEnglish@forum.nginx.org>

I have a simple nginx forward proxy, configured as:

server {
    listen 8000;
    resolver 8.8.8.8;
    location / {
        proxy_pass http://$host;
        proxy_set_header Host $host;
    }
}

The client behind its ISP firewall sends the request (per the nginx log):

GET http://www.clientisp.com/path/rewrite.do?url=http%3A%2F%2Fwww.example.com HTTP/1.1

How do I transform the requested URL to http://www.example.com before it
is sent to the upstream? I looked up many posts online, but I am still
confused about:

1. The online examples usually teach how to match the URI part, but my
goal is to obtain the query string value only, i.e., everything after the
equals sign "=": http%3A%2F%2Fwww.example.com.

2. I have no idea how to decode the percent-encoded characters into their
normalized form.

Thanks for your input!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283192,283192#msg-283192
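One possible direction, sketched and untested: the raw value of ?url=...
is already available in nginx as $arg_url, but stock configuration cannot
percent-decode it; njs can, assuming your njs build provides the standard
decodeURIComponent global. The file path and variable name below are
hypothetical:

# nginx.conf fragment:
js_include /etc/nginx/decode.js;
js_set $decoded_url decode_url;

server {
    listen 8000;
    resolver 8.8.8.8;
    location /path/rewrite.do {
        # $arg_url / r.args.url hold the still-encoded value of ?url=...
        proxy_pass $decoded_url;
    }
}

// /etc/nginx/decode.js:
function decode_url(r) {
    return decodeURIComponent(r.args.url || '');
}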
From lahiruprasad at gmail.com  Thu Feb 28 10:04:36 2019
From: lahiruprasad at gmail.com (Lahiru Prasad)
Date: Thu, 28 Feb 2019 15:34:36 +0530
Subject: Nginx front-end and back-end socket details
Message-ID:

Hi,

Is there a way to get the back-end service socket details matching each
front-end connection? My requirement is:

User (78.12.34.12) connects to Nginx (23.34.12.53:80).
Then Nginx (172.16.2.2) connects to the back-end service (172.16.2.3:8080).

How can I get the above socket details? By filtering on the front-end
socket, I need to get the matching back-end socket. So what I expect is
something like this:

78.12.34.12:2312 > 23.34.12.53:80
172.16.2.2:3243 > 172.16.2.3:8080

Regards,
Lahiru Prasad.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
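A minimal sketch of how far the built-in variables get you (untested; log
path and upstream name hypothetical): $remote_addr:$remote_port and
$server_addr:$server_port cover the front-end pair, and $upstream_addr
gives the back-end's address:port. As far as I know, stock nginx exposes
no variable for the local source port of the upstream connection (the
172.16.2.2:3243 half), so that part would have to be correlated
externally, e.g. with ss or netstat.

log_format sockets '$remote_addr:$remote_port > $server_addr:$server_port '
                   'upstream: $upstream_addr';

server {
    listen 80;
    access_log /var/log/nginx/sockets.log sockets;

    location / {
        proxy_pass http://backend;   # hypothetical upstream
    }
}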
From chris at cretaforce.gr  Thu Feb 28 12:47:45 2019
From: chris at cretaforce.gr (Christos Chatzaras)
Date: Thu, 28 Feb 2019 14:47:45 +0200
Subject: question about not found
Message-ID: <313B33E9-FC4E-41BD-8C11-2AA79E0DBFAC@cretaforce.gr>

If I try to visit an image that doesn't exist I get:

"404 Not Found"

If I add a location / { } then I get:

"File not found"

In both cases using curl -I http://www.example.com/image.png I get:

HTTP/1.1 404 Not Found
Server: nginx
Date: Thu, 28 Feb 2019 12:46:56 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive

Any idea why it shows a different message?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris at cretaforce.gr  Thu Feb 28 13:06:35 2019
From: chris at cretaforce.gr (Christos Chatzaras)
Date: Thu, 28 Feb 2019 15:06:35 +0200
Subject: question about not found
In-Reply-To: <313B33E9-FC4E-41BD-8C11-2AA79E0DBFAC@cretaforce.gr>
References: <313B33E9-FC4E-41BD-8C11-2AA79E0DBFAC@cretaforce.gr>
Message-ID: <8C1EE7DA-1C23-475D-87A7-6FBF8FE3434C@cretaforce.gr>

> On 28 Feb 2019, at 14:47, Christos Chatzaras wrote:
>
> If I try to visit an image that doesn't exist I get:
>
> "404 Not Found"
>
> If I add a location / { } then I get:
>
> "File not found"
>
> In both cases using curl -I http://www.example.com/image.png I get:
>
> HTTP/1.1 404 Not Found
> Server: nginx
> Date: Thu, 28 Feb 2019 12:46:56 GMT
> Content-Type: text/html
> Content-Length: 162
> Connection: keep-alive
>
> Any idea why it shows a different message?

The "File not found" message is displayed with Safari. With curl and
Firefox it shows "404 Not Found".

Does Safari override the original message?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris at cretaforce.gr  Thu Feb 28 13:30:22 2019
From: chris at cretaforce.gr (Christos Chatzaras)
Date: Thu, 28 Feb 2019 15:30:22 +0200
Subject: question about not found
In-Reply-To: <8C1EE7DA-1C23-475D-87A7-6FBF8FE3434C@cretaforce.gr>
References: <313B33E9-FC4E-41BD-8C11-2AA79E0DBFAC@cretaforce.gr> <8C1EE7DA-1C23-475D-87A7-6FBF8FE3434C@cretaforce.gr>
Message-ID: <7F3A136C-B14E-4EAF-B827-0876125A476B@cretaforce.gr>

> On 28 Feb 2019, at 15:06, Christos Chatzaras wrote:
>
>> On 28 Feb 2019, at 14:47, Christos Chatzaras wrote:
>>
>> If I try to visit an image that doesn't exist I get:
>>
>> "404 Not Found"
>>
>> If I add a location / { } then I get:
>>
>> "File not found"
>>
>> In both cases using curl -I http://www.example.com/image.png I get:
>>
>> HTTP/1.1 404 Not Found
>> Server: nginx
>> Date: Thu, 28 Feb 2019 12:46:56 GMT
>> Content-Type: text/html
>> Content-Length: 162
>> Connection: keep-alive
>>
>> Any idea why it shows a different message?
>
> The "File not found" message is displayed with Safari. With curl and
> Firefox it shows "404 Not Found".
>
> Does Safari override the original message?

Finally I think it's related to the nginx cache.

From mdounin at mdounin.ru  Thu Feb 28 13:53:08 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 28 Feb 2019 16:53:08 +0300
Subject: question about not found
In-Reply-To: <7F3A136C-B14E-4EAF-B827-0876125A476B@cretaforce.gr>
References: <313B33E9-FC4E-41BD-8C11-2AA79E0DBFAC@cretaforce.gr> <8C1EE7DA-1C23-475D-87A7-6FBF8FE3434C@cretaforce.gr> <7F3A136C-B14E-4EAF-B827-0876125A476B@cretaforce.gr>
Message-ID: <20190228135308.GE1877@mdounin.ru>

Hello!

On Thu, Feb 28, 2019 at 03:30:22PM +0200, Christos Chatzaras wrote:

> > On 28 Feb 2019, at 15:06, Christos Chatzaras wrote:
> >
> >> On 28 Feb 2019, at 14:47, Christos Chatzaras wrote:
> >>
> >> If I try to visit an image that doesn't exist I get:
> >>
> >> "404 Not Found"
> >>
> >> If I add a location / { } then I get:
> >>
> >> "File not found"
> >>
> >> In both cases using curl -I http://www.example.com/image.png I get:
> >>
> >> HTTP/1.1 404 Not Found
> >> Server: nginx
> >> Date: Thu, 28 Feb 2019 12:46:56 GMT
> >> Content-Type: text/html
> >> Content-Length: 162
> >> Connection: keep-alive
> >>
> >> Any idea why it shows a different message?
> >
> > The "File not found" message is displayed with Safari. With curl and
> > Firefox it shows "404 Not Found".
> >
> > Does Safari override the original message?
>
> Finally I think it's related to the nginx cache.

The only message generated by nginx itself is "404 Not Found". If
you see different messages, they are generated elsewhere.
Depending on your exact configuration, this may be, for example,
your backend server, the "error_page" you've configured, or your
client.

Note well that there is a difference between what a browser shows
and what "curl -I" shows. While "curl -I" shows the response headers,
browsers show the response body (or a "friendly" error page from
the browser itself in some cases). To get something comparable
with what browsers show, you have to use "curl" without "-I".

Also note that browsers cache responses by default, and testing
configuration changes with browsers might be tricky.

--
Maxim Dounin
http://mdounin.ru/
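A concrete illustration of that last point (exchange abridged and
hypothetical; the exact body depends on version and configuration) -- the
headers are identical in both cases, but the body names the component that
generated the error:

$ curl -i http://www.example.com/image.png
HTTP/1.1 404 Not Found
Server: nginx
Content-Type: text/html

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

If the error came from the backend instead, the same headers would be
followed by the backend's own body, e.g. a bare "File not found.".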
From chris at cretaforce.gr  Thu Feb 28 16:01:06 2019
From: chris at cretaforce.gr (Christos Chatzaras)
Date: Thu, 28 Feb 2019 18:01:06 +0200
Subject: question about not found
In-Reply-To: <20190228135308.GE1877@mdounin.ru>
References: <313B33E9-FC4E-41BD-8C11-2AA79E0DBFAC@cretaforce.gr> <8C1EE7DA-1C23-475D-87A7-6FBF8FE3434C@cretaforce.gr> <7F3A136C-B14E-4EAF-B827-0876125A476B@cretaforce.gr> <20190228135308.GE1877@mdounin.ru>
Message-ID:

> On 28 Feb 2019, at 15:53, Maxim Dounin wrote:
>
> Hello!
>
> On Thu, Feb 28, 2019 at 03:30:22PM +0200, Christos Chatzaras wrote:
>
>>> On 28 Feb 2019, at 15:06, Christos Chatzaras wrote:
>>>
>>>> On 28 Feb 2019, at 14:47, Christos Chatzaras wrote:
>>>>
>>>> If I try to visit an image that doesn't exist I get:
>>>>
>>>> "404 Not Found"
>>>>
>>>> If I add a location / { } then I get:
>>>>
>>>> "File not found"
>>>>
>>>> In both cases using curl -I http://www.example.com/image.png I get:
>>>>
>>>> HTTP/1.1 404 Not Found
>>>> Server: nginx
>>>> Date: Thu, 28 Feb 2019 12:46:56 GMT
>>>> Content-Type: text/html
>>>> Content-Length: 162
>>>> Connection: keep-alive
>>>>
>>>> Any idea why it shows a different message?
>>>
>>> The "File not found" message is displayed with Safari. With curl and
>>> Firefox it shows "404 Not Found".
>>>
>>> Does Safari override the original message?
>>
>> Finally I think it's related to the nginx cache.
>
> The only message generated by nginx itself is "404 Not Found". If
> you see different messages, they are generated elsewhere.
> Depending on your exact configuration, this may be, for example,
> your backend server, the "error_page" you've configured, or your
> client.
>
> Note well that there is a difference between what a browser shows
> and what "curl -I" shows. While "curl -I" shows the response headers,
> browsers show the response body (or a "friendly" error page from
> the browser itself in some cases). To get something comparable
> with what browsers show, you have to use "curl" without "-I".
>
> Also note that browsers cache responses by default, and testing
> configuration changes with browsers might be tricky.
>
> --
> Maxim Dounin
> http://mdounin.ru/

Thank you for the reply. With fastcgi_intercept_errors enabled I always
see the nginx message. I am now checking whether the "File not found"
message is generated by PHP-FPM.
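For context, "File not found." is the stock body PHP-FPM sends when the
requested script does not exist, which fits the symptom above. A sketch of
the interception setup (untested; the socket path and error page are
hypothetical):

server {
    error_page 404 /404.html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;   # hypothetical socket path

        # With this on, nginx handles backend responses with status >= 300
        # through error_page instead of relaying the backend's own body:
        fastcgi_intercept_errors on;
    }
}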
From nginx-forum at forum.nginx.org  Thu Feb 28 18:43:18 2019
From: nginx-forum at forum.nginx.org (wkbrad)
Date: Thu, 28 Feb 2019 13:43:18 -0500
Subject: Possible memory leak?
Message-ID: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org>

We're running Nginx version 1.15.8, but we've been seeing similar issues
with other versions too, and on all of our servers that have a high number
of vhosts.

The issue is that when you do an nginx reload, it ends up using almost 2x
the RAM it was using before. Here is a test I ran:
--------------------------------------------------------------------------------
 21.3 MiB + 1.4 GiB = 1.4 GiB    nginx (3)
 21.3 MiB + 1.4 GiB = 1.4 GiB    nginx (3)
484.2 MiB + 1.4 GiB = 1.9 GiB    nginx (3)
588.1 MiB + 1.4 GiB = 2.0 GiB    nginx (3)
720.3 MiB + 1.4 GiB = 2.1 GiB    nginx (3)
  1.4 GiB + 1.4 GiB = 2.8 GiB    nginx (3)
 18.0 MiB + 2.7 GiB = 2.7 GiB    nginx (3)
 20.8 MiB + 2.7 GiB = 2.7 GiB    nginx (3)
 20.8 MiB + 2.7 GiB = 2.7 GiB    nginx (3)
--------------------------------------------------------------------------------

I expect the RAM usage to increase while the reload is happening, but after
it's done shouldn't the RAM usage go back to about the same level?

This issue is completely reproducible across all of our servers, and if I
do a full restart, RAM usage goes back down to normal.

Any thoughts?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283216#msg-283216

From mdounin at mdounin.ru  Thu Feb 28 20:07:45 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 28 Feb 2019 23:07:45 +0300
Subject: Possible memory leak?
In-Reply-To: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org>
References: <4c5fdff224b1af7d8cd04654bd14f238.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190228200745.GL1877@mdounin.ru>

Hello!

On Thu, Feb 28, 2019 at 01:43:18PM -0500, wkbrad wrote:

> We're running Nginx version 1.15.8, but we've been seeing similar issues
> with other versions too, and on all of our servers that have a high
> number of vhosts.
>
> The issue is that when you do an nginx reload, it ends up using almost
> 2x the RAM it was using before.
>
> I expect the RAM usage to increase while the reload is happening, but
> after it's done shouldn't the RAM usage go back to about the same level?
>
> This issue is completely reproducible across all of our servers, and if
> I do a full restart, RAM usage goes back down to normal.
>
> Any thoughts?

Configuration reload implies that the master process parses the
configuration and creates new configuration structures in memory.
That is, memory usage is expected to be 2x compared to a clean
startup, assuming most of the memory is used for the configuration.

Once the configuration is correctly parsed and applied, the master
process will free the old configuration. At this point memory usage
is expected to be the same as after a clean start, but given memory
allocation details it is almost never the case. For example, assuming
the system allocator simply uses sbrk() without any caching, after the
configuration reload the new configuration will use addresses higher
than the ones the original used, so the allocator will not be able to
release the no-longer-needed memory (previously used by the original
configuration) to the system.

--
Maxim Dounin
http://mdounin.ru/
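A toy C program can illustrate the effect Maxim describes, under the
assumption of a glibc-style allocator where small allocations come from
the brk heap and memory is returned to the OS only by trimming its top.
This is an illustration only, not nginx code:

/* Freeing the "old configuration" below a still-live "new configuration"
 * cannot lower the program break, so RSS stays high. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define N  100000
#define SZ 1024

int main(void) {
    static char *old_cfg[N], *new_cfg[N];
    int i;

    for (i = 0; i < N; i++) old_cfg[i] = malloc(SZ);  /* "old config" */
    printf("after old config:  brk = %p\n", sbrk(0));

    for (i = 0; i < N; i++) new_cfg[i] = malloc(SZ);  /* "new config", higher */
    printf("after new config:  brk = %p\n", sbrk(0));

    for (i = 0; i < N; i++) free(old_cfg[i]);         /* old config freed... */
    printf("after freeing old: brk = %p\n", sbrk(0)); /* ...but brk stays put */

    for (i = 0; i < N; i++) free(new_cfg[i]);
    return 0;
}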
From nginx-forum at forum.nginx.org  Thu Feb 28 20:54:24 2019
From: nginx-forum at forum.nginx.org (wkbrad)
Date: Thu, 28 Feb 2019 15:54:24 -0500
Subject: Possible memory leak?
In-Reply-To: <20190228200745.GL1877@mdounin.ru>
References: <20190228200745.GL1877@mdounin.ru>
Message-ID: <1ca6ddd3ddf44031a99b4bd41a037728.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------
> so the allocator will not be able to release the no-longer-needed
> memory (previously used by the original configuration) to the
> system.

Thanks Maxim! That sounds like the very definition of a memory leak to me.
:)

But I'm not sure how accurate it is. For example, if I reload Nginx again
it doesn't then use 3x or 4x more RAM. It stays at the same, but elevated,
level as after the first reload. So it does appear to be able to release
that memory, maybe just not on the first reload.

But why would that be, and is there a solution to that kind of problem?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283222#msg-283222

From nginx-forum at forum.nginx.org  Thu Feb 28 21:35:44 2019
From: nginx-forum at forum.nginx.org (necromonger)
Date: Thu, 28 Feb 2019 16:35:44 -0500
Subject: Rewrite does not work
Message-ID: <8652b24242d254f16d9f4cee3bc3769e.NginxMailingListEnglish@forum.nginx.org>

Hello forum members,

I have the following problem and I hope someone can help me. I've added
the following rewrite line to the site's config:

rewrite ^tagged\/(.*)$ /?p=blog&blog_tag_name=$1 break;

A request for the following URL: /tagged/Server is supposed to be
redirected internally to the following: /blog?blog_name=&blog_tag_name=Server

Unfortunately, I always get a 404 error. Maybe someone can tell me where
the mistake lies.

Greetings

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283225,283225#msg-283225

From mdounin at mdounin.ru  Thu Feb 28 21:48:39 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 1 Mar 2019 00:48:39 +0300
Subject: Possible memory leak?
In-Reply-To: <1ca6ddd3ddf44031a99b4bd41a037728.NginxMailingListEnglish@forum.nginx.org>
References: <20190228200745.GL1877@mdounin.ru> <1ca6ddd3ddf44031a99b4bd41a037728.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190228214839.GM1877@mdounin.ru>

Hello!

On Thu, Feb 28, 2019 at 03:54:24PM -0500, wkbrad wrote:

> Maxim Dounin Wrote:
> -------------------------------------------------------
> > so the allocator will not be able to release the no-longer-needed
> > memory (previously used by the original configuration) to the
> > system.
>
> Thanks Maxim! That sounds like the very definition of a memory leak
> to me.
> :)

Well, if it does, you'll probably want to re-check your definition of a
memory leak. This is a result of how a particular allocator works, not a
memory leak.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Thu Feb 28 22:06:24 2019
From: nginx-forum at forum.nginx.org (wkbrad)
Date: Thu, 28 Feb 2019 17:06:24 -0500
Subject: Possible memory leak?
In-Reply-To: <20190228214839.GM1877@mdounin.ru>
References: <20190228214839.GM1877@mdounin.ru>
Message-ID:

Thanks Maxim! I really appreciate your time!

As for why I called it a memory leak, I'm just referring to the effects
I'm seeing across a number of systems with very different configurations.
I'm not an expert, so I'll defer to you on that. :)

My question still stands, though: is there a way to solve that particular
issue? It is causing us problems when the RAM that Nginx is using doubles.

Are you saying this is an issue with our OS (CentOS 6 and 7), our specific
Nginx build (we use 2 different builds, but they are the same version), or
something else?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283227#msg-283227

From r at roze.lv  Thu Feb 28 22:13:59 2019
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 1 Mar 2019 00:13:59 +0200
Subject: Rewrite does not work
In-Reply-To: <8652b24242d254f16d9f4cee3bc3769e.NginxMailingListEnglish@forum.nginx.org>
References: <8652b24242d254f16d9f4cee3bc3769e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <000501d4cfb2$e7b6c430$b7244c90$@roze.lv>

> I've added the following rewrite line to the site's config:
>
> rewrite ^tagged\/(.*)$ /?p=blog&blog_tag_name=$1 break;
>
> A request for the following URL: /tagged/Server is supposed to be
> redirected internally to the following:
> /blog?blog_name=&blog_tag_name=Server

If the rewrite directive in your config is exactly as written here, you
are missing a / after the ^ (as the URI always starts with /), and you
don't need to escape /, so it should be:

rewrite ^/tagged/(.*)$ /?p=blog&blog_tag_name=$1 break;

rr

From r at roze.lv  Thu Feb 28 22:27:50 2019
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 1 Mar 2019 00:27:50 +0200
Subject: Possible memory leak?
In-Reply-To:
References: <20190228214839.GM1877@mdounin.ru>
Message-ID: <000601d4cfb4$d6e4d190$84ae74b0$@roze.lv>

> My question still stands, though: is there a way to solve that
> particular issue? It is causing us problems when the RAM that Nginx is
> using doubles.

Theoretically, if that's a problem, you could, instead of a reload, send
USR2 and QUIT to the nginx process (http://nginx.org/en/docs/control.html),
which should spawn a new master and then gracefully quit the old one.

Correct me if I'm wrong (I actually haven't tested this for memory usage).

rr

From glasswalk3r at yahoo.com.br  Thu Feb 28 23:00:00 2019
From: glasswalk3r at yahoo.com.br (Alceu R. de Freitas Jr.)
Date: Thu, 28 Feb 2019 23:00:00 +0000 (UTC)
Subject: Possible memory leak?
In-Reply-To: <000601d4cfb4$d6e4d190$84ae74b0$@roze.lv>
References: <20190228214839.GM1877@mdounin.ru> <000601d4cfb4$d6e4d190$84ae74b0$@roze.lv>
Message-ID: <1410821155.6795920.1551394800573@mail.yahoo.com>

That should do it. AFAIK a process cannot give back already-allocated
memory to the system. I would be very interested to learn about a
different technique that doesn't involve a fork system call.

On Thursday, 28 February 2019 19:28:00 BRT, Reinis Rozitis wrote:

> My question still stands, though: is there a way to solve that
> particular issue? It is causing us problems when the RAM that Nginx is
> using doubles.

Theoretically, if that's a problem, you could, instead of a reload, send
USR2 and QUIT to the nginx process (http://nginx.org/en/docs/control.html),
which should spawn a new master and then gracefully quit the old one.

Correct me if I'm wrong (I actually haven't tested this for memory usage).

rr

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org  Thu Feb 28 23:44:52 2019
From: nginx-forum at forum.nginx.org (wkbrad)
Date: Thu, 28 Feb 2019 18:44:52 -0500
Subject: Possible memory leak?
In-Reply-To: <000601d4cfb4$d6e4d190$84ae74b0$@roze.lv>
References: <000601d4cfb4$d6e4d190$84ae74b0$@roze.lv>
Message-ID:

Thanks Reinis! That's some really great info, and from the short tests
I've run so far, I think this is going to be the solution.

I used this command as the test:

pid="$(cat /run/nginx.pid)"; kill -USR2 $pid; sleep 10; kill -QUIT $pid

And here is what happened with the reload:

 37.0 MiB + 1.4 GiB = 1.4 GiB    nginx (3)
495.4 MiB + 1.4 GiB = 1.9 GiB    nginx (4)
606.6 MiB + 1.4 GiB = 2.0 GiB    nginx (4)
738.3 MiB + 1.4 GiB = 2.1 GiB    nginx (4)
 40.1 MiB + 1.7 GiB = 1.8 GiB    nginx (4)
 57.1 MiB + 2.8 GiB = 2.8 GiB    nginx (6)
 57.4 MiB + 2.8 GiB = 2.8 GiB    nginx (6)
  1.3 GiB + 1.4 GiB = 2.7 GiB    nginx (5)
 14.6 MiB + 1.4 GiB = 1.4 GiB    nginx (4)

Started at 1.4G and ended at 1.4G. Yay! I also tested whether it was
reloading gracefully (i.e. not killing active connections), and indeed it
is. Yay again!

I'm going to run some more tests tomorrow and modify the systemd script on
one of our servers as another test.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283216,283232#msg-283232
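Formalizing the test above as a small script (paths assumed, untested as a
unit): on USR2 nginx renames the old pid file to nginx.pid.oldbin and the
new master writes nginx.pid, which is why the old pid must be captured
before sending the signal.

#!/bin/sh
# Swap-style "reload": start a new master, then retire the old one.
old=$(cat /run/nginx.pid)   # the old master's pid, read before USR2
kill -USR2 "$old"           # fork a new master + workers alongside the old
sleep 10                    # give the new workers time to take over
kill -QUIT "$old"           # gracefully stop the old master and workers

One caveat for the planned systemd change: this swap replaces the master
process, so the main PID that systemd tracks changes mid-operation. Wiring
it into ExecReload would need PIDFile-based tracking to survive the swap,
and is worth testing carefully before rollout.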