From lists at lazygranch.com Fri Dec 1 02:08:29 2017 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 30 Nov 2017 18:08:29 -0800 Subject: How to control the total requests in Ngnix In-Reply-To: References: <20171130094457.DF7902C56ACF@mail.nginx.com> <2017113019525810745325@migu.cn> Message-ID: <20171130180829.5a9eb442.lists@lazygranch.com> Here is a log of real-life IP limiting with a 30-connection limit: 86.184.152.14 British Telecommunications PLC 8.37.235.199 Level 3 Communications Inc. 130.76.186.14 The Boeing Company security.5.bz2:Nov 29 20:50:53 theranch kernel: ipfw: 5005 drop session type 40 86.184.152.14 58714 -> myip 80, 34 too many entries security.6.bz2:Nov 29 16:01:31 theranch kernel: ipfw: 5005 drop session type 40 8.37.235.199 10363 -> myip 80, 42 too many entries above repeated twice security.8.bz2:Nov 29 06:39:15 theranch kernel: ipfw: 5005 drop session type 40 130.76.186.14 34056 -> myip 80, 31 too many entries above repeated 18 times I have an Alexa rating around 960,000. Hey, at least I made it into the top one million websites. But my point is that even with a limit of 30, I'm kicking out readers. Look at the nature of the IPs. British Telecom is one of those huge ISPs where, I'd guess, different users share the same IP. (Not sure.) Level 3 is the provider at many Starbucks locations, besides being a significant traffic carrier. Boeing has decent IP space, but maybe only a few IPs per facility. Who knows. My point is that a limit of two is far too low. The only real way to protect against DDoS is to use a commercial reverse proxy. I don't think limiting connections in nginx (or in the firewall) will stop a real attack. It will probably stop some kid in his parents' basement, but today you can rent DDoS attacks on the dark web. If you really want to improve the performance of your server, do strict IP filtering at the firewall. Limit the number of search engines that can read your site.
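The "too many entries" drops in the log above come from a stateful per-source limit rule. As a rough sketch of the kind of FreeBSD ipfw rule that produces them (the rule number 5005 and the limit of 30 are taken from the log; treat the exact syntax as an assumption to check against ipfw(8)):

```sh
# Hypothetical ipfw rule: allow new HTTP connections, but keep at
# most 30 dynamic state entries per source address. The 31st
# concurrent connection from one IP is dropped and logged as
# "too many entries".
ipfw add 5005 allow tcp from any to me 80 setup limit src-addr 30
```

Whether 30 is generous or stingy depends on your audience; as the log shows, large NATed ISPs can hit such a limit legitimately.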
Block major hosting companies and virtual private servers. There are no eyeballs there, just VPNs (whose users can drop the VPN if they really want to read your site) and hackers. Easily half of internet traffic is bots. Per some discussions on this list, it is best not to block in nginx but rather in the firewall: nginx parses the HTTP request even when blocking the IP, so the CPU load is not insignificant. As an alternative, you can use a reputation-based blocking list. (I don't use one on web servers, just on email servers.) From tongshushan at migu.cn Fri Dec 1 02:12:48 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Fri, 1 Dec 2017 10:12:48 +0800 Subject: How to control the total requests in Ngnix References: <20171130094457.DF7902C56ACF@mail.nginx.com>, <2017113019525810745325@migu.cn>, Message-ID: <2017120110124843881630@migu.cn> my website is busier than I think I can handle Tong From: Peter Booth Date: 2017-12-01 06:25 To: nginx Subject: Re: How to control the total requests in Ngnix So what exactly are you trying to protect against? Against "bad people" or "my website is busier than I think I can handle"? Sent from my iPhone On Nov 30, 2017, at 6:52 AM, "tongshushan at migu.cn" wrote: a limit of two connections per address is just an example. What does 2000 requests mean? Is that per second? Yes, it's QPS. Mobile: 13818663262 Telephone: 021-51856688(81275) Email: tongshushan at migu.cn From: Gary Date: 2017-11-30 17:44 To: nginx Subject: Re: How to control the total requests in Ngnix I think a limit of two connections per address is too low. I know that tip pages suggest a low limit in so-called anti-DDoS (really just flood protection). Some large carriers can generate 30+ connections per IP, probably because they lack sufficient IPv4 address space for their millions of users. This is based on my logs. I used to have a limit of 10 and it was reached quite often just from corporate users.
The 10 per second rate is fine, and probably about as low as you should go. What does 2000 requests mean? Is that per second? From: tongshushan at migu.cn Sent: November 30, 2017 1:14 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: How to control the total requests in Ngnix Additional: the total requests will be sent from different client IPs. Tong From: tongshushan at migu.cn Date: 2017-11-30 17:12 To: nginx Subject: How to control the total requests in Ngnix Hi guys, I want to use nginx to protect my system, allowing at most 2000 requests to be sent to my service (an http location). The configs below only limit each client IP; they do not control the total number of requests.

##########method 1##########
limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
    location /mylocation/ {
        limit_conn addr 2;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}

##########method 2##########
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
server {
    location /mylocation/ {
        limit_req zone=one burst=5 nodelay;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}

How can I do it? Tong _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
From tongshushan at migu.cn Fri Dec 1 03:18:06 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Fri, 1 Dec 2017 11:18:06 +0800 Subject: How to control the total requests in Ngnix References: <2017113017121771024321@migu.cn>, <20171130101707.GN3127@daoine.org>, <2017113020044137777429@migu.cn>, <20171130183832.GO3127@daoine.org> Message-ID: <2017120111180643331137@migu.cn> I configured it as below: limit_req_zone "all" zone=all:100m rate=2000r/s; limit_req zone=all burst=100 nodelay; but when testing I used a tool to send requests at 486.1 QPS (not reaching 2000) and still got many 503 errors, such as: 2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost" Why excess: 101.000? I set it to 2000r/s. Mobile: 13818663262 Telephone: 021-51856688(81275) Email: tongshushan at migu.cn From: Francis Daly Date: 2017-12-01 02:38 To: nginx Subject: Re: Re: How to control the total requests in Ngnix On Thu, Nov 30, 2017 at 08:04:41PM +0800, tongshushan at migu.cn wrote: Hi there, > what is the same "key" for all requests from different client ips for limit_conn_zone/limit_req_zone? I have no idea on this. Any $variable might be different in different connections. Any fixed string will not be. So: limit_conn_zone "all" zone=all... for example. f -- Francis Daly francis at daoine.org
From lists at lazygranch.com Fri Dec 1 04:17:05 2017 From: lists at lazygranch.com (Gary) Date: Thu, 30 Nov 2017 20:17:05 -0800 Subject: How to control the total requests in Ngnix In-Reply-To: <2017120111180643331137@migu.cn> Message-ID: An HTML attachment was scrubbed... From tongshushan at migu.cn Fri Dec 1 04:52:46 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Fri, 1 Dec 2017 12:52:46 +0800 Subject: How to control the total requests in Ngnix References: <20171201041715.7960B2C56B9E@mail.nginx.com> Message-ID: <2017120112524613639839@migu.cn> I sent the test requests from only one server. Mobile: 13818663262 Telephone: 021-51856688(81275) Email: tongshushan at migu.cn From: Gary Date: 2017-12-01 12:17 To: nginx Subject: Re: How to control the total requests in Ngnix I thought the rate is per IP address, not for the whole server. From tongshushan at migu.cn Fri Dec 1 04:53:56 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Fri, 1 Dec 2017 12:53:56 +0800 Subject: How to control the total requests in Ngnix References: <20171201041715.7960B2C56B9E@mail.nginx.com> Message-ID: <2017120112535613099940@migu.cn> I sent the test requests from only one client. Mobile: 13818663262 Telephone: 021-51856688(81275) Email: tongshushan at migu.cn From: Gary Date: 2017-12-01 12:17 To: nginx Subject: Re: How to control the total requests in Ngnix I thought the rate is per IP address, not for the whole server. From nginx-forum at forum.nginx.org Fri Dec 1 11:25:53 2017 From: nginx-forum at forum.nginx.org (traquila) Date: Fri, 01 Dec 2017 06:25:53 -0500 Subject: Multiple Cache Manager Processes or Threads In-Reply-To: <20171130182155.GP78325@mdounin.ru> References: <20171130182155.GP78325@mdounin.ru> Message-ID: <6b877966dd92c1a470273c571d64aa35.NginxMailingListEnglish@forum.nginx.org> Thank you for your answer. I am using an old version (1.8.1). I will try to upgrade to 1.12 and check whether it solves my problem.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277597,277616#msg-277616 From tongshushan at migu.cn Fri Dec 1 13:09:51 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Fri, 1 Dec 2017 21:09:51 +0800 Subject: lua code in log_by_lua_file not executed when the upstream server is down Message-ID: <2017120121095108228851@migu.cn> The nginx.conf is as below:

upstream my_server {
    server localhost:8095;
    keepalive 2000;
}

location /private/rush2purchase/ {
    limit_conn addr 20;
    proxy_pass http://my_server/private/rush2purchase/;
    proxy_set_header Host $host:$server_port;
    rewrite_by_lua_file D:/tmp/lua/draw_r.lua;
    log_by_lua_file D:/tmp/lua/draw_decr.lua;
}

When I send a request to http://localhost/private/rush2purchase/ it works fine while the upstream server is up, but when I shut down the upstream server (port 8095), the code in log_by_lua_file (draw_decr.lua) is not executed. Info in nginx access.log: 127.0.0.1 - - [01/Dec/2017:21:03:20 +0800] "GET /private/rush2purchase/ HTTP/1.1" 504 558 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3236.0 Safari/537.36" Error message in nginx error.log: 2017/12/01 21:02:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream: "http://[::1]:8095/private/rush2purchase/", host: "localhost" 2017/12/01 21:03:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream:
"http://127.0.0.1:8095/private/rush2purchase/", host: "localhost" How to fix it? Tong From mdounin at mdounin.ru Fri Dec 1 13:46:02 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Dec 2017 16:46:02 +0300 Subject: How to control the total requests in Ngnix In-Reply-To: <2017120111180643331137@migu.cn> References: <2017113017121771024321@migu.cn> <20171130101707.GN3127@daoine.org> <2017113020044137777429@migu.cn> <20171130183832.GO3127@daoine.org> <2017120111180643331137@migu.cn> Message-ID: <20171201134602.GS78325@mdounin.ru> Hello! On Fri, Dec 01, 2017 at 11:18:06AM +0800, tongshushan at migu.cn wrote: > I configured as below: > limit_req_zone "all" zone=all:100m rate=2000r/s; > limit_req zone=all burst=100 nodelay; > but when testing,I use tool to send the request at: Qps:486.1(not reach 2000) I got the many many 503 error,and the error info as below: > > 2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost" > > Why excess: 101.000? I set it as 2000r/s ? You've configured "burst=100", and nginx starts to reject requests when the accumulated number of requests (excess) exceeds the configured burst size. In short, the algorithm works as follows: every request increments excess by 1, and decrements it according to the rate configured. If the resulting value is greater than burst, the request is rejected. You can read more about the algorithm used in Wikipedia, see https://en.wikipedia.org/wiki/Leaky_bucket.
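Maxim's description of the accounting can be sketched in a few lines. This is an illustrative model, not nginx's actual implementation (the class and method names here are invented), but it reproduces the "excess: 101.000" seen in the error log when a flood of requests hits burst=100 at once:

```python
# A toy model of the limit_req accounting described above: each
# request adds 1 to "excess", which drains at the configured rate;
# a request that would push excess past "burst" is rejected.
# Illustrative sketch only -- real nginx keeps this state in shared
# memory with millisecond timestamps.

class RateLimiter:
    def __init__(self, rate, burst):
        self.rate = rate          # requests per second
        self.burst = burst        # tolerated accumulated excess
        self.excess = 0.0
        self.last = 0.0           # time of the previous request

    def allow(self, now):
        # Drain the bucket for the time elapsed since the last request...
        self.excess = max(self.excess - (now - self.last) * self.rate, 0.0)
        self.last = now
        # ...then account for this request.
        self.excess += 1.0
        if self.excess > self.burst:
            self.excess -= 1.0    # the rejected request is not counted
            return False          # with "nodelay": reject with 503 now
        return True

# 200 requests arriving at the same instant, rate=2000r/s burst=100:
limiter = RateLimiter(rate=2000, burst=100)
results = [limiter.allow(now=0.0) for _ in range(200)]
print(sum(results))  # prints 100: the burst passes, the rest get 503
```

Note how the 101st simultaneous request is the first to push excess to 101, matching the 'excess: 101.000 by zone "all"' line in the error log above.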
-- Maxim Dounin http://mdounin.ru/ From lists at lazygranch.com Fri Dec 1 14:43:36 2017 From: lists at lazygranch.com (Gary) Date: Fri, 01 Dec 2017 06:43:36 -0800 Subject: How to control the total requests in Ngnix In-Reply-To: <20171201134602.GS78325@mdounin.ru> Message-ID: Is this limiting for one connection or rate limiting for the entire server? I interpret this as a limit for one connection. I got rid of the trailing period. https://en.wikipedia.org/wiki/Leaky_bucket A request is one line in the access log, I assume, typically an HTTP verb like GET. I use a single-core VPS, so I don't have much CPU power. Unless the verb's action is trivial, I doubt I would hit 2000/s. From experimentation, a burst of 10 gets the images going mostly unimpeded, and a rate of 10/s is where you see a page just start to slow down. I think a rate of 2000/s isn't much of a limit. Original Message From: mdounin at mdounin.ru Sent: December 1, 2017 5:46 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: Re: How to control the total requests in Ngnix You've configured "burst=100", and nginx starts to reject requests when the accumulated number of requests (excess) exceeds the configured burst size. From mdounin at mdounin.ru Fri Dec 1 16:13:17 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 1 Dec 2017 19:13:17 +0300 Subject: How to control the total requests in Ngnix In-Reply-To: <20171201144353.5BA842C56BF6@mail.nginx.com> References: <20171201134602.GS78325@mdounin.ru> <20171201144353.5BA842C56BF6@mail.nginx.com> Message-ID: <20171201161317.GU78325@mdounin.ru> Hello! On Fri, Dec 01, 2017 at 06:43:36AM -0800, Gary wrote: > Is this limiting for one connection or rate limiting for the > entire server? I interpret this as a limit for one connection. The request limiting can be configured in multiple ways. It is typically configured using the $binary_remote_addr variable as a key (see http://nginx.org/r/limit_req_zone), and hence the limit is per IP address. The particular configuration uses: limit_req_zone "all" zone=all:100m rate=2000r/s; That is, the limit is applied for the "all" key - a static string without any variables. This means that all requests (where limit_req with the zone in question applies) share the same limit. -- Maxim Dounin http://mdounin.ru/ From sophie at klunky.co.uk Sat Dec 2 09:32:53 2017 From: sophie at klunky.co.uk (Sophie Loewenthal) Date: Sat, 2 Dec 2017 10:32:53 +0100 Subject: debian package nginx-naxsi Message-ID: <4B952DA8-74DE-4BEE-82CA-5DF86B54632C@klunky.co.uk> Hi, Debian Wheezy had a package called nginx-naxsi that had the WAF NAXSI compiled in. I've not seen this on recent versions of Debian. Does this still exist, or was it moved to another repository?
Kind regards, Sophie From tongshushan at migu.cn Sat Dec 2 09:57:02 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Sat, 2 Dec 2017 17:57:02 +0800 Subject: How to control the total requests in Ngnix References: <20171201144353.7C36B2C56BF9@mail.nginx.com> Message-ID: <2017120217570237739652@migu.cn> I want to set the rate limit for the entire server, not for each client IP. Mobile: 13818663262 Telephone: 021-51856688(81275) Email: tongshushan at migu.cn From: Gary Date: 2017-12-01 22:43 To: nginx Subject: Re: How to control the total requests in Ngnix Is this limiting for one connection or rate limiting for the entire server? I interpret this as a limit for one connection. From francis at daoine.org Sat Dec 2 11:02:16 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 2 Dec 2017 11:02:16 +0000 Subject: How to control the total requests in Ngnix In-Reply-To: <2017120111180643331137@migu.cn> References: <2017113017121771024321@migu.cn> <20171130101707.GN3127@daoine.org> <2017113020044137777429@migu.cn> <20171130183832.GO3127@daoine.org> <2017120111180643331137@migu.cn> Message-ID: <20171202110216.GP3127@daoine.org> On Fri, Dec 01, 2017 at 11:18:06AM +0800, tongshushan at migu.cn wrote: Hi there, Others have already given some details, so I'll try to put everything together. > limit_req_zone "all" zone=all:100m rate=2000r/s; The size of the zone (100m, above) relates to the number of individual key values that the zone can store -- if you have too many values for the size, then things can break. In your case, you want just one key; so you can have a much smaller zone size. Using 100m won't break things, but it will be wasteful. The way that nginx uses the "rate" value is not "start of second, allow that number, block the rest until the start of the next second".
It is "turn that number into time-between-requests, and block the second request if it is within that time of the first". > limit_req zone=all burst=100 nodelay; "block" can be "return error immediately", or can be "delay until the right time", depending on what you configure. "nodelay" above means "return error immediately". Rather than strictly requiring a fixed time between requests always, it can be useful to enforce an average rate; in this case, you configure "burst" to allow that many requests as quickly as they arrive, before delaying-or-erroring on the next ones. That is, to use different numbers: rate=1r/s with burst=10 would mean that it would accept 10 requests all at once, but would not accept the 11th until 10s later (in order to bring the average rate down to 1r/s). Note: that is not exactly what happens -- for that, read the fine source -- but it is hopefully a clear high-level description of the intent. And one other thing is relevant here: nginx counts in milliseconds. So I think that you are unlikely to get useful rate limiting once you approach 1000r/s. > but when testing,I use tool to send the request at: Qps:486.1(not reach 2000) I got the many many 503 error,and the error info as below: > > 2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost" > > Why excess: 101.000? I set it as 2000r/s ? If your tool sends all requests at once, nginx will handle "burst" before trying to enforce your rate, and your "nodelay" means that nginx should error immediately then. If you remove "nodelay", then nginx should slow down processing without sending the 503 errors. If your tool sends one request every 0.5 ms, then nginx would have a chance to process them all without exceeding the declared limit rate. 
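Pulling these suggestions together, a whole-server limit along the lines discussed in this thread might be sketched as below (the 1m zone size and the burst value are illustrative choices, not tested recommendations):

```nginx
# One static key means one shared counter for the whole server,
# so a tiny zone is plenty.
limit_req_zone "all" zone=all:1m rate=2000r/s;

server {
    location /mylocation/ {
        # No "nodelay": requests over the rate are delayed
        # (up to "burst" of them) rather than rejected with 503.
        limit_req zone=all burst=100;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}
```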
(But the server cannot rely on the client to behave, so the server has to be told what to do when there is a flood of requests.) As a way of learning how to limit requests into nginx, this is useful. As a way of solving a specific problem that you have right now, it may or may not be useful -- that depends on what the problem is. Good luck with it, f -- Francis Daly francis at daoine.org From tongshushan at migu.cn Sun Dec 3 03:58:16 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Sun, 3 Dec 2017 11:58:16 +0800 Subject: How to control the total requests in Ngnix References: <2017113017121771024321@migu.cn>, <20171130101707.GN3127@daoine.org>, <2017113020044137777429@migu.cn>, <20171130183832.GO3127@daoine.org>, <2017120111180643331137@migu.cn>, <20171202110216.GP3127@daoine.org> Message-ID: <2017120311581628063464@migu.cn> Hi Francis, Thanks for the help. I might have misunderstood some concepts, so let me restate them here: burst is the bucket size; rate is the speed at which the bucket drains (not the speed at which requests are sent). Right? Tong From: Francis Daly Date: 2017-12-02 19:02 To: nginx Subject: Re: Re: How to control the total requests in Ngnix Others have already given some details, so I'll try to put everything together. From lists at lazygranch.com Sun Dec 3 21:08:02 2017 From: lists at lazygranch.com (Gary) Date: Sun, 03 Dec 2017 13:08:02 -0800 Subject: How to control the total requests in Ngnix In-Reply-To: <20171202110216.GP3127@daoine.org> Message-ID: For what situation would it be appropriate to use "nodelay"? Original Message From: francis at daoine.org Sent: December 2, 2017 3:02 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: Re: How to control the total requests in Ngnix > limit_req zone=all burst=100 nodelay; "block" can be "return error immediately", or can be "delay until the right time", depending on what you configure. "nodelay" above means "return error immediately". f -- Francis Daly francis at daoine.org From zchao1995 at gmail.com Mon Dec 4 03:26:49 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Sun, 3 Dec 2017 19:26:49 -0800 Subject: lua code in log_by_lua_file not executed when the upstream server is down In-Reply-To: <2017120121095108228851@migu.cn> References: <2017120121095108228851@migu.cn> Message-ID: Hi!
I think you should paste this issue to the openresty mail list https://groups.google.com/forum/#!forum/openresty-en For this problem, have you configured any error_page directives? the error_page will trigger an internal redirect. On 1 December 2017 at 21:10:12, tongshushan at migu.cn (tongshushan at migu.cn) wrote: the nginx.conf as below: upstream my_server { server localhost:8095; keepalive 2000; } location /private/rush2purchase/ { limit_conn addr 20; proxy_pass http://my_server/private/rush2purchase/; proxy_set_header Host $host:$server_port; rewrite_by_lua_file D:/tmp/lua/draw_r.lua; *log_by_lua_file* D:/tmp/lua/draw_decr.lua; } when I send request to http://localhost/private/rush2purchase/ ,it works fine the the stream server is up, but when I shutdown the upstream server(port:8095) ,I find the code not executed in *log_by_lua_file *(draw_decr.lua). *info in nginx access.log:* 127.0.0.1 - - [01/Dec/2017:21:03:20 +0800] "GET /private/rush2purchase/ HTTP/1.1" *504 * 558 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3236.0 Safari/537.36" *error message in nginx error.log:* 2017/12/01 21:02:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream: "http:// [::1]:8095/private/rush2purchase/", host: "localhost" 2017/12/01 21:03:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream: " 
http://127.0.0.1:8095/private/rush2purchase/", host: "localhost" How to fix it? ------------------------------ Tong _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Mon Dec 4 06:11:13 2017 From: peter_booth at me.com (Peter Booth) Date: Mon, 04 Dec 2017 01:11:13 -0500 Subject: How to control the total requests in Ngnix In-Reply-To: <20171203210818.4CD802C50CD5@mail.nginx.com> References: <20171203210818.4CD802C50CD5@mail.nginx.com> Message-ID: <74A81BA4-BDD7-47A5-8373-E1033C4AC3C3@me.com> In a situation where you are confident that the workload is coming from a DDOS attack and not a real user. For this example the limit is very low and nodelay wouldn't seem appropriate. If you look at the techempower benchmark results you can see that a single vote VM should be able to serve over 10,000 requests per sec. Sent from my iPhone > On Dec 3, 2017, at 4:08 PM, Gary wrote: > > > For what situation would it be appropriate to use "nodelay"? > > Original Message > From: francis at daoine.org > Sent: December 2, 2017 3:02 AM > To: nginx at nginx.org > Reply-to: nginx at nginx.org > Subject: Re: Re: How to control the total requests in Ngnix > > On Fri, Dec 01, 2017 at 11:18:06AM +0800, tongshushan at migu.cn wrote: > > Hi there, > > Others have already given some details, so I'll try to put everything > together. > > > >> limit_req zone=all burst=100 nodelay; > > "block" can be "return error immediately", or can be "delay until the > right time", depending on what you configure. "nodelay" above means > "return error immediately".
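Francis's "delay vs. error" distinction can be sketched in a minimal config (the zone name and rates here are illustrative, not taken from any poster's real setup):

```nginx
# 10 MB shared zone keyed by client IP; steady-state rate of 10 req/s per address.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        # Without "nodelay": up to 20 excess requests are queued and released
        # at the configured rate; anything beyond the burst gets an error.
        limit_req zone=perip burst=20;

        # With "nodelay": the same burst of 20 is served immediately, and any
        # request beyond it is rejected at once instead of being delayed.
        # limit_req zone=perip burst=20 nodelay;
    }
}
```

Either way, rejected requests receive status 503 unless limit_req_status says otherwise.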
> > > > > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From tongshushan at migu.cn Mon Dec 4 07:50:59 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Mon, 4 Dec 2017 15:50:59 +0800 Subject: lua code in log_by_lua_file not executed when the upstream server is down References: <2017120121095108228851@migu.cn>, Message-ID: <2017120413345952029066@migu.cn> Yes, it's OK after the error_page directive is removed. Tong From: Zhang Chao Date: 2017-12-04 11:26 To: nginx Subject: Re: lua code in log_by_lua_file not executed when the upstream server is down Hi! I think you should post this issue to the openresty mailing list https://groups.google.com/forum/#!forum/openresty-en For this problem, have you configured any error_page directives? The error_page will trigger an internal redirect. On 1 December 2017 at 21:10:12, tongshushan at migu.cn (tongshushan at migu.cn) wrote: the nginx.conf as below: upstream my_server { server localhost:8095; keepalive 2000; } location /private/rush2purchase/ { limit_conn addr 20; proxy_pass http://my_server/private/rush2purchase/; proxy_set_header Host $host:$server_port; rewrite_by_lua_file D:/tmp/lua/draw_r.lua; log_by_lua_file D:/tmp/lua/draw_decr.lua; } When I send a request to http://localhost/private/rush2purchase/, it works fine when the upstream server is up, but when I shut down the upstream server (port 8095), I find the code in log_by_lua_file (draw_decr.lua) is not executed.
info in nginx access.log: 127.0.0.1 - - [01/Dec/2017:21:03:20 +0800] "GET /private/rush2purchase/ HTTP/1.1" 504 558 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3236.0 Safari/537.36" error message in nginx error.log: 2017/12/01 21:02:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream: "http://[::1]:8095/private/rush2purchase/", host: "localhost" 2017/12/01 21:03:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream: "http://127.0.0.1:8095/private/rush2purchase/", host: "localhost" How to fix it? Tong _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Mon Dec 4 09:22:53 2017 From: peter_booth at me.com (Peter Booth) Date: Mon, 04 Dec 2017 04:22:53 -0500 Subject: How to control the total requests in Ngnix In-Reply-To: <74A81BA4-BDD7-47A5-8373-E1033C4AC3C3@me.com> References: <20171203210818.4CD802C50CD5@mail.nginx.com> <74A81BA4-BDD7-47A5-8373-E1033C4AC3C3@me.com> Message-ID: <9279C755-8BC9-4ABD-AC03-CAF25ED95E7E@me.com> I've used the equivalent of nodelay with a rate of 2000 req/sec per IP when a retail website was being attacked by hackers.
This was in combination with microcaching and CDN to protect the back end and ensure the site could continue to function normally. Sent from my iPhone > On Dec 4, 2017, at 1:11 AM, Peter Booth wrote: > > In a situation where you are confident that the workload is coming from a DDOS attack and not a real user. > > For this example the limit is very low and nodelay wouldn't seem appropriate. If you look at the techempower benchmark results you can see that a single vote VM should be able to serve over 10,000 requests per sec. > > Sent from my iPhone > >> On Dec 3, 2017, at 4:08 PM, Gary wrote: >> >> >> For what situation would it be appropriate to use "nodelay"? >> >> Original Message >> From: francis at daoine.org >> Sent: December 2, 2017 3:02 AM >> To: nginx at nginx.org >> Reply-to: nginx at nginx.org >> Subject: Re: Re: How to control the total requests in Ngnix >> >> On Fri, Dec 01, 2017 at 11:18:06AM +0800, tongshushan at migu.cn wrote: >> >> Hi there, >> >> Others have already given some details, so I'll try to put everything >> together. >> >> >> >>> limit_req zone=all burst=100 nodelay; >> >> "block" can be "return error immediately", or can be "delay until the >> right time", depending on what you configure. "nodelay" above means >> "return error immediately".
>> >> >> >> >> >> f >> -- >> Francis Daly francis at daoine.org >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From sophie at klunky.co.uk Mon Dec 4 16:49:45 2017 From: sophie at klunky.co.uk (Sophie Loewenthal) Date: Mon, 4 Dec 2017 17:49:45 +0100 Subject: location ~* .(a|b|c)$ {} caused an error Message-ID: <0E935918-7B71-4BDD-BFB0-9DEE34A19671@klunky.co.uk> Hi, When I put this location block for case-insensitive matching into the vhost, it produced an error. nginx: [emerg] location "/" is outside location ".(jpg|jpeg|png|gif|ico|css|js)$" in /etc/nginx/sites-enabled/example.conf:13 My vhost has this: server { ... location ~* .(jpg|jpeg|png|gif|ico|css|js)$ { expires 65d; location / { allow all; limit_req zone=app burst=5; limit_rate 64k; } ... } Did I misread the http://nginx.org/en/docs/http/ngx_http_core_module.html#location docs? Quote "A location can either be defined by a prefix string, or by a regular expression. Regular expressions are specified with the preceding "~*" modifier (for case-insensitive matching)," Thanks, Sophie. From hobson42 at gmail.com Mon Dec 4 17:57:15 2017 From: hobson42 at gmail.com (Ian Hobson) Date: Mon, 4 Dec 2017 17:57:15 +0000 Subject: location ~* .(a|b|c)$ {} caused an error In-Reply-To: <0E935918-7B71-4BDD-BFB0-9DEE34A19671@klunky.co.uk> References: <0E935918-7B71-4BDD-BFB0-9DEE34A19671@klunky.co.uk> Message-ID: Hi Sophie, On 04/12/2017 16:49, Sophie Loewenthal wrote: > Hi, > > When I put this location block for case-insensitive matching into the vhost, it produced an error.
> > nginx: [emerg] location "/" is outside location ".(jpg|jpeg|png|gif|ico|css|js)$" in /etc/nginx/sites-enabled/example.conf:13 > > My vhost has this: > > server { > ... > location ~* .(jpg|jpeg|png|gif|ico|css|js)$ { > expires 65d; You need to close the first location here, before opening the second one. > > location / { > allow all; > limit_req zone=app burst=5; > limit_rate 64k; > } > ... > } > > Did I misread the http://nginx.org/en/docs/http/ngx_http_core_module.html#location docs? Quite possibly - matching is rather complex. I have been caught by the order of matching which is NOT the order of declaration. I need to check it every time I create a configuration. Regards Ian From sophie at klunky.co.uk Mon Dec 4 18:14:45 2017 From: sophie at klunky.co.uk (Sophie Loewenthal) Date: Mon, 4 Dec 2017 19:14:45 +0100 Subject: location ~* .(a|b|c)$ {} caused an error In-Reply-To: References: <0E935918-7B71-4BDD-BFB0-9DEE34A19671@klunky.co.uk> Message-ID: <1F50ABE1-49F6-4413-ADFA-DB4556BBE2A5@klunky.co.uk> Oh, the closing }. It's been a long day <:-I > On 4 Dec 2017, at 18:57, Ian Hobson wrote: > > Hi Sophie, > > On 04/12/2017 16:49, Sophie Loewenthal wrote: >> Hi, >> When I put this location block for case-insensitive matching into the vhost, it produced an error. >> nginx: [emerg] location "/" is outside location ".(jpg|jpeg|png|gif|ico|css|js)$" in /etc/nginx/sites-enabled/example.conf:13 >> My vhost has this: >> server { >> ... >> location ~* .(jpg|jpeg|png|gif|ico|css|js)$ { >> expires 65d; > You need to close the first location here, before opening the second one. >> location / { >> allow all; >> limit_req zone=app burst=5; >> limit_rate 64k; >> } >> ... >> } >> Did I misread the http://nginx.org/en/docs/http/ngx_http_core_module.html#location docs? > > Quite possibly - matching is rather complex. > > I have been caught by the order of matching which is NOT the order of declaration. I need to check it every time I create a configuration.
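Putting Ian's advice into a complete sketch (the elided parts of Sophie's real vhost are left out; note also the backslash before the extension group, which the original pattern lacked -- without it the dot matches any character):

```nginx
server {
    # Close the regex location with its own brace before opening the next one.
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 65d;
    }

    location / {
        allow all;
        limit_req zone=app burst=5;
        limit_rate 64k;
    }
}
```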
> > Regards > > Ian > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Dec 5 08:51:43 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 5 Dec 2017 08:51:43 +0000 Subject: How to control the total requests in Ngnix In-Reply-To: <2017120311581628063464@migu.cn> References: <2017113017121771024321@migu.cn> <20171130101707.GN3127@daoine.org> <2017113020044137777429@migu.cn> <20171130183832.GO3127@daoine.org> <2017120111180643331137@migu.cn> <20171202110216.GP3127@daoine.org> <2017120311581628063464@migu.cn> Message-ID: <20171205085143.GQ3127@daoine.org> On Sun, Dec 03, 2017 at 11:58:16AM +0800, tongshushan at migu.cn wrote: Hi there, > I might have misunderstood some concepts and rectify them here: > burst--bucket size; Yes. > rate--water leaks speed (not requests sent speed) Yes. The server can control how long it waits before starting to process a request (rate) and how many requests it will process quickly (burst). It cannot control when the requests are sent to it. f -- Francis Daly francis at daoine.org From tongshushan at migu.cn Wed Dec 6 01:49:45 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Wed, 6 Dec 2017 09:49:45 +0800 Subject: How to control the total requests in Ngnix References: <2017113017121771024321@migu.cn>, <20171130101707.GN3127@daoine.org>, <2017113020044137777429@migu.cn>, <20171130183832.GO3127@daoine.org>, <2017120111180643331137@migu.cn>, <20171202110216.GP3127@daoine.org>, <2017120311581628063464@migu.cn>, <20171205085143.GQ3127@daoine.org> Message-ID: <20171206094944965140102@migu.cn> Hi there, I have combined the per-client-IP concurrency and cluster concurrency (total requests) together as below; it seems to work as expected.
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; # one client ip concurrency limit_req_zone "all" zone=all:20m rate=2000r/s; #cluster concurrency server { location /private/rush2purchase/ { proxy_pass http://my_server/private/rush2purchase/; proxy_set_header Host $host:$server_port; limit_req zone=one burst=15 nodelay; limit_req zone=all burst=3000 nodelay; } } Thanks all guys. Tong From: Francis Daly Date: 2017-12-05 16:51 To: nginx Subject: Re: Re: How to control the total requests in Ngnix On Sun, Dec 03, 2017 at 11:58:16AM +0800, tongshushan at migu.cn wrote: Hi there, > I might have misunderstood some concepts and rectify them here: > burst--bucket size; Yes. > rate--water leaks speed (not requests sent speed) Yes. The server can control how long it waits before starting to process a request (rate) and how many requests it will process quickly (burst). It cannot control when the requests are sent to it. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkadm30 at yandex.com Wed Dec 6 17:38:43 2017 From: tkadm30 at yandex.com (Etienne Robillard) Date: Wed, 6 Dec 2017 12:38:43 -0500 Subject: memory leak in django app? In-Reply-To: <0c2788c4-d6c2-3e5c-1eb4-222a3a9620ce@djangodeployment.com> References: <1e6a9dfa-5b8b-194b-b860-084fc6887890@yandex.com> <0c2788c4-d6c2-3e5c-1eb4-222a3a9620ce@djangodeployment.com> Message-ID: Hi Antonis, My development server appears unaffected by this problem. Plus I can get heap stats using guppy, which is pretty cool. :) Both my production and development servers have __debug__ enabled in Python 2.7.13. However when using ab -c 100 to benchmark my nginx server I get: SSL handshake failed (1). Do you have any ideas how to configure nginx to allow a minimum of 100 concurrent connections when using SSL encryption? 
I have worker_connections set to 512 in my nginx.conf Regards, Etienne On 2017-12-06 at 12:17, Antonis Christofides wrote: > Does this happen only in production? What about when you run a development > server? What is the memory usage of your development server? > > Antonis Christofides > http://djangodeployment.com > > > On 2017-12-06 15:05, Etienne Robillard wrote: >> Hi Antonis, >> >> Thank you for your reply. I installed the htop utility and found that 2 of my >> 4 uWSGI processes are using 882M (42.7%) of resident memory each. These two >> processes take about 85% of the available RAM! That can explain why >> I get "out of memory" errors when no more memory is available for sshd. >> >> I tried to debug memory allocation with guppy following instructions here: >> https://www.toofishes.net/blog/using-guppy-debug-django-memory-leaks/ >> >> But I get "hp.Nothing" when I attempt to get the heap stats from the master >> uWSGI process. >> >> Example: >>>>> hp.setref() >>>>> hp.heap() >> hp.Nothing >> >> Any help would be appreciated! >> >> Etienne >> >> >> On 2017-12-06 at 07:53, Antonis Christofides wrote: >>> Hello, >>> >>> the amount of memory you need depends on what Django does and how many workers >>> (instances of Django) you run (which usually depends on how many requests you >>> are getting and how I/O intensive your Django application is). For many >>> applications, 512 MB is enough. >>> >>> Why are you worried? The only symptom you describe is that your free memory is >>> decreasing. This is absolutely normal. The operating system doesn't like RAM >>> that is sitting down doing nothing, so it will do its best to make free RAM >>> nearly zero. Whenever there's much RAM available, it uses more for its caches. >>> >>> How much memory is your Django app consuming? You can find out by executing >>> "top" and pressing "M" to sort by memory usage.
>>> >>> Regards, >>> >>> Antonis >>> >>> Antonis Christofides >>> http://djangodeployment.com >>> >>> On 2017-12-06 14:04, Etienne Robillard wrote: >>>> Hi all, >>>> >>>> I'm struggling to understand how django/python may allocate and unallocate >>>> memory when used with uWSGI. >>>> >>>> I have a Debian system running Python 2.7 and uwsgi with 2GB of RAM and 2 CPUs. >>>> >>>> Is that enough RAM memory for a uWSGI/Gevent based WSGI app running Django? >>>> >>>> I'm running uWSGI with the --gevent switch in order to allow cooperative >>>> multithreading, but my free RAM memory is always decreasing when nginx is >>>> running. >>>> >>>> How can I debug memory allocation in a Django/uWSGI app? >>>> >>>> I defined also in my sitecustomize.py "gc.enable()" to allow garbage >>>> collection, but it does not appears to make any differences. >>>> >>>> Can you recommend any libraries to debug/profile memory allocation in Python >>>> 2.7 ? >>>> >>>> Is Django more memory efficient with --pymalloc or by using the default linux >>>> malloc() ? >>>> >>>> Thank you in advance, >>>> >>>> Etienne >>>> -- Etienne Robillard tkadm30 at yandex.com https://www.isotopesoftware.ca/ From nginx-forum at forum.nginx.org Wed Dec 6 22:27:01 2017 From: nginx-forum at forum.nginx.org (qwazi) Date: Wed, 06 Dec 2017 17:27:01 -0500 Subject: simple reverse web proxy need a little help Message-ID: <2e942522d6ab5052f9b44c0bdad488f2.NginxMailingListEnglish@forum.nginx.org> I'm new to nginx but needed a solution like this. It's very cool but I'm a newbie with a small problem. I'm using nginx as a simple reverse web proxy. I have 3 domains on 2 servers. I'm using 3 files in sites-enabled called your-vhost1.conf , your-vhost2.conf and so on. The stand alone domain is vhost1. The problem is, one of the domains on the server that has 2 isn't resolving correctly from the outside world. It only resolves correctly when you use just http://domain.com. 
If you use http://www.domain.com it resolves to the vhost1 domain. I tried shuffling the vhost1,2, & 3 files to different domains but that breaks it. A bit more info I've got an A record in DNS for WWW for the domain in question. It is hosted on a Windows server with IIS7 and I also have WWW in site bindings. This server was standalone before we added the 3rd domain on the second server. It did resolve correctly before we added the nginx server so I'm fairly certain I just don't have the syntax right. The standalone server is Debian with a Wordpress site. Here's the vhost files: VHOST1 (Standalone) server { server_name domain1.com; set $upstream 192.168.7.8; location / { proxy_pass_header Authorization; proxy_pass http://domain1.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_buffering off; client_max_body_size 0; proxy_read_timeout 36000s; proxy_redirect off; } } VHOST2 server { server_name domain2.com; set $upstream 192.168.7.254; location / { proxy_pass_header Authorization; proxy_pass http://www.domain2.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_buffering off; client_max_body_size 0; proxy_read_timeout 36000s; proxy_redirect off; } } VHOST3 server { server_name domain3.com; set $upstream 192.168.7.254; location / { proxy_pass_header Authorization; proxy_pass http://domain3.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_buffering off; client_max_body_size 0; proxy_read_timeout 36000s; proxy_redirect off; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277674,277674#msg-277674 From peter_booth at 
me.com Wed Dec 6 22:41:38 2017 From: peter_booth at me.com (Peter Booth) Date: Wed, 06 Dec 2017 17:41:38 -0500 Subject: simple reverse web proxy need a little help In-Reply-To: <2e942522d6ab5052f9b44c0bdad488f2.NginxMailingListEnglish@forum.nginx.org> References: <2e942522d6ab5052f9b44c0bdad488f2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <93E6236A-EA2B-4CE3-A46E-2B71E79A060C@me.com> First Step Use something like http://www.kloth.net/services/nslookup.php To check the IP addresses returned for all six names (with and without www for the three domains) Do these look correct? Sent from my iPhone > On Dec 6, 2017, at 5:27 PM, qwazi wrote: > > I'm new to nginx but needed a solution like this. It's very cool but I'm a > newbie with a small problem. > > I'm using nginx as a simple reverse web proxy. I have 3 domains on 2 > servers. I'm using 3 files in sites-enabled called your-vhost1.conf , > your-vhost2.conf and so on. The stand alone domain is vhost1. The problem > is, one of the domains on the server that has 2 isn't resolving correctly > from the outside world. It only resolves correctly when you use just > http://domain.com. If you use http://www.domain.com it resolves to the > vhost1 domain. I tried shuffling the vhost1,2, & 3 files to different > domains but that breaks it. > > A bit more info I've got an A record in DNS for WWW for the domain in > question. It is hosted on a Windows server with IIS7 and I also have WWW in > site bindings. This server was standalone before we added the 3rd domain on > the second server. It did resolve correctly before we added the nginx > server so I'm fairly certain I just don't have the syntax right. The > standalone server is Debian with a Wordpress site. 
Here's the vhost files: > > VHOST1 (Standalone) > > server { > > server_name domain1.com; > > set $upstream 192.168.7.8; > > location / { > > proxy_pass_header Authorization; > proxy_pass http://domain1.com; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for; > proxy_http_version 1.1; > proxy_set_header Connection ""; > proxy_buffering off; > client_max_body_size 0; > proxy_read_timeout 36000s; > proxy_redirect off; > > } > } > > > VHOST2 > > server { > > server_name domain2.com; > > set $upstream 192.168.7.254; > > location / { > > proxy_pass_header Authorization; > proxy_pass http://www.domain2.com; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for; > proxy_http_version 1.1; > proxy_set_header Connection ""; > proxy_buffering off; > client_max_body_size 0; > proxy_read_timeout 36000s; > proxy_redirect off; > > } > } > > VHOST3 > > server { > > server_name domain3.com; > > set $upstream 192.168.7.254; > > location / { > > proxy_pass_header Authorization; > proxy_pass http://domain3.com; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for; > proxy_http_version 1.1; > proxy_set_header Connection ""; > proxy_buffering off; > client_max_body_size 0; > proxy_read_timeout 36000s; > proxy_redirect off; > > } > } > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277674,277674#msg-277674 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nuco2005 at gmail.com Wed Dec 6 22:47:45 2017 From: nuco2005 at gmail.com (Nicolas Legroux) Date: Wed, 6 Dec 2017 22:47:45 +0000 Subject: Can Nginx used as a reverse proxy send HTTP(s) requests through a forward proxy ? 
Message-ID: Hi, I'm wondering if it's possible to do what's described in the mail subject? I've had a look through Internet and docs but haven't been able to figure it out. The question is similar to the one that's asked here: https://stackoverflow.com/questions/45900356/how-to-configure-nginx-as-reverse-proxy-for-the-site-which-is-behind-squid-prox, but that thread doesn't provide an answer. I've been able to do this with Apache and its ProxyRemote directive, but I can't figure out if this is doable with Nginx. Thanks, Nicolas -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Thu Dec 7 00:55:24 2017 From: peter_booth at me.com (Peter Booth) Date: Wed, 06 Dec 2017 19:55:24 -0500 Subject: Can Nginx used as a reverse proxy send HTTP(s) requests through a forward proxy ? In-Reply-To: References: Message-ID: <52C7242B-981D-430F-B3DD-D174CE89072F@me.com> Take a look at the stream directive in the nginx docs. I've used that to proxy an HTTPS connection to a backend when I needed to make use of preexisting SSO. Sent from my iPhone > On Dec 6, 2017, at 5:47 PM, Nicolas Legroux wrote: > > Hi, > > I'm wondering if it's possible to do what's described in the mail subject? > I've had a look through Internet and docs but haven't been able to figure it out. The question is similar to the one that's asked here: https://stackoverflow.com/questions/45900356/how-to-configure-nginx-as-reverse-proxy-for-the-site-which-is-behind-squid-prox, but that thread doesn't provide an answer. > I've been able to do this with Apache and its ProxyRemote directive, but I can't figure out if this is doable with Nginx. > > > Thanks, > > Nicolas > > -- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Thu Dec 7 02:17:24 2017 From: nginx-forum at forum.nginx.org (qwazi) Date: Wed, 06 Dec 2017 21:17:24 -0500 Subject: simple reverse web proxy need a little help In-Reply-To: <93E6236A-EA2B-4CE3-A46E-2B71E79A060C@me.com> References: <93E6236A-EA2B-4CE3-A46E-2B71E79A060C@me.com> Message-ID: <21cc16144e6c06901a4fd086db3e6fed.NginxMailingListEnglish@forum.nginx.org> I use geopeeker.com to accomplish the same thing only it renders the site so you can see it, not just resolves the name. I fixed the problem but I don't think it's the right way to do it. I created a 4th conf file for just the WWW and it seems to be working correctly now. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277674,277678#msg-277678 From nginx-forum at forum.nginx.org Thu Dec 7 02:29:29 2017 From: nginx-forum at forum.nginx.org (qwazi) Date: Wed, 06 Dec 2017 21:29:29 -0500 Subject: simple reverse web proxy need a little help In-Reply-To: <21cc16144e6c06901a4fd086db3e6fed.NginxMailingListEnglish@forum.nginx.org> References: <93E6236A-EA2B-4CE3-A46E-2B71E79A060C@me.com> <21cc16144e6c06901a4fd086db3e6fed.NginxMailingListEnglish@forum.nginx.org> Message-ID: <25ad01e67a9b09f686221e0277d5a857.NginxMailingListEnglish@forum.nginx.org> I had to go back and create another conf file for the third domain as it was now directing to the first domain if I use the WWW on it. WEIRD! But creating another conf file works so....unless you want to tell me I'm doing it wrong, I'll leave it for now. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277674,277679#msg-277679 From awenter at drei-welten.com Thu Dec 7 08:03:08 2017 From: awenter at drei-welten.com (Alexander Wenter) Date: Thu, 7 Dec 2017 08:03:08 +0000 Subject: can i use a another function like more_set_input_headers Message-ID: <0074da5b6dc74832a56a8c2c411f5377@drei-welten.com> hi everybody, i have a question about more_set_input_headers. 
I use the default dist package from Fedora 27, but I am missing the headers-more package. I would like to convert these two directives: more_set_headers -s 401 'WWW-Authenticate: Basic realm="server.domain.tld"'; more_set_input_headers 'Authorization: $http_authorization'; I have tried with proxy_set_header, but it's not working. Does anyone have an idea how I can set this up with a built-in nginx function? Best wishes Alex From francis at daoine.org Thu Dec 7 08:04:37 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 7 Dec 2017 08:04:37 +0000 Subject: simple reverse web proxy need a little help In-Reply-To: <2e942522d6ab5052f9b44c0bdad488f2.NginxMailingListEnglish@forum.nginx.org> References: <2e942522d6ab5052f9b44c0bdad488f2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171207080437.GR3127@daoine.org> On Wed, Dec 06, 2017 at 05:27:01PM -0500, qwazi wrote: Hi there, > is, one of the domains on the server that has 2 isn't resolving correctly > from the outside world. It only resolves correctly when you use just > http://domain.com. If you use http://www.domain.com it resolves to the > vhost1 domain. Your client does a lookup for whatever name you request, and finds the IP address to connect to; then it connects to that. If that IP address corresponds to nginx, then nginx can get involved. > server { > server_name domain1.com; > server { > server_name domain2.com; > server { > server_name domain3.com; When a request comes to nginx, it will look at all of the server{} blocks (in this case), and then choose which one to use based on the server_name setting and the Host header that the client sent -- which is the name that you want to connect to. If you use domain2.com, nginx will choose the second one, because it matches. If you use www.domain2.com, it does not match any of the server_name settings that you have, so nginx does not know which server{} to choose. It chooses its default server{}.
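A minimal sketch of that suggestion applied to the earlier vhost files (the names and upstream address are the poster's examples; the rest is illustrative, and avoids needing one conf file per hostname):

```nginx
server {
    listen 80;
    # List every hostname this block should answer for, including the
    # www variant, so requests for www.domain2.com no longer fall
    # through to the default server.
    server_name domain2.com www.domain2.com;

    location / {
        proxy_pass http://192.168.7.254;
        proxy_set_header Host $host;
    }
}
```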
http://nginx.org/r/server_name Change your nginx config to tell it all of the names that you want each server{} to handle. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Dec 7 08:12:06 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 7 Dec 2017 08:12:06 +0000 Subject: Can Nginx used as a reverse proxy send HTTP(s) requests through a forward proxy ? In-Reply-To: References: Message-ID: <20171207081206.GS3127@daoine.org> On Wed, Dec 06, 2017 at 10:47:45PM +0000, Nicolas Legroux wrote: Hi there, > I'm wondering if it's possible to do what's described in the mail subject ? No. nginx does not speak the proxy-http protocol. You can: use something other than nginx; or change your forward proxy to be configured in "transparent" mode so that it is not expecting to be addressed as a proxy; or change your clients to speak the proxy-http protocol to an nginx that is configured as a "stream" pass-through and not as a http server. Depending on your requirements, I would suggest one of the first two. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Dec 7 16:42:47 2017 From: nginx-forum at forum.nginx.org (qwazi) Date: Thu, 07 Dec 2017 11:42:47 -0500 Subject: simple reverse web proxy need a little help In-Reply-To: <20171207080437.GR3127@daoine.org> References: <20171207080437.GR3127@daoine.org> Message-ID: <8085d887ea44d52d0168079f55a2ed50.NginxMailingListEnglish@forum.nginx.org> Very cool. Very simple to understand once you understand it. I did figure this out on my own simply by creating more files for the WWW side of each domain. It now works properly. Thank you. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277674,277695#msg-277695 From Jason.Whittington at equifax.com Thu Dec 7 18:02:44 2017 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Thu, 7 Dec 2017 18:02:44 +0000 Subject: [IE] Can Nginx used as a reverse proxy send HTTP(s) requests through a forward proxy ? 
In-Reply-To: References: Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432AA1D9F@STLEISEXCMBX3.eis.equifax.com> Are you trying to do something like this? server foo { listen 443 ssl; ...other settings elided... location /foo/ { proxy_pass https://external_site/; } } If https://external_site/ traverses a proxy then the answer is "no" - nginx can't deal with proxy situations where it has to issue HTTP CONNECT. I don't know much about squid but I expect you'll have the same problem. You'll see that requests will just fail. I got around it by creating a little node.js app to fetch the external resource from external_site and pointed nginx at the node app. Feels like a bit of a hack but it works well enough. Incidentally I found pm2 to be a really nice container to turn 50 lines of js into a decent service that runs as a non-privileged user, starts on boot, restarts automatically on exception, logs, etc. etc. Jason From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Nicolas Legroux Sent: Wednesday, December 06, 2017 4:48 PM To: nginx at nginx.org Subject: [IE] Can Nginx used as a reverse proxy send HTTP(s) requests through a forward proxy ? Hi, I'm wondering if it's possible to do what's described in the mail subject? I've had a look through Internet and docs but haven't been able to figure it out. The question is similar to the one that's asked here: https://stackoverflow.com/questions/45900356/how-to-configure-nginx-as-reverse-proxy-for-the-site-which-is-behind-squid-prox, but that thread doesn't provide an answer. I've been able to do this with Apache and its ProxyRemote directive, but I can't figure out if this is doable with Nginx. Thanks, Nicolas -- This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited.
If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From djczaski at gmail.com Thu Dec 7 22:54:38 2017 From: djczaski at gmail.com (Danomi Czaski) Date: Thu, 7 Dec 2017 17:54:38 -0500 Subject: safari websocket using different ssl_session_id Message-ID: I am trying to use a Cookie along with ssl_session_id to identify connections. This seems to work fine in Chrome and Firefox, but Safari looks like it uses a different ssl_session_id when it makes a websocket connection. Is there something else I can use to uniquely tie the cookie to a connection? From nginx-forum at forum.nginx.org Fri Dec 8 06:18:34 2017 From: nginx-forum at forum.nginx.org (carrotis) Date: Fri, 08 Dec 2017 01:18:34 -0500 Subject: Multiple HTTP2 reverse proxy support ? Message-ID: Hello I'm trying to configure a reverse proxy for multiple domains with a single nginx server. Is that possible eg. ----[HTTP 2.0]--> < Nginx > ---[HTTP 1.1]--- > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277698,277698#msg-277698 From ruz at sports.ru Fri Dec 8 07:17:33 2017 From: ruz at sports.ru (Ruslan Zakirov) Date: Fri, 8 Dec 2017 10:17:33 +0300 Subject: Multiple HTTP2 reverse proxy support ? In-Reply-To: References: Message-ID: Why not? server { listen 80; server_name localhost; location / { alias /Users/ruz/projs/localhost/; } } server { listen 80; server_name example.com; location / { proxy_pass ...; } } server { listen 80; server_name example.ru; location / { proxy_pass ...; } } On Fri, Dec 8, 2017 at 9:18 AM, carrotis wrote: > Hello > > > I'm trying to configure a reverse proxy for multiple domains with a single > nginx server. > > Is that possible > > eg. > > ----[HTTP 2.0]--> < Nginx > ---[HTTP 1.1]--- > > > Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,277698,277698#msg-277698 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ruslan Zakirov, Sports.ru +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alec.muffett at gmail.com Fri Dec 8 20:48:55 2017 From: alec.muffett at gmail.com (Alec Muffett) Date: Fri, 8 Dec 2017 20:48:55 +0000 Subject: Problem using `proxy_redirect` to rewrite the same Location 2-or-more times? Message-ID: Hi! I am using Nginx 1.12.2 in a large and complex reverse-proxy configuration, with lots of content-rewriting (subs_filter, lua, ...). Problem: - the client connects to my proxy - my proxy forwards the request to the origin - the origin responds with a 302: "Location: http://www.foo.com/" ...or "Location: http://www.bar.com/" ...or "Location: http://www.baz.com/" ...and I have the following dynamic rewriting that I want to perform: * foo.com -> ding.org * bar.com -> dong.org * baz.com -> dell.org * ...more? So I have the following three (or more) rules: http { # ...etc proxy_redirect ~*^(.*?)\\b{foo\\.com}\\b(.*)$ $1ding.org$2; proxy_redirect ~*^(.*?)\\b{bar\\.com}\\b(.*)$ $1dong.org$2; proxy_redirect ~*^(.*?)\\b{baz\\.com}\\b(.*)$ $1dell.org$2; # ...more? ...and these work well most of the time. However: these do not function as-desired when the origin produces a 302 which mentions two or more *different* rewritable site names: "Location: http://www.foo.com/?next=%2F%2Fcdn.baz.com%2F" <-- INPUT ...which I *want* to be rewritten as: "Location: http://www.ding.org/?next=%2F%2Fcdn.dell.org%2F" <-- WANTED ...but instead I get: "Location: http://www.ding.org/?next=%2F%2Fcdn.baz.com%2F" <-- ACTUAL i.e. the location is converted from `foo.com` to `ding.org`, but no further processing happens to convert `baz.com` in this example.
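For what it's worth, one direction I have considered (an untested sketch, using the lua module that my config already loads; the hostnames are just the examples above) is to post-process the Location header in a Lua header filter, so that every mapping is applied in turn rather than stopping at the first match:

```nginx
# untested sketch: apply every hostname mapping to the outgoing Location header.
# The keys are Lua patterns ("%." escapes the dot) and matching here is
# case-sensitive, unlike the ~* proxy_redirect rules above.
header_filter_by_lua_block {
    local map = {
        ["foo%.com"] = "ding.org",
        ["bar%.com"] = "dong.org",
        ["baz%.com"] = "dell.org",
    }
    local loc = ngx.header["Location"]
    if loc then
        for pat, repl in pairs(map) do
            -- gsub replaces all occurrences, including ones inside
            -- url-encoded query values such as %2F%2Fcdn.baz.com%2F
            loc = string.gsub(loc, pat, repl)
        end
        ngx.header["Location"] = loc
    end
}
```

(This sidesteps proxy_redirect entirely, so the proxy_redirect rules above would need to be removed to avoid double-rewriting.)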
The issue seems to be that `proxy_redirect` stops after executing the first rule that succeeds? Is this intended behaviour, please? And is there a way to achieve what I want, e.g. via options-flags or Lua? I am making heavy use of subs_filter, proxy_cookie_domain, etc. I've put one of my Nginx configuration files at https://gist.github.com/alecmuffett/f5cd8abcf161dbdaffd7a81ed8a088b9 if you'd like to see the issue in context. Thanks! - alec -- http://dropsafe.crypticide.com/aboutalecm -------------- next part -------------- An HTML attachment was scrubbed... URL:
From nginx-forum at forum.nginx.org Mon Dec 11 14:51:51 2017 From: nginx-forum at forum.nginx.org (drook) Date: Mon, 11 Dec 2017 09:51:51 -0500 Subject: -e vs -f and -d Message-ID: <8557e18239f1ac89a9980a58554866c2.NginxMailingListEnglish@forum.nginx.org> Hi, Considering that I don't have symbolic links why do these configs work differently ? ===config one=== if (!-f $request_filename) { rewrite ^/(.*)$ /init.php; } if (!-d $request_filename) { rewrite ^/(.*)$ /init.php; } ===config one=== This one above works, rewrite happens. Being changed to this it stops working, all other lines left intact: ===config two=== if (!-e $request_filename) { rewrite ^/(.*)$ /init.php; } ===config two=== This is inside the location / {}. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277731,277731#msg-277731 From mdounin at mdounin.ru Mon Dec 11 16:10:25 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Dec 2017 19:10:25 +0300 Subject: -e vs -f and -d In-Reply-To: <8557e18239f1ac89a9980a58554866c2.NginxMailingListEnglish@forum.nginx.org> References: <8557e18239f1ac89a9980a58554866c2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171211161025.GZ78325@mdounin.ru> Hello! On Mon, Dec 11, 2017 at 09:51:51AM -0500, drook wrote: > Hi, > > Considering that I don't have symbolic links why do these configs work > differently ? > > ===config one=== > if (!-f $request_filename) { > rewrite ^/(.*)$ /init.php; > } > if (!-d $request_filename) { > rewrite ^/(.*)$ /init.php; > } > ===config one=== > > This one above works, rewrite happens.
This will rewrite all requests, since you have two independent cases when a rewrite happens: - rewrite if the requested resource is not a file; - rewrite if the requested resource is not a directory; Since no resource can be a file and a directory at the same time, at least one of the two conditions is always true, so a rewrite always happens. That is, this configuration will rewrite anything, including requests to existing files and directories. > Being changed to this it stops working, all other lines left intact: > ===config two=== > if (!-e $request_filename) { > rewrite ^/(.*)$ /init.php; > } > ===config two=== This will not rewrite resources which are either a file, or a directory. Everything else will be rewritten. That is, this configuration will not rewrite requests to existing files and directories. See here for more information on rewrite module directives and their execution: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html In particular, "rewrite_log on;" might be helpful if you want to better understand what happens in your configuration. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Dec 11 16:19:07 2017 From: nginx-forum at forum.nginx.org (joseph-pg) Date: Mon, 11 Dec 2017 11:19:07 -0500 Subject: Multiple HTTP2 reverse proxy support ? In-Reply-To: References: Message-ID: It's possible.
#put this in the http context proxy_http_version 1.1; #default is HTTP/1.0 - #example server blocks #redirect to https server { listen 80; server_name "~^(.+\.)?example\d{2}\.com$"; #regex to match example[number].com and *.example[number].com return 301 https://$host$request_uri; } #https #example01.com server { listen 443 ssl http2; server_name example01.com; proxy_pass http://10.0.2.2; #your config; } #foo.example01.com server { listen 443 ssl http2; server_name foo.example01.com; proxy_pass http://10.0.2.2:12000; #your config; } #example02.com server { listen 443 ssl http2; server_name example02.com; proxy_pass https://10.0.3.2; #your config; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277698,277733#msg-277733 From nginx-forum at forum.nginx.org Mon Dec 11 17:52:27 2017 From: nginx-forum at forum.nginx.org (joseph-pg) Date: Mon, 11 Dec 2017 12:52:27 -0500 Subject: Multiple HTTP2 reverse proxy support ? In-Reply-To: References: Message-ID: Edit: proxy_pass should be put inside a location block. Example: location / { proxy_pass http://10.0.2.2; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277698,277735#msg-277735 From Jason.Whittington at equifax.com Tue Dec 12 16:34:36 2017 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Tue, 12 Dec 2017 16:34:36 +0000 Subject: "sub_filter_once off" not working as advertised? Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432AA2DE6@STLEISEXCMBX3.eis.equifax.com> I have a rule like the following where I am trying to replace instances of /spf/ with /ec/apps/symmetry/spf/. I?ve used sub_filter to do this sort of thing before and had luck with it. location /ec/apps/symmetry/ { proxy_pass http://stl-biz-d2-hrxsymmetry.devapp.c9.equifax.com:55943/; proxy_redirect / https://$host/ec/apps/symmetry/; #sub filter doesn't work if the upstream server returns compressed or zipped content. 
proxy_set_header Accept-Encoding identity; sub_filter once off; # doesn't seem to have an effect sub_filter '/spf/' '/ec/apps/symmetry/spf/'; add_header X-nginx-rule /ec/apps/symmetry; } The thing I am seeing here is that even though I specified sub_filter_once off I still only see the first link modified in the html that comes back. When this html is returned Welcome This is what I see coming out of nginx: Welcome I can see the 'X-nginx-rule' header show up in the browser, confirming that the request traverses this location block. The fact that anything is modified tells me that gzip and content-type are not getting in the way. sub_filter is filtering, it's just only doing it one time. I was able to work around this by just accepting /spf/ in another location block but for future reference: has anyone seen sub_filter only make one substitution even when sub_filter once off is specified? Jason Whittington | Architect, PD Shared Services WORKFORCE SOLUTIONS (o) 314.214.7163 | (m) 636.284.4082 jason.whittington at equifax.com This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5085 bytes Desc: image001.png URL: From mdounin at mdounin.ru Tue Dec 12 17:07:01 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Dec 2017 20:07:01 +0300 Subject: "sub_filter_once off" not working as advertised?
In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A432AA2DE6@STLEISEXCMBX3.eis.equifax.com> References: <995C5C9AD54A3C419AF1C20A8B6AB9A432AA2DE6@STLEISEXCMBX3.eis.equifax.com> Message-ID: <20171212170701.GB78325@mdounin.ru> Hello! On Tue, Dec 12, 2017 at 04:34:36PM +0000, Jason Whittington wrote: > I have a rule like the following where I am trying to replace > instances of /spf/ with /ec/apps/symmetry/spf/. I've used > sub_filter to do this sort of thing before and had luck with it. > > location /ec/apps/symmetry/ { > > proxy_pass http://stl-biz-d2-hrxsymmetry.devapp.c9.equifax.com:55943/; > proxy_redirect / https://$host/ec/apps/symmetry/; > > #sub filter doesn't work if the upstream server returns compressed or zipped content. > proxy_set_header Accept-Encoding identity; > > sub_filter once off; # doesn't seem to have an effect Given the space between "sub_filter" and "once", it is expected to replace "once" with "off" somewhere in the response. You probably mean to write sub_filter_once off; instead. [...] -- Maxim Dounin http://mdounin.ru/ From Jason.Whittington at equifax.com Tue Dec 12 17:37:50 2017 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Tue, 12 Dec 2017 17:37:50 +0000 Subject: [IE] Re: "sub_filter_once off" not working as advertised? In-Reply-To: <20171212170701.GB78325@mdounin.ru> References: <995C5C9AD54A3C419AF1C20A8B6AB9A432AA2DE6@STLEISEXCMBX3.eis.equifax.com> <20171212170701.GB78325@mdounin.ru> Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432AA2E5E@STLEISEXCMBX3.eis.equifax.com> Oh geez - I knew it was going to be something stupid! Thanks! :) -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin Sent: Tuesday, December 12, 2017 11:07 AM To: nginx at nginx.org Subject: [IE] Re: "sub_filter_once off" not working as advertised? Hello!
On Tue, Dec 12, 2017 at 04:34:36PM +0000, Jason Whittington wrote: > I have a rule like the following where I am trying to replace > instances of /spf/ with /ec/apps/symmetry/spf/. I've used > sub_filter to do this sort of thing before and had luck with it. > > location /ec/apps/symmetry/ { > > proxy_pass http://stl-biz-d2-hrxsymmetry.devapp.c9.equifax.com:55943/; > proxy_redirect / https://$host/ec/apps/symmetry/; > > #sub filter doesn't work if the upstream server returns compressed or zipped content. > proxy_set_header Accept-Encoding identity; > > sub_filter once off; # doesn't seem to have an effect Given the space between "sub_filter" and "once", it is expected to replace "once" with "off" somewhere in the response. You probably mean to write sub_filter_once off; instead. [...] -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved. From alex.trofimchouk at gmail.com Fri Dec 15 12:48:20 2017 From: alex.trofimchouk at gmail.com (Alexander Trofimchouk) Date: Fri, 15 Dec 2017 15:48:20 +0300 Subject: nginx rewrite does not work without "permanent" directive Message-ID: <040101d375a2$fd5b7ee0$f8127ca0$@gmail.com> Hello! Please would you mind helping me. My nginx rewrite works only if I add "permanent" directive. Without it there is no rewrite seen even in browser's network log.
I have an image cache system which works this way: - there is a folder for uploaded images ( ) - folder for small thumbnails of uploaded images ( ) - php script (Symfony controller). If the user requests a not-yet-existing thumbnail for an already uploaded image, then the php script is called and it generates a thumbnail and stores it in a corresponding folder. For example: User requests http://site.com/files/imagecache/thumb/1.jpg, Nginx tries to find the file or redirects to http://site.com/www2/web/app_dev.php/image/cache?path=thumb/1.jpg But instead I get 404 not found /files/imagecache/thumb/1.jpg - this message is provided by Symfony (PHP), not by nginx itself. If I add "permanent" I get Symfony controller output in browser - which is OK. What did I do wrong? Full nginx config with folders, symfony config and ordinary php config follows. Thank you in advance! server { . root /home/anima/projects/sfedu/sfedu-php; ... # SYMFONY DEV location ~ ^/www2/web/(app_dev|config)\.php(/|$) { fastcgi_pass unix:/run/php/php7.0-fpm.sock; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; fastcgi_param DOCUMENT_ROOT $realpath_root; } # PROD location ~ ^/www2/web/app\.php(/|$) { fastcgi_pass unix:/run/php/php7.0-fpm.sock; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; fastcgi_param DOCUMENT_ROOT $realpath_root; internal; } location ~ ^/www2/web { allow all; try_files $uri /www2/web/app.php$is_args$args; } location ~ ^/www2 { deny all; } # END OF SYMFONY BLOCK location ~ (\.php$|\.php/|\.php\?)
{ fastcgi_pass unix:/run/php/php7.0-fpm.sock; fastcgi_index index.php; include fastcgi_params; set $path_info ""; set $real_script_name $fastcgi_script_name; if ($fastcgi_script_name ~ "^(.+\.php)(/.+)$") { set $real_script_name $1; set $path_info $2; } fastcgi_param SCRIPT_FILENAME $document_root$real_script_name; fastcgi_param SCRIPT_NAME $real_script_name; fastcgi_param PATH_INFO $path_info; fastcgi_param PATH_TRANSLATED $document_root$real_script_name; } location /files/imagecache { root /home/anima/projects/http-upload; try_files $uri @imagecache; } location /files { root /home/anima/projects/http-upload; } location @imagecache { rewrite ^/files/imagecache/(.*)$ /www2/web/app_dev.php/images/cache?path=$1 permanent; #Here should be no "permanent" } } Regards, Alexander Trofimchouk. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Dec 15 13:18:00 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 15 Dec 2017 13:18:00 +0000 Subject: nginx rewrite does not work without "permanent" directive In-Reply-To: <040101d375a2$fd5b7ee0$f8127ca0$@gmail.com> References: <040101d375a2$fd5b7ee0$f8127ca0$@gmail.com> Message-ID: <20171215131800.GT3127@daoine.org> On Fri, Dec 15, 2017 at 03:48:20PM +0300, Alexander Trofimchouk wrote: Hi there, > My nginx rewrite works only if I add "permanent" directive. Without it there > is no rewrite seen even in browser's network log. http://nginx.org/r/rewrite "rewrite" does not directly lead to "http redirect" without specific configuration. > For example: User requests http://site.com/files/imagecache/thumb/1.jpg, > Nginx tries to find the file or redirects to > http://site.com/www2/web/app_dev.php/image/cache?path=thumb/1.jpg No, that's not what your config says. 
nginx *rewrites* to /www2/web/app_dev.php/images/cache?path=thumb/1.jpg That rewritten request (depending on the omitted config) probably is handled within location ~ ^/www2/web/(app_dev|config)\.php(/|$) { which does the fastcgi_pass to PHP. If you want nginx to *redirect*, you have to tell it to, using one of the three documented methods. > If I add "permanent" I get Symfony controller output in browser - which is > OK. What did I do wrong? You left off "permanent" or "redirect" or didn't start the replacement string with "http://", if you wanted nginx to issue a redirect. f -- Francis Daly francis at daoine.org From jorohr at gmail.com Fri Dec 15 14:25:20 2017 From: jorohr at gmail.com (Johannes Rohr) Date: Fri, 15 Dec 2017 15:25:20 +0100 Subject: Moving Joomla from subdir to root -> rewrite / redirect problem Message-ID: Dear all, in order to have prettier URLs I have decided to move my joomla from /web/ to /, but I want old URLs to transparently redirect. I am struggling with how to do this. First I thought of something like location /web { try_files $uri $uri/ /index.php?$args; } but this obviously did not work, as the arguments passed to index.php still contain the /web/ part, which would have to go. So I tried instead: location / { rewrite ^/web/(.*)$ /$1; try_files $uri $uri/ /index.php?$args; } thinking that the first line would modify the URI, taking away the /web/ part so that in the next line the changed uri would be fed to the try_files command. But this only resulted in 404s. Can someone enlighten me on where I am going wrong with this? Thanks a lot in advance, Johannes -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 512 bytes Desc: OpenPGP digital signature URL: From alex.trofimchouk at gmail.com Fri Dec 15 14:32:25 2017 From: alex.trofimchouk at gmail.com (Alexander Trofimchouk) Date: Fri, 15 Dec 2017 17:32:25 +0300 Subject: nginx rewrite does not work without "permanent" directive In-Reply-To: <20171215131800.GT3127@daoine.org> References: <040101d375a2$fd5b7ee0$f8127ca0$@gmail.com> <20171215131800.GT3127@daoine.org> Message-ID: Thank you for the advice! I will add the "redirect". Nevertheless I wonder why the following (almost similar) config works? I get php output when requesting http://site.com//files/imagecache/small/1.jpg Though the rewrite goes to another location with some additional parameters, the configuration looks to me similar to the one in the first message. Config that works (nginx 1.8): location ~ (\.php$|\.php/|\.php\?) { fastcgi_pass 127.0.0.1:9002; fastcgi_index index.php; include fastcgi_params; set $path_info ""; set $real_script_name $fastcgi_script_name; if ($fastcgi_script_name ~ "^(.+\.php)(/.+)$") { set $real_script_name $1; set $path_info $2; } fastcgi_param SCRIPT_FILENAME $document_root$real_script_name; fastcgi_param SCRIPT_NAME $real_script_name; fastcgi_param PATH_INFO $path_info; fastcgi_param PATH_TRANSLATED $document_root$real_script_name; } location /files/imagecache { root /.mnt/www-sfedu-data; try_files $uri @imagecache; } location /files { root /.mnt/www-sfedu-data; } location @imagecache { rewrite ^/files/imagecache/(.*)$ /php_j/imagecache/index.php?q=$1; #no permanent or redirect here ! } Thank you in advance! Regards, Alexander Trofimchouk 2017-12-15 16:18 GMT+03:00 Francis Daly : > On Fri, Dec 15, 2017 at 03:48:20PM +0300, Alexander Trofimchouk wrote: > > Hi there, > > > My nginx rewrite works only if I add "permanent" directive. Without it > there > > is no rewrite seen even in browser's network log.
> > http://nginx.org/r/rewrite > > "rewrite" does not directly lead to "http redirect" without specific > configuration. > > > For example: User requests http://site.com/files/imagecache/thumb/1.jpg, > > Nginx tries to find the file or redirects to > > http://site.com/www2/web/app_dev.php/image/cache?path=thumb/1.jpg > > No, that's not what your config says. > > nginx *rewrites* to /www2/web/app_dev.php/images/cache?path=thumb/1.jpg > > That rewritten request (depending on the omitted config) probably is > handled within > > location ~ ^/www2/web/(app_dev|config)\.php(/|$) { > > which does the fastcgi_pass to PHP. > > If you want nginx to *redirect*, you have to tell it to, using one of > the three documented methods. > > > If I add "permanent" I get Symfony controller output in browser - which > is > > OK. What did I do wrong? > > You left off "permanent" or "redirect" or didn't start the replacement > string with "http://", if you wanted nginx to issue a redirect. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Dec 15 17:28:42 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 15 Dec 2017 17:28:42 +0000 Subject: nginx rewrite does not work without "permanent" directive In-Reply-To: References: <040101d375a2$fd5b7ee0$f8127ca0$@gmail.com> <20171215131800.GT3127@daoine.org> Message-ID: <20171215172842.GU3127@daoine.org> On Fri, Dec 15, 2017 at 05:32:25PM +0300, Alexander Trofimchouk wrote: Hi there, > Nevertheless I wonder why the following (almost similar) config works? I don't see it as similar. Am I missing something?
> I get php output when requesting for > http://site.com//files/imagecache/small/1.jpg > Though rewrite goes to another location with some additional parameters the > configurations looks to me similar to one in the first message. > Config that works(nginx 1.8): > > location ~ (\.php$|\.php/|\.php\?) { > fastcgi_pass 127.0.0.1:9002; > fastcgi_index index.php; > include fastcgi_params; > > set $path_info ""; > set $real_script_name $fastcgi_script_name; > if ($fastcgi_script_name ~ "^(.+\.php)(/.+)$") { > set $real_script_name $1; > set $path_info $2; > } > fastcgi_param SCRIPT_FILENAME $document_root$real_script_name; > fastcgi_param SCRIPT_NAME $real_script_name; > fastcgi_param PATH_INFO $path_info; > fastcgi_param PATH_TRANSLATED $document_root$real_script_name; > } The configuration that did not work for you was location ~ ^/www2/web/(app_dev|config)\.php(/|$) { fastcgi_pass unix:/run/php/php7.0-fpm.sock; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; fastcgi_param DOCUMENT_ROOT $realpath_root; } which looks very different from the location{} above that does work for you. Both do a fastcgi_pass, so the real question is: what does your fastcgi server do that is different between the two calls? Which fastcgi_param values does your fastcgi server care about, and do they have different values in the two cases? f -- Francis Daly francis at daoine.org From nskachkov at apple.com Sat Dec 16 01:32:52 2017 From: nskachkov at apple.com (Nikolay Skachkov) Date: Fri, 15 Dec 2017 17:32:52 -0800 Subject: ssl_password_file directive is duplicate Message-ID: Hi! I want to use several ssl_certificate followed by ssl_certificate_key in configuration. Also I try to supply ssl_password_file for each pair, assuming that each key has its own password. I get: ssl_password_file directive is duplicate Please advise.
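The shape of the configuration I am trying is roughly this (a hypothetical sketch; the file names are made up for illustration):

```nginx
# hypothetical sketch of the intent: one password file per certificate pair
ssl_certificate         rsa.crt;
ssl_certificate_key     rsa.key;
ssl_password_file       rsa.pass;

ssl_certificate         ecdsa.crt;
ssl_certificate_key     ecdsa.key;
ssl_password_file       ecdsa.pass;  # rejected: "ssl_password_file directive is duplicate"
```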
Nikolay -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Dec 17 02:13:42 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 17 Dec 2017 05:13:42 +0300 Subject: ssl_password_file directive is duplicate In-Reply-To: References: Message-ID: <20171217021342.GL78325@mdounin.ru> Hello! On Fri, Dec 15, 2017 at 05:32:52PM -0800, Nikolay Skachkov wrote: > Hi! > > I want to use several ssl_certificate followed by ssl_certificate_key in configuration. > Also I try to supply ssl_password_file for each pair. Assuming that each key has its own password. > > I get: ssl_password_file directive is duplicate Only one ssl_password_file directive is allowed in a given configuration context. You can, however, provide multiple passwords in a single file, one per line; they will be tried in order. See http://nginx.org/r/ssl_password_file for details. -- Maxim Dounin http://mdounin.ru/ From joel.parker.gm at gmail.com Mon Dec 18 19:21:41 2017 From: joel.parker.gm at gmail.com (Joel Parker) Date: Mon, 18 Dec 2017 13:21:41 -0600 Subject: curl connection refused Message-ID: I have seen this a lot on Google but have not been able to find a suitable solution. My firewall is set up correctly.
I am listening on port 80 netstat -anltp Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1114/nginx: master curl localhost - works curl 172.31.22.230 - works when running on the local machine but when I try to run it from the outside, I get: curl http://172.31.22.230/ curl: (7) Failed to connect to 172.31.22.228 port 80: Connection refused I have made the config as simple as possible but have not figured out a way to run curl http://172.31.22.230 from another machine Here is the config: # cat nginx.conf load_module modules/ndk_http_module.so; load_module modules/ngx_http_lua_module.so; user ec2-user ec2-user; worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; gzip on; server { listen 80; location / { proxy_pass http://18.220.148.14; } } } ## End http -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Dec 18 19:26:50 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Dec 2017 19:26:50 +0000 Subject: curl connection refused In-Reply-To: References: Message-ID: <20171218192650.GV3127@daoine.org> On Mon, Dec 18, 2017 at 01:21:41PM -0600, Joel Parker wrote: Hi there, > I have seen this a lot on google but have not been able to find a suitable > solution. My firewall is setup correctly. This looks like something that nginx can not do anything about. > curl localhost - works > curl 172.31.22.230 -works when running on the local machine That suggests that the local machine has network connectivity to and from that IP address. > but when I try to run it from the outside, I get: > > curl http://172.31.22.230/ > curl: (7) Failed to connect to 172.31.22.228 port 80: Connection refused That suggests that the outside machine does not have network connectivity
Note that you have one address in the curl request, and a different one in the response. If that is real, something odd is going on. > I have made the config as simple as possible but have not figured out a way > to run > curl http://172.31.22.230 from another machine If the traffic does not get to nginx, nginx can do nothing about the traffic. You need to ensure that your network configuration -- routing, firewalling, everything else -- is correct, before nginx gets involved. Good luck with it, f -- Francis Daly francis at daoine.org From joel.parker.gm at gmail.com Mon Dec 18 19:31:57 2017 From: joel.parker.gm at gmail.com (Joel Parker) Date: Mon, 18 Dec 2017 13:31:57 -0600 Subject: curl connection refused In-Reply-To: <20171218192650.GV3127@daoine.org> References: <20171218192650.GV3127@daoine.org> Message-ID: Yeah, network connectivity, firewall, etc. are all configured correctly, but I am still having issues when trying to hit the proxy. It is very strange, since the ports are listening. I even turned off the firewall and saw the same issues. Joel On Mon, Dec 18, 2017 at 1:26 PM, Francis Daly wrote: > On Mon, Dec 18, 2017 at 01:21:41PM -0600, Joel Parker wrote: > > Hi there, > > > I have seen this a lot on google but have not been able to find a > suitable > > solution. My firewall is setup correctly. > > This looks like something that nginx can not do anything about. > > > curl localhost - works > > curl 172.31.22.230 -works when running on the local machine > > That suggests that the local machine has network connectivity to and > from that IP address. > > > but when I try to run it from the outside, I get: > > > > curl http://172.31.22.230/ > > curl: (7) Failed to connect to 172.31.22.228 port 80: Connection refused > > That suggests that the outside machine does not have network connectivity > both to and from that IP address. > > Note that you have one address in the curl request, and a different one > in the response. If that is real, something odd is going on.
> > I have made the config as simple as possible but have not figured out a > way > > to run > > curl http://172.31.22.230 from another machine > > If the traffic does not get to nginx, nginx can do nothing about the > traffic. > > You need to ensure that your network configuration -- routing, > firewalling, everything else -- is correct, before nginx gets involved. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Dec 18 19:36:52 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Dec 2017 19:36:52 +0000 Subject: Moving Joomla from subdir to root -> rewrite / redirect problem In-Reply-To: References: Message-ID: <20171218193652.GW3127@daoine.org> On Fri, Dec 15, 2017 at 03:25:20PM +0100, Johannes Rohr wrote: Hi there, I do not know joomla; and I do not have a direct answer for you. But some parts of your question seem unclear to me. Perhaps they are also unclear to someone who otherwise could give an answer. If you can clarify them, maybe that other person can help. > in order to have prettier URLs I have decided to move my joomla from > /web/ to /, but I want old URLs to transparently redirect. When you say "transparently redirect", can you give an example of what you mean? As in: when I request /web/something, should I get an http redirect so that I next request /something; or should I get something else? > location /web { > try_files $uri $uri/ /index.php?$args; > } > > but this obviously did not work as the arguments passed to index.php > still contain the /web/ part, which would have to go. Without knowing joomla... why is this obvious? Why would arguments to index.php include /web/? And: what, specifically, did you do on the joomla side, to move it from /web/ to /?
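One possible reading of "transparently redirect" is an HTTP 301 from the old prefix to the new one. A minimal sketch under that assumption (and assuming the Joomla files have already been moved to the document root):

```nginx
# Hypothetical: issue a permanent redirect for anything under /web/,
# then serve the relocated site from / as usual.
location /web/ {
    rewrite ^/web/(.*)$ /$1 permanent;
}

location / {
    try_files $uri $uri/ /index.php?$args;
}
```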
> So I tried instead: > > location / { > rewrite ^/web/(.*)$ /$1; > try_files $uri $uri/ /index.php?$args; > } > > thinking that the first line would modify the URI, taking away the /web/ > part so that in the next line the changed uri would be fed to the > try_files command. Yes. It does not do anything about "arguments to index.php", though, which was your reported problem with the first attempt. (I'd probably include "break" on the rewrite line; but it might depend on what the rest of your config says.) > But this only resulted in 404s. Can you show one request that you make, and describe how you would like it to be handled? Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Dec 18 19:39:21 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Dec 2017 19:39:21 +0000 Subject: curl connection refused In-Reply-To: References: <20171218192650.GV3127@daoine.org> Message-ID: <20171218193921.GX3127@daoine.org> On Mon, Dec 18, 2017 at 01:31:57PM -0600, Joel Parker wrote: Hi there, > network connectivity, firewall, etc. are all configured correctly but still > having issues when trying to hit the proxy. It is very strange, since the > ports are listening. I even turned off the firewall and saw the same issues. When you say "configured correctly", can you see the incoming traffic when you run tcpdump on the nginx server? If tcpdump does not see it, then the problem is not on the nginx server. If tcpdump does see it, then the problem may be on the nginx server (but still, probably not in the nginx application.) f -- Francis Daly francis at daoine.org From joel.parker.gm at gmail.com Mon Dec 18 20:56:13 2017 From: joel.parker.gm at gmail.com (Joel Parker) Date: Mon, 18 Dec 2017 14:56:13 -0600 Subject: curl connection refused In-Reply-To: <20171218193921.GX3127@daoine.org> References: <20171218192650.GV3127@daoine.org> <20171218193921.GX3127@daoine.org> Message-ID: Yeah, it was a network issue. tcpdump helped.
Thanks > On Dec 18, 2017, at 1:39 PM, Francis Daly wrote: > > On Mon, Dec 18, 2017 at 01:31:57PM -0600, Joel Parker wrote: > > Hi there, > >> network connectivity, firewall, etc. are all configured correctly but still >> having issues when trying to hit the proxy. It's is very strange, since the >> ports are listening. I even turned off the firewall and the same issues. > > When you say "configured correctly", can you see the incoming traffic > when you run tcpdump on the nginx server? > > If tcpdump does not see it, then the problem is not on the nginx > server. If tcpdump does see it, then the problem may be on the nginx > server (but still, probably not in the nginx application.) > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Dec 18 22:26:39 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 18 Dec 2017 22:26:39 +0000 Subject: Problem using `proxy_redirect` to rewrite the same Location 2-or-more times? In-Reply-To: References: Message-ID: <20171218222639.GY3127@daoine.org> On Fri, Dec 08, 2017 at 08:48:55PM +0000, Alec Muffett wrote: Hi there, I have not tested this, but it looks like you might already have the answer... > - the origin responds with a 302: > > "Location: http://www.foo.com/" ...or > "Location: http://www.bar.com/" ...or > "Location: http://www.baz.com/" > > ...and I have the following dynamic rewriting that I want to perform: > > * foo.com -> ding.org > * bar.com -> dong.org > * baz.com -> dell.org > * ...more? > The issue seems to be that `proxy_redirect` stops after executing the first > rule that succeeds? > > Is this intended behaviour, please? And is there a way to achieve what I > want, e.g. via options-flags or Lua? I am making heavy use of subs_filter, > proxy_cookie_domain, etc. I do not know if that stop-on-first-match is the intended behaviour. 
(But I suspect that proxy_redirect expects to be called in a much simpler scenario than yours.) You do currently have a header_filter_by_lua_block{} section, where you appear to rewrite three specific content-security-related headers based on multiple regex matches. Could you not do exactly the same with the "Location" header, since it is just another header from upstream that will be sent to the client? It seems so obvious that I guess you must have tried it; but you didn't explicitly say that you did, so maybe it is worth a shot. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Dec 19 02:14:43 2017 From: nginx-forum at forum.nginx.org (erick3k) Date: Mon, 18 Dec 2017 21:14:43 -0500 Subject: Problem with proxy_cache_path and limit Message-ID: <7c065faa27afb646464401051c537bdd.NginxMailingListEnglish@forum.nginx.org> nginx -v nginx version: nginx/1.13.7 # Server globals user www-data; worker_processes auto; worker_rlimit_nofile 65535; error_log /var/log/nginx/error.log crit; pid /var/run/nginx.pid; # Worker config events { worker_connections 1024; use epoll; multi_accept on; } http { proxy_cache_path /home/admin/cachemaster levels=1:2 keys_zone=my_cache:100m max_size=1g inactive=60m use_temp_path=off; } server { listen 107.170.204.190:443; server_name sf1.example www.sf1.example; ssl on; ssl_certificate /home/admin/conf/web/ssl.sf1.example.pem; ssl_certificate_key /home/admin/conf/web/ssl.sf1.example.key; error_log /var/log/apache2/domains/sf1.example.error.log error; location / { proxy_cache my_cache; proxy_set_header X-Real-IP $remote_addr; proxy_cache_revalidate on; proxy_cache_min_uses 3; proxy_cache_valid 200 301 7d; proxy_pass https://example:443; proxy_ignore_headers X-Accel-Expires Expires Cache-Control; } } but the folder /home/admin/cachemaster fills up beyond 1gb, what am i missing Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277816,277816#msg-277816 From zchao1995 at gmail.com Tue Dec 19 
02:41:41 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Mon, 18 Dec 2017 18:41:41 -0800 Subject: Problem with proxy_cache_path and limit In-Reply-To: <7c065faa27afb646464401051c537bdd.NginxMailingListEnglish@forum.nginx.org> References: <7c065faa27afb646464401051c537bdd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! > nginx -v > nginx version: nginx/1.13.7 > # Server globals > user www-data; > worker_processes auto; > worker_rlimit_nofile 65535; > error_log /var/log/nginx/error.log crit; > pid /var/run/nginx.pid; > > > # Worker config > events { > worker_connections 1024; > use epoll; > multi_accept on; > } > > > http { > > proxy_cache_path /home/admin/cachemaster levels=1:2 keys_zone=my_cache:100m > max_size=1g inactive=60m use_temp_path=off; > > } > server { > listen 107.170.204.190:443; > server_name sf1.example www.sf1.example; > ssl on; > ssl_certificate /home/admin/conf/web/ssl.sf1.example.pem; > ssl_certificate_key /home/admin/conf/web/ssl.sf1.example.key; > error_log /var/log/apache2/domains/sf1.example.error.log error; > > location / { > proxy_cache my_cache; > proxy_set_header X-Real-IP $remote_addr; > proxy_cache_revalidate on; > proxy_cache_min_uses 3; > proxy_cache_valid 200 301 7d; > proxy_pass https://example:443 ; > proxy_ignore_headers X-Accel-Expires Expires Cache-Control; > > } > } > > but the folder /home/admin/cachemaster fills up beyond 1gb, what am i > missing > > Thanks I think it is 1GB rather than 1Gb, since the minimum unit is bytes. How long has this been going on? Or was it just a sudden burst? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Tue Dec 19 02:51:36 2017 From: nginx-forum at forum.nginx.org (erick3k) Date: Mon, 18 Dec 2017 21:51:36 -0500 Subject: Problem with proxy_cache_path and limit In-Reply-To: References: Message-ID: Hi mate, I tried both: nginx -t nginx: [emerg] invalid max_size value "max_size=1GB" in /etc/nginx/nginx.conf:109 nginx: configuration file /etc/nginx/nginx.conf test failed root at nyc1:~# nginx -t nginx: [emerg] invalid max_size value "max_size=1Gb" in /etc/nginx/nginx.conf:109 nginx: configuration file /etc/nginx/nginx.conf test failed root at nyc1:~# Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277816,277818#msg-277818 From nginx-forum at forum.nginx.org Tue Dec 19 02:59:22 2017 From: nginx-forum at forum.nginx.org (erick3k) Date: Mon, 18 Dec 2017 21:59:22 -0500 Subject: Problem with proxy_cache_path and limit In-Reply-To: <7c065faa27afb646464401051c537bdd.NginxMailingListEnglish@forum.nginx.org> References: <7c065faa27afb646464401051c537bdd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4822f4b2bd278bc748cf73d2b30e5015.NginxMailingListEnglish@forum.nginx.org> Nothing; I even tried 1024m. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277816,277819#msg-277819 From abiliojr at gmail.com Tue Dec 19 05:36:00 2017 From: abiliojr at gmail.com (Abilio Marques) Date: Tue, 19 Dec 2017 06:36:00 +0100 Subject: limit_conn not working Message-ID: limit_conn is not working for me. I set up a test in nodejs; I'm doing GET requests to http://localhost/, they are coming from different connections (different origin ports), and all the connections stay open until the very end; still, no response other than 200 is received. I double-checked with wireshark. What am I missing?? Minimal configuration I can reproduce it with: https://paste.ngx.cc/70 Source code for the test: https://paste.ngx.cc/6f -------------- next part -------------- An HTML attachment was scrubbed...
URL: From iseeprimenumbers at gmail.com Tue Dec 19 12:49:55 2017 From: iseeprimenumbers at gmail.com (Francisco V.) Date: Tue, 19 Dec 2017 09:49:55 -0300 Subject: Mailing list help Message-ID: Hello all, Sorry to bother; I've unsubscribed from this mailing list (in theory), did all the steps, but I'm still getting mails from it. What's wrong? The volume, for now, is easily manageable, but shouldn't I be receiving nothing if I'm unsubscribed? Is there some admin that could help me? Thank you in advance, Francisco -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 19 13:45:03 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Dec 2017 16:45:03 +0300 Subject: Problem with proxy_cache_path and limit In-Reply-To: <7c065faa27afb646464401051c537bdd.NginxMailingListEnglish@forum.nginx.org> References: <7c065faa27afb646464401051c537bdd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171219134503.GQ78325@mdounin.ru> Hello! On Mon, Dec 18, 2017 at 09:14:43PM -0500, erick3k wrote: [...] > proxy_cache_path /home/admin/cachemaster levels=1:2 keys_zone=my_cache:100m > max_size=1g inactive=60m use_temp_path=off; [...] > but the folder /home/admin/cachemaster fills up beyond 1gb, what am i > missing Could you please clarify what exactly you observe? Note that "max_size=" is not a strict limit, but rather a threshold at which nginx will start cleaning old cache items. It doesn't guarantee that the size of the folder will not exceed the limit specified - rather, nginx will take action when the size exceeds the limit. Note well that with "use_temp_path=off" temporary files are placed into the cache directory, but they will not be counted against max_size. That is, with the specified configuration it is expected that the folder can contain more than 1gb under load. Especially if there are large cacheable responses in flight, which will occupy space as temporary files.
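One way to keep in-flight temporary files from inflating the cache folder is to store them elsewhere, so that the folder size tracks max_size more closely. A sketch, with assumed paths:

```nginx
# With the default use_temp_path=on, temporary files go to proxy_temp_path
# instead of the cache directory; only finished cache items count toward
# max_size, and they are the only thing in /home/admin/cachemaster.
proxy_cache_path /home/admin/cachemaster levels=1:2 keys_zone=my_cache:100m
                 max_size=1g inactive=60m;
proxy_temp_path /home/admin/cachetemp 1 2;
```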
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Dec 19 14:26:58 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Dec 2017 17:26:58 +0300 Subject: limit_conn not working In-Reply-To: References: Message-ID: <20171219142658.GR78325@mdounin.ru> Hello! On Tue, Dec 19, 2017 at 06:36:00AM +0100, Abilio Marques wrote: > limit_conn is not working for me. I set up a test in nodejs, I'm doing GET > requests to http://localhost/, they are coming from different connections > (different origin ports), and all the connections are still open until the > very end, still, no response other than 200 is received. I double check > with wireshark. > > What am I missing?? > > Minimal configuration I can reproduce it with: https://paste.ngx.cc/70 > Source code for the test: https://paste.ngx.cc/6f The limit_conn limit only limits connections with active requests. Moreover, it only applies after reading request headers - as nginx needs to know requested host and URI to check limits appropriate for particular server and location blocks. As a result, it is almost impossible to trigger limit_conn by requests to small static files. To trigger limit_conn, consider testing it with files large enough to fill up socket buffers, and/or with proxying. -- Maxim Dounin http://mdounin.ru/ From abiliojr at gmail.com Tue Dec 19 14:29:22 2017 From: abiliojr at gmail.com (Abilio Marques) Date: Tue, 19 Dec 2017 15:29:22 +0100 Subject: limit_conn not working In-Reply-To: <20171219142658.GR78325@mdounin.ru> References: <20171219142658.GR78325@mdounin.ru> Message-ID: Thanks, I imagined to be something like that, but this is not obvious from the documentation. Is there a way to clarify it for future readers? On Tue, Dec 19, 2017 at 3:26 PM, Maxim Dounin wrote: > Hello! > > On Tue, Dec 19, 2017 at 06:36:00AM +0100, Abilio Marques wrote: > > > limit_conn is not working for me. 
I set up a test in nodejs, I'm doing > GET > > requests to http://localhost/, they are coming from different > connections > > (different origin ports), and all the connections are still open until > the > > very end, still, no response other than 200 is received. I double check > > with wireshark. > > > > What am I missing?? > > > > Minimal configuration I can reproduce it with: https://paste.ngx.cc/70 > > Source code for the test: https://paste.ngx.cc/6f > > The limit_conn limit only limits connections with active requests. > Moreover, it only applies after reading request headers - as nginx > needs to know requested host and URI to check limits appropriate > for particular server and location blocks. > > As a result, it is almost impossible to trigger limit_conn by > requests to small static files. To trigger limit_conn, consider > testing it with files large enough to fill up socket buffers, > and/or with proxying. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 19 15:37:56 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Dec 2017 18:37:56 +0300 Subject: limit_conn not working In-Reply-To: References: <20171219142658.GR78325@mdounin.ru> Message-ID: <20171219153755.GS78325@mdounin.ru> Hello! On Tue, Dec 19, 2017 at 03:29:22PM +0100, Abilio Marques wrote: > I imagined to be something like that, but this is not obvious from the > documentation. Is there a way to clarify it for future readers? The documentation already says (http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html): : Not all connections are counted. A connection is counted only if : it has a request processed by the server and the whole request : header has already been read. in an attempt to clarify things. 
This is more or less identical to what I wrote. The difference is recommendations on how to better trigger the limit, but I doubt such recommendations should be in the documentation. -- Maxim Dounin http://mdounin.ru/ From abiliojr at gmail.com Tue Dec 19 17:03:46 2017 From: abiliojr at gmail.com (Abilio Marques) Date: Tue, 19 Dec 2017 18:03:46 +0100 Subject: limit_conn not working In-Reply-To: <20171219153755.GS78325@mdounin.ru> References: <20171219142658.GR78325@mdounin.ru> <20171219153755.GS78325@mdounin.ru> Message-ID: For me the documentation reads in a way in which a connection with keep-alive that already received one request satisfies those two conditions: - It has a request processed by the server. (processed is past tense, which is true after the first one was made) - The whole request has already been read. While the actual behavior is: it has a request "being" processed by the server and the whole request header has already been read. Once the request is completely processed it doesn't count anymore. On Tue, Dec 19, 2017 at 4:37 PM, Maxim Dounin wrote: > Hello! > > On Tue, Dec 19, 2017 at 03:29:22PM +0100, Abilio Marques wrote: > > > I imagined to be something like that, but this is not obvious from the > > documentation. Is there a way to clarify it for future readers? > > The documentation already says > (http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html): > > : Not all connections are counted. A connection is counted only if > : it has a request processed by the server and the whole request > : header has already been read. > > in an attempt to clarify things. This is more or less identical > to what I wrote. The difference is recommendations on how to > better trigger the limit, but I doubt such recommendations should > be in the documentation. 
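Following the advice earlier in this thread — limit_conn is easiest to observe with responses large enough to keep connections busy, for example proxied ones — a minimal test configuration might look like this (zone name, ports, and upstream are illustrative, not from the thread):

```nginx
http {
    # Key on the client address; 10m of shared memory for the counters.
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;

    server {
        listen 8080;

        location / {
            # A third concurrent request-serving connection from the same
            # address is rejected (503 by default).
            limit_conn peraddr 2;
            proxy_pass http://127.0.0.1:9000;
        }
    }
}
```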
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Dec 19 17:48:41 2017 From: nginx-forum at forum.nginx.org (erick3k) Date: Tue, 19 Dec 2017 12:48:41 -0500 Subject: Problem with proxy_cache_path and limit In-Reply-To: <20171219134503.GQ78325@mdounin.ru> References: <20171219134503.GQ78325@mdounin.ru> Message-ID: Interesting. What I observe is the folder cachemaster going over 1gb, way over, until it fills the hard drive. So what is the correct way to limit the cache size used by nginx? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277816,277831#msg-277831 From nginx-forum at forum.nginx.org Tue Dec 19 17:51:35 2017 From: nginx-forum at forum.nginx.org (erick3k) Date: Tue, 19 Dec 2017 12:51:35 -0500 Subject: Problem with proxy_cache_path and limit In-Reply-To: <20171219134503.GQ78325@mdounin.ru> References: <20171219134503.GQ78325@mdounin.ru> Message-ID: <097b1b1af807cfdffff0561e6099fc45.NginxMailingListEnglish@forum.nginx.org> After removing use_temp_path=off, max_size is still ignored; it still fills the hard drive, just in the default nginx temp path. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277816,277832#msg-277832 From mdounin at mdounin.ru Tue Dec 19 18:10:27 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Dec 2017 21:10:27 +0300 Subject: Problem with proxy_cache_path and limit In-Reply-To: References: <20171219134503.GQ78325@mdounin.ru> Message-ID: <20171219181027.GW78325@mdounin.ru> Hello! On Tue, Dec 19, 2017 at 12:48:41PM -0500, erick3k wrote: > what I observe is the folder cachemaster going over 1gb, way over until it > fills the hard drive. Please define "until it fills the hard drive".
It is ok if you have only 1gb or 2gb free, but it is certainly not ok if it fills a free 200gb drive. Additional question is about temporary files. As previously explained, temporary files are not counted against the cache max_size limit, yet will occupy space in the same folder due to "use_temp_path=off". Consider using "use_temp_path=on" (the default) with a proxy_temp_path pointing to a different folder to test if the space is occupied by the cache or temporary files. You may also want to take a look into the error log, to see if there are any crit/alert/emerg errors. Also you may want to check what the nginx cache manager process is doing. > so what is the correct way to limit the cache size used by nginx? The "proxy_cache_path ... max_size=..." is the correct way. Though it should be understood that it doesn't limit the size of the folder, but rather instruct nginx to remove old cache items if the size of the cache is above the limit. -- Maxim Dounin http://mdounin.ru/ From Jason.Whittington at equifax.com Tue Dec 19 18:12:12 2017 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Tue, 19 Dec 2017 18:12:12 +0000 Subject: [IE] Re: limit_conn not working In-Reply-To: References: <20171219142658.GR78325@mdounin.ru> Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432AB30C2@STLEISEXCMBX3.eis.equifax.com> If you have a github account you can fork the nginx wiki troubleshooting and send them a pull request ? https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/ Look for ?Edit this page? in the rightmost column. Jason From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Abilio Marques Sent: Tuesday, December 19, 2017 8:29 AM To: nginx at nginx.org Subject: [IE] Re: limit_conn not working Thanks, I imagined to be something like that, but this is not obvious from the documentation. Is there a way to clarify it for future readers? On Tue, Dec 19, 2017 at 3:26 PM, Maxim Dounin > wrote: Hello! 
On Tue, Dec 19, 2017 at 06:36:00AM +0100, Abilio Marques wrote: > limit_conn is not working for me. I set up a test in nodejs, I'm doing GET > requests to http://localhost/, they are coming from different connections > (different origin ports), and all the connections are still open until the > very end, still, no response other than 200 is received. I double check > with wireshark. > > What am I missing?? > > Minimal configuration I can reproduce it with: https://paste.ngx.cc/70 > Source code for the test: https://paste.ngx.cc/6f The limit_conn limit only limits connections with active requests. Moreover, it only applies after reading request headers - as nginx needs to know requested host and URI to check limits appropriate for particular server and location blocks. As a result, it is almost impossible to trigger limit_conn by requests to small static files. To trigger limit_conn, consider testing it with files large enough to fill up socket buffers, and/or with proxying. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax? is a registered trademark of Equifax Inc. All rights reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 19 18:17:33 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Dec 2017 21:17:33 +0300 Subject: limit_conn not working In-Reply-To: References: <20171219142658.GR78325@mdounin.ru> <20171219153755.GS78325@mdounin.ru> Message-ID: <20171219181733.GX78325@mdounin.ru> Hello! 
On Tue, Dec 19, 2017 at 06:03:46PM +0100, Abilio Marques wrote: > For me the documentation reads in a way in which a connection with > keep-alive that already received one request satisfies those two conditions: > - It has a request processed by the server. (processed is past tense, which > is true after the first one was made) > - The whole request has already been read. > > While the actual behavior is: it has a request "being" processed by the > server and the whole request header has already been read. Once the request > is completely processed it doesn't count anymore. Yes, thanks, this needs fixing. I'll take care of this. (Funny enough, the correct tense was changed to an incorrect one during a "text review", which was supposed to fix language errors, http://hg.nginx.org/nginx.org/rev/95c3c3bbf1ce#l20.15.) -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Dec 19 18:31:10 2017 From: nginx-forum at forum.nginx.org (erick3k) Date: Tue, 19 Dec 2017 13:31:10 -0500 Subject: Problem with proxy_cache_path and limit In-Reply-To: <20171219181027.GW78325@mdounin.ru> References: <20171219181027.GW78325@mdounin.ru> Message-ID: <6b4cb262d18ddbdda19ee616b27e5681.NginxMailingListEnglish@forum.nginx.org> And that is done by the cache manager, correct? When is this function triggered? I suspect the cache manager is not working; how can I tell if it is loaded? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277816,277836#msg-277836 From mdounin at mdounin.ru Tue Dec 19 19:39:30 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Dec 2017 22:39:30 +0300 Subject: Problem with proxy_cache_path and limit In-Reply-To: <6b4cb262d18ddbdda19ee616b27e5681.NginxMailingListEnglish@forum.nginx.org> References: <20171219181027.GW78325@mdounin.ru> <6b4cb262d18ddbdda19ee616b27e5681.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171219193930.GY78325@mdounin.ru> Hello!
On Tue, Dec 19, 2017 at 01:31:10PM -0500, erick3k wrote: > and that is done by the cache manager correct? when is this function > triggered ? i suspect the cache manager is not working, how can i tell if is > loaded? Try something like "ps -alx", cache manager process title will be "nginx: cache manager process". -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Dec 19 20:34:23 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Dec 2017 23:34:23 +0300 Subject: limit_conn not working In-Reply-To: <20171219181733.GX78325@mdounin.ru> References: <20171219142658.GR78325@mdounin.ru> <20171219153755.GS78325@mdounin.ru> <20171219181733.GX78325@mdounin.ru> Message-ID: <20171219203423.GZ78325@mdounin.ru> Hello! On Tue, Dec 19, 2017 at 09:17:33PM +0300, Maxim Dounin wrote: > On Tue, Dec 19, 2017 at 06:03:46PM +0100, Abilio Marques wrote: > > > For me the documentation reads in a way in which a connection with > > keep-alive that already received one request satisfies those two conditions: > > - It has a request processed by the server. (processed is past tense, which > > is true after the first one was made) > > - The whole request has already been read. > > > > While the actual behavior is: it has a request "being" processed by the > > server and the whole request header has already been read. Once the request > > is completely processed it doesn't count anymore. > > Yes, thanks, this needs fixing. I'll take care of this. Should be fixed now, http://hg.nginx.org/nginx.org/rev/4931a7ba6a32. -- Maxim Dounin http://mdounin.ru/ From alec.muffett at gmail.com Tue Dec 19 20:53:58 2017 From: alec.muffett at gmail.com (Alec Muffett) Date: Tue, 19 Dec 2017 20:53:58 +0000 Subject: Problem using `proxy_redirect` to rewrite the same Location 2-or-more times? In-Reply-To: <20171218222639.GY3127@daoine.org> References: <20171218222639.GY3127@daoine.org> Message-ID: Hi Francis! 
On 18 December 2017 at 22:26, Francis Daly wrote: > > You do currently have a header_filter_by_lua_block{} section, where you > appear to rewrite three specific content-security-related headers based > on multiple regex matches. > > Could you not do exactly the same with the "Location" header, since it > is just another header from upstream that will be sent to the client? > > It seems so obvious that I guess you must have tried it; but you didn't > explicitly say that you did, so maybe it is worth a shot. > Eventually it struck me that this was indeed a solution, but only after I worked out that the proxy_cookie_domain command was similarly operating on a first-match-wins principle. Then I just dove into it, and coded it. I am now using Lua to global-search-and-replace upon "Access-Control-Allow-Origin", "Content-Security-Policy", "Content-Security-Policy-Report-Only", "Link", "Location" and "Set-Cookie", and I am getting the behaviour that I wanted, and I am keeping an eye open for other headers to fix. I believe that I will be able to use the same/similar mechanism to obviate use of `more_clear_headers`, and possibly I can find somewhere convenient to replace the inbound `proxy_set_header` and `proxy_hide_header` with Lua, too. Perhaps the best thing to do would be (to get someone?) to document this behaviour in the `proxy_redirect` (etc) manuals; I feel that I was led astray because (by comparison) multiple invocations of `subs_filter` work exactly as expected. What do you think? -- http://dropsafe.crypticide.com/aboutalecm -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Dec 19 21:13:10 2017 From: nginx-forum at forum.nginx.org (erick3k) Date: Tue, 19 Dec 2017 16:13:10 -0500 Subject: Problem with proxy_cache_path and limit In-Reply-To: <20171219193930.GY78325@mdounin.ru> References: <20171219193930.GY78325@mdounin.ru> Message-ID: Yep is there, so am still back to 0, nginx is still filling up the hard disk please help Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277816,277840#msg-277840 From lists at lazygranch.com Wed Dec 20 08:07:16 2017 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 20 Dec 2017 00:07:16 -0800 Subject: Centos 7 file permission problem Message-ID: <20171220000716.088a1816.lists@lazygranch.com> I'm setting up a web server on a Centos 7 VPS. I'm relatively sure I have the firewalls set up properly since I can see my browser requests in the access and error log. That said, I have file permission problem. nginx 1.12.2 Linux servername 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux nginx.conf (with comments removed for brevity and my domain name remove because google) ------- user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; server { listen 80; server_name mydomain.com www.mydomain.com; return 301 https://$host$request_uri; } server { listen 443 ssl http2; server_name mydomain.com www.mydomain.com; ssl_dhparam /etc/ssl/certs/dhparam.pem; root /usr/share/nginx/html/mydomain.com/public_html; ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # 
managed by Certbot ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { root /usr/share/nginx/html/mydomain.com/public_html; index index.html index.htm; } # error_page 404 /404.html; location = /40x.html { } # error_page 500 502 503 504 /50x.html; location = /50x.html { } } } I have firefox set up with no cache and do not save history. ------------------------------------------------------------- access log: mypi - - [20/Dec/2017:07:46:44 +0000] "GET /index.html HTTP/2.0" 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0" "-" myip - - [20/Dec/2017:07:48:44 +0000] "GET /index.html HTTP/2.0" 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0" "-" ------------------------------- error log: 2017/12/20 07:46:44 [error] 10146#0: *48 open() "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed (13: Permission denied), client: myip, server: mydomain.com, request: "GET /index.html HTTP/2.0", host: "mydomain.com" 2017/12/20 07:48:44 [error] 10146#0: *48 open() "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed (13: Permission denied), client: myip, server: mydomain.com, request: "GET /index.html HTTP/2.0", host: "mydomain.com" Directory permissions: For now, I made everything 755 with ownership nginx:nginx; I did chmod and chown with the -R option /etc/nginx: drwxr-xr-x. 4 nginx nginx 4096 Dec 20 07:39 nginx /usr/share/nginx: drwxr-xr-x. 4 nginx nginx 33 Dec 15 08:47 nginx /var/log: drwx------. 2 nginx nginx 4096 Dec 20 07:51 nginx -------------------------------------------------------------- systemctl status nginx ? 
nginx.service - The nginx HTTP and reverse proxy server Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled) Active: active (running) since Wed 2017-12-20 04:21:37 UTC; 3h 37min ago Process: 10145 ExecReload=/bin/kill -s HUP $MAINPID (code=exited, status=0/SUCCESS) Main PID: 9620 (nginx) CGroup: /system.slice/nginx.service ?? 9620 nginx: master process /usr/sbin/nginx ??10146 nginx: worker process Dec 20 07:18:33 servername systemd[1]: Reloaded The nginx HTTP and reverse proxy server. -------------------------------------------------------------- ps aux | grep nginx root 9620 0.0 0.3 71504 3848 ? Ss 04:21 0:00 nginx: master process /usr/sbin/nginx nginx 10146 0.0 0.4 72004 4216 ? S 07:18 0:00 nginx: worker process root 10235 0.0 0.0 112660 952 pts/1 S+ 08:01 0:00 grep ngin ----------------------------------- firewall-cmd --zone=public --list-all public (active) target: default icmp-block-inversion: no interfaces: eth0 sources: services: ssh dhcpv6-client http https ports: protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: From mig at 1984.cz Wed Dec 20 08:21:45 2017 From: mig at 1984.cz (mig at 1984.cz) Date: Wed, 20 Dec 2017 09:21:45 +0100 Subject: Nginx cache returns MISS after a few hours, can't be set up to cache "forever" In-Reply-To: <20171128121831.GB546@Romans-MacBook-Air.local> References: <20171128113232.jmmf7lkk4i3isibj@me.localdomain> <20171128121831.GB546@Romans-MacBook-Air.local> Message-ID: <20171220082145.6xjy74zjbwkfmpdu@me.localdomain> Subject: Re: Nginx cache returns MISS after a few hours, can't be set up to cache "forever" Hi, thank you, yes, the problem was the keys_zone too low. There were ~80000 keys in the cache with only 10M zone size. 
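For anyone hitting the same MISS-after-a-few-hours symptom: per the nginx documentation, one megabyte of keys_zone stores roughly 8,000 keys, so ~80,000 cached items need at least ~10 MB for the keys alone; once the zone fills, the least recently used entries are evicted regardless of how long they were supposed to stay valid. A sketch with headroom (path, zone name, and sizes are illustrative, not the poster's actual values):

```nginx
# ~1 MB of keys_zone holds about 8,000 keys, so 100m leaves ample
# headroom for ~80,000 cached responses.
proxy_cache_path /var/cache/nginx/mycache levels=1:2
                 keys_zone=mycache:100m
                 max_size=10g
                 inactive=30d;
```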
Jan From nginx-forum at forum.nginx.org Wed Dec 20 10:01:35 2017 From: nginx-forum at forum.nginx.org (foxgab) Date: Wed, 20 Dec 2017 05:01:35 -0500 Subject: get cookie value which name contains hyphen Message-ID: <7bb4ea202d14a47b1c8218614da7dce5.NginxMailingListEnglish@forum.nginx.org> my app set a cookie which named like SESSIONID-MYAPP, i want to write the value of that cookie to log file, but i tried $cookie_SESSIONID_MYAPP, $cookie_SESSIONID-MYAPP, but i can't get what i want. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277845,277845#msg-277845 From arozyev at nginx.com Wed Dec 20 11:17:18 2017 From: arozyev at nginx.com (Aziz Rozyev) Date: Wed, 20 Dec 2017 14:17:18 +0300 Subject: Centos 7 file permission problem In-Reply-To: <20171220000716.088a1816.lists@lazygranch.com> References: <20171220000716.088a1816.lists@lazygranch.com> Message-ID: Hi, have you checked this with disabled selinux ? br, Aziz. > On 20 Dec 2017, at 11:07, lists at lazygranch.com wrote: > > I'm setting up a web server on a Centos 7 VPS. I'm relatively sure I > have the firewalls set up properly since I can see my browser requests > in the access and error log. That said, I have file permission problem. 
> > nginx 1.12.2 > Linux servername 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux > > > nginx.conf (with comments removed for brevity and my domain name remove > because google) > ------- > user nginx; > worker_processes auto; > error_log /var/log/nginx/error.log; > pid /run/nginx.pid; > > events { > worker_connections 1024; > } > > http { > log_format main '$remote_addr - $remote_user [$time_local] "$request" ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > access_log /var/log/nginx/access.log main; > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > server { > listen 80; > server_name mydomain.com www.mydomain.com; > > return 301 https://$host$request_uri; > } > > server { > listen 443 ssl http2; > server_name mydomain.com www.mydomain.com; > ssl_dhparam /etc/ssl/certs/dhparam.pem; > root /usr/share/nginx/html/mydomain.com/public_html; > > ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot > ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot > ssl_ciphers HIGH:!aNULL:!MD5; > ssl_prefer_server_ciphers on; > > location / { > root /usr/share/nginx/html/mydomain.com/public_html; > index index.html index.htm; > } > # > error_page 404 /404.html; > location = /40x.html { > } > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > } > } > > } > > I have firefox set up with no cache and do not save history. 
> ------------------------------------------------------------- > access log: > > mypi - - [20/Dec/2017:07:46:44 +0000] "GET /index.html HTTP/2.0" 403 169 > "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 > Firefox/52.0" "-" > > myip - - [20/Dec/2017:07:48:44 +0000] "GET /index.html > HTTP/2.0" 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) > Gecko/20100101 Firefox/52.0" "-" > ------------------------------- > error log: > > 2017/12/20 07:46:44 [error] 10146#0: *48 open() "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed (13: Permission denied), client: myip, server: mydomain.com, request: "GET /index.html HTTP/2.0", host: "mydomain.com" > 2017/12/20 07:48:44 [error] 10146#0: *48 open() "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed (13: Permission denied), client: myip, server: mydomain.com, request: "GET /index.html HTTP/2.0", host: "mydomain.com" > > > Directory permissions: > For now, I made eveything 755 with ownership nginx:nginx I did chmod > and chown with the -R option > > /etc/nginx: > drwxr-xr-x. 4 nginx nginx 4096 Dec 20 07:39 nginx > > /usr/share/nginx: > drwxr-xr-x. 4 nginx nginx 33 Dec 15 08:47 nginx > > /var/log: > drwx------. 2 nginx nginx 4096 Dec 20 07:51 nginx > -------------------------------------------------------------- > systemctl status nginx > ? nginx.service - The nginx HTTP and reverse proxy server > Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2017-12-20 04:21:37 UTC; 3h 37min ago > Process: 10145 ExecReload=/bin/kill -s HUP $MAINPID (code=exited, status=0/SUCCESS) > Main PID: 9620 (nginx) > CGroup: /system.slice/nginx.service > ?? 9620 nginx: master process /usr/sbin/nginx > ??10146 nginx: worker process > > > Dec 20 07:18:33 servername systemd[1]: Reloaded The nginx HTTP and reverse proxy server. 
> -------------------------------------------------------------- > > ps aux | grep nginx > root 9620 0.0 0.3 71504 3848 ? Ss 04:21 0:00 nginx: master process /usr/sbin/nginx > nginx 10146 0.0 0.4 72004 4216 ? S 07:18 0:00 nginx: worker process > root 10235 0.0 0.0 112660 952 pts/1 S+ 08:01 0:00 grep ngin > > ----------------------------------- > firewall-cmd --zone=public --list-all > public (active) > target: default > icmp-block-inversion: no > interfaces: eth0 > sources: > services: ssh dhcpv6-client http https > ports: > protocols: > masquerade: no > forward-ports: > source-ports: > icmp-blocks: > rich rules: > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Wed Dec 20 13:06:17 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Dec 2017 16:06:17 +0300 Subject: Problem with proxy_cache_path and limit In-Reply-To: References: <20171219193930.GY78325@mdounin.ru> Message-ID: <20171220130617.GA78325@mdounin.ru> Hello! On Tue, Dec 19, 2017 at 04:13:10PM -0500, erick3k wrote: > Yep is there, so am still back to 0, nginx is still filling up the hard disk > please help I believe I've already asked you to define "filling up". And you were already asked to do some diagnostic steps, including: - check if space is occupied by temporary files or the cache itself; - check error logs; - check what cache manager process is doing. If you really want others to be able to help, you may want to do what you were asked to and share the results. 
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Dec 20 13:12:43 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 20 Dec 2017 16:12:43 +0300 Subject: get cookie value which name contains hyphen In-Reply-To: <7bb4ea202d14a47b1c8218614da7dce5.NginxMailingListEnglish@forum.nginx.org> References: <7bb4ea202d14a47b1c8218614da7dce5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171220131243.GB78325@mdounin.ru> Hello! On Wed, Dec 20, 2017 at 05:01:35AM -0500, foxgab wrote: > my app set a cookie which named like SESSIONID-MYAPP, i want to write the > value of that cookie to log file, but i tried $cookie_SESSIONID_MYAPP, > $cookie_SESSIONID-MYAPP, but i can't get what i want. You won't be able to access cookies with characters like '-' in their names using the $cookie_* variables. Instead, consider using the $http_cookie variable and extracting a particular cookie yourself. -- Maxim Dounin http://mdounin.ru/ From lists at lazygranch.com Thu Dec 21 00:33:14 2017 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 20 Dec 2017 16:33:14 -0800 Subject: Centos 7 file permission problem In-Reply-To: References: <20171220000716.088a1816.lists@lazygranch.com> Message-ID: <20171220163314.411060d0.lists@lazygranch.com> Well that was it. You can't believe how many hours I wasted on that. Thanks. Double thanks. I'm going to mention this in the Digital Ocean help pages. I disabled SELinux, but I have a book laying around on how to set it up. Eh, it is on the list. On Wed, 20 Dec 2017 14:17:18 +0300 Aziz Rozyev wrote: > Hi, > > have you checked this with disabled selinux ? > > br, > Aziz. > > > > > > > On 20 Dec 2017, at 11:07, lists at lazygranch.com wrote: > > > > I'm setting up a web server on a Centos 7 VPS. I'm relatively sure I > > have the firewalls set up properly since I can see my browser > > requests in the access and error log. That said, I have file > > permission problem. 
> > > > nginx 1.12.2 > > Linux servername 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 > > 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux > > > > > > nginx.conf (with comments removed for brevity and my domain name > > remove because google) > > ------- > > user nginx; > > worker_processes auto; > > error_log /var/log/nginx/error.log; > > pid /run/nginx.pid; > > > > events { > > worker_connections 1024; > > } > > > > http { > > log_format main '$remote_addr - $remote_user [$time_local] > > "$request" ' '$status $body_bytes_sent "$http_referer" ' > > '"$http_user_agent" "$http_x_forwarded_for"'; > > > > access_log /var/log/nginx/access.log main; > > > > sendfile on; > > tcp_nopush on; > > tcp_nodelay on; > > keepalive_timeout 65; > > types_hash_max_size 2048; > > > > include /etc/nginx/mime.types; > > default_type application/octet-stream; > > > > server { > > listen 80; > > server_name mydomain.com www.mydomain.com; > > > > return 301 https://$host$request_uri; > > } > > > > server { > > listen 443 ssl http2; > > server_name mydomain.com www.mydomain.com; > > ssl_dhparam /etc/ssl/certs/dhparam.pem; > > root /usr/share/nginx/html/mydomain.com/public_html; > > > > ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # > > managed by Certbot > > ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; > > # managed by Certbot ssl_ciphers HIGH:!aNULL:!MD5; > > ssl_prefer_server_ciphers on; > > > > location / { > > root /usr/share/nginx/html/mydomain.com/public_html; > > index index.html index.htm; > > } > > # > > error_page 404 /404.html; > > location = /40x.html { > > } > > # > > error_page 500 502 503 504 /50x.html; > > location = /50x.html { > > } > > } > > > > } > > > > I have firefox set up with no cache and do not save history. 
> > ------------------------------------------------------------- > > access log: > > > > mypi - - [20/Dec/2017:07:46:44 +0000] "GET /index.html HTTP/2.0" > > 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 > > Firefox/52.0" "-" > > > > myip - - [20/Dec/2017:07:48:44 +0000] "GET /index.html > > HTTP/2.0" 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) > > Gecko/20100101 Firefox/52.0" "-" > > ------------------------------- > > error log: > > > > 2017/12/20 07:46:44 [error] 10146#0: *48 open() > > "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed > > (13: Permission denied), client: myip, server: mydomain.com, > > request: "GET /index.html HTTP/2.0", host: "mydomain.com" > > 2017/12/20 07:48:44 [error] 10146#0: *48 open() > > "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed > > (13: Permission denied), client: myip, server: mydomain.com, > > request: "GET /index.html HTTP/2.0", host: "mydomain.com" > > > > > > Directory permissions: > > For now, I made eveything 755 with ownership nginx:nginx I did chmod > > and chown with the -R option > > > > /etc/nginx: > > drwxr-xr-x. 4 nginx nginx 4096 Dec 20 07:39 nginx > > > > /usr/share/nginx: > > drwxr-xr-x. 4 nginx nginx 33 Dec 15 08:47 nginx > > > > /var/log: > > drwx------. 2 nginx nginx 4096 Dec 20 07:51 nginx > > -------------------------------------------------------------- > > systemctl status nginx > > ? nginx.service - The nginx HTTP and reverse proxy server > > Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; > > vendor preset: disabled) Active: active (running) since Wed > > 2017-12-20 04:21:37 UTC; 3h 37min ago Process: 10145 > > ExecReload=/bin/kill -s HUP $MAINPID (code=exited, > > status=0/SUCCESS) Main PID: 9620 (nginx) > > CGroup: /system.slice/nginx.service ?? 
9620 nginx: master > > process /usr/sbin/nginx ??10146 nginx: worker process > > > > > > Dec 20 07:18:33 servername systemd[1]: Reloaded The nginx HTTP and > > reverse proxy server. > > -------------------------------------------------------------- > > > > ps aux | grep nginx > > root 9620 0.0 0.3 71504 3848 ? Ss 04:21 0:00 > > nginx: master process /usr/sbin/nginx nginx 10146 0.0 0.4 > > 72004 4216 ? S 07:18 0:00 nginx: worker process > > root 10235 0.0 0.0 112660 952 pts/1 S+ 08:01 0:00 > > grep ngin > > > > ----------------------------------- > > firewall-cmd --zone=public --list-all > > public (active) > > target: default > > icmp-block-inversion: no > > interfaces: eth0 > > sources: > > services: ssh dhcpv6-client http https > > ports: > > protocols: > > masquerade: no > > forward-ports: > > source-ports: > > icmp-blocks: > > rich rules: > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From quinefang at gmail.com Thu Dec 21 05:20:14 2017 From: quinefang at gmail.com (=?UTF-8?B?5pa55Z2k?=) Date: Thu, 21 Dec 2017 13:20:14 +0800 Subject: Centos 7 file permission problem In-Reply-To: <20171220163314.411060d0.lists@lazygranch.com> References: <20171220000716.088a1816.lists@lazygranch.com> <20171220163314.411060d0.lists@lazygranch.com> Message-ID: This time, SELinux again, seems to be a real problem for new talents. I remembered my hours headached with that. -------------- next part -------------- An HTML attachment was scrubbed... 
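For anyone landing here with the same 403: rather than disabling SELinux outright, the usual fix is to label the document root so that httpd-class processes (which covers nginx on CentOS) are allowed to read it. A sketch using the docroot from this thread (semanage is in the policycoreutils-python package on CentOS 7):

```shell
# Confirm SELinux is the cause: look for recent AVC denials
ausearch -m avc -ts recent

# Label the web root persistently, then apply the labels
semanage fcontext -a -t httpd_sys_content_t "/usr/share/nginx/html(/.*)?"
restorecon -Rv /usr/share/nginx/html
```

If nginx must also proxy to backends or write outside the standard paths, different context types and booleans apply; the httpd_selinux(8) man page lists them.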
URL: From arozyev at nginx.com Thu Dec 21 07:06:46 2017 From: arozyev at nginx.com (Aziz Rozyev) Date: Thu, 21 Dec 2017 10:06:46 +0300 Subject: Centos 7 file permission problem In-Reply-To: <20171220163314.411060d0.lists@lazygranch.com> References: <20171220000716.088a1816.lists@lazygranch.com> <20171220163314.411060d0.lists@lazygranch.com> Message-ID: <39D5C9AD-2242-46AB-9371-AFA06347A8DE@nginx.com> no problem, btw, check out this post https://www.nginx.com/blog/nginx-se-linux-changes-upgrading-rhel-6-6/ br, Aziz. > On 21 Dec 2017, at 03:33, lists at lazygranch.com wrote: > > Well that was it. You can't believe how many hours I wasted on that. > Thanks. Double thanks. > I'm going to mention this in the Digital Ocean help pages. > > I disabled selinx, but I have a book laying around on how to set it up. > Eh, it is on the list. > > On Wed, 20 Dec 2017 14:17:18 +0300 > Aziz Rozyev wrote: > >> Hi, >> >> have you checked this with disabled selinux ? >> >> br, >> Aziz. >> >> >> >> >> >>> On 20 Dec 2017, at 11:07, lists at lazygranch.com wrote: >>> >>> I'm setting up a web server on a Centos 7 VPS. I'm relatively sure I >>> have the firewalls set up properly since I can see my browser >>> requests in the access and error log. That said, I have file >>> permission problem. 
>>> >>> nginx 1.12.2 >>> Linux servername 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 >>> 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux >>> >>> >>> nginx.conf (with comments removed for brevity and my domain name >>> remove because google) >>> ------- >>> user nginx; >>> worker_processes auto; >>> error_log /var/log/nginx/error.log; >>> pid /run/nginx.pid; >>> >>> events { >>> worker_connections 1024; >>> } >>> >>> http { >>> log_format main '$remote_addr - $remote_user [$time_local] >>> "$request" ' '$status $body_bytes_sent "$http_referer" ' >>> '"$http_user_agent" "$http_x_forwarded_for"'; >>> >>> access_log /var/log/nginx/access.log main; >>> >>> sendfile on; >>> tcp_nopush on; >>> tcp_nodelay on; >>> keepalive_timeout 65; >>> types_hash_max_size 2048; >>> >>> include /etc/nginx/mime.types; >>> default_type application/octet-stream; >>> >>> server { >>> listen 80; >>> server_name mydomain.com www.mydomain.com; >>> >>> return 301 https://$host$request_uri; >>> } >>> >>> server { >>> listen 443 ssl http2; >>> server_name mydomain.com www.mydomain.com; >>> ssl_dhparam /etc/ssl/certs/dhparam.pem; >>> root /usr/share/nginx/html/mydomain.com/public_html; >>> >>> ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # >>> managed by Certbot >>> ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; >>> # managed by Certbot ssl_ciphers HIGH:!aNULL:!MD5; >>> ssl_prefer_server_ciphers on; >>> >>> location / { >>> root /usr/share/nginx/html/mydomain.com/public_html; >>> index index.html index.htm; >>> } >>> # >>> error_page 404 /404.html; >>> location = /40x.html { >>> } >>> # >>> error_page 500 502 503 504 /50x.html; >>> location = /50x.html { >>> } >>> } >>> >>> } >>> >>> I have firefox set up with no cache and do not save history. 
>>> ------------------------------------------------------------- >>> access log: >>> >>> mypi - - [20/Dec/2017:07:46:44 +0000] "GET /index.html HTTP/2.0" >>> 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 >>> Firefox/52.0" "-" >>> >>> myip - - [20/Dec/2017:07:48:44 +0000] "GET /index.html >>> HTTP/2.0" 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) >>> Gecko/20100101 Firefox/52.0" "-" >>> ------------------------------- >>> error log: >>> >>> 2017/12/20 07:46:44 [error] 10146#0: *48 open() >>> "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed >>> (13: Permission denied), client: myip, server: mydomain.com, >>> request: "GET /index.html HTTP/2.0", host: "mydomain.com" >>> 2017/12/20 07:48:44 [error] 10146#0: *48 open() >>> "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed >>> (13: Permission denied), client: myip, server: mydomain.com, >>> request: "GET /index.html HTTP/2.0", host: "mydomain.com" >>> >>> >>> Directory permissions: >>> For now, I made eveything 755 with ownership nginx:nginx I did chmod >>> and chown with the -R option >>> >>> /etc/nginx: >>> drwxr-xr-x. 4 nginx nginx 4096 Dec 20 07:39 nginx >>> >>> /usr/share/nginx: >>> drwxr-xr-x. 4 nginx nginx 33 Dec 15 08:47 nginx >>> >>> /var/log: >>> drwx------. 2 nginx nginx 4096 Dec 20 07:51 nginx >>> -------------------------------------------------------------- >>> systemctl status nginx >>> ? nginx.service - The nginx HTTP and reverse proxy server >>> Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; >>> vendor preset: disabled) Active: active (running) since Wed >>> 2017-12-20 04:21:37 UTC; 3h 37min ago Process: 10145 >>> ExecReload=/bin/kill -s HUP $MAINPID (code=exited, >>> status=0/SUCCESS) Main PID: 9620 (nginx) >>> CGroup: /system.slice/nginx.service ?? 
9620 nginx: master >>> process /usr/sbin/nginx ??10146 nginx: worker process >>> >>> >>> Dec 20 07:18:33 servername systemd[1]: Reloaded The nginx HTTP and >>> reverse proxy server. >>> -------------------------------------------------------------- >>> >>> ps aux | grep nginx >>> root 9620 0.0 0.3 71504 3848 ? Ss 04:21 0:00 >>> nginx: master process /usr/sbin/nginx nginx 10146 0.0 0.4 >>> 72004 4216 ? S 07:18 0:00 nginx: worker process >>> root 10235 0.0 0.0 112660 952 pts/1 S+ 08:01 0:00 >>> grep ngin >>> >>> ----------------------------------- >>> firewall-cmd --zone=public --list-all >>> public (active) >>> target: default >>> icmp-block-inversion: no >>> interfaces: eth0 >>> sources: >>> services: ssh dhcpv6-client http https >>> ports: >>> protocols: >>> masquerade: no >>> forward-ports: >>> source-ports: >>> icmp-blocks: >>> rich rules: >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From quinefang at gmail.com Thu Dec 21 07:49:35 2017 From: quinefang at gmail.com (=?UTF-8?B?5pa55Z2k?=) Date: Thu, 21 Dec 2017 15:49:35 +0800 Subject: Centos 7 file permission problem In-Reply-To: <39D5C9AD-2242-46AB-9371-AFA06347A8DE@nginx.com> References: <20171220000716.088a1816.lists@lazygranch.com> <20171220163314.411060d0.lists@lazygranch.com> <39D5C9AD-2242-46AB-9371-AFA06347A8DE@nginx.com> Message-ID: I generally disable SELinux after installing CentOS, once and for all, and I guess I am not the only guy who repeat this. SELinux was likely to be designed not for regular use. On Thu, Dec 21, 2017 at 3:06 PM, Aziz Rozyev wrote: > no problem, btw, check out this post > > https://www.nginx.com/blog/nginx-se-linux-changes-upgrading-rhel-6-6/ > > > br, > Aziz. 
> > > > > >> On 21 Dec 2017, at 03:33, lists at lazygranch.com wrote: >> >> Well that was it. You can't believe how many hours I wasted on that. >> Thanks. Double thanks. >> I'm going to mention this in the Digital Ocean help pages. >> >> I disabled selinx, but I have a book laying around on how to set it up. >> Eh, it is on the list. >> >> On Wed, 20 Dec 2017 14:17:18 +0300 >> Aziz Rozyev wrote: >> >>> Hi, >>> >>> have you checked this with disabled selinux ? >>> >>> br, >>> Aziz. >>> >>> >>> >>> >>> >>>> On 20 Dec 2017, at 11:07, lists at lazygranch.com wrote: >>>> >>>> I'm setting up a web server on a Centos 7 VPS. I'm relatively sure I >>>> have the firewalls set up properly since I can see my browser >>>> requests in the access and error log. That said, I have file >>>> permission problem. >>>> >>>> nginx 1.12.2 >>>> Linux servername 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 >>>> 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux >>>> >>>> >>>> nginx.conf (with comments removed for brevity and my domain name >>>> remove because google) >>>> ------- >>>> user nginx; >>>> worker_processes auto; >>>> error_log /var/log/nginx/error.log; >>>> pid /run/nginx.pid; >>>> >>>> events { >>>> worker_connections 1024; >>>> } >>>> >>>> http { >>>> log_format main '$remote_addr - $remote_user [$time_local] >>>> "$request" ' '$status $body_bytes_sent "$http_referer" ' >>>> '"$http_user_agent" "$http_x_forwarded_for"'; >>>> >>>> access_log /var/log/nginx/access.log main; >>>> >>>> sendfile on; >>>> tcp_nopush on; >>>> tcp_nodelay on; >>>> keepalive_timeout 65; >>>> types_hash_max_size 2048; >>>> >>>> include /etc/nginx/mime.types; >>>> default_type application/octet-stream; >>>> >>>> server { >>>> listen 80; >>>> server_name mydomain.com www.mydomain.com; >>>> >>>> return 301 https://$host$request_uri; >>>> } >>>> >>>> server { >>>> listen 443 ssl http2; >>>> server_name mydomain.com www.mydomain.com; >>>> ssl_dhparam /etc/ssl/certs/dhparam.pem; >>>> root 
/usr/share/nginx/html/mydomain.com/public_html; >>>> >>>> ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # >>>> managed by Certbot >>>> ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; >>>> # managed by Certbot ssl_ciphers HIGH:!aNULL:!MD5; >>>> ssl_prefer_server_ciphers on; >>>> >>>> location / { >>>> root /usr/share/nginx/html/mydomain.com/public_html; >>>> index index.html index.htm; >>>> } >>>> # >>>> error_page 404 /404.html; >>>> location = /40x.html { >>>> } >>>> # >>>> error_page 500 502 503 504 /50x.html; >>>> location = /50x.html { >>>> } >>>> } >>>> >>>> } >>>> >>>> I have firefox set up with no cache and do not save history. >>>> ------------------------------------------------------------- >>>> access log: >>>> >>>> mypi - - [20/Dec/2017:07:46:44 +0000] "GET /index.html HTTP/2.0" >>>> 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 >>>> Firefox/52.0" "-" >>>> >>>> myip - - [20/Dec/2017:07:48:44 +0000] "GET /index.html >>>> HTTP/2.0" 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) >>>> Gecko/20100101 Firefox/52.0" "-" >>>> ------------------------------- >>>> error log: >>>> >>>> 2017/12/20 07:46:44 [error] 10146#0: *48 open() >>>> "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed >>>> (13: Permission denied), client: myip, server: mydomain.com, >>>> request: "GET /index.html HTTP/2.0", host: "mydomain.com" >>>> 2017/12/20 07:48:44 [error] 10146#0: *48 open() >>>> "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed >>>> (13: Permission denied), client: myip, server: mydomain.com, >>>> request: "GET /index.html HTTP/2.0", host: "mydomain.com" >>>> >>>> >>>> Directory permissions: >>>> For now, I made eveything 755 with ownership nginx:nginx I did chmod >>>> and chown with the -R option >>>> >>>> /etc/nginx: >>>> drwxr-xr-x. 4 nginx nginx 4096 Dec 20 07:39 nginx >>>> >>>> /usr/share/nginx: >>>> drwxr-xr-x. 
4 nginx nginx 33 Dec 15 08:47 nginx >>>> >>>> /var/log: >>>> drwx------. 2 nginx nginx 4096 Dec 20 07:51 nginx >>>> -------------------------------------------------------------- >>>> systemctl status nginx >>>> ? nginx.service - The nginx HTTP and reverse proxy server >>>> Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; >>>> vendor preset: disabled) Active: active (running) since Wed >>>> 2017-12-20 04:21:37 UTC; 3h 37min ago Process: 10145 >>>> ExecReload=/bin/kill -s HUP $MAINPID (code=exited, >>>> status=0/SUCCESS) Main PID: 9620 (nginx) >>>> CGroup: /system.slice/nginx.service ?? 9620 nginx: master >>>> process /usr/sbin/nginx ??10146 nginx: worker process >>>> >>>> >>>> Dec 20 07:18:33 servername systemd[1]: Reloaded The nginx HTTP and >>>> reverse proxy server. >>>> -------------------------------------------------------------- >>>> >>>> ps aux | grep nginx >>>> root 9620 0.0 0.3 71504 3848 ? Ss 04:21 0:00 >>>> nginx: master process /usr/sbin/nginx nginx 10146 0.0 0.4 >>>> 72004 4216 ? 
S 07:18 0:00 nginx: worker process >>>> root 10235 0.0 0.0 112660 952 pts/1 S+ 08:01 0:00 >>>> grep ngin >>>> >>>> ----------------------------------- >>>> firewall-cmd --zone=public --list-all >>>> public (active) >>>> target: default >>>> icmp-block-inversion: no >>>> interfaces: eth0 >>>> sources: >>>> services: ssh dhcpv6-client http https >>>> ports: >>>> protocols: >>>> masquerade: no >>>> forward-ports: >>>> source-ports: >>>> icmp-blocks: >>>> rich rules: >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From falecom at rodrigomonteiro.net Thu Dec 21 14:09:11 2017 From: falecom at rodrigomonteiro.net (M. Rodrigo Monteiro) Date: Thu, 21 Dec 2017 11:09:11 -0300 Subject: Proxy pass and URL rewrite with upstream Message-ID: Hi! I'm new to nginx and I need your help to set it up the way I need. The server is nginx-1.12.2-1.el7.x86_64 (rpm) on CentOS 7.2 64. My scenario is: all my systems are http://systems.ltda.local/NAME (systems.ltda.local is nginx as reverse proxy). The nginx must rewrite (or proxy, or whatever) to 4 Apache servers responding with the virtual host NAME.ltda.local, and the URL in the browser must not change. When a user types http://systems.ltda.local/phpmyadmin it goes to nginx, and nginx proxies to Apache with the URL phpmyadmin.ltda.local, but the URL stays the same in the browser. 
My config:

# cat upstream.conf
upstream wpapp {
    server XXX.XXX.XXX.XXX:80 fail_timeout=60;
    server XXX.XXX.XXX.XXX:80 fail_timeout=60;
    server XXX.XXX.XXX.XXX:80 fail_timeout=60;
    server XXX.XXX.XXX.XXX:80 fail_timeout=60;
}

# cat systems.ltda.local.conf
server {
    listen 80;
    server_name systems.ltda.local;
    access_log /var/log/nginx/systems.ltda.local_access.log;
    error_log /var/log/nginx/systems.ltda.local_error.log;

    location /phpmyadmin {
        proxy_pass http://wpapp/;
        sub_filter "http://systems.ltda.local/phpmyadmin" "http://phpmyadmin.ltda.local";
        sub_filter "http://systems.ltda.local/phpmyadmin/" "http://phpmyadmin.ltda/";
        sub_filter_once off;
    }
}

With this configuration, only the URL with a trailing slash ("http://systems.ltda.local/phpmyadmin/") works, and not "http://systems.ltda.local/phpmyadmin". Best regards, Rodrigo. From arozyev at nginx.com Thu Dec 21 14:56:53 2017 From: arozyev at nginx.com (Aziz Rozyev) Date: Thu, 21 Dec 2017 17:56:53 +0300 Subject: Proxy pass and URL rewrite with upstream In-Reply-To: References: Message-ID: <88F720FD-D940-4381-9834-5E653CB4EDE5@nginx.com> Hi, create 4 separate upstreams, one for each of these Apaches, create 4 locations, and within each location block proxy_pass to the appropriate upstream. Avoid using sub_filter; it is mostly for rewriting the bodies of HTML documents.

http {
    # for phpadmin
    upstream phpadminup {
        server phpadmin.ltda.local:80;
    }

    upstream whateverup {
        server whatevername.ltda.local:80;
    }

    server {
        listen 80;

        location /phpadmin/ {
            proxy_pass http://phpadminup;
        }

        location /whatevername/ {
            proxy_pass http://whateverup;
        }
        ...
    }
}

br, Aziz. > On 21 Dec 2017, at 17:09, M. 
Rodrigo Monteiro wrote: > > > server XXX.XXX.XXX.XXX:80 fail_timeout=60; > server XXX.XXX.XXX.XXX:80 fail_timeout=60; > server XXX.XXX.XXX.XXX:80 fail_timeout=60; > server XXX.XXX.XXX.XXX:80 fail_timeout=60; > } > > # cat systems.ltda.local.conf > > server { > listen 80; > server_name systems.ltda.local; > access_log /var/log/nginx/systems.ltda.local_access.log; > error_log /var/log/nginx/systems.ltda.local_error.log; > > location /phpmyadmin { > proxy_pass http://wpapp/; > sub_filter "http://systems.ltda.local/phpmyadmin" > "http://phpmyadmin.ltda.local"; > sub_filter "http://systems.ltda.local/phpmyadmin/" "http://phpmyadmin.ltda/"; > sub_filter_once off; > } From zchao1995 at gmail.com Fri Dec 22 01:56:42 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Thu, 21 Dec 2017 20:56:42 -0500 Subject: Proxy pass and URL rewrite with upstream In-Reply-To: References: Message-ID: Hello! > With this configuration, only the URL with a trailing slash > "http://systems.ltda.local/phpmyadmin/" works, and not > "http://systems.ltda.local/phpmyadmin". What do you mean by "it doesn't work" with the URL without the trailing slash? Does the backend Apache server just return a 404? The URL path that your nginx passes to the backend will be: / if you type http://systems.ltda.local/phpmyadmin // if you type http://systems.ltda.local/phpmyadmin/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jorohr at gmail.com Fri Dec 22 18:57:16 2017 From: jorohr at gmail.com (Johannes Rohr) Date: Fri, 22 Dec 2017 19:57:16 +0100 Subject: Moving Joomla from subdir to root -> rewrite / redirect problem In-Reply-To: <20171218193652.GW3127@daoine.org> References: <20171218193652.GW3127@daoine.org> Message-ID: On 18.12.2017 at 20:36, Francis Daly wrote: > On Fri, Dec 15, 2017 at 03:25:20PM +0100, Johannes Rohr wrote: > > Hi there, > > I do not know joomla; and I do not have a direct answer for you. > > But some parts of your question seem unclear to me. 
> > Perhaps they are also unclear to someone who otherwise could give an answer. > > If you can clarify them, maybe that other person can help. I will try my best. >> in order to have prettier URLs I have decided to move my joomla from >> /web/ to /, but I want old URLs to transparently redirect. > When you say "transparently redirect", can you give an example of what > you mean? > > As in: when I request /web/something, should I get an http redirect so > that I next request /something; or should I get something else? Yes, sure, that's precisely what I want to achieve. >> location /web { >> try_files $uri $uri/ /index.php?$args; >> } >> >> but this obviously did not work, as the arguments passed to index.php >> still contain the /web/ part, which would have to go. > Without knowing joomla... why is this obvious? Why would arguments to > index.php include /web/? Because that is the subdir where it is currently installed (which I want to change). > > And: what, specifically, did you do on the joomla side, to move it from > /web/ to /? Nothing really besides moving the files, as Joomla adjusts all the variables on the fly. And Joomla is perfectly happy with that; only the redirect does not work. >> So I tried instead: >> >> location / { >> rewrite ^/web/(.*)$ /$1; >> try_files $uri $uri/ /index.php?$args; >> } >> >> thinking that the first line would modify the URI, taking away the /web/ >> part so that in the next line the changed uri would be fed to the >> try_files command. > Yes. It does not do anything about "arguments to index.php", though, > which was your reported problem with the first attempt. > > (I'd probably include "break" on the rewrite line; but it might depend > on what the rest of your config says.) Why? Doesn't "break" mean that processing is ended after the current line? What I want is that the re-written url is then fed to the try_files directive below, which calls joomla's index.php >> But this only resulted in 404s. 
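[For reference, the "break" variant being discussed would look roughly like this. This is an untested sketch, not the thread's confirmed fix; whether it behaves as intended depends on the other location{}s in the config, in particular any separate PHP location:

```nginx
location / {
    # Strip the /web/ prefix and keep processing in this location
    # instead of restarting the location search with the new URI.
    rewrite ^/web/(.*)$ /$1 break;
    try_files $uri $uri/ /index.php?$args;
}
```

Without "break", the rewritten URI is matched against all location{}s again, which is what Francis alludes to below.]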
> Can you show one request that you make, and describe how you would like > it to be handled? So now I am trying this:

# Support Clean (aka Search Engine Friendly) URLs
location / {
    if ($request_uri ~* "^/web/(.*/)$") {
        return 301 $1;
    }
    try_files $uri $uri/ /index.php?$args;
}

I kind of hoped this would catch each request that contains /web/ and rewrite it into something with /web/ stripped and then feed it to the try_files directive below. But in the nginx log I see: 2017/12/22 19:43:55 [error] 21149#21149: *19151503 FastCGI sent in stderr: "Unable to open primary script: /var/www/chrooted/jrweb/htdocs/www.infoe.de/web/index.php (No such file or directory)" while reading response header from upstream, client: 65.19.138.35, server: www.infoe.de, request: "GET /web/index.php?format=feed&type=rss HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.0-fpm.sock:", host: "www.infoe.de" So, somehow, the rewrite is not happening; nginx still looks for /web/index.php instead of /index.php. Of course, what the browser gets back is a 404. I find this topic so confusing (and reading the official nginx docs does not make the confusion go away); I would be grateful if anyone could enlighten me! Cheers, Johannes > > Cheers, > > f -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 512 bytes Desc: OpenPGP digital signature URL: From francis at daoine.org Sat Dec 23 12:27:43 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 23 Dec 2017 12:27:43 +0000 Subject: Moving Joomla from subdir to root -> rewrite / redirect problem In-Reply-To: References: <20171218193652.GW3127@daoine.org> Message-ID: <20171223122743.GC3127@daoine.org> On Fri, Dec 22, 2017 at 07:57:16PM +0100, Johannes Rohr wrote: > Am 18.12.2017 um 20:36 schrieb Francis Daly: > > On Fri, Dec 15, 2017 at 03:25:20PM +0100, Johannes Rohr wrote: Hi there, > > As in: when I request /web/something, should I get a http redirect so > > that I next request /something; or should I get something else? > Yes, sure, that's precisely what I want to achive Add location ^~ /web/ { rewrite /web(/.*) $1 permanent; } to your previously-working nginx config. Report if anything now does not work as you want it to. The rest of this mail is background information; the above config addition might be all that you need. > > Without knowing joomla... why is this obvious? Why would arguments to > > index.php include /web/? > Because that is the subdir where it is currently installed. (which I > want to change) Ok. I still don't understand why arguments to index.php would include /web/, but I guess that I don't need to understand it. > >> location / { > >> ??? rewrite ^/web/(.*)$ /$1; > >> ? try_files $uri $uri/ /index.php?$args; > >> } > >> > >> thinking that the first line would modify the URI, taking away the /web/ > >> part so that in the next line the changed uri would be fed to the > >> try_files command. http://nginx.org/r/rewrite and http://nginx.org/r/location Your "rewrite" will start with /web/something and end with /something, and then nginx will find the correct location{} to handle /something, which may or may not be this location{}, depending on what other location{}s are defined. 
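[Combined with a typical root location, Francis's addition would sit in the server{} block roughly like this. This is a sketch; the root location shown is generic Joomla boilerplate, not taken from the thread:

```nginx
# ^~ is a prefix match that also suppresses regex locations, so a
# "location ~ \.php$" block cannot capture /web/index.php requests.
location ^~ /web/ {
    rewrite /web(/.*) $1 permanent;   # e.g. /web/foo -> 301 redirect to /foo
}

# Joomla front controller at the new root.
location / {
    try_files $uri $uri/ /index.php?$args;
}
```

The "permanent" flag makes nginx answer with a 301, so the browser itself re-requests the new URI and old links keep working.]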
> > (I'd probably include "break" on the rewrite line; but it might depend > > on what the rest of your config says.) > Why? Doesn't "break" mean that processing is ended after the current > line? What I want is that the re-written url is then fed to the > try_files directive below, which calls joomla's index.php "break" says "stops processing the current set of ngx_http_rewrite_module directives" with "If a directive is specified inside the location, further processing of the request continues in this location.". And you want the processing to be in this location, and not start the normal search for the correct location for the new subrequest /something. > ??????? location / { > if ($request_uri ~* "^/web/(.*/)$") { > ??????? return 301 $1; > ??? } > > ??????????????? try_files $uri $uri/ /index.php?$args; > ??????? } > > > I kind of hoped this would would catch each request that contails /web/ > and rewrite it into something with /web/ stripped and then feed it to > try_files directive below. If this location{} is used for /web/something, then the browser gets a 301 redirect and makes a whole new request for /something. If this location{} is used for /something, then the try_files will be used. > But in the nginx log I see: > > 2017/12/22 19:43:55 [error] 21149#21149: *19151503 FastCGI sent in > stderr: "Unable to open primary script: > /var/www/chrooted/jrweb/htdocs/www.infoe.de/web/index.php (No such file > or directory)" while reading response header from upstream, client: > 65.19.138.35, server: www.infoe.de, request: "GET > /web/index.php?format=feed&type=rss HTTP/1.1", upstream: > "fastcgi://unix:/run/php/php7.0-fpm.sock:", host: "www.infoe.de" > > So, somehow, the rewrite is not happening, nginx still looks for > /web/index.php instead of /index.php Is the location{} above used for the request /web/index.php? 
I suspect you may have another location like location ~ php that means that your "location /" will not be used for the request /web/index.php > I find this topic so confusing (and reading the official nginx docs does > not make the confusion go away), would be grateful if anyone could > enlighten me! In nginx, one request is handled in one location. Under certain circumstances, including "rewrite", one http request might lead to an nginx request followed by an nginx subrequest; that subrequest, in general, counts as a new request to nginx. Something like "grep location nginx.conf" might show all of the location{}s that are defined for this server{}; given that, and the documented rules, it should be possible to see which one location a specific request is handled in. (Note: "rewrite"-module directives can change the logic, so it is not necessarily as straightforward as described there.) f -- Francis Daly francis at daoine.org From al-nginx at none.at Tue Dec 26 11:02:03 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 26 Dec 2017 11:02:03 +0000 Subject: The repository 'http://nginx.org/packages/mainline/debian xenial Release' does not have a Release file. Message-ID: Hi. I tried today to install nginx on a ubuntu 16.04.3 LTS but `apt-cache showpkg nginx` does not show me the nginx package from nginx.org. 
I followed the command on this page http://nginx.org/en/linux_packages.html#mainline The `apt-get update` output #### apt-get update Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB] Hit:2 http://ppa.launchpad.net/ondrej/php/ubuntu xenial InRelease Hit:3 http://de.archive.ubuntu.com/ubuntu xenial InRelease Hit:4 http://de.archive.ubuntu.com/ubuntu xenial-updates InRelease Hit:5 http://ppa.launchpad.net/vbernat/haproxy-1.8/ubuntu xenial InRelease Hit:6 http://de.archive.ubuntu.com/ubuntu xenial-backports InRelease Ign:7 http://nginx.org/packages/mainline/debian xenial InRelease Ign:8 http://nginx.org/packages/mainline/debian xenial Release Hit:9 http://packages.nginx.org/unit/ubuntu xenial InRelease Ign:10 http://nginx.org/packages/mainline/debian xenial/nginx Sources Ign:11 http://nginx.org/packages/mainline/debian xenial/nginx amd64 Packages Ign:12 http://nginx.org/packages/mainline/debian xenial/nginx all Packages Ign:13 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en_US Ign:14 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en Ign:10 http://nginx.org/packages/mainline/debian xenial/nginx Sources Ign:11 http://nginx.org/packages/mainline/debian xenial/nginx amd64 Packages Ign:12 http://nginx.org/packages/mainline/debian xenial/nginx all Packages Ign:13 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en_US Ign:14 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en Ign:10 http://nginx.org/packages/mainline/debian xenial/nginx Sources Ign:11 http://nginx.org/packages/mainline/debian xenial/nginx amd64 Packages Ign:12 http://nginx.org/packages/mainline/debian xenial/nginx all Packages Ign:13 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en_US Ign:14 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en Ign:10 http://nginx.org/packages/mainline/debian xenial/nginx Sources Ign:11 http://nginx.org/packages/mainline/debian 
xenial/nginx amd64 Packages Ign:12 http://nginx.org/packages/mainline/debian xenial/nginx all Packages Ign:13 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en_US Ign:14 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en Ign:10 http://nginx.org/packages/mainline/debian xenial/nginx Sources Ign:11 http://nginx.org/packages/mainline/debian xenial/nginx amd64 Packages Ign:12 http://nginx.org/packages/mainline/debian xenial/nginx all Packages Ign:13 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en_US Ign:14 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en Err:10 http://nginx.org/packages/mainline/debian xenial/nginx Sources 404 Not Found [IP: 206.251.255.63 80] Ign:11 http://nginx.org/packages/mainline/debian xenial/nginx amd64 Packages Ign:12 http://nginx.org/packages/mainline/debian xenial/nginx all Packages Ign:13 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en_US Ign:14 http://nginx.org/packages/mainline/debian xenial/nginx Translation-en Fetched 102 kB in 5s (17.7 kB/s) Reading package lists... Done W: The repository 'http://nginx.org/packages/mainline/debian xenial Release' does not have a Release file. N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use. N: See apt-secure(8) manpage for repository creation and user configuration details. E: Failed to fetch http://nginx.org/packages/mainline/debian/dists/xenial/nginx/source/Sources 404 Not Found [IP: 206.251.255.63 80] E: Some index files failed to download. They have been ignored, or old ones used instead. #### lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 16.04.3 LTS Release: 16.04 Codename: xenial Best regards Aleks From thresh at nginx.com Tue Dec 26 11:15:50 2017 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 26 Dec 2017 14:15:50 +0300 Subject: The repository 'http://nginx.org/packages/mainline/debian xenial Release' does not have a Release file. In-Reply-To: References: Message-ID: <00bd0bdf-4407-833c-d9e5-bdc7eee3f268@nginx.com> Hi Aleksandar, On 26/12/2017 14:02, Aleksandar Lazic wrote: > Hi. > > I tried today to install nginx on a ubuntu 16.04.3 LTS but `apt-cache showpkg nginx` does not show me the nginx package from nginx.org. > > I followed the command on this page http://nginx.org/en/linux_packages.html#mainline > W: The repository 'http://nginx.org/packages/mainline/debian xenial Release' does not have a Release file. You should use http://nginx.org/packages/mainline/ubuntu instead of http://nginx.org/packages/mainline/debian for Ubuntus :-) -- Konstantin Pavlov www.nginx.com From mdounin at mdounin.ru Tue Dec 26 16:10:47 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 Dec 2017 19:10:47 +0300 Subject: nginx-1.13.8 Message-ID: <20171226161047.GD34136@mdounin.ru> Changes with nginx 1.13.8 26 Dec 2017 *) Feature: now nginx automatically preserves the CAP_NET_RAW capability in worker processes when using the "transparent" parameter of the "proxy_bind", "fastcgi_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" directives. *) Feature: improved CPU cache line size detection. Thanks to Debayan Ghosh. *) Feature: new directives in vim syntax highlighting scripts. Thanks to Gena Makhomed. *) Bugfix: binary upgrade refused to work if nginx was re-parented to a process with PID different from 1 after its parent process has finished. *) Bugfix: the ngx_http_autoindex_module incorrectly handled requests with bodies. *) Bugfix: in the "proxy_limit_rate" directive when used with the "keepalive" directive. 
*) Bugfix: some parts of a response might be buffered when using "proxy_buffering off" if the client connection used SSL. Thanks to Patryk Lesiewicz. *) Bugfix: in the "proxy_cache_background_update" directive. *) Bugfix: it was not possible to start a parameter with a variable in the "${name}" form with the name in curly brackets without enclosing the parameter into single or double quotes. -- Maxim Dounin http://nginx.org/ From al-nginx at none.at Wed Dec 27 00:15:02 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 27 Dec 2017 00:15:02 +0000 Subject: The repository 'http://nginx.org/packages/mainline/debian xenial Release' does not have a Release file. In-Reply-To: <00bd0bdf-4407-833c-d9e5-bdc7eee3f268@nginx.com> References: <00bd0bdf-4407-833c-d9e5-bdc7eee3f268@nginx.com> Message-ID: ------ Original message ------ From: "Konstantin Pavlov" To: nginx at nginx.org; "Aleksandar Lazic" Sent: 26.12.2017 12:15:50 Subject: Re: The repository 'http://nginx.org/packages/mainline/debian xenial Release' does not have a Release file. >Hi Aleksandar, > >On 26/12/2017 14:02, Aleksandar Lazic wrote: >>Hi. >> >>I tried today to install nginx on a ubuntu 16.04.3 LTS but `apt-cache >>showpkg nginx` does not show me the nginx package from nginx.org. >> >>I followed the command on this page >>http://nginx.org/en/linux_packages.html#mainline > >>W: The repository 'http://nginx.org/packages/mainline/debian xenial >>Release' does not have a Release file. >You should use http://nginx.org/packages/mainline/ubuntu instead of >http://nginx.org/packages/mainline/debian for Ubuntus :-) Oops, you are right. Sorry for the rush. > >-- >Konstantin Pavlov >www.nginx.com Regards Aleks From wade.girard at gmail.com Wed Dec 27 22:03:04 2017 From: wade.girard at gmail.com (Wade Girard) Date: Wed, 27 Dec 2017 16:03:04 -0600 Subject: 504 gateway timeouts Message-ID: I am using nginx on an ubuntu server as a proxy to a tomcat server. The nginx server is set up for https. 
I don't know how to determine what version of nginx I am using, but I installed it on the ubuntu 1.16 server using apt-get. I have an issue that I have resolved locally on my Mac (using version 1.12 of nginx and Tomcat 7) where requests through the proxy that take more than 60 seconds were failing; they are now working. What seemed to be the fix was adding the following to the nginx.conf file

proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;

in the location section for my proxy. However this same change on the ubuntu servers has no effect at all. The way I am testing this is that I create a request that sleeps the thread for 5 minutes before returning a response. Any help appreciated. Thanks -- Wade Girard c: 612.363.0902 -------------- next part -------------- An HTML attachment was scrubbed... URL: From igal at lucee.org Wed Dec 27 22:18:42 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Wed, 27 Dec 2017 14:18:42 -0800 Subject: 504 gateway timeouts In-Reply-To: References: Message-ID: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org> On 12/27/2017 2:03 PM, Wade Girard wrote: > I am using nginx on an ubuntu server as a proxy to a tomcat server. > > The nginx server is setup for https. > > I don't know how to determine what version of nginx I am using, but I > install it on the ubuntu 1.16 server using apt-get. Run: nginx -v > > I have an issue that I have resolved locally on my Mac (using version > 1.12 of nginx and Tomcat 7) where requests through the proxy that take > more than 60 seconds were failing, they are now working. > > What seemed to be the fix was adding the following to the nginx.conf file > > proxy_connect_timeout 600; > > proxy_send_timeout 600; > > proxy_read_timeout 600; > > send_timeout 600; > > in the location section for my proxy. > > > However this same change in the ubuntu servers has no effect at all. 
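[For reference, the directives Wade lists belong together in the proxy location, roughly as below. This is a sketch: the backend address is a hypothetical Tomcat, and only the 600-second values come from the thread. Note that the change only takes effect after a reload (nginx -s reload), and that the request must actually hit this location{} — a 504 produced by a different server{} or location{} block would be unaffected:

```nginx
location /app/ {
    proxy_pass http://127.0.0.1:8080;   # hypothetical Tomcat backend

    proxy_connect_timeout 600;   # time allowed to establish the upstream connection
    proxy_send_timeout    600;   # time between successive writes to the upstream
    proxy_read_timeout    600;   # time between reads; the usual cause of upstream 504s
    send_timeout          600;   # time between successive writes to the client
}
```
]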
> > Try to flush out some output early on so that nginx will know that Tomcat is alive. Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Thu Dec 28 01:42:01 2017 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 27 Dec 2017 20:42:01 -0500 Subject: [nginx-announce] nginx-1.13.8 In-Reply-To: <20171226161051.GE34136@mdounin.ru> References: <20171226161051.GE34136@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.13.8 for Windows https://kevinworthington.com/nginxwin1138 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Dec 26, 2017 at 11:10 AM, Maxim Dounin wrote: > Changes with nginx 1.13.8 26 Dec > 2017 > > *) Feature: now nginx automatically preserves the CAP_NET_RAW > capability > in worker processes when using the "transparent" parameter of the > "proxy_bind", "fastcgi_bind", "memcached_bind", "scgi_bind", and > "uwsgi_bind" directives. > > *) Feature: improved CPU cache line size detection. > Thanks to Debayan Ghosh. > > *) Feature: new directives in vim syntax highlighting scripts. > Thanks to Gena Makhomed. > > *) Bugfix: binary upgrade refused to work if nginx was re-parented to a > process with PID different from 1 after its parent process has > finished. > > *) Bugfix: the ngx_http_autoindex_module incorrectly handled requests > with bodies. > > *) Bugfix: in the "proxy_limit_rate" directive when used with the > "keepalive" directive. 
> > *) Bugfix: some parts of a response might be buffered when using > "proxy_buffering off" if the client connection used SSL. > Thanks to Patryk Lesiewicz. > > *) Bugfix: in the "proxy_cache_background_update" directive. > > *) Bugfix: it was not possible to start a parameter with a variable in > the "${name}" form with the name in curly brackets without enclosing > the parameter into single or double quotes. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Thu Dec 28 19:16:29 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Thu, 28 Dec 2017 19:16:29 +0000 Subject: NGINX and RFC7540 (http2) violation Message-ID: Hi guys, I was playing around with nginx and haproxy recently to decide whether to go for nginx or haproxy in a specific environment. One of the requirements was http2 support, which both pieces of software support (with nginx having supported it for a lot longer than haproxy). However, one thing I saw is that according to the http2 specification section 8.1.2.2 (https://tools.ietf.org/html/rfc7540#section-8.1.2.2 ), HTTP2 does not use the Connection header field to indicate connection-specific headers in the protocol. If a client sends a Connection: keep-alive the client effectively violates the specification, which surely should not happen, but in case the client actually would send the Connection header the server MUST treat the messages containing the connection header as malformed. I saw that this is not the case for nginx in any way, which causes it to not follow the actual specification. Can I ask why it was decided to implement it to simply "ignore" the fact that a client might violate the spec? 
And are there any plans to make nginx compliant with the current http2 specification? I've found that both Firefox and Safari violate this very specific section, and they're violating it because servers implementing the http2 specification allowed them to do so, effectively causing the specification not to be followed. Thanks in advance. Best Regards, Lucas Rolff -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Dec 29 00:47:44 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 29 Dec 2017 03:47:44 +0300 Subject: NGINX and RFC7540 (http2) violation In-Reply-To: References: Message-ID: <2692150.PZVn1yPWKq@vbart-laptop> On Thursday, 28 December 2017 22:16:29 MSK Lucas Rolff wrote: > Hi guys, > > I was playing around with nginx and haproxy recently to decide whether to go for nginx or haproxy in a specific environment. > One of the requirements was http2 support which both pieces of software support (with nginx having supported it for a lot longer than haproxy). > > However, one thing I saw, is that according to the http2 specification section 8.1.2.2 (https://tools.ietf.org/html/rfc7540#section-8.1.2.2 ), HTTP2 does not use the Connection header field to indicate connection-specific headers in the protocol. > > If a client sends a Connection: keep-alive the client effectively violates the specification which surely should not happen, but in case the client actually would send the Connection header the server MUST treat the messages containing the connection header as malformed. > > I saw that this is not the case for nginx in any way, which causes it to not follow the actual specification. > > Can I ask why it was decided to implement it to simply "ignore" the fact that a client might violate the spec? And are there any plans to make nginx compliant with the current http2 specification? 
> > I've found that both Firefox and Safari violate this very specific section, and they're violating it because servers implementing the http2 specification allowed them to do so, effectively causing the specification not to be followed. > > Thanks in advance. > There is a theory in the specification and there is a practice in the real world. Strictly following every aspect of a specification is interesting from an academic point of view, but nginx is made for the real world, where it's used by hundreds of millions of websites and therefore has to deal with many different clients and protocol implementations. See: https://news.netcraft.com/archives/2017/12/26/december-2017-web-server-survey.html These implementations may contain various bugs, and some clients are impossible to fix (some are unmaintainable hardware boxes or Android phones that have already reached EOL, for example). One of the most important aspects of a server at such scale is interoperability. It has to work reliably with any client in every environment. If it doesn't, website owners will lose their audience, lose money, blame us for that, and eventually switch to something else. For that purpose sometimes we even have to make ugly hacks in nginx. See for example: http://hg.nginx.org/nginx/rev/8df664ebe037 We immediately started receiving bug reports about this problem, and it took about a year for Chrome developers to release the fix. And more time is needed for everybody to upgrade to a fixed version. You can't force someone to buy a new phone, or update or fix their software, just because of the idea of full compliance with a protocol specification. It doesn't work that way. Moreover, some aspects of specifications are just impossible to follow without security or performance risks, or without turning your code into an unmaintainable mess. A fully compliant implementation is usually something that doesn't exist or just isn't used. 
In my personal opinion, the HTTP/2 specification (and the protocol itself) is a bad example. Some aspects of the protocol are pure overengineering, some of them are ugly hacks, some of them are just complexity without any benefits, and some of them are vectors of DoS attacks. I also suggest reading an article written by the author of Varnish: https://queue.acm.org/detail.cfm?id=2716278 wbr, Valentin V. Bartenev From lists at lazygranch.com Fri Dec 29 03:06:16 2017 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 28 Dec 2017 19:06:16 -0800 Subject: MAP location in conf file Message-ID: <20171228190616.5df08587.lists@lazygranch.com> Presently I'm putting maps in the server location. Can they be put in the very top to make them work for all servers? If not, I can just make the maps into include files and insert as needed, but maybe making the map global is more efficient. From arozyev at nginx.com Fri Dec 29 10:16:11 2017 From: arozyev at nginx.com (Aziz Rozyev) Date: Fri, 29 Dec 2017 11:16:11 +0100 Subject: MAP location in conf file In-Reply-To: <20171228190616.5df08587.lists@lazygranch.com> References: <20171228190616.5df08587.lists@lazygranch.com> Message-ID: <800DAACE-E80F-4111-AE2B-DE095317F860@nginx.com> check docs: http://nginx.org/en/docs/http/ngx_http_map_module.html every directive description has a "Context" field, where you may find in which configuration context you can put the directive. regarding your question, yes, you actually have to put it at the very top - in the "http" context. br, Aziz. > On 29 Dec 2017, at 04:06, lists at lazygranch.com wrote: > > Presently I'm putting maps in the server location. Can they be put in > the very top to make them work for all servers? If not, I can just make > the maps into include files and insert as needed, but maybe making the > map global is more efficient. 
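[As Aziz notes, map is only allowed in the http context; the resulting variable is then visible in every server{} below it, and since map variables are evaluated lazily per request, a global map costs nothing for servers that never reference it. A minimal sketch (the variable name and patterns are made up for illustration):

```nginx
http {
    # Declared once at http level; $blocked_agent is usable in all servers.
    map $http_user_agent $blocked_agent {
        default    0;
        ~*badbot   1;   # hypothetical user-agent pattern
    }

    server {
        listen 80;
        if ($blocked_agent) { return 403; }
    }
}
```
]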
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From janl at thoughtmakers.co.za Fri Dec 29 12:51:30 2017 From: janl at thoughtmakers.co.za (Jan Lewis) Date: Fri, 29 Dec 2017 12:51:30 +0000 Subject: nginx using openssl chil engine Message-ID: <4ABBBEAD-8349-4867-9CEE-6B8209EB4E4E@thoughtmakers.co.za> Does anyone know how to set up an nginx config for using an openssl chil engine? I have added the following directive in the main context: ssl_engine chil; and in the server context I reference a preloaded private key as ssl_certificate_key engine:chil:prikeyid; When I run "nginx -t" I get nginx: [emerg] ENGINE_load_private_key ("prikeyid") failed (SSL: error: 26096075:engine routines:ENGINE_load_private_key:not initialised) What am I missing, or what do I need to check? Thank you Jan -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Dec 29 17:22:18 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 29 Dec 2017 20:22:18 +0300 Subject: unit-0.3 beta release Message-ID: <1831270.J5FWkbrzmH@vbart-laptop> Hello, I'm glad to announce that a new beta version of NGINX Unit has been released. Changes with Unit 0.3 28 Dec 2017 *) Change: the Go package name changed to "nginx/unit". *) Change: in the "limits.timeout" application option: application start time and time in queue now are not accounted. *) Feature: the "limits.requests" application option. *) Feature: application request processing latency optimization. *) Feature: HTTP keep-alive connections support. *) Feature: the "home" Python virtual environment configuration option. *) Feature: Python atexit hook support. *) Feature: various Go package improvements. *) Bugfix: various crashes fixed. 
With this release we have started to build more Linux packages: - https://unit.nginx.org/installation/#precompiled-packages Also, here is a new blog post about some of our plans for the near future: - https://www.nginx.com/blog/nginx-unit-progress-and-next-steps/ Happy New Year and best wishes from all of the Unit team. Stay tuned. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Fri Dec 29 20:23:07 2017 From: nginx-forum at forum.nginx.org (naktinis) Date: Fri, 29 Dec 2017 15:23:07 -0500 Subject: Error response body not sent if upload is incomplete Message-ID: This happens using the ngx_http_uwsgi_module, but it seems this might be more generic (i.e. it likely also affects other types of upstream servers). Here's what happens: * I send an HTTP/1.1 POST request with a Content-Type: multipart/form-data; header and a ~600 KB file * Nginx receives the first part of the request and passes it to a uwsgi app * The uwsgi app determines that a 403 response along with a JSON body should be returned * Nginx sends the 403 response to the client, but only containing the headers (not the JSON body) However, if I do everything the same way, but the uploaded file is tiny (e.g. 1 byte), I do get the error response body as expected. Non-error responses also work fine. It seems that nginx for some reason decides to ignore the response body (but still sends the headers) if the payload hasn't finished uploading. This looks like inconsistent behaviour (or even a bug), but correct me if there is something I misunderstood. Please find curl outputs and links to other users complaining about a similar thing below. Here's curl verbose output when uploading a bigger file: $ curl -v -F 'content=@large_file' http://0.0.0.0:5000/ * Trying 0.0.0.0...
* TCP_NODELAY set * Connected to 0.0.0.0 (127.0.0.1) port 5000 (#0) > POST / HTTP/1.1 > Host: 0.0.0.0:5000 > User-Agent: curl/7.55.1 > Accept: */* > Content-Length: 654430 > Expect: 100-continue > Content-Type: multipart/form-data; boundary=------------------------6404e93291dc3c9f > < HTTP/1.1 100 Continue < HTTP/1.1 403 FORBIDDEN < Server: nginx/1.9.11 < Date: Fri, 29 Dec 2017 19:41:57 GMT < Content-Type: application/json < Content-Length: 54 < Connection: keep-alive * HTTP error before end of send, stop sending < * transfer closed with 54 bytes remaining to read * Closing connection 0 curl: (18) transfer closed with 54 bytes remaining to read And this is curl output with the smaller file (this is what I would expect independently of the payload size): $ curl -v -F 'content=@tiny_file' http://0.0.0.0:5000/ * Trying 0.0.0.0... * TCP_NODELAY set * Connected to 0.0.0.0 (127.0.0.1) port 5000 (#0) > POST / HTTP/1.1 > Host: 0.0.0.0:5000 > User-Agent: curl/7.55.1 > Accept: */* > Content-Length: 205 > Expect: 100-continue > Content-Type: multipart/form-data; boundary=------------------------8cc5b005486613a4 > < HTTP/1.1 100 Continue < HTTP/1.1 403 FORBIDDEN < Server: nginx/1.9.11 < Date: Fri, 29 Dec 2017 20:12:41 GMT < Content-Type: application/json < Content-Length: 54 < Connection: keep-alive * HTTP error before end of send, stop sending < * Closing connection 0 {"error": {"message": "Invalid key", "code": 403}} Other users reporting similar behaviour: https://stackoverflow.com/questions/32208360/return-a-body-through-nginx-when-theres-an-error-mid-post https://stackoverflow.com/questions/34771225/nginx-http-error-before-end-of-send Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277935,277935#msg-277935 From vbart at nginx.com Fri Dec 29 20:54:17 2017 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Fri, 29 Dec 2017 23:54:17 +0300 Subject: Error response body not sent if upload is incomplete In-Reply-To: References: Message-ID: <2796511.iUK7Q0K1Mb@vbart-laptop> On Friday, 29 December 2017 23:23:07 MSK naktinis wrote: > This happens using the ngx_http_uwsgi_module, but it seems this might be > more generic (i.e. also affects at least upstream servers). > > Here's what happens: > * I send a HTTP/1.1 POST request with a Content-Type: multipart/form-data; > header and a ~600kb file > * Nginx receives the first part of the request and passes it to a uwsgi > app > * The uwsgi app determines that a 403 response along with a JSON body > should be returned > * Nginx sends the 403 response to the client, but only containing the > headers (not the JSON body) > > However, if I do everything the same way, but the uploaded file is tiny > (e.g. 1 byte), I do get the error response body as expected. Non-error > responses also work fine. > > It seems that nginx for some reason decides to ignore the response body (but > still sends the headers) if the payload hasn't finished uploading. > > This looks like an inconsistent behaviour (or even a bug), but correct me > know if there is something I misunderstood. > > Please find curl outputs and links to other users complaining about a > similar thing below. [..] What's in the error log? The error log from one of your links suggests that the problem is in the uwsgi server, not in nginx: 2015/08/25 15:28:49 [error] 10#0: *103 readv() failed (104: Connection reset by peer) while reading upstream wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Sat Dec 30 23:04:26 2017 From: nginx-forum at forum.nginx.org (naktinis) Date: Sat, 30 Dec 2017 18:04:26 -0500 Subject: Error response body not sent if upload is incomplete In-Reply-To: <2796511.iUK7Q0K1Mb@vbart-laptop> References: <2796511.iUK7Q0K1Mb@vbart-laptop> Message-ID: Find the server logs below. It does seem to match what you've quoted.
Do you think the upstream server (uwsgi) is the one not returning the body? I was able to fix this by consuming the request body in my application before returning the response. However, I'm still wondering how nginx is supposed to behave in such situations. Log for the request with the larger file: api-server_1 | [pid: 21|app: 0|req: 1/1] 172.19.0.1 () {38 vars in 601 bytes} [Sat Dec 30 11:14:46 2017] POST / => generated 54 bytes in 2 msecs (HTTP/1.1 403) 2 headers in 78 bytes (1 switches on core 0) api-server_1 | 2017/12/30 11:14:46 [error] 12#12: *1 readv() failed (104: Connection reset by peer) while reading upstream, client: 172.19.0.1, server: , request: "POST / HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "0.0.0.0:5000" api-server_1 | 172.19.0.1 - - [30/Dec/2017:11:14:46 +0000] "POST / HTTP/1.1" 403 25 "-" "curl/7.55.1" "-" With the smaller file: api-server_1 | 172.19.0.1 - - [30/Dec/2017:11:15:41 +0000] "POST / HTTP/1.1" 403 79 "-" "curl/7.55.1" "-" api-server_1 | [pid: 20|app: 0|req: 3/4] 172.19.0.1 () {38 vars in 595 bytes} [Sat Dec 30 11:15:41 2017] POST / => generated 54 bytes in 1 msecs (HTTP/1.1 403) 2 headers in 78 bytes (1 switches on core 0) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277935,277950#msg-277950 From llbgurs at gmail.com Sun Dec 31 01:10:32 2017 From: llbgurs at gmail.com (linbo liao) Date: Sun, 31 Dec 2017 09:10:32 +0800 Subject: Nginx manage multiple https website with keepalived Message-ID: I already submitted an issue on the keepalived GitHub issue page and Stack Overflow. I'm pasting it here again in the hope of getting more help. I want to use nginx to manage multiple HTTPS websites. Per the nginx documentation (the "Name-based HTTPS servers" section), one method is to assign a separate IP address to every HTTPS server, and in our environment this is the only viable method. To avoid a single point of failure, I want to use keepalived to manage a master/backup pair of nginx nodes. The logic is:
1. Set up the master and backup nginx nodes.
2.
Master nginx will hold multiple VIPs via keepalived.
3. Master nginx will be up, backup nginx down (since the backup node has no VIP, starting nginx there will fail).
4. If master nginx goes down, the VIPs transfer to the backup node and backup nginx starts.

I tested on CentOS 7 with keepalived v1.3.5, but ran into some issues.

Configuration, master node:

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_nginx {
    script "/usr/sbin/pidof nginx"
    interval 3
    !weight -5
    rise 1
    fall 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.2.16
        192.168.2.17
    }
    track_script {
        chk_nginx
    }
    notify /etc/keepalived/notify_keepalived.sh
    notify_stop "systemctl stop nginx"
}

Backup node:

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_nginx {
    script "/usr/sbin/pidof nginx"
    interval 3
    !weight -5
    rise 1
    fall 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 96
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.2.16
        192.168.2.17
    }
    track_script {
        chk_nginx
    }
    notify /etc/keepalived/notify_keepalived.sh
    notify_stop "systemctl stop nginx"
}

Notify script:

$ cat /etc/keepalived/notify_keepalived.sh
#!/bin/bash
TYPE=$1
NAME=$2
STATE=$3
echo $STATE > /tmp/k.log
case $STATE in
    "MASTER")
        systemctl start nginx
        exit 0
        ;;
    "BACKUP")
        systemctl stop nginx
        exit 0
        ;;
    "FAULT")
        systemctl stop nginx
        exit 0
        ;;
    *)
        echo "ipsec unknown state"
        exit 1
        ;;
esac

method 1 If weight is left commented out, keepalived checks the nginx pid immediately at startup, even though I set the interval and fall parameters. Master nginx never enters the MASTER state; all nodes enter the FAULT state. No master is elected and no active nginx comes up. Dec 30 04:59:00 localhost systemd: Starting LVS and VRRP High Availability Monitor...
Dec 30 04:59:00 localhost Keepalived[20039]: Starting Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2 Dec 30 04:59:00 localhost Keepalived[20039]: Unable to resolve default script username 'keepalived_script' - ignoring Dec 30 04:59:00 localhost Keepalived[20039]: Opening file '/etc/keepalived/keepalived.conf'. Dec 30 04:59:00 localhost systemd: PID file /var/run/keepalived.pid not readable (yet?) after start. Dec 30 04:59:00 localhost Keepalived[20040]: Starting Healthcheck child process, pid=20041 Dec 30 04:59:00 localhost Keepalived[20040]: Starting VRRP child process, pid=20042 Dec 30 04:59:00 localhost systemd: Started LVS and VRRP High Availability Monitor. Dec 30 04:59:00 localhost Keepalived_healthcheckers[20041]: Opening file '/etc/keepalived/keepalived.conf'. Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: Registering Kernel netlink reflector Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: Registering Kernel netlink command channel Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: Registering gratuitous ARP shared channel Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: Opening file '/etc/keepalived/keepalived.conf'. Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: VRRP_Instance(VI_1) removing protocol VIPs. Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: WARNING - script `systemctl` resolved by path search to `/usr/bin/systemctl`. Please specify full path. Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: SECURITY VIOLATION - scripts are being executed but script_security not enabled. Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: Using LinkWatch kernel netlink reflector... 
Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: VRRP sockpool: [ifindex(3), proto(112), unicast(0), fd(10,11)] Dec 30 04:59:00 localhost Keepalived_vrrp[20042]: /usr/sbin/pidof nginx exited with status 1 Dec 30 04:59:01 localhost Keepalived_vrrp[20042]: VRRP_Instance(VI_1) Now in FAULT state Dec 30 04:59:03 localhost Keepalived_vrrp[20042]: /usr/sbin/pidof nginx exited with status 1 Dec 30 04:59:06 localhost Keepalived_vrrp[20042]: /usr/sbin/pidof nginx exited with status 1 method 2 If weight is uncommented, startup works fine. The master node holds the VIPs and master nginx starts up; backup nginx is down. However, when I shut down master nginx, the master node's priority (100-5) is still higher than the backup node's (96-5), so although master nginx is down, the VIPs stay on the master node. method 3 Set master weight to -5 and backup weight to 2.
1. Start keepalived: the master node gets the VIPs and master nginx starts. Backup nginx is down.
2. Shut down master nginx: master node priority 95 < backup node 96, so the backup node gets the VIPs and backup nginx starts.
3. Shut down backup nginx: master node priority 95 < backup node 96 (98-2), so the backup still holds the VIPs and no active nginx comes up.
In this scenario, where program startup depends on the VIP, how should I manage HA? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL:
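The priority arithmetic in method 3 can be sketched as a toy model (plain Python, not keepalived itself; it only mirrors the usual vrrp_script weight rule: a positive weight is added when the check script succeeds, a negative weight is added when it fails). It shows why, once both nginx instances are down, the backup still outranks the master and neither node starts nginx:

```python
def effective_priority(base: int, weight: int, script_ok: bool) -> int:
    """VRRP effective priority with a tracked script weight applied."""
    if weight >= 0:
        # Positive weight: added only while the check script succeeds.
        return base + (weight if script_ok else 0)
    # Negative weight: added (priority reduced) while the script fails.
    return base + (0 if script_ok else weight)

# Method 3 from the post: master base 100 / weight -5, backup base 96 / weight 2.
master = effective_priority(100, -5, script_ok=False)  # master nginx down -> 95
backup = effective_priority(96, 2, script_ok=False)    # backup nginx down -> 96

# Higher priority wins the election: the backup keeps the VIPs (96 > 95)
# even though nginx is running on neither node.
print(master, backup, "backup" if backup > master else "master")
```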