From igal at lucee.org Sun Oct 1 22:47:23 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Sun, 1 Oct 2017 15:47:23 -0700 Subject: Nginx CPU Issue : Remain at 100% In-Reply-To: References: <10538001.t6D0DEl2ps@vbart-laptop> Message-ID: Hello, On 9/30/2017 12:43 AM, Yogesh Sharma wrote: > Thank you Valentin. Will give it a chance. > > On Fri, 29 Sep 2017 at 5:20 PM, Valentin V. Bartenev > wrote: > > On Friday, 29 Sep 2017 14:38:58 MSK Yogesh Sharma wrote: > > Team, > > > > I am using nginx as a reverse proxy, where I see that once CPU > goes up for > > Nginx it never comes down and remains there forever until we kill > that > > worker. We tried tweaking worker_processes to the number of CPUs we > have, but it > > did not help. > I wonder if this is the same issue that I reported a few months ago at https://www.mail-archive.com/search?l=nginx at nginx.org&q=subject:%22nginx+hogs+cpu+on+Windows%22&o=newest&f=1 I actually missed Maxim's reply and hadn't noticed it until now. I have not experienced the issue lately, but I'll be sure to follow Maxim's advice if it happens again. Igal -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Oct 1 23:39:49 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 2 Oct 2017 00:39:49 +0100 Subject: limit_conn is dropping valid connections and causing memory leaks on nginx reload In-Reply-To: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org> References: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171001233949.GY20907@daoine.org> On Sat, Sep 30, 2017 at 06:05:15AM -0400, Dejan Grofelnik Pelzel wrote: Hi there, I don't have an answer for you. I don't understand what precisely you are reporting. It may help others to understand what you are reporting if you can show a nginx.conf and test case that displays the behaviour that you report.
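For instance, a stripped-down test configuration might be as small as this (a sketch only -- the port, zone name, and backend address are placeholders, not taken from the reported setup):

```nginx
# Hypothetical minimal reproduction config; all names and addresses are placeholders.
events {}

http {
    limit_conn_zone $binary_remote_addr zone=testzone:10m;

    server {
        listen 8080;

        location / {
            limit_conn testzone 10;            # at most 10 concurrent connections per client IP
            proxy_pass http://127.0.0.1:8081;  # any slow test backend
        }
    }
}
```

Together with the exact sequence of requests and reloads that triggers the drop, that would let someone else reproduce it.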
> We are running the nginx 1.13.5 with HTTP/2 in a proxy_pass proxy_cache > configuration with clients having relatively long open connections. Our For example: if you see the failure with http/2 and also with plain https; and maybe even also with just http; then that means that the test nginx.conf can be simpler -- remove as much as possible in your test environment while still demonstrating the problem. > system does automatic reloads for any new configuration and we recently > introduced a limit_conn to some of the config files. I suspect that rapid repeated reloads are not an expected use case for nginx; but I guess that it should work if you have the resources available. > We used the following configuration as recommended by pretty much any > example: > limit_conn_zone $binary_remote_addr zone=1234con:10m; > limit_conn zone1234con 10; If you can copy-paste a nginx.conf that shows the problem, that should make it easier for someone else to recreate the test. The output of "nginx -V" will probably be useful too, in case the problem cannot easily be seen by someone else using their compiled version. Good luck with it, f -- Francis Daly francis at daoine.org From vbart at nginx.com Mon Oct 2 12:53:56 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 02 Oct 2017 15:53:56 +0300 Subject: limit_conn is dropping valid connections and causing memory leaks on nginx reload In-Reply-To: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org> References: <599c564bab9d7b70a8729e87c47a710b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1774260.cAftf3v8fj@vbart-workstation> On Saturday 30 September 2017 06:05:15 Dejan Grofelnik Pelzel wrote: > Hello, > > We are running the nginx 1.13.5 with HTTP/2 in a proxy_pass proxy_cache > configuration with clients having relatively long open connections. Our > system does automatic reloads for any new configuration and we recently > introduced a limit_conn to some of the config files.
After that, I've > started noticing a rapid drop in connections and outgoing network every time > the system would perform a configuration reload. Even stranger, on every > reload the memory usage would go up by about 1-2GB until ultimately > everything crashed if the reloads were too frequent. The memory usage did go > down after old workers were released, but that could take up to 30 minutes, > while the configuration could get reloaded up to twice per minute. > > We used the following configuration as recommended by pretty much any > example: > limit_conn_zone $binary_remote_addr zone=1234con:10m; > limit_conn zone1234con 10; > > I was able to verify the connection drop by doing a simple ab test, for > example, I would run ab -c 100 -n -k 1000 https://127.0.0.1/file.bin > 990 of the connections went through, however, 10 would still be active. > Immediately after the reload, those would get dropped as well. Adding the -r > option would help, but that doesn't fix our problem. > > Finally, after I tried to create a workaround, I've configured the limit > zone to: > limit_conn_zone "v$binary_remote_addr" zone=1234con:10m; > > Suddenly everything magically started to work. The connections were not > being dropped, the limit worked as expected and even more surprisingly the > memory usage was not going up anymore. I've been tearing my hair out almost > all day yesterday trying to figure this out. While I was very happy to see > this resolved, I am now confused as to why nginx behaves in such a way. > > I'm thinking this might likely be a bug, so I'm just wondering if anyone > could explain why it is happening or has a similar problem. > > Thank you! > Have you checked the error log? wbr, Valentin V.
Bartenev From nikolai at lusan.id.au Mon Oct 2 19:28:28 2017 From: nikolai at lusan.id.au (Nikolai Lusan) Date: Tue, 03 Oct 2017 05:28:28 +1000 Subject: Using map directive for multiple virtual hosts Message-ID: <1506972508.4963.30.camel@lusan.id.au> Hi, I have a requirement to rewrite incoming URL requests for multiple virtual hosts, each with a unique (possibly overlapping) set of incoming requests. We have each vhost in a server block, but since the map directive lives in the http block I think that the behaviour we are currently seeing is a result of having multiple map declarations not working - currently the first virtual host with a configured map is working as expected, subsequent maps are just returning the default. Each of our server directives, and its associated map, lives in one file per hostname. These files are included using a globbed include in the main nginx.conf file's http block. So the configuration looks something like this (these are abbreviated examples with all the php processing stuff pulled out of it) nginx.conf: http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; log_format main '$status $request'; sendfile on; server_tokens on; ... include /etc/nginx/our-sites/*; } /etc/nginx/our-sites/site1.com: map $request_uri $new_uri_site1 { /store/product1 /store/new_products; /contact/us /contact/about; /services/consulting /contact/consulting; } server { listen 443 ssl http2; listen [::]:443 ssl http2; ssl on; access_log /var/log/nginx/site1.com.access-tls.log combined; error_log /var/log/nginx/site1.com.error-tls.log; server_name site1.com; root /var/www/site1.com; index index.php; location / { try_files $uri $uri/ index.php?part=$uri&$args; } ...
} /etc/nginx/our-sites/site2.com: map $request_uri $new_uri_site2 { /contact/us /contact/about; /services /contact/consulting; /people /staff/all; } server { listen 443 ssl http2; listen [::]:443 ssl http2; ssl on; access_log /var/log/nginx/site2.com.access-tls.log combined; error_log /var/log/nginx/site2.com.error-tls.log; server_name site2.com; root /var/www/site2.com; index index.php; location / { try_files $uri $uri/ index.php?section=$uri; } ... } Apart from moving the map directives inside each relevant server block as rewrite rules, does anyone know of a working solution for this situation? -- Nikolai Lusan Email: nikolai at lusan.id.au Phone: 0425 661 620 From nginx-forum at forum.nginx.org Mon Oct 2 22:23:37 2017 From: nginx-forum at forum.nginx.org (halfpastjohn) Date: Mon, 02 Oct 2017 18:23:37 -0400 Subject: How to grab a value from a request Message-ID: <31c0ef6ed5f86812b79b0010a2fb2c88.NginxMailingListEnglish@forum.nginx.org> I'm setting up nginx as a proxy router for various APIs. How can I take a specific value from the incoming request and dynamically populate it into the upstream?
Request: https://api.domain.com/[resource]/v2/ Upstream: https://app.domain.com/api/[resource]/v2/ In this case, I want to take [resource] from the request and populate it into the corresponding [resource] of the upstream path. Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276652,276652#msg-276652 From francis at daoine.org Mon Oct 2 23:40:01 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 3 Oct 2017 00:40:01 +0100 Subject: Using map directive for multiple virtual hosts In-Reply-To: <1506972508.4963.30.camel@lusan.id.au> References: <1506972508.4963.30.camel@lusan.id.au> Message-ID: <20171002234001.GZ20907@daoine.org> On Tue, Oct 03, 2017 at 05:28:28AM +1000, Nikolai Lusan wrote: Hi there, > I have a requirement to rewrite incoming URL requests for multiple > virtual hosts, each with a unique (possibly overlapping) set of incoming > requests. We have each vhost in a server block, but since the map directive > lives in the http block I think that the behaviour we are currently seeing > is a result of having multiple map declarations not working - currently the > first virtual host with a configured map is working as expected, subsequent > maps are just returning the default. What request do you make? What response do you get? What response do you want instead? > So the configuration looks something like this (these are abbreviated > examples with all the php processing stuff pulled out of it) I think you may have abbreviated these a bit too much. You never try to use $new_uri_site1 or $new_uri_site2. > /etc/nginx/our-sites/site1.com: > map $request_uri $new_uri_site1 { > /store/product1 /store/new_products; > /contact/us /contact/about; > /services/consulting /contact/consulting; > } > Apart from moving the map directives inside each relevant server block as > rewrite rules, does anyone know of a working solution for this situation? http://nginx.org/r/map map does not go inside server{}.
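(One approach that keeps map in the http block is a single map shared by all vhosts, keyed on the host name plus the URI -- a sketch only, with illustrative entries, not tested against the config above:)

```nginx
# Sketch: combine host and URI into one map key so per-vhost
# entries cannot collide; the entries below are illustrative.
map "$host$request_uri" $new_uri {
    default                   "";
    site1.com/store/product1  /store/new_products;
    site1.com/contact/us      /contact/about;
    site2.com/contact/us      /contact/about;
    site2.com/people          /staff/all;
}
```

Each server block could then do something like `if ($new_uri) { return 301 $new_uri; }`. Note that $request_uri includes any query string, so exact-match keys like these only match requests without arguments.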
Can you give a specific example of a response from nginx that is not what you want? f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Oct 2 23:43:24 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 3 Oct 2017 00:43:24 +0100 Subject: How to grab a value from a request In-Reply-To: <31c0ef6ed5f86812b79b0010a2fb2c88.NginxMailingListEnglish@forum.nginx.org> References: <31c0ef6ed5f86812b79b0010a2fb2c88.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171002234324.GA20907@daoine.org> On Mon, Oct 02, 2017 at 06:23:37PM -0400, halfpastjohn wrote: Hi there, > I'm setting up nginx as a proxy router for various APIs. How can I take a > specific value from the incoming request and dynamically populate it into > the upstream? In general, you can use map (http://nginx.org/r/map) with named regex captures. In this specific case: > Request: https://api.domain.com/[resource]/v2/ > Upstream: https://app.domain.com/api/[resource]/v2/ location / { proxy_pass https://app.domain.com/api/; } might do exactly what you want. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Oct 3 00:06:51 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 3 Oct 2017 01:06:51 +0100 Subject: 404 on try_files In-Reply-To: References: Message-ID: <20171003000651.GB20907@daoine.org> On Thu, Sep 21, 2017 at 09:18:45AM -0700, Nathan Zabaldo wrote: Hi there, > The $request_uri > > /h_32/w_36/test.jpg > > needs to be routed to > > /index.php/img/i/h_32/w_36/test.jpg What does "routed to" mean, specifically? nginx can proxy_pass to an http server or fastcgi_pass to a fastcgi server or do an internal rewrite to another url. It might be useful for you to think in nginx terms when you are using nginx. > index.php will route the request to the "img" controller and "i" method, > then process the image and return it. However, my MVC works off of the > REQUEST_URI. So simply rewriting the url will not work. The REQUEST_URI > needs to be modified.
When your fastcgi server gets multiple fastcgi_param entries with the same "key", does it use the first, the last, or a random one? What fastcgi_param values are you sending? (The debug log, or tcpdump of the network traffic, should show you.) > You can see in the last location block that I'm passing in the modified > REQUEST_URI, but Nginx is trying to open /var/www/vhosts/ > ezrshop.com/htdocs/h_32/w_36/test.jpg (see **Error Logs** below) and > throwing a 404. > > Shouldn't Nginx be trying to send it for processing to index.php?? Why the > 404? Can you show the complete nginx.conf that leads to this response? Perhaps there is some other configuration that is being used here. > In a browser, if I go directly to > https://www.example.com/img/i/h_32/w_36/test.jpg the page comes up just > fine. If I try to go to https://www.example.com/h_32/w_36/test.jpg in my > browser I get 404 and the **Error Logs** you can see below. > > root /var/www/vhosts/example.com/htdocs; > index index.php index.html; > > set $request_url $request_uri; > > location ~ (h|w|fm|trim|fit|pad|border|or|bg)_.*\.(jpg|png)$ { What values do you want $1 and $2 to have here? What values do they actually have? (Hint: $2 is either "jpg" or "png".) > if ($request_uri !~ "/img/i/") { > set $request_url /index.php/img/i$1.$2; > } You've used "if" inside "location" without using something like "return". That may not do what you hope it will do. > try_files $uri $uri/ /index.php/img/i$1.$2; > } > > location / { > try_files $uri $uri/ /index.php$uri?$args; > } > > location ~ ^(.+\.php)(.*)$ { > fastcgi_pass 127.0.0.1:9000; > fastcgi_param CI_ENV production; #CI environment constant > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; Does that "include" include a value for REQUEST_URI? 
> fastcgi_param REQUEST_URI $request_url; > } f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Oct 3 03:51:33 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Mon, 02 Oct 2017 23:51:33 -0400 Subject: Proxy buffering and slow clients related issues Message-ID: <712fbafb368a941206f869d7be9a97e1.NginxMailingListEnglish@forum.nginx.org> Hi, We are trying to use NGINX for a caching service in low bandwidth, high latency mobile networks. The service is to stream 10-sec video segments of different types ranging from 2MB to 50MB. NGINX proxy_buffering configuration is as follows: proxy_buffering on; proxy_buffer_size 4k; proxy_buffers 64 4k; proxy_busy_buffers_size 128k; proxy_temp_file_write_size 64k; The slow clients result in NGINX buffering of the response and writing data to disk as part of temporary buffering. The disk IO is causing higher TTFB and higher load time for video download. We have tried to configure the proxy_max_temp_file_size to 0 to disable the buffering. This change results in interrupts not being balanced as shown below in the top command output - core0 and core15 are using 100%.
top - 13:53:04 up 6 days, 2:31, 5 users, load average: 6.42, 4.35, 3.84
Tasks: 370 total, 6 running, 364 sleeping, 0 stopped, 0 zombie
%Cpu0  :  0.0 us,  0.0 sy, 0.0 ni,  0.0 id, 0.0 wa, 0.0 hi,100.0 si, 0.0 st
%Cpu1  :  7.9 us, 10.3 sy, 0.0 ni, 79.7 id, 0.0 wa, 0.0 hi,  2.1 si, 0.0 st
%Cpu2  : 18.7 us, 14.8 sy, 0.0 ni, 49.0 id, 0.0 wa, 0.0 hi, 17.5 si, 0.0 st
%Cpu3  : 18.7 us, 11.5 sy, 0.0 ni, 52.7 id, 0.0 wa, 0.0 hi, 17.2 si, 0.0 st
%Cpu4  : 20.1 us, 12.4 sy, 0.0 ni, 47.1 id, 0.0 wa, 0.0 hi, 20.5 si, 0.0 st
%Cpu5  : 22.2 us, 12.3 sy, 0.0 ni, 45.6 id, 0.0 wa, 0.0 hi, 19.9 si, 0.0 st
%Cpu6  : 15.1 us, 13.0 sy, 0.0 ni, 68.5 id, 0.0 wa, 0.0 hi,  3.4 si, 0.0 st
%Cpu7  :  9.6 us, 10.7 sy, 0.0 ni, 77.7 id, 0.3 wa, 0.0 hi,  1.7 si, 0.0 st
%Cpu8  :  9.9 us, 12.0 sy, 0.0 ni, 75.3 id, 0.0 wa, 0.0 hi,  2.7 si, 0.0 st
%Cpu9  : 11.7 us, 11.7 sy, 0.0 ni, 73.5 id, 0.0 wa, 0.0 hi,  3.1 si, 0.0 st
%Cpu10 : 10.0 us, 11.0 sy, 0.0 ni, 74.9 id, 0.0 wa, 0.0 hi,  4.1 si, 0.0 st
%Cpu11 :  5.5 us, 12.3 sy, 0.0 ni, 80.9 id, 0.0 wa, 0.0 hi,  1.4 si, 0.0 st
%Cpu12 :  1.7 us,  3.7 sy, 0.0 ni, 93.6 id, 0.0 wa, 0.0 hi,  1.0 si, 0.0 st
%Cpu13 :  2.0 us,  3.7 sy, 0.0 ni, 93.6 id, 0.0 wa, 0.0 hi,  0.7 si, 0.0 st
%Cpu14 :  1.4 us,  2.4 sy, 0.0 ni, 95.3 id, 0.7 wa, 0.0 hi,  0.3 si, 0.0 st
%Cpu15 :  0.0 us,  0.0 sy, 0.0 ni,  0.0 id, 0.0 wa, 0.0 hi,100.0 si, 0.0 st
%Cpu16 : 18.0 us, 10.9 sy, 0.0 ni, 55.5 id, 0.0 wa, 0.0 hi, 15.6 si, 0.0 st
%Cpu17 : 18.8 us, 11.9 sy, 0.0 ni, 48.8 id, 0.0 wa, 0.0 hi, 20.4 si, 0.0 st
%Cpu18 :  6.4 us,  6.8 sy, 0.0 ni, 85.1 id, 0.0 wa, 0.0 hi,  1.7 si, 0.0 st
%Cpu19 :  3.8 us,  6.5 sy, 0.0 ni, 86.3 id, 0.0 wa, 0.0 hi,  3.4 si, 0.0 st
%Cpu20 :  1.7 us,  5.1 sy, 0.0 ni, 92.2 id, 0.3 wa, 0.0 hi,  0.7 si, 0.0 st
%Cpu21 :  2.4 us,  3.0 sy, 0.0 ni, 93.9 id, 0.0 wa, 0.0 hi,  0.7 si, 0.0 st
%Cpu22 :  1.0 us,  4.0 sy, 0.0 ni, 93.9 id, 0.0 wa, 0.0 hi,  1.0 si, 0.0 st
%Cpu23 :  1.3 us,  3.7 sy, 0.0 ni, 94.6 id, 0.0 wa, 0.0 hi,  0.3 si, 0.0 st
KiB Mem : 11523494+total, 27725092 free, 10556008 used, 76953848 buff/cache
KiB Swap: 16777212 total, 16675408 free, 101804 used. 83445760 avail Mem
Couple of queries: a) Why do we get unbalanced interrupts when buffering is disabled? b) How to configure NGINX to throttle the upstream read and avoid temp buffering? Thanks Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276656,276656#msg-276656 From nginx-forum at forum.nginx.org Tue Oct 3 04:47:53 2017 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Tue, 03 Oct 2017 00:47:53 -0400 Subject: Info messages in the log: Connection reset by peer) while sending to client Message-ID: <6aaaf716bbff4de7cea7d8982e6fe1a6.NginxMailingListEnglish@forum.nginx.org> Hi, I had an upstream defined in my config with keepalive 60. But the server is a legacy one and does not handle keep-alive properly. So I removed the keepalive attribute and the errors I was seeing on the client from the upstream went away. But now I see a ton of these info log lines: 2017/10/03 04:37:51 [info] 1933#0: *6091340 recv() failed (104: Connection reset by peer) while sending to client, client: 164.40.242.212, server: kong, request: "POST /public-api/v1/fs/Shared/functional_tests_2017_10_03_06_37_49_068 HTTP/1.1", upstream: "http://10.116.4.42:7280/public-api/v1/fs/Shared/functional_tests_2017_10_03_06_37_49_068", host: "wdio.qa-egnyte.com" I don't see any functional errors in our system, and the info log seems harmless, but I was still curious to understand what this means and if it has any side effects at scale. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276657,276657#msg-276657 From zchao1995 at gmail.com Tue Oct 3 06:05:03 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Tue, 3 Oct 2017 14:05:03 +0800 Subject: Info messages in the log: Connection reset by peer) while sending to client In-Reply-To: <6aaaf716bbff4de7cea7d8982e6fe1a6.NginxMailingListEnglish@forum.nginx.org> References: <6aaaf716bbff4de7cea7d8982e6fe1a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi!
I think this log message is unrelated to the keepalive directive inside your upstream configuration block; it just tells you that the connection was reset by the client side. On 3 October 2017 at 12:48:02, sachin.shetty at gmail.com (nginx-forum at forum.nginx.org) wrote: Hi, I had an upstream defined in my config with keepalive 60. But the server is a legacy one and does not handle keep-alive properly. So I removed the keepalive attribute and the errors I was seeing on the client from the upstream went away. But now I see a ton of these info log lines: 2017/10/03 04:37:51 [info] 1933#0: *6091340 recv() failed (104: Connection reset by peer) while sending to client, client: 164.40.242.212, server: kong, request: "POST /public-api/v1/fs/Shared/functional_tests_2017_10_03_06_37_49_068 HTTP/1.1", upstream: "http://10.116.4.42:7280/public-api/v1/fs/Shared/functional_tests_2017_10_03_06_37_49_068", host: "wdio.qa-egnyte.com" I don't see any functional errors in our system, and the info log seems harmless, but I was still curious to understand what this means and if it has any side effects at scale. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276657,276657#msg-276657 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Oct 3 09:50:48 2017 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Tue, 03 Oct 2017 05:50:48 -0400 Subject: Info messages in the log: Connection reset by peer) while sending to client In-Reply-To: References: Message-ID: <7251e0368e7ad10b0082bbb1e3a44412.NginxMailingListEnglish@forum.nginx.org> Hi, The message in the logs started coming after I removed the "keepalive 60" from the upstream block. The message is connection reset by peer and not client, so I am a bit worried.
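(For reference, in case keepalive gets re-introduced later: the documented requirements for upstream keepalive include speaking HTTP/1.1 to the upstream and clearing the Connection header -- a sketch only; "legacy_backend" is a placeholder name:)

```nginx
# Sketch: directives the nginx docs pair with an upstream "keepalive" pool.
upstream legacy_backend {
    server 10.116.4.42:7280;   # address taken from the log line above
    keepalive 60;              # idle connections kept open per worker process
}

server {
    location / {
        proxy_pass http://legacy_backend;
        proxy_http_version 1.1;          # keepalive to the upstream needs HTTP/1.1
        proxy_set_header Connection "";  # don't forward the default "Connection: close"
    }
}
```

Without the last two directives, nginx closes the upstream connection after each request regardless of the keepalive pool, which a legacy server may also handle badly.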
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276657,276659#msg-276659 From mdounin at mdounin.ru Tue Oct 3 13:11:23 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Oct 2017 16:11:23 +0300 Subject: Proxy buffering and slow clients related issues In-Reply-To: <712fbafb368a941206f869d7be9a97e1.NginxMailingListEnglish@forum.nginx.org> References: <712fbafb368a941206f869d7be9a97e1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171003131123.GH16067@mdounin.ru> Hello! On Mon, Oct 02, 2017 at 11:51:33PM -0400, rnmx18 wrote: > Hi, > > We are trying to use NGINX for a caching service in low bandwidth, high > latency mobile networks. The service is to stream 10-sec video segments of > different types ranging from 2MB to 50MB. > > NGINX proxy_buffering configuration is as follows: > > proxy_buffering on; > proxy_buffer_size 4k; > proxy_buffers 64 4k; > proxy_busy_buffers_size 128k; > proxy_temp_file_write_size 64k; > > The slow clients result in NGINX buffering of the response and writing data > to disk as part of temporary buffering. The disk IO is causing higher TTFB > and higher load time for video download. > > We have tried to configure the proxy_max_temp_file_size to 0 to disable the > buffering. This change results in interrupts not being balanced as shown > below in the top command output - core0 and core15 are using 100%. > > top - 13:53:04 up 6 days, 2:31, 5 users, load average: 6.42, 4.35, 3.84 > Tasks: 370 total, 6 running, 364 sleeping, 0 stopped, 0 zombie > %Cpu0 : 0.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi,100.0 si, 0.0 > st [...] > Couple of queries: > a) Why do we get unbalanced interrupts when buffering is disabled? Try looking at what runs on these CPUs. Given that it's "100.0 si", I would suggest that it is your NIC interrupt threads trying to cope with load. > b) How to configure NGINX to throttle the upstream read and avoid temp > buffering?
With proxy_max_temp_file_size set to 0 nginx won't buffer anything to disk, and will read from upstream up to the available proxy_buffers. While the configured buffers are full, nginx will stop reading from the upstream server until at least one buffer is free. That is, nginx will read from the upstream at a rate controlled by the bandwidth of connected clients. You can use normal client limiting mechanisms such as limit_rate and limit_conn if the rate observed is too high. Additionally, the proxy_limit_rate directive can be used to control rate limiting of connections to upstream servers, see http://nginx.org/r/proxy_limit_rate. Though this is primarily useful when you don't disable disk buffering but rather have to keep it enabled, for example, when using cache. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Oct 3 15:29:44 2017 From: nginx-forum at forum.nginx.org (halfpastjohn) Date: Tue, 03 Oct 2017 11:29:44 -0400 Subject: How to grab a value from a request In-Reply-To: <20171002234324.GA20907@daoine.org> References: <20171002234324.GA20907@daoine.org> Message-ID: I'm sorry, I don't quite follow. How is your example using the map directive?
> Just for clarity, here is what I'm trying to do. > location /[resource]/v2/ { > proxy_pass http://app.domain.com/api/[resource]/v2; > } Well, in general it's something like: location ~ ^/(.*)/v2/ { proxy_pass http://app.domain.com/api/$1/v2; } Depending on whether there are other URL parts and/or variables, the other way would be to use a rewrite within the matching location block (or just globally if every request is proxied). For example: location /users/v2/ { rewrite ^/(.*)/v2/ /api/$1/v2 break; proxy_pass http://app.domain.com; } rr From nginx-forum at forum.nginx.org Tue Oct 3 17:41:06 2017 From: nginx-forum at forum.nginx.org (halfpastjohn) Date: Tue, 03 Oct 2017 13:41:06 -0400 Subject: How to grab a value from a request In-Reply-To: <56DB108ED149495588A056BDB5C2455D@Neiroze> References: <56DB108ED149495588A056BDB5C2455D@Neiroze> Message-ID: <2b253fa883919075c8e9f89021f70ba1.NginxMailingListEnglish@forum.nginx.org> Would this work? location ~ ^/users/v2/ { proxy_pass http://app.domain.com/api/$1/v2; } Would $1 resolve as users or does it need to be inside ()? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276652,276664#msg-276664 From nginx-forum at forum.nginx.org Tue Oct 3 18:00:39 2017 From: nginx-forum at forum.nginx.org (halfpastjohn) Date: Tue, 03 Oct 2017 14:00:39 -0400 Subject: duplicate upstream server marked as down Message-ID: <7ba22baf58c0e9b6984e1f96796f4c86.NginxMailingListEnglish@forum.nginx.org> Can I have two identical server hostnames in an upstream, with one of them marked as "down"? Like this: resolver 10.0.0.8; upstream backend { server backend.example.com down resolve; server backend.example.com/api/v2/; } The reason is that I need to route to the second one (with the longer path), but I also need to resolve the hostname. Unfortunately it won't resolve when there is additional pathing tacked onto the end. So, I'm hoping that this will allow the hostname to be resolved but only send traffic to the full path.
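(For what it's worth, the server directive inside an upstream block only accepts an address -- host plus optional port -- so an entry carrying a URI path like /api/v2/ would be rejected when the configuration is parsed. The path part can usually go on proxy_pass instead; a sketch, untested:)

```nginx
# Sketch: keep the upstream entry host-only and move the path to proxy_pass.
resolver 10.0.0.8;

upstream backend {
    zone backend 64k;                    # shared memory zone, required by "resolve"
    server backend.example.com resolve;  # hostname re-resolved periodically
}

server {
    location / {
        proxy_pass http://backend/api/v2/;  # /foo is forwarded as /api/v2/foo
    }
}
```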
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276665,276665#msg-276665 From nginx-forum at forum.nginx.org Wed Oct 4 06:39:35 2017 From: nginx-forum at forum.nginx.org (akmanocha) Date: Wed, 04 Oct 2017 02:39:35 -0400 Subject: HTTP Status Code 463 Message-ID: <260f53d44b79883eec60e3d2c19a58a3.NginxMailingListEnglish@forum.nginx.org> Hi, I am getting a strange error while using nginx as reverse proxy. >From my upstream I get a response like in acess logs - {"time": "2017-10-04T11:51:37+05:30", "remote_addr": "52.76.220.40", "remote_user": "-", "body_bytes_sent": "0", "request_time": "0.002", "status": "463", "request": "GET / HTTP/1.1", "request_method": "GET", "request_header": "-", "request_body": "-", "http_referer": "-", "http_user_agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.67 Safari/537.36", "application": "nb-qa-nearbuy-nginx", "xforwarder": "52.76.85.66, 52.76.85.66, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40","host":"www.iwanto.in","http_HEADER":"-"} No logs in error logs- Could not find anything on net about status code 463 Looks like an issue with xforwarder has some issues? Why so many repeated IPs? Has anybody else encountered the same issues? 
Rgs Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276667,276667#msg-276667 From francis at daoine.org Wed Oct 4 07:16:18 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 4 Oct 2017 08:16:18 +0100 Subject: How to grab a value from a request In-Reply-To: References: <20171002234324.GA20907@daoine.org> Message-ID: <20171004071618.GC20907@daoine.org> On Tue, Oct 03, 2017 at 11:29:44AM -0400, halfpastjohn wrote: Hi there, > I'm sorry, I don't quite follow. How is your example using the map directive? It isn't. If you want to pick pieces from the request, you can use "map". However, what you have described so far as your use case does not need to pick pieces from the request. > Just for clarity, here is what I'm trying to do. > > location /[resource]/v2/ { > proxy_pass http://app.domain.com/api/[resource]/v2; > } > > In fact, the location bit will be hardcoded with the actual resource, I just If you can hardcode the location bit, you can hardcode exactly the same thing in the proxy_pass bit, no? > location /users/v2/ { > proxy_pass http://app.domain.com/api/{{users}}/v2; Make that be proxy_pass http://app.domain.com/api/users/v2/; and it should Just Work, no? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Oct 4 08:32:41 2017 From: nginx-forum at forum.nginx.org (Dingo) Date: Wed, 04 Oct 2017 04:32:41 -0400 Subject: Reverse cache not working on start pages Message-ID: <267d8da089f0431bd920f59ccc01f1e0.NginxMailingListEnglish@forum.nginx.org> Hi! I am unable to get the reverse cache working on start pages. I am using Ubuntu 16.04 with everything updated.
I have tried this example: https://www.nginx.com/resources/wiki/start/topics/examples/reverseproxycachingexample/ http { proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g; server { location / { proxy_pass http://1.2.3.4; proxy_set_header Host $host; # proxy_cache STATIC; proxy_cache_valid 200 1d; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; } } } It works fine with the line "proxy_cache STATIC" disabled, but when I enable it, I cannot get any start pages working. If I use http://asiteonmyiisserver.com, it will only show the default IIS page, but if I type http://asiteonmyiisserver.com/index.html it will work. What also works is http://asiteonmyiisserver.com/products, and so on. I also have one other server that only hosts one site (an Apache server) with no host headers involved, and that one works even with the start page. I am totally lost when it comes to Nginx, so this could be something really simple. I have searched a lot on the Internet and only found one guy with the same problem. The server pointed to by proxy_pass is an IIS server hosting mostly WordPress domains on a single IP using host headers. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276670,276670#msg-276670 From nginx-forum at forum.nginx.org Wed Oct 4 09:42:10 2017 From: nginx-forum at forum.nginx.org (Dingo) Date: Wed, 04 Oct 2017 05:42:10 -0400 Subject: Reverse cache not working on start pages In-Reply-To: <267d8da089f0431bd920f59ccc01f1e0.NginxMailingListEnglish@forum.nginx.org> References: <267d8da089f0431bd920f59ccc01f1e0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <067ad68f6ad39736b85557c325f5b9ca.NginxMailingListEnglish@forum.nginx.org> A small update to my problem: I started Wireshark and saw that there were no requests going to my server as long as I used the cache. The problem seems to be in the cached content.
I changed: proxy_cache_valid 200 1d; to proxy_cache_valid 200 1m; But I don't seem to get any updates to the cache. No traffic between my Nginx and IIS, everything is still fetched from the cache. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276670,276671#msg-276671 From nginx-forum at forum.nginx.org Wed Oct 4 10:01:35 2017 From: nginx-forum at forum.nginx.org (Dingo) Date: Wed, 04 Oct 2017 06:01:35 -0400 Subject: Reverse cache not working on start pages In-Reply-To: <067ad68f6ad39736b85557c325f5b9ca.NginxMailingListEnglish@forum.nginx.org> References: <267d8da089f0431bd920f59ccc01f1e0.NginxMailingListEnglish@forum.nginx.org> <067ad68f6ad39736b85557c325f5b9ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5166a4e052f8673165df9d5c82951cdd.NginxMailingListEnglish@forum.nginx.org> Another update: I finally deleted /data/nginx/cache/*, and now everything seems to be working. It looks like Nginx doesn't care what cache timeout I use. If it is 1 day, as in the example from nginx.org, everything that was cached at that time will be remembered for 1 day. I have no clue why my start pages were not working, but they are now that I deleted the cache. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276670,276673#msg-276673 From nginx-forum at forum.nginx.org Wed Oct 4 12:01:48 2017 From: nginx-forum at forum.nginx.org (soulseekah) Date: Wed, 04 Oct 2017 08:01:48 -0400 Subject: nginx null bytes on static image Message-ID: <9333fe2b8bb6e3af725ae0bc7e7d0822.NginxMailingListEnglish@forum.nginx.org> nginx/1.12.1 Creating new files with PHP (image resizing) works fine, the new files are served well. Then suddenly after a couple of days the file is returned as a sequence of NULL bytes Content-Length long.
HTTP/1.1 200 OK Server: nginx/1.12.1 Date: Wed, 04 Oct 2017 11:33:57 GMT Content-Type: image/jpeg Content-Length: 8915 Last-Modified: Sun, 01 Oct 2017 19:30:46 GMT Connection: keep-alive ETag: "59d14266-22d3" Expires: Thu, 31 Dec 2037 23:55:55 GMT Cache-Control: max-age=315360000 Accept-Ranges: bytes 00000000: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00000010: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00000020: 0000 0000 0000 0000 0000 0000 0000 0000 ................ ** snip ** 000022c0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 000022d0: 0000 00 Looking at the file via SSH under the nginx user the file is fine: sudo -u nginx cat path/to/file.jpg | xxd 00000000: ffd8 ffe0 0010 4a46 4946 0001 0100 0001 ......JFIF...... 00000010: 0001 0000 fffe 003b 4352 4541 544f 523a .......;CREATOR: 00000020: 2067 642d 6a70 6567 2076 312e 3020 2875 gd-jpeg v1.0 (u 00000030: 7369 6e67 2049 4a47 204a 5045 4720 7636 sing IJG JPEG v6 00000040: 3229 2c20 7175 616c 6974 7920 3d20 3832 2), quality = 82 00000050: 0aff db00 4300 0604 0405 0404 0605 0505 ....C........... Any ideas what may be going wrong? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276676,276676#msg-276676 From nginx-forum at forum.nginx.org Wed Oct 4 13:38:15 2017 From: nginx-forum at forum.nginx.org (manas86) Date: Wed, 04 Oct 2017 09:38:15 -0400 Subject: How to set up nginx as a 2-factor authentication portal that becomes transparent once auth'd? In-Reply-To: <1365807719.11946.140661217079830.54641F7E@webmail.messagingengine.com> References: <1365807719.11946.140661217079830.54641F7E@webmail.messagingengine.com> Message-ID: <8f89628b0b94ce0c8335496af19cfc37.NginxMailingListEnglish@forum.nginx.org> Hi Did you find any solution ? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,238332,276678#msg-276678 From mdounin at mdounin.ru Wed Oct 4 14:24:43 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Oct 2017 17:24:43 +0300 Subject: HTTP Status Code 463 In-Reply-To: <260f53d44b79883eec60e3d2c19a58a3.NginxMailingListEnglish@forum.nginx.org> References: <260f53d44b79883eec60e3d2c19a58a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171004142443.GO16067@mdounin.ru> Hello! On Wed, Oct 04, 2017 at 02:39:35AM -0400, akmanocha wrote: > Hi, > > I am getting a strange error while using nginx as reverse proxy. > > From my upstream I get a response like in acess logs - > > {"time": "2017-10-04T11:51:37+05:30", "remote_addr": "52.76.220.40", > "remote_user": "-", "body_bytes_sent": "0", "request_time": "0.002", > "status": "463", "request": "GET / HTTP/1.1", "request_method": "GET", > "request_header": "-", "request_body": "-", "http_referer": "-", > "http_user_agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 > (KHTML, like Gecko) Chrome/36.0.1985.67 Safari/537.36", "application": > "nb-qa-nearbuy-nginx", "xforwarder": "52.76.85.66, 52.76.85.66, > 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, > 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, > 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, > 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, > 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, 52.76.220.40, > 52.76.220.40, 52.76.220.40, 52.76.220.40, > 52.76.220.40","host":"www.iwanto.in","http_HEADER":"-"} > > No logs in error logs- > > Could not find anything on net about status code 463 > Looks like an issue with xforwarder has some issues? Why so many repeated > IPs? Has anybody else encountered the same issues? nginx itself never returns 463, it is something non-standard likely returned by your backend. 
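As an aside (not from the thread): 463 is absent from the IANA-registered status codes that Python's standard library mirrors, which is a quick way to confirm that such a code must come from a backend or load balancer rather than from nginx itself. A minimal sketch:

```python
from http import HTTPStatus

def is_registered_status(code):
    """True if `code` is an IANA-registered HTTP status known to Python."""
    try:
        HTTPStatus(code)
        return True
    except ValueError:
        return False

print(is_registered_status(200))  # True: a standard code
print(is_registered_status(463))  # False: non-standard, invented by a backend
```

Any such non-standard code is simply passed through by a proxy, which is why it shows up in nginx's access log without a matching error-log entry.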
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Oct 4 14:47:18 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Oct 2017 17:47:18 +0300 Subject: nginx null bytes on static image In-Reply-To: <9333fe2b8bb6e3af725ae0bc7e7d0822.NginxMailingListEnglish@forum.nginx.org> References: <9333fe2b8bb6e3af725ae0bc7e7d0822.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171004144718.GP16067@mdounin.ru> Hello! On Wed, Oct 04, 2017 at 08:01:48AM -0400, soulseekah wrote: > nginx/1.12.1 > > Creating new files with PHP (image resizing) works fine, the new files are > served well. Then suddenly after a couple of days the file is returned as a > sequence of NULL bytes Content-Length long. > > HTTP/1.1 200 OK > Server: nginx/1.12.1 > Date: Wed, 04 Oct 2017 11:33:57 GMT > Content-Type: image/jpeg > Content-Length: 8915 > Last-Modified: Sun, 01 Oct 2017 19:30:46 GMT > Connection: keep-alive > ETag: "59d14266-22d3" > Expires: Thu, 31 Dec 2037 23:55:55 GMT > Cache-Control: max-age=315360000 > Accept-Ranges: bytes > > 00000000: 0000 0000 0000 0000 0000 0000 0000 0000 ................ > 00000010: 0000 0000 0000 0000 0000 0000 0000 0000 ................ > 00000020: 0000 0000 0000 0000 0000 0000 0000 0000 ................ > ** snip ** > 000022c0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ > 000022d0: 0000 00 > > Looking at the file via SSH under the nginx user the file is fine: > > sudo -u nginx cat path/to/file.jpg | xxd > > 00000000: ffd8 ffe0 0010 4a46 4946 0001 0100 0001 ......JFIF...... > 00000010: 0001 0000 fffe 003b 4352 4541 544f 523a .......;CREATOR: > 00000020: 2067 642d 6a70 6567 2076 312e 3020 2875 gd-jpeg v1.0 (u > 00000030: 7369 6e67 2049 4a47 204a 5045 4720 7636 sing IJG JPEG v6 > 00000040: 3229 2c20 7175 616c 6974 7920 3d20 3832 2), quality = 82 > 00000050: 0aff db00 4300 0604 0405 0404 0605 0505 ....C........... > > Any ideas what may be going wrong? I would suggest that there is something wrong at the OS level. 
In particular, you may want to take a look at the filesystem used and/or check if switching off sendfile fixes things. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Oct 4 15:19:11 2017 From: nginx-forum at forum.nginx.org (pankaj@releasemanager.in) Date: Wed, 04 Oct 2017 11:19:11 -0400 Subject: Support Request for Preserving Proxied server original Chunk sizes In-Reply-To: References: Message-ID: Were you able to work around this issue of rechunking based on network performance and bandwidth? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276367,276682#msg-276682 From nginx-forum at forum.nginx.org Wed Oct 4 15:29:23 2017 From: nginx-forum at forum.nginx.org (pankaj@releasemanager.in) Date: Wed, 04 Oct 2017 11:29:23 -0400 Subject: Nginx splitting one single request's into multiple requests to upstream. (version 1.13.3) In-Reply-To: References: Message-ID: <18b6dca9ef4071b74131e64e5e4f3625.NginxMailingListEnglish@forum.nginx.org> Pbooth, Basically, I am receiving a complete json doc as a chunk from one app server, which is then proxied to another app that is handling each chunk as a complete request. The problem is that when you put nginx in between, the document is sometimes divided into multiple chunks, or multiple chunks are merged into one chunk and transmitted to the upstream app server. I have tried various configuration options but couldn't identify which configuration exactly controls this rechunking while proxying, or the maximum size of a chunk, so that there's never a case where two json docs are sent to the app in a single chunk. Ideally it should send the chunk upstream as-is, as received.
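For context on what "rechunking" means at the wire level: chunked transfer encoding (RFC 7230, section 4.1) frames each piece as a hex length line plus data, and the framing is pure transport. A body split into different chunks decodes to exactly the same bytes, which is why a proxy may legally split or merge chunks. A minimal illustrative sketch of the framing (not nginx's code):

```python
def encode_chunked(pieces):
    """Frame byte strings as an HTTP/1.1 chunked message body."""
    out = b"".join(b"%x\r\n%s\r\n" % (len(p), p) for p in pieces)
    return out + b"0\r\n\r\n"


def decode_chunked(data):
    """Reassemble the body; the chunk boundaries are discarded."""
    body, pos = b"", 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)
        if size == 0:
            return body
        body += data[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2


# The same body, split differently on the wire, decodes identically:
whole = encode_chunked([b'{"a":1}'])
split = encode_chunked([b'{"a', b'":1}'])
assert decode_chunked(whole) == decode_chunked(split) == b'{"a":1}'
```

Because the boundaries carry no meaning in HTTP itself, an application that treats one chunk as one JSON document is relying on behaviour that no intermediary is required to preserve.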
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276545,276683#msg-276683 From mdounin at mdounin.ru Wed Oct 4 15:39:00 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Oct 2017 18:39:00 +0300 Subject: Reverse cache not working on start pages In-Reply-To: <5166a4e052f8673165df9d5c82951cdd.NginxMailingListEnglish@forum.nginx.org> References: <267d8da089f0431bd920f59ccc01f1e0.NginxMailingListEnglish@forum.nginx.org> <067ad68f6ad39736b85557c325f5b9ca.NginxMailingListEnglish@forum.nginx.org> <5166a4e052f8673165df9d5c82951cdd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171004153900.GR16067@mdounin.ru> Hello! On Wed, Oct 04, 2017 at 06:01:35AM -0400, Dingo wrote: > Another update: > > I finally deleted /data/nginx/cache/*, and now everything seems to be > working. It looks like Nginx don't bother about what cache timeout I use. If > it is 1 day, as in the example from nginx.org, everything that was cached at > that time will be remembered for 1 day. I have no clue why my startpages was > not working, but they are now that I deleted the cache. Cache validity time is determined when loading a response from the backend, based on the response headers and the configuration active at that time. Any further changes to the configuration have no effect. So what you did is basically correct: if you are making major changes to your caching configuration, you may need to clear the cache manually. -- Maxim Dounin http://nginx.org/ From r at roze.lv Wed Oct 4 15:51:46 2017 From: r at roze.lv (Reinis Rozitis) Date: Wed, 4 Oct 2017 18:51:46 +0300 Subject: How to grab a value from a request In-Reply-To: <2b253fa883919075c8e9f89021f70ba1.NginxMailingListEnglish@forum.nginx.org> References: <56DB108ED149495588A056BDB5C2455D@Neiroze> <2b253fa883919075c8e9f89021f70ba1.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Would this work? > > location ~ ^/users/v2/ { > proxy_pass http://app.domain.com/api/$1/v2; > } No.
> Would $1 resolve as users or does it need to be inside ()? To capture a (PCRE) block into a variable you need (). You can also use named captures if the pattern/configuration becomes more complex (for example http://nginx.org/en/docs/http/server_names.html#regex_names ) rr From peter_booth at me.com Wed Oct 4 15:57:18 2017 From: peter_booth at me.com (Peter Booth) Date: Wed, 04 Oct 2017 11:57:18 -0400 Subject: Reverse cache not working on start pages In-Reply-To: <20171004153900.GR16067@mdounin.ru> References: <267d8da089f0431bd920f59ccc01f1e0.NginxMailingListEnglish@forum.nginx.org> <067ad68f6ad39736b85557c325f5b9ca.NginxMailingListEnglish@forum.nginx.org> <5166a4e052f8673165df9d5c82951cdd.NginxMailingListEnglish@forum.nginx.org> <20171004153900.GR16067@mdounin.ru> Message-ID: I found it useful to define a dropCache location that will delete the cache on request. I did this with a shell script that I invoked with lua (via openresty) but I imagine there are multiple ways to do this. Sent from my iPhone > On Oct 4, 2017, at 11:39 AM, Maxim Dounin wrote: > > Hello! > >> On Wed, Oct 04, 2017 at 06:01:35AM -0400, Dingo wrote: >> >> Another update: >> >> I finally deleted /data/nginx/cache/*, and now everything seems to be >> working. It looks like Nginx don't bother about what cache timeout I use. If >> it is 1 day, as in the example from nginx.org, everything that was cached at >> that time will be remembered for 1 day. I have no clue why my startpages was >> not working, but they are now that I deleted the cache. > > Cache validity time is determened when loading a response from the > backend based on the response headers and the configuration active > at that time. Any futher changes to the configuration have no > effect. > > So what you did is basically correct: if you are making major > changes to your caching configuration, you may need to clear the > cache manually.
> > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Wed Oct 4 16:04:56 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Oct 2017 19:04:56 +0300 Subject: Nginx splitting one single request's into multiple requests to upstream. (version 1.13.3) In-Reply-To: <18b6dca9ef4071b74131e64e5e4f3625.NginxMailingListEnglish@forum.nginx.org> References: <18b6dca9ef4071b74131e64e5e4f3625.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171004160456.GS16067@mdounin.ru> Hello! On Wed, Oct 04, 2017 at 11:29:23AM -0400, pankaj at releasemanager.in wrote: > Pbooth, > > Basically, i am receiving a complete json doc as a chunk from one app server > which is then proxied to another app which is handing each chunk as a > complete request. The problem is when you put nginx in between the document > is sometime divided in multiple chunks or multiple chunks are merge into one > chunk and transmitted to upstream app server. > > I have tried various configuration options but couldn't identify which > configuration exactly controls this rechunking while proxying or max size of > the chunk so there's never a case when two json docs are sent to the app in > a single chunk. Ideally it should sent the chunk as is to upstream as > recieved. By saying "chunk" you mean HTTP chunk, as in chunked transfer encoding, https://tools.ietf.org/html/rfc7230#section-4.1? If yes, you have to basic options: - rewrite your app to don't do that. Transfer encodings are specific to a particular connection, and intermediate HTTP proxies are free to recode message bodies. And there is no way to avoid this in nginx when using http proxying. - re-think your proxing configuration to avoid HTTP proxies between apps in question. For example, TCP proxying as available in nginx stream module should work fine. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Oct 4 16:36:46 2017 From: nginx-forum at forum.nginx.org (k78rc) Date: Wed, 04 Oct 2017 12:36:46 -0400 Subject: unable to setup HTTPS reverse proxy Message-ID: <0955d0f7a7d65419721a372869b47828.NginxMailingListEnglish@forum.nginx.org> Hi, I am struggling to set up nginx as a reverse proxy with HTTPS. In the current test setup I installed nginx on a CentOS 7 machine (host 192.168.1.115) and apache within a docker container. Everything works fine as long as I use HTTP only. However if I enable SSL, my browser always ends up getting response code 400 (bad request). ssl_certificate "/etc/nginx/cert.crt"; ssl_certificate_key "/etc/nginx/cert.key"; ssl_session_cache shared:SSL:1m; ssl_session_timeout 1m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; server { listen 443 ssl; server_name .hello.com; location / { proxy_pass http://127.0.0.1:8000; } } In error.log I read: 2017/10/04 17:40:06 [info] 5695#0: *27 client sent invalid request while reading client request line, client: 192.168.1.120, server: , request: "CONNECT alpha.hello.com:443 HTTP/1.1" On the other hand, if I run in a terminal: openssl s_client -connect 192.168.1.115:443 and then I enter GET https://alpha.hello.com/ I get the expected content (in this case error.log just prints 2017/10/04 18:15:41 [debug] 15843#0: *40 http request line: "GET https://alpha.ciao.com/" ) By the way, I tried different browsers, but the proxy configuration should be pretty simple: I always set 192.168.1.115:443 as HTTPS/SSL proxy or as proxy for all protocols (actually I aim to use HTTPS only) What is my mistake? Is anything missing in the nginx configuration? Is there a proxy setting in the browser I am not aware of?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276690,276690#msg-276690 From mdounin at mdounin.ru Wed Oct 4 17:01:07 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Oct 2017 20:01:07 +0300 Subject: unable to setup HTTPS reverse proxy In-Reply-To: <0955d0f7a7d65419721a372869b47828.NginxMailingListEnglish@forum.nginx.org> References: <0955d0f7a7d65419721a372869b47828.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171004170107.GU16067@mdounin.ru> Hello! On Wed, Oct 04, 2017 at 12:36:46PM -0400, k78rc wrote: > Hi, > > I am struggling in order to setup nginx as reverse proxy with HTTPS. > In current test setup I installed nginx on a CentOS 7 machine (host > 192.168.1.115) and apache within a docker container. > Everything works fine as long as I use HTTP only. > However if I enable SSL, my browser always ends up in getting response code > 400 (bad request). > > ssl_certificate "/etc/nginx/cert.crt"; > ssl_certificate_key "/etc/nginx/cert.key"; > ssl_session_cache shared:SSL:1m; > ssl_session_timeout 1m; > ssl_ciphers HIGH:!aNULL:!MD5; > ssl_prefer_server_ciphers on; > > server { > listen 443 ssl; > server_name .hello.com; > > location / { > proxy_pass http://127.0.0.1:8000; > } > } > > In error.log I read: > > 2017/10/04 17:40:06 [info] 5695#0: *27 client sent invalid request while > reading client request line, client: 192.168.1.120, server: , request: > "CONNECT alpha.hello.com:443 HTTP/1.1" The message suggests that your browser thinks that nginx is a forward proxy and tries to use it as such. This won't work. Check your browser settings. [...] > By the way, I tried different browsers, but the proxy configuration should > be pretty simple: I always set 192.168.1.115:443 as HTTPS/SSL proxy or as > proxy for all protocols (actually I aim to use HTTPS only) > > What is my mistake? Is anything missing in nginx configuration? Is there a > proxy setup in the browser I am not aware of? 
For a reverse proxy you should not configure anything in your browser; it is basically an internal part of an HTTP server. In browser settings you configure _forward_ proxies, and this is not something nginx is expected to be used for. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Oct 4 19:24:58 2017 From: nginx-forum at forum.nginx.org (pankaj@releasemanager.in) Date: Wed, 04 Oct 2017 15:24:58 -0400 Subject: Nginx splitting one single request's into multiple requests to upstream. (version 1.13.3) In-Reply-To: <20171004160456.GS16067@mdounin.ru> References: <20171004160456.GS16067@mdounin.ru> Message-ID: <2f5f62059370e690f0dcc1876c50472f.NginxMailingListEnglish@forum.nginx.org> Maxim, totally agree on your statement and options. But still I was wondering if there's any configuration directive that limits the buffering of rechunked packets until the app is updated to handle them more gracefully. thanks, P Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276545,276697#msg-276697 From peter_booth at me.com Wed Oct 4 19:53:50 2017 From: peter_booth at me.com (Peter Booth) Date: Wed, 04 Oct 2017 15:53:50 -0400 Subject: Nginx splitting one single request's into multiple requests to upstream. (version 1.13.3) In-Reply-To: <2f5f62059370e690f0dcc1876c50472f.NginxMailingListEnglish@forum.nginx.org> References: <20171004160456.GS16067@mdounin.ru> <2f5f62059370e690f0dcc1876c50472f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0516AD38-988A-460E-88ED-9E51F8E739DC@me.com> I can say that Maxim's idea of using TCP proxying with the stream module is very simple to configure - just a couple of lines, and tremendously useful. Sent from my iPhone > On Oct 4, 2017, at 3:24 PM, pankaj at releasemanager.in wrote: > > Maxim, > > totally agree on your statement and options.
> But still i was wondering if there's any configuration directive that > limit's the buffer of rechunked packet until the app is updated to handle it > in a more graceful manner. > > thanks, > > P > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276545,276697#msg-276697 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Oct 4 23:53:06 2017 From: nginx-forum at forum.nginx.org (k78rc) Date: Wed, 04 Oct 2017 19:53:06 -0400 Subject: unable to setup HTTPS reverse proxy In-Reply-To: <20171004170107.GU16067@mdounin.ru> References: <20171004170107.GU16067@mdounin.ru> Message-ID: Oh, now that you tell me it looks quite obvious. Thank you very much for your help! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276690,276699#msg-276699 From nginx-forum at forum.nginx.org Thu Oct 5 11:29:13 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Thu, 05 Oct 2017 07:29:13 -0400 Subject: Using request URI path to store cached files instead of MD5 hash-based path Message-ID: Hi, If proxy caching is enabled, NGINX saves the files under subdirectories of the proxy_cache_path, based on the MD5 hash of the cache-key and the levels parameter value. Is it possible to change this behaviour through configuration to cache the files using the request URI path itself, say, under a host-name directory under the proxy_cache_path? For example, if the proxy_cache_path is /tmp/cache1 and the request is http://www.example.com/movies/file1.mp4, then can the file get cached as /tmp/cache1/www.example.com/movies/file1.mp4? I think such a direct way of defining cached file paths would help in finding or locating specific content in the cache. Also, it would be helpful in purging content from the cache, even using wild-carded expressions. However, I seem to be missing the key benefit of why files are stored based on MD5 hash-based paths.
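For reference on the layout being asked about: nginx names each cache file after the MD5 hex digest of the cache key, and the levels parameter takes subdirectory names from the end of that digest (so levels=1:2 gives <last char>/<previous two chars>/<digest>). A sketch of that mapping, assuming proxy_cache_key "$request_uri":

```python
import hashlib
import posixpath

def nginx_cache_path(cache_root, key, levels=(1, 2)):
    """Mimic nginx's cache layout: the file is named after the MD5 hex
    digest of the cache key, and each `levels` component takes its
    directory name from the end of that digest."""
    digest = hashlib.md5(key.encode()).hexdigest()
    parts, pos = [], len(digest)
    for n in levels:
        parts.append(digest[pos - n:pos])
        pos -= n
    return posixpath.join(cache_root, *parts, digest)

print(nginx_cache_path("/tmp/cache1", "/movies/file1.mp4"))
```

One likely reason for hashing rather than mirroring the URI: digests give fixed-length, filesystem-safe names and spread entries evenly across the level directories, regardless of URI length, depth, or characters.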
Could someone explain the reason for using MD5 hash based file paths? Also, with vanilla-NGINX, if there is no configurable way to use direct request URI paths, is there any external module which could help me to get this functionality? Thanks Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276700,276700#msg-276700 From nginx-forum at forum.nginx.org Thu Oct 5 12:57:14 2017 From: nginx-forum at forum.nginx.org (soulseekah) Date: Thu, 05 Oct 2017 08:57:14 -0400 Subject: nginx null bytes on static image In-Reply-To: <9333fe2b8bb6e3af725ae0bc7e7d0822.NginxMailingListEnglish@forum.nginx.org> References: <9333fe2b8bb6e3af725ae0bc7e7d0822.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0afa54ee7ba5fbf820ea8d3abf541cd6.NginxMailingListEnglish@forum.nginx.org> Thanks. Turning off sendfile worked, but a small test program in C on the same filesystem doesn't yield this behavior. What else could I test to reproduce this and confirm OS or filesystem issues? Thanks for your time and ideas. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276676,276704#msg-276704 From mdounin at mdounin.ru Thu Oct 5 13:39:11 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Oct 2017 16:39:11 +0300 Subject: nginx null bytes on static image In-Reply-To: <0afa54ee7ba5fbf820ea8d3abf541cd6.NginxMailingListEnglish@forum.nginx.org> References: <9333fe2b8bb6e3af725ae0bc7e7d0822.NginxMailingListEnglish@forum.nginx.org> <0afa54ee7ba5fbf820ea8d3abf541cd6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171005133911.GB16067@mdounin.ru> Hello! On Thu, Oct 05, 2017 at 08:57:14AM -0400, soulseekah wrote: > Thanks. Turning off sendfile worked, but a small test program in C on the > same filesystem doesn't yield this behavior. > What else could I test to reproduce this and confirm OS or filesystem > issues? 
If you are able to reproduce the problem again by enabling sendfile in nginx, you may want to trace the relevant syscalls nginx does and try to reproduce exactly the same in your C test program. In particular, something like O_DIRECT, non-blocking mode on sockets, or sendfile() headers/trailers might be important. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Oct 5 18:25:44 2017 From: nginx-forum at forum.nginx.org (shiz) Date: Thu, 05 Oct 2017 14:25:44 -0400 Subject: conflicting rules Message-ID: <8cf78883139acf13805f2f4d9db044c9.NginxMailingListEnglish@forum.nginx.org> Hello, I exclude the stylesheets and JavaScript from the logs to keep them small. However, I want to make an exception for awstats. So far the following doesn't work. Any help? location ~* ^.+\.(css|js)$|^/(css|Scripts|uploads)/ { expires -1; access_log off; log_not_found off; } location ~* ^/Scripts/awstats_misc_tracker.js { access_log on; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276715,276715#msg-276715 From igal at lucee.org Thu Oct 5 18:29:38 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Thu, 5 Oct 2017 11:29:38 -0700 Subject: conflicting rules In-Reply-To: <8cf78883139acf13805f2f4d9db044c9.NginxMailingListEnglish@forum.nginx.org> References: <8cf78883139acf13805f2f4d9db044c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, On 10/5/2017 11:25 AM, shiz wrote: > I exclude the stylesheets and javascript from the logs to alleviate them. > However I would want to make an exception for awstats. > > So far the following doesn't work. Any help? > > location ~* ^/Scripts/awstats_misc_tracker.js { > access_log on; > } Use an exact match for that URL instead of a Regex, i.e. location = /Scripts/awstats_misc_tracker.js { access_log on; } See also http://nginx.org/en/docs/http/ngx_http_core_module.html#location Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Thu Oct 5 18:40:03 2017 From: nginx-forum at forum.nginx.org (shiz) Date: Thu, 05 Oct 2017 14:40:03 -0400 Subject: conflicting rules In-Reply-To: References: Message-ID: Thanks, unfortunately it does not work grep awstat nginx/access.log |wc -l 0 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276715,276718#msg-276718 From igal at lucee.org Thu Oct 5 18:41:14 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Thu, 5 Oct 2017 11:41:14 -0700 Subject: conflicting rules In-Reply-To: References: Message-ID: <91275317-8ea4-6772-2a14-eb7291d304e6@lucee.org> On 10/5/2017 11:40 AM, shiz wrote: > Thanks, unfortunately it does not work Did you reload the config after you modified it? Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Oct 5 18:50:48 2017 From: nginx-forum at forum.nginx.org (shiz) Date: Thu, 05 Oct 2017 14:50:48 -0400 Subject: conflicting rules In-Reply-To: <91275317-8ea4-6772-2a14-eb7291d304e6@lucee.org> References: <91275317-8ea4-6772-2a14-eb7291d304e6@lucee.org> Message-ID: <4ac93e0c4391af81c8647838864152d9.NginxMailingListEnglish@forum.nginx.org> yes of course I've reordered them too: location = /Scripts/awstats_misc_tracker.js { access_log on; } location ~* ^.+\.(css|js)$|^/(css|Scripts|uploads)/ { expires 50d; access_log off; log_not_found off; add_header Cache-Control "public"; } nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful service nginx reload Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276715,276720#msg-276720 From nginx-forum at forum.nginx.org Thu Oct 5 19:15:17 2017 From: nginx-forum at forum.nginx.org (shiz) Date: Thu, 05 Oct 2017 15:15:17 -0400 Subject: conflicting rules In-Reply-To: <8cf78883139acf13805f2f4d9db044c9.NginxMailingListEnglish@forum.nginx.org> References: 
<8cf78883139acf13805f2f4d9db044c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3e76f4793bf5af1c22a0ba4a6c54dca6.NginxMailingListEnglish@forum.nginx.org> 1 - If I disable that section #location ~* ^.+\.(css|js)$|^/(css|Scripts|uploads)/ { #expires -1; #access_log off; #log_not_found off; #} location = /Scripts/awstats_misc_tracker.js { access_log on; } the JavaScript files are shown in the log; /Scripts/awstats_misc_tracker.js isn't, though. 2 - Now if I also disable this section, /Scripts/awstats_misc_tracker.js finally shows up, so something else must be conflicting: #location = /Scripts/awstats_misc_tracker.js { #access_log on; #} ~:/var/log# grep awstat nginx/access.log xx.xx.xx.xxx - - [05/Oct/2017:12:08:31 -0700] "GET /Scripts/awstats_misc_tracker.js HTTP/1.0" Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276715,276721#msg-276721 From igal at lucee.org Thu Oct 5 19:18:52 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Thu, 5 Oct 2017 12:18:52 -0700 Subject: conflicting rules In-Reply-To: <3e76f4793bf5af1c22a0ba4a6c54dca6.NginxMailingListEnglish@forum.nginx.org> References: <8cf78883139acf13805f2f4d9db044c9.NginxMailingListEnglish@forum.nginx.org> <3e76f4793bf5af1c22a0ba4a6c54dca6.NginxMailingListEnglish@forum.nginx.org> Message-ID: And you're sure about the CaSe, right?? I notice that everything is lowercase for you except for the Scripts directory. On 10/5/2017 12:15 PM, shiz wrote: > 1 - If I disable that section > > > #location ~* ^.+\.(css|js)$|^/(css|Scripts|uploads)/ { > #expires -1; > #access_log off; > #log_not_found off; > #} > > location = /Scripts/awstats_misc_tracker.js { > access_log on; > } > > the javascript are shown in the log. /Scripts/awstats_misc_tracker.js isn't > though.
> > 2 - Now if also disable that section, /Scripts/awstats_misc_tracker.js is > finally showing so something else must be conflicting > > #location = /Scripts/awstats_misc_tracker.js { > #access_log on; > #} > > ~:/var/log# grep awstat nginx/access.log > xx.xx.xx.xxx - - [05/Oct/2017:12:08:31 -0700] "GET > /Scripts/awstats_misc_tracker.js HTTP/1.0" > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276715,276721#msg-276721 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Oct 5 19:22:56 2017 From: nginx-forum at forum.nginx.org (shiz) Date: Thu, 05 Oct 2017 15:22:56 -0400 Subject: conflicting rules In-Reply-To: References: Message-ID: <9d301d5137f7b76d720292a9091fe9ef.NginxMailingListEnglish@forum.nginx.org> I'm positive. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276715,276723#msg-276723 From francis at daoine.org Thu Oct 5 20:31:21 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 5 Oct 2017 21:31:21 +0100 Subject: conflicting rules In-Reply-To: References: Message-ID: <20171005203121.GD20907@daoine.org> On Thu, Oct 05, 2017 at 02:40:03PM -0400, shiz wrote: Hi there, > Thanks, unfortunately it does not work > > grep awstat nginx/access.log |wc -l > 0 http://nginx.org/r/access_log Look in the log file called "on", not in the log file called "nginx/access.log". f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Oct 5 20:50:03 2017 From: nginx-forum at forum.nginx.org (shiz) Date: Thu, 05 Oct 2017 16:50:03 -0400 Subject: conflicting rules In-Reply-To: <8cf78883139acf13805f2f4d9db044c9.NginxMailingListEnglish@forum.nginx.org> References: <8cf78883139acf13805f2f4d9db044c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <18da1f7e5843317a4764474667416166.NginxMailingListEnglish@forum.nginx.org> Hey, nice catch, thanks so much! 
access_log on is not defeating access_log off; I replaced the directive with: location = /Scripts/awstats_misc_tracker.js { } Thanks to both of you. Solved. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276715,276725#msg-276725 From nginx-forum at forum.nginx.org Fri Oct 6 07:05:52 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Fri, 06 Oct 2017 03:05:52 -0400 Subject: Multiple upstream_cache_status headers in response in a dual-cache configuration Message-ID: <881c1ce07cd091b545007084b4c47a47.NginxMailingListEnglish@forum.nginx.org> Hi, To realize a distributed caching layer based on disk speed and storage, I have prepared the following configuration with an SSD-cache and an HDD-cache. http { add_header X-UpStream-Server-Cache-Status $upstream_cache_status; # proxy caching configurations proxy_cache_path /tmp/myssd1 keys_zone=myssd1:1m levels=1:2 inactive=60m; proxy_cache_path /tmp/myhdd1 keys_zone=myhdd1:10m levels=1:2 inactive=60m; proxy_cache_key "$request_uri"; upstream mylocalhdd { server 127.0.0.1:82; } upstream myorigin { server 192.168.1.35:80; } # Server-block1 - will cache in SSD if we get a minimum of 5 requests. server { listen 80; server_name example.com www.example.com; # Look up in ssd cache. If not found, goto hdd cache. location / { proxy_pass http://mylocalhdd; proxy_cache myssd1; proxy_cache_min_uses 5; } } # Server-block2 (acting as local upstream) - will fetch from origin and cache in HDD server { listen 82; server_name example.com www.example.com; # Look up in hdd cache. If not found, goto origin location / { proxy_pass http://myorigin; proxy_cache myhdd1; } } } The smaller, yet faster SSD-cache will store content only if I get at least 5 requests for the URL. On the other hand, the larger HDD-cache will cache every request. The flow is straightforward: i) The client's first request is handled by the server at port 80. ii) It looks in the SSD cache. Upon a cache-miss, it proxies to the local upstream at port 82.
iii) The local server at port 82 looks in the HDD cache. Upon a cache miss,
it fetches from the origin, adds the response to the HDD-cache, and sends it
back to the first server.
iv) Server1 does not yet add the content to the SSD-cache (as min_uses is
not reached), and sends the response to the client.
v) For the next 3 requests, server1 sees an SSD-cache miss, but server2
produces an HDD-cache hit.
vi) For the 5th request, server1 will also add the response to the
SSD-cache, as the min_uses criterion is met.
vii) From the 6th request onwards, server1 itself serves the request from
the SSD-cache. No request is sent to the local upstream.

I have added $upstream_cache_status in the response.

For the first request, I see the header twice in the response. This seems to
correspond to a MISS in both the front-end SSD-cache and the back-end
HDD-cache.

< X-UpStream-Server-Cache-Status: MISS
< X-UpStream-Server-Cache-Status: MISS

For the next 4 requests, I see the following. This seems to correspond to a
MISS in the front-end SSD-cache and a HIT in the back-end HDD-cache.

< X-UpStream-Server-Cache-Status: HIT
< X-UpStream-Server-Cache-Status: MISS

For the 6th request, I see the following:

< X-UpStream-Server-Cache-Status: HIT
< X-UpStream-Server-Cache-Status: HIT

Why am I getting the header twice for the 6th request? In this case, the
request is a HIT in the SSD cache itself, and no request is sent to the
local upstream either.

So, shouldn't I be getting only one instance of the header in the response?

Thanks
Rajesh

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276727,276727#msg-276727

From nevis2us at gmail.com  Fri Oct  6 11:16:07 2017
From: nevis2us at gmail.com (=?UTF-8?B?0JDQu9C10LrRgdCw0L3QtNGAINCa0LjRgNC40LvQu9C+0LI=?=)
Date: Fri, 6 Oct 2017 14:16:07 +0300
Subject: subversion behind nginx
Message-ID: 

Hi, I have 2 almost identical vhost definitions:

1.
https://svn.iproducts.test location /repos/ { set $dest $http_destination; if ($http_destination ~ ^https://(.*)$) { set $dest http://$1; } proxy_set_header Destination $dest; proxy_pass http://127.0.0.1:80/repos/; } 2. https://svn-test.iproducts.test location / { set $dest $http_destination; if ($http_destination ~ ^https://(.*)$) { set $dest http://$1; } proxy_set_header Destination $dest; proxy_pass http://127.0.0.1:80/repos/; } The first one works and the second one doesn't and I don't understand why. The only difference is the uri in location. Please advise. Details below. I'm using the following commands to test the configs: 1. svn list https://svn.iproducts.test/repos/wordpress branches/ tags/ trunk/ vendor/ 2. svn list https://svn-test.iproducts.test/wordpress ... svn: PROPFIND of '/repos/wordpress/!svn/vcc/default': authorization failed: Could not authenticate to server: rejected Basic challenge ( https://svn-test.iproducts.test) In the apache logs the first 3 lines are identical but the second PROPFIND has '/repos/repos' instead of '/repos' and fails: ==> /var/log/httpd/access_log <== 127.0.0.1 - - [06/Oct/2017:13:43:54 +0300] "OPTIONS /repos/wordpress HTTP/1.0" 401 460 "-" "SVN/1.6.11 (r934486) neon/0.29.3" 127.0.0.1 - xxxxx [06/Oct/2017:13:43:54 +0300] "OPTIONS /repos/wordpress HTTP/1.0" 200 195 "-" "SVN/1.6.11 (r934486) neon/0.29.3" 127.0.0.1 - xxxxx [06/Oct/2017:13:43:54 +0300] "PROPFIND /repos/wordpress HTTP/1.0" 207 661 "-" "SVN/1.6.11 (r934486) neon/0.29.3" 127.0.0.1 - xxxxx [06/Oct/2017:13:43:54 +0300] "PROPFIND /repos/wordpress/!svn/vcc/default HTTP/1.0" 207 415 "-" "SVN/1.6.11 (r934486) neon/0.29.3" ... 
==> /var/log/httpd/access_log <== 127.0.0.1 - - [06/Oct/2017:13:40:49 +0300] "OPTIONS /repos/wordpress HTTP/1.0" 401 460 "-" "SVN/1.6.11 (r934486) neon/0.29.3" 127.0.0.1 - xxxxx [06/Oct/2017:13:40:52 +0300] "OPTIONS /repos/wordpress HTTP/1.0" 200 195 "-" "SVN/1.6.11 (r934486) neon/0.29.3" 127.0.0.1 - xxxxx [06/Oct/2017:13:40:52 +0300] "PROPFIND /repos/wordpress HTTP/1.0" 207 661 "-" "SVN/1.6.11 (r934486) neon/0.29.3" ==> /var/log/httpd/error_log <== [Fri Oct 06 13:40:52 2017] [error] [client 127.0.0.1] access to /repos/repos/wordpress/!svn/vcc/default failed, reason: verification of user id 'xxxxx' not configured ==> /var/log/httpd/access_log <== 127.0.0.1 - xxxxx [06/Oct/2017:13:40:52 +0300] "PROPFIND /repos/repos/wordpress/!svn/vcc/default HTTP/1.0" 401 460 "-" "SVN/1.6.11 (r934486) neon/0.29.3" -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Oct 6 11:26:42 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Oct 2017 14:26:42 +0300 Subject: Multiple upstream_cache_status headers in response in a dual-cache configuration In-Reply-To: <881c1ce07cd091b545007084b4c47a47.NginxMailingListEnglish@forum.nginx.org> References: <881c1ce07cd091b545007084b4c47a47.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171006112642.GG16067@mdounin.ru> Hello! On Fri, Oct 06, 2017 at 03:05:52AM -0400, rnmx18 wrote: [...] > For the 6th request, I see the following: > > < X-UpStream-Server-Cache-Status: HIT > < X-UpStream-Server-Cache-Status: HIT > > Why am I getting the header twice for the 6th request. In this case, the > request is HIT by the SSD cache itself, and there is no request sent to > local upstream also. That's because the cached response already contains the header in it. 
--
Maxim Dounin
http://nginx.org/

From r at roze.lv  Fri Oct  6 11:39:03 2017
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 6 Oct 2017 14:39:03 +0300
Subject: Multiple upstream_cache_status headers in response in a dual-cache configuration
In-Reply-To: <881c1ce07cd091b545007084b4c47a47.NginxMailingListEnglish@forum.nginx.org>
References: <881c1ce07cd091b545007084b4c47a47.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <000001d33e97$b59fd110$20df7330$@roze.lv>

> Why am I getting the header twice for the 6th request. In this case, the
> request is HIT by the SSD cache itself, and there is no request sent to local
> upstream also.
>
> So, shouldn't I be getting only one instance of the header in the response?

Nginx saves the object with all the headers from the origin. Since your "ssd
cache" makes a request to "mylocalhdd", which returns the object with the
X-UpStream-Server-Cache-Status header, nginx returns the original object +
adds an extra header (coming from the "ssd cache" server block).

To hide the second header you probably need to use proxy_hide_header in the
"ssd cache" block.

rr

From nginx-forum at forum.nginx.org  Fri Oct  6 12:26:19 2017
From: nginx-forum at forum.nginx.org (rnmx18)
Date: Fri, 06 Oct 2017 08:26:19 -0400
Subject: Multiple upstream_cache_status headers in response in a dual-cache configuration
In-Reply-To: <000001d33e97$b59fd110$20df7330$@roze.lv>
References: <000001d33e97$b59fd110$20df7330$@roze.lv>
Message-ID: 

Hi,

Thank you Maxim and Reinis for your replies.

I verified that when the response from the backend HDD-cache gets cached in
the front-end SSD-cache, the response includes the
X-Upstream-Server-Cache-Status header added by the HDD-cache upstream.
Hence, I am seeing two headers in a response served by my SSD-cache - one
from the cached file, and one added by the front-end proxy.

Yes, by adding proxy_hide_header, I was able to avoid the second header in
the response to the client.
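[A sketch of that fix, for reference. This fragment is only illustrative and
assumes the two-tier configuration quoted earlier in this thread; the
upstream, zone, and header names come from that post.]

```nginx
# Front-end "SSD" tier (sketch, not the full original configuration).
server {
    listen 80;

    location / {
        proxy_pass http://mylocalhdd;
        proxy_cache myssd1;
        proxy_cache_min_uses 5;

        # Drop the cache-status header that was stored inside the cached
        # response by the back-end "HDD" tier...
        proxy_hide_header X-UpStream-Server-Cache-Status;

        # ...and emit a single, fresh one reflecting this tier instead.
        add_header X-UpStream-Server-Cache-Status $upstream_cache_status;
    }
}
```

With this in place, the client sees exactly one
X-UpStream-Server-Cache-Status header, reflecting the front-end tier's
cache status.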
One more question: Is there any mechanism to avoid or exclude the
X-Upstream-Server-Cache-Status header (or, in general, any specific headers)
from being added to the metadata header of the cached file?

Thanks,
Rajesh

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276727,276732#msg-276732

From lucas at lucasrolff.com  Fri Oct  6 13:24:45 2017
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Fri, 6 Oct 2017 13:24:45 +0000
Subject: Using request URI path to store cached files instead of MD5 hash-based path
In-Reply-To: 
References: 
Message-ID: <4BC54DE1-E1F0-4523-B882-CBF3079A00E7@lucasrolff.com>

Hi,

> Is it possible to change this behaviour through configuration to cache the files using the request URI path itself, say, under the host-name directory under the proxy_cache_path.

No, it's not possible to do that with proxy_cache; you can however do it
with proxy_store
( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_store ).

> I think such a direct way of defining cached file paths would help in finding or locating specific content in cache

You can already find cached file paths by calculating the md5 hash yourself;
it's rather easy.

> Also, it would be helpful in purging content from cache, even using wild-carded expressions.

You can easily purge the cache, just not with wildcard expressions; for that
you'd need the plus version of nginx.

> However, I seem to be missing the key benefit of why files are stored based on MD5 hash based paths

One of the benefits I can think of is the fact that you only deal with
a-z0-9 characters. Using ASCII characters ensures compatibility with every
system, and it's lightweight since you only have to deal with a small set of
characters. If you used $request_uri instead, you'd have to deal with UTF-8
or similar, which makes lookups a lot heavier to do, and there could be
compatibility issues with certain characters; and since $request_uri
includes query strings as well, you'd end up with very weird filenames.
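[As an aside on "calculating the md5 hash yourself": nginx names a cached
file after the MD5 of the cache key and builds the enclosing subdirectories
from the tail of that hex digest according to the levels parameter. A
minimal sketch in Python — the cache directory and key below are made-up
examples; with proxy_cache_key "$request_uri", as in the configuration
earlier in this thread, the key is simply the request URI.]

```python
import hashlib

def nginx_cache_path(cache_dir, key, levels=(1, 2)):
    # nginx stores a cached response at <cache_dir>/<level dirs>/<md5(key)>,
    # where the level directories are taken from the END of the hex digest:
    # with levels=1:2 that is the last character, then the two before it.
    digest = hashlib.md5(key.encode()).hexdigest()
    parts, pos = [], len(digest)
    for n in levels:
        parts.append(digest[pos - n:pos])
        pos -= n
    return "/".join([cache_dir] + parts + [digest])

# Hypothetical example: a key cached under /tmp/myhdd1 with levels=1:2.
print(nginx_cache_path("/tmp/myhdd1", "/movies/file1.mp4"))
```

Going the other way, `echo -n "/movies/file1.mp4" | md5sum` on the shell
gives the same digest and points straight at the file on disk.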
At the same time, it wouldn't surprise me if it's a lot more efficient for
nginx to have a consistent filename length when indexing data: you know that
every file on the filesystem will be 32 characters long, you know exactly
how much memory each file takes in memory, and you wouldn't run into the
problem where people have a request URI of a few hundred or even thousands
of characters and possibly 10s or 100s of sub-directories.

I'm pretty sure that nginx decided to use an md5 hash due to a lot of
benefits over storing it as proxy_store currently does. Maybe Maxim or
someone else with extensive knowledge about the codebase and its design
decisions can share briefly why.

Best Regards,
Lucas Rolff

On 05/10/2017, 13.29, "nginx on behalf of rnmx18" wrote:

    Hi,

    If proxy caching is enabled, NGINX is saving the files under
    subdirectories of the proxy_cache_path, based on the MD5 hash of the
    cache-key and the levels parameter value.

    Is it possible to change this behaviour through configuration to cache
    the files using the request URI path itself, say, under the host-name
    directory under the proxy_cache_path.

    For example, if the proxy_cache_path is /tmp/cache1 and the request is
    http://www.example.com/movies/file1.mp4, then can the file get cached
    as /tmp/cache1/www.example.com/movies/file1.mp4

    I think such a direct way of defining cached file paths would help in
    finding or locating specific content in cache. Also, it would be
    helpful in purging content from cache, even using wild-carded
    expressions.

    However, I seem to be missing the key benefit of why files are stored
    based on MD5 hash based paths. Could someone explain the reason for
    using MD5 hash based file paths?

    Also, with vanilla-NGINX, if there is no configurable way to use direct
    request URI paths, is there any external module which could help me to
    get this functionality?
Thanks Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276700,276700#msg-276700 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From duanemulder at rattyshack.ca Fri Oct 6 13:50:34 2017 From: duanemulder at rattyshack.ca (duanemulder at rattyshack.ca) Date: Fri, 06 Oct 2017 09:50:34 -0400 Subject: subversion behind nginx In-Reply-To: References: Message-ID: <20171006135034.7958615.95674.1666@rattyshack.ca> An HTML attachment was scrubbed... URL: From smart.imran003 at gmail.com Fri Oct 6 14:00:26 2017 From: smart.imran003 at gmail.com (Syed Imran) Date: Fri, 6 Oct 2017 19:30:26 +0530 Subject: Nginx configuration with artifactory Message-ID: Hi, I have a VM with ip 10.130.0.198 (ssh will work within vpn) And I have another ip XXX.XXX.XXX.XXX (public ip, visible to all in internet). So I have a load balancer configuration as below, Reaching to 80 port of XXX.XXX.XXX.XXX will internally reach 10.130.0.198 80 Reaching to 443 port of XXX.XXX.XXX.XXX will internally reach 10.130.0.198 80 And externally Reaching to http port will be redirected to https port for XXX.XXX.XXX.XXX Now below is my nginx configuration file. 
server {
    listen 80;
    server_name 10.130.0.198;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs ##
    access_log /var/log/nginx/artifactory.net.nokia.com-access.log timing;
    ## error_log /var/log/nginx/artifactory.net.nokia.com-error.log;

    rewrite ^/$ /artifactory/webapp/ redirect;
    rewrite ^/artifactory/?(/webapp)?$ /artifactory/webapp/ redirect;
    chunked_transfer_encoding on;
    client_max_body_size 0;

    location /artifactory/ {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://10.130.0.198:8081/artifactory/;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

My Artifactory application is deployed using Docker on port 8081. But when I
do a docker login, nginx always returns 405 not found. The web UI works fine
with both IPs.

Is there anything wrong with my nginx configuration? Can someone help?

Thanks,
Syed
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shahzaib.cb at gmail.com  Fri Oct  6 14:17:39 2017
From: shahzaib.cb at gmail.com (shahzaib mushtaq)
Date: Fri, 6 Oct 2017 19:17:39 +0500
Subject: Block specific request pattern !!
Message-ID: 

Hi,

We're serving mp4 files over NGINX with added security hash+ttl, but there
are some leechers accessing videos with the following pattern who are not
getting blocked:

https://domain.com/files/videos/2017/10/04/15071356364fc6b-720.mp4?h=n_Saa78MV6BJTcoRHwHelA&ttl=1507303734& ?*/WhileYouWereSleeping56.mp4*

============================================================

As you can see, the highlighted part of the request is abnormal; there
should be nothing included after the ttl value. Requests are supposed to be
served with the following pattern:

https://domain.com/files/videos/2017/10/04/15071356364fc6b-720.mp4?h=n_Saa78MV6BJTcoRHwHelA&ttl=1507303734& ?

Is there a way we can block requests that do not end at the ttl value?

Thanks.
Shahzaib
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nevis2us at gmail.com  Fri Oct  6 15:09:42 2017
From: nevis2us at gmail.com (=?UTF-8?B?0JDQu9C10LrRgdCw0L3QtNGAINCa0LjRgNC40LvQu9C+0LI=?=)
Date: Fri, 6 Oct 2017 18:09:42 +0300
Subject: subversion behind nginx
In-Reply-To: <20171006135034.7958615.95674.1666@rattyshack.ca>
References: <20171006135034.7958615.95674.1666@rattyshack.ca>
Message-ID: 

Thanks for your answer, I suspected as much. Is there a more detailed
explanation of why the paths must match? Can it be helped with rewrite
rules, additional headers or changing subversion settings?

I can move subversion to / on an apache host but it's gonna be twice as
slow, and the grand idea is to speed up subversion operations by moving it
to a non-ssl backend.

2017-10-06 16:50 GMT+03:00 :

> Hello
>
> With svn behind nginx you cannot change the path. The second location
> needs to be /repos as well
>
> =D
>
> Sent from the last QNX powered smartphone
> *From: *Александр Кириллов
> *Sent: *Friday, October 6, 2017 7:16 AM
> *To: *nginx at nginx.org
> *Reply To: *nginx at nginx.org
> *Subject: *subversion behind nginx
>
> Hi, I have 2 almost identical vhost definitions:
>
> 1.
https://svn.iproducts.test > > location /repos/ { > > set $dest $http_destination; > if ($http_destination ~ ^https://(.*)$) { > set $dest http://$1; > } > > proxy_set_header Destination $dest; > proxy_pass http://127.0.0.1:80/repos/; > } > > 2. https://svn-test.iproducts.test > > location / { > > set $dest $http_destination; > if ($http_destination ~ ^https://(.*)$) { > set $dest http://$1; > } > > proxy_set_header Destination $dest; > proxy_pass http://127.0.0.1:80/repos/; > } > > The first one works and the second one doesn't and I don't understand why. > The only difference is the uri in location. Please advise. Details below. > > > I'm using the following commands to test the configs: > > 1. svn list https://svn.iproducts.test/repos/wordpress > branches/ > tags/ > trunk/ > vendor/ > > 2. svn list https://svn-test.iproducts.test/wordpress > ... > svn: PROPFIND of '/repos/wordpress/!svn/vcc/default': authorization > failed: Could not authenticate to server: rejected Basic challenge ( > https://svn-test.iproducts.test) > > > In the apache logs the first 3 lines are identical but the second PROPFIND > has '/repos/repos' instead of '/repos' and fails: > > ==> /var/log/httpd/access_log <== > 127.0.0.1 - - [06/Oct/2017:13:43:54 +0300] "OPTIONS /repos/wordpress > HTTP/1.0" 401 460 "-" "SVN/1.6.11 (r934486) neon/0.29.3" > 127.0.0.1 - xxxxx [06/Oct/2017:13:43:54 +0300] "OPTIONS /repos/wordpress > HTTP/1.0" 200 195 "-" "SVN/1.6.11 (r934486) neon/0.29.3" > 127.0.0.1 - xxxxx [06/Oct/2017:13:43:54 +0300] "PROPFIND /repos/wordpress > HTTP/1.0" 207 661 "-" "SVN/1.6.11 (r934486) neon/0.29.3" > > 127.0.0.1 - xxxxx [06/Oct/2017:13:43:54 +0300] "PROPFIND > /repos/wordpress/!svn/vcc/default HTTP/1.0" 207 415 "-" "SVN/1.6.11 > (r934486) neon/0.29.3" > > ... 
> > ==> /var/log/httpd/access_log <== > 127.0.0.1 - - [06/Oct/2017:13:40:49 +0300] "OPTIONS /repos/wordpress > HTTP/1.0" 401 460 "-" "SVN/1.6.11 (r934486) neon/0.29.3" > 127.0.0.1 - xxxxx [06/Oct/2017:13:40:52 +0300] "OPTIONS /repos/wordpress > HTTP/1.0" 200 195 "-" "SVN/1.6.11 (r934486) neon/0.29.3" > 127.0.0.1 - xxxxx [06/Oct/2017:13:40:52 +0300] "PROPFIND /repos/wordpress > HTTP/1.0" 207 661 "-" "SVN/1.6.11 (r934486) neon/0.29.3" > > ==> /var/log/httpd/error_log <== > [Fri Oct 06 13:40:52 2017] [error] [client 127.0.0.1] access to > /repos/repos/wordpress/!svn/vcc/default failed, reason: verification of > user id 'xxxxx' not configured > > ==> /var/log/httpd/access_log <== > 127.0.0.1 - xxxxx [06/Oct/2017:13:40:52 +0300] "PROPFIND > /repos/repos/wordpress/!svn/vcc/default HTTP/1.0" 401 460 "-" "SVN/1.6.11 > (r934486) neon/0.29.3" > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Oct 6 15:40:46 2017 From: nginx-forum at forum.nginx.org (joseph-pg) Date: Fri, 06 Oct 2017 11:40:46 -0400 Subject: Secure connection failed on Firefox Message-ID: Hi, I'm currently testing nginx 1.13.6 x64 on my development machine, which is Windows 10 1703 x64, and sometimes I got a "Secure connection failed" error on Firefox (55.x and 56). If I hit the reload button (F5) repeatedly, the page will eventually load. Dev tools shows: 200 OK, size 0, and transferred -. nginx debug log doesn't show anything weird. Things that I observed: 1. 1.13.5 works fine 2. Chrome on Android works fine I've tested 5a3ab1b5804b, 46ddff109e72, and 924b6ef942bf and they have the same problem. Configuration is pretty much default with HTTPS and HTTP/2 server blocks. Thanks, Joseph Aditya P. G. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276738,276738#msg-276738 From luky-37 at hotmail.com Fri Oct 6 16:15:11 2017 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 6 Oct 2017 16:15:11 +0000 Subject: AW: Secure connection failed on Firefox In-Reply-To: References: Message-ID: Hello, > I'm currently testing nginx 1.13.6 x64 on my development machine, which is There is no 1.13.6. > I've tested 5a3ab1b5804b, 46ddff109e72, and 924b6ef942bf and they have the > same problem. Ah so you are running directly from the development tree. In that case, I suggest to bisect it to the offending commit. Try 12cadc4669a7 first of all, if that also fails try 019b91bd21cc, if that also fails try a0e472a2c4f1. > Configuration is pretty much default with HTTPS and HTTP/2 server blocks. If that turns out to be a regression, I assume the configuration will still be necessary, even if it is pretty much default. regards, lukas From francis at daoine.org Fri Oct 6 19:04:49 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 6 Oct 2017 20:04:49 +0100 Subject: Nginx configuration with artifactory In-Reply-To: References: Message-ID: <20171006190449.GF20907@daoine.org> On Fri, Oct 06, 2017 at 07:30:26PM +0530, Syed Imran wrote: Hi there, > My artifactory application in deployed using docker in port 8081. > But when I do a docker login, nginx always returns 405 not found. But the > web UI works find with both the ip?s. Does nginx return 405, or does nginx return "not found"? The distinction might matter. > Is there anything wrong with my nginx configuration. Can someone help. There appear to be two "artifactory" documents relating to nginx -- https://www.jfrog.com/confluence/display/RTF/Configuring+a+Reverse+Proxy and https://www.jfrog.com/confluence/display/RTF/Configuring+NGINX Do things work if you use exactly the configuration that they suggest? If so, then you can start making changes to see where it starts to break. 
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Oct 6 19:18:13 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 6 Oct 2017 20:18:13 +0100 Subject: Block specific request pattern !! In-Reply-To: References: Message-ID: <20171006191813.GG20907@daoine.org> On Fri, Oct 06, 2017 at 07:17:39PM +0500, shahzaib mushtaq wrote: Hi there, > We're serving mp4 files over NGINX with added security hash+ttl but there's > some kind of leechers accessing videos with following pattern but not > getting blocked: You seem to suggest that you have some blocking configured, and that it does not do all that you want. What blocking do you have configured? It may be simplest to adjust that, rather than try to add something new. > https://domain.com/files/videos/2017/10/04/15071356364fc6b-720.mp4?h=n_Saa78MV6BJTcoRHwHelA&ttl=1507303734& > ?*/WhileYouWereSleeping56.mp4* > https://domain.com/files/videos/2017/10/04/15071356364fc6b-720.mp4?h=n_Saa78MV6BJTcoRHwHelA&ttl=1507303734& > ? The two question marks in the url looks odd to me; but it is in both your "bad" and "good" ones, so maybe it is normal. > Is there a way we can block the requests not ending up on ttl value ? You have $query_string, which is the same as $args. If you can define a regex pattern which matches everything you want, you can return failure for everything else. Perhaps ($args !~ "&ttl=[0-9]*&\?$") is a suitable test condition in your environment. Good luck with it, f -- Francis Daly francis at daoine.org From lucas at lucasrolff.com Fri Oct 6 19:32:51 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Fri, 6 Oct 2017 19:32:51 +0000 Subject: Directive inheritance Message-ID: <3058D818-89A6-47B2-92E2-3AB3287AE536@lucasrolff.com> Hi guys, I do a lot of nginx configuration which contains plenty of ?location? blocks, however ? 
I often see myself duplicating a lot of directives throughout my
configuration, which can sadly make a single nginx server block about 400
lines long, often due to repeated settings.

Not only is it a mess with big files (at least they're generated
automatically), but I also have the feeling I waste some memory if I keep
redefining the settings again and again (or is nginx smart enough to
"deduplicate" settings whenever possible?)

My configs usually look something like:

server {
    location / {
        // sendfile, client_body_buffer_size, proxy_* settings, add_header repeated

        location ~* \.(?:htm|html)$ {
            // sendfile, client_body_buffer_size, proxy_* settings, add_header repeated
        }

        location ~* \.(?:manifest|appcache|xml|json)$ {
            // sendfile, client_body_buffer_size, proxy_* settings, add_header repeated
        }
    }
}

I know that there are some settings, such as proxy_pass, which can't be
inherited from the parent location or server block. However, is there any
semi-easy way to figure out whether a directive in nginx or its modules gets
inherited or not? (I don't mind digging around in some nginx source code.)

I could try to remove a bunch of directives from the lower location blocks
and see if things still work, but it would be very time consuming.

Reading the nginx docs for each directive, *sometimes* the docs say whether
a directive gets inherited or not, but they're not always complete -
sendfile, for example, gets inherited as far as I know, but the docs don't
say so.

The directives I mostly use are things like:

proxy_*
sendfile
client_body_buffer_size
add_header
expires (these differ for each location block I have)

I wonder if someone either knows a good way to figure this out, or any
document on the web that goes extensively into explaining what (might)
inherit based on general design patterns.

Also, can anyone confirm or deny whether duplicating the directives actually
increases memory usage - because if it has next to no additional resource
usage -
then I could save some time. The amount of zones/server blocks are currently small, but I?d like to be able to scale it to thousands on fairly common hardware. Best Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Oct 6 20:22:52 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 6 Oct 2017 21:22:52 +0100 Subject: Directive inheritance In-Reply-To: <3058D818-89A6-47B2-92E2-3AB3287AE536@lucasrolff.com> References: <3058D818-89A6-47B2-92E2-3AB3287AE536@lucasrolff.com> Message-ID: <20171006202252.GH20907@daoine.org> On Fri, Oct 06, 2017 at 07:32:51PM +0000, Lucas Rolff wrote: Hi there, > I know that there?s some settings such as proxy_pass which can?t inherit from the parent location or server block, however ? is there any semi-easy way to figure out if a directive in nginx or it?s modules gets inherited or not? (I don?t mind digging around in some nginx source code) > I wonder if someone either knows a good way to figure out, or any document on the web that goes extensively into explaining what (might) inherit based on general design patterns. My quick response, without doing too much research, is: * "rewrite" module directives (if, return) don't inherit * "handler" directives (proxy_pass, fastcgi_pass) don't inherit * pretty much anything else that is valid in "location" does inherit (That's probably not correct, but could be a good starting point for experimentation.) And be aware that inheritance is by replacement, or not at all -- so one "add_header" in a location means that the only "add_header" relevant to that location is the one that is there; while no "add_header" in a location means that all of the ones inherited from server{} are relevant. 
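[To make the add_header replacement behaviour concrete, here is a small
hypothetical fragment; the header names and location paths are invented for
illustration.]

```nginx
server {
    # Inherited by any location below that declares no add_header of its own.
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;

    location /inherits/ {
        # No add_header here: responses carry BOTH server-level headers.
    }

    location /replaces/ {
        # One add_header here: the inherited set is replaced wholesale, so
        # responses carry ONLY Cache-Control - neither server-level header
        # is sent for this location unless it is repeated here as well.
        add_header Cache-Control "no-store";
    }
}
```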
If you want the full details, it's a matter of Read The Fine Source --
each module has a "_module_ctx" which includes a function typically named
"_merge_loc_conf" which shows how each directive is set if it is not
defined in this location: unset, set to a default value, or inherited
from the previous level.

	f
-- 
Francis Daly        francis at daoine.org

From lucas at lucasrolff.com  Fri Oct  6 20:38:54 2017
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Fri, 06 Oct 2017 22:38:54 +0200
Subject: Directive inheritance
In-Reply-To: <20171006202252.GH20907@daoine.org>
References: <3058D818-89A6-47B2-92E2-3AB3287AE536@lucasrolff.com>
	<20171006202252.GH20907@daoine.org>
Message-ID: <59D7E9DE.4010707@lucasrolff.com>

Hi Francis,

Thank you a lot for your response. From a directive point of view, I don't
use a lot of different headers in that sense; it's really just some settings
that I would want to avoid repeating again and again - I like clean configs
- and generally speaking I really want to inherit as much as possible from
the initial server, or even the http context when possible. All I usually
change are things like the expires header, the proxy_cache_valid directive,
or adding an additional header (CORS for example).

I do use some of the openresty modules such as the ngx_headers_more module,
and it's pretty explicit about its inheritance.

And thank you for the pointer regarding the _module_ctx and _merge_loc_conf
functions; it gave me enough information regarding the http_proxy module as
an example. It seems that as long as there is an
"offsetof(ngx_http_proxy_loc_conf_t" entry, the directive can be inherited -
or, off the top of my head, it's a coincidence that "offsetof" is missing
for all directives that don't inherit in that module.

Thanks again!

Francis Daly wrote:
> On Fri, Oct 06, 2017 at 07:32:51PM +0000, Lucas Rolff wrote:
>
> Hi there,
>
>> I know that there's some settings such as proxy_pass which can't inherit from the parent location or server block, however -
is there any semi-easy way to figure out if a directive in nginx or it?s modules gets inherited or not? (I don?t mind digging around in some nginx source code) > >> I wonder if someone either knows a good way to figure out, or any document on the web that goes extensively into explaining what (might) inherit based on general design patterns. > > My quick response, without doing too much research, is: > > * "rewrite" module directives (if, return) don't inherit > * "handler" directives (proxy_pass, fastcgi_pass) don't inherit > * pretty much anything else that is valid in "location" does inherit > > (That's probably not correct, but could be a good starting point for > experimentation.) > > And be aware that inheritance is by replacement, or not at all -- so one > "add_header" in a location means that the only "add_header" relevant > to that location is the one that is there; while no "add_header" in a > location means that all of the ones inherited from server{} are relevant. > > > If you want the full details, it's a matter of Read The Fine Source -- > each module has a "_module_ctx" which includes a function typically named > "_merge_loc_conf" which shows how each directive is set if it is not > defined in this location: unset, set to a default value, or inherited > from the previous level. > > f -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Fri Oct 6 21:16:25 2017 From: pluknet at nginx.com (Sergey Kandaurov) Date: Sat, 7 Oct 2017 00:16:25 +0300 Subject: Secure connection failed on Firefox In-Reply-To: References: Message-ID: <5276999C-07C3-4C25-B405-DB86FDBC33CC@nginx.com> > On 6 Oct 2017, at 18:40, joseph-pg wrote: > > Hi, > I'm currently testing nginx 1.13.6 x64 on my development machine, which is > Windows 10 1703 x64, and sometimes I got a "Secure connection failed" error > on Firefox (55.x and 56). If I hit the reload button (F5) repeatedly, the > page will eventually load. 
> > Dev tools shows: 200 OK, size 0, and transferred -. nginx debug log doesn't > show anything weird. > > Things that I observed: > 1. 1.13.5 works fine > 2. Chrome on Android works fine > > I've tested 5a3ab1b5804b, 46ddff109e72, and 924b6ef942bf and they have the > same problem. > > Configuration is pretty much default with HTTPS and HTTP/2 server blocks. Please check if reverting 12cadc4669a7 helps. -- Sergey Kandaurov From peter_booth at me.com Sat Oct 7 01:27:27 2017 From: peter_booth at me.com (Peter Booth) Date: Fri, 06 Oct 2017 21:27:27 -0400 Subject: Multiple upstream_cache_status headers in response in a dual-cache configuration In-Reply-To: <881c1ce07cd091b545007084b4c47a47.NginxMailingListEnglish@forum.nginx.org> References: <881c1ce07cd091b545007084b4c47a47.NginxMailingListEnglish@forum.nginx.org> Message-ID: <60BC6B3D-EDAC-44BA-964B-B0A952B9AD76@me.com> Why do you want to "realize a distributed caching layer based on disk-speed and storage?? Providing that you are running nginx on a healthy host running linux then your HDD-cache be faster (or the seem speed) as your SSD-cache. This because the cached file will be written though the Linux page cache, just as reads will return the file from Linux page cache and not touch either of the disks. This means that the effective performance of your server is largely decoupled from the physical performance of the physical drives. Of course you should monitor your host to ensure that it has sufficient memory and that no major page faults are occurring (sar -B should return 0.0) Peter > On Oct 6, 2017, at 3:05 AM, rnmx18 wrote: > > Hi, > > To realize a distributed caching layer based of disk-speed and storage, I > have prepared the following configuration with an SSD-cache and HDD-cache. 
> > http { > > add_header X-UpStream-Server-Cache-Status $upstream_cache_status; > > # proxy caching configurations > proxy_cache_path /tmp/myssd1 keys_zone=myssd1:1m levels=1:2 > inactive=60m; > > proxy_cache_path /tmp/myhdd1 keys_zone=myhdd1:10m levels=1:2 > inactive=60m; > > proxy_cache_key "$request_uri"; > > upstream mylocalhdd { > server 127.0.0.1:82; > } > > upstream myorigin { > server 192.168.1.35:80; > } > > # Server-block1 - will cache in SSD if we get a minimum of 5 requests. > server { > listen 80; > server_name example.com www.example.com; > > # Look up in ssd cache. If not found, goto hdd cache. > location / { > proxy_pass http://mylocalhdd; > proxy_cache myssd1; > proxy_cache_min_uses 5; > } > } > > # Server-block2 (acting as local upstream) - will fetch from origin and > cache in HDD > server { > listen 82; > server_name example.com www.example.com; > > # Look up in hdd cache. If not found, goto origin > location / { > proxy_pass http://myorigin; > proxy_cache myhdd1; > } > } > > } > > The smaller, yet faster SSD-cache will store content only if I get at least > 5 requests for the URL. On the other hand, the larger HDD cache will cache every > request. > > The flow is straightforward: > > i) Client's first request is handled by server at port 80 > ii) It looks in SSD cache. Upon a cache-miss, it proxies to local upstream > at port 82. > iii) The local server at port 82 looks in HDD cache. Upon a cache miss, it > fetches from origin, adds to HDD-cache and sends the response back to the > first server. > iv) Server1 does not add content yet to SSD-cache (as min_uses is not > reached), and sends response to client. > v) For the next 3 requests, the server1 would see an SSD-cache-miss, but > server2 produces an HDD-cache-hit. > vi) For the 5th request, the server1 will also add the response to the > SSD-cache, as the min_uses criterion is met. > vii) For the 6th request onwards, the server1 itself will serve the request > from SSD-cache itself.
No request is sent to local upstream. > > I have added $upstream_cache_status in the response. > > For the first request, I see the header twice in the response: This seems to > correspond to MISS in both front-end SSD-cache and back-end HDD-cache. > > < X-UpStream-Server-Cache-Status: MISS > < X-UpStream-Server-Cache-Status: MISS > > For the next 4 requests, I see the following: This seems to correspond to > MISS in the front-end SSD-cache and HIT in the back-end HDD-cache. > > < X-UpStream-Server-Cache-Status: HIT > < X-UpStream-Server-Cache-Status: MISS > > For the 6th request, I see the following: > > < X-UpStream-Server-Cache-Status: HIT > < X-UpStream-Server-Cache-Status: HIT > > Why am I getting the header twice for the 6th request? In this case, the > request is a HIT in the SSD cache itself, and there is no request sent to the > local upstream either. > > So, shouldn't I be getting only one instance of the header in the response? > > Thanks > Rajesh > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276727,276727#msg-276727 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From yks0000 at gmail.com Sat Oct 7 04:53:50 2017 From: yks0000 at gmail.com (Yogesh Sharma) Date: Sat, 07 Oct 2017 04:53:50 +0000 Subject: Nginx CPU Issue : Remain at 100% In-Reply-To: References: <10538001.t6D0DEl2ps@vbart-laptop> Message-ID: The issue reappeared again, and once CPU goes up it remains there. Will check the email thread shared by Igal for next steps. Thanks Yogesh Sharma On Mon, 2 Oct 2017 at 4:17 AM, Igal @ Lucee.org wrote: > Hello, > > > On 9/30/2017 12:43 AM, Yogesh Sharma wrote: > > Thank you Valentin. Will give a chance. > > On Fri, 29 Sep 2017 at 5:20 PM, Valentin V.
Bartenev > wrote: > >> On Friday, 29 Sep 2017 14:38:58 MSK Yogesh Sharma wrote: >> > Team, >> > >> > I am using nginx as Reverse proxy, where I see that once CPU goes up for >> > Nginx it never comes down and remain there forever until we kill that >> > worker. We tried tweaking worker_processes to number of cpu we have, >> but it >> > did not helped. >> > > I wonder if this is the same issue that I reported a few months ago at > > https://www.mail-archive.com/search?l=nginx at nginx.org&q=subject:%22nginx+hogs+cpu+on+Windows%22&o=newest&f=1 > > I've actually missed Maxim's reply and haven't noticed it until now. I > did not experience the issue lately, but I'll be sure to follow Maxim's > advice if it happens again. > > > > Igal > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Yogesh Sharma -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Oct 7 05:56:29 2017 From: nginx-forum at forum.nginx.org (skidaddytn) Date: Sat, 07 Oct 2017 01:56:29 -0400 Subject: Using an SSI approach with set-cookie Message-ID: I'm building a small set of configuration web pages in html/css with use of virtual SSI to make FastCGI passes to our c++ application. It works great for adding dynamic content from the code, but since my fastcgi pass occurs after the response header, I can't use set-cookie this way. Now I could move all HTML code into my app, but I'd rather not do this as I like to edit it and test things in a browser with just the HTML file. I came up with a strategy to allow use of cookies as follows: 1) Use a location {} block #1 in nginx config to fastcgi pass everything to my app 2) location {} block #2 will check for (.html/.css/.jpg/etc) pages and grab them from disk instead. 
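The two location {} blocks described in steps 1) and 2) above might be sketched roughly as follows. This is a guess at the shape, not the poster's actual config; the extension list, document root, and FastCGI backend address are all hypothetical:

```nginx
# 2) requests for real files (.html/.css/.jpg/etc.) are served from disk;
#    a regex location is matched in preference to the catch-all prefix below
location ~* \.(html|css|js|jpg|jpeg|png|gif|ico)$ {
    root /var/www/site;            # hypothetical document root
    ssi on;                        # so the .html pages can SSI back into the app
}

# 1) everything else (extensionless paths such as /homepage) goes to the app
location / {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;   # hypothetical C++ FastCGI backend
}
```

With this split the app can answer "/homepage", emit Set-Cookie plus the redirect to "/homepage.html", and the .html request is then answered from disk.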
Using this method, I'm able to use set-cookie, and then do a 301 redirect to an HTML file (which in turn includes the virtual SSI to call me back for dynamic content). i.e. All of my links in the html will point at pages with no extension, e.g. "/homepage". Once I field the script request, I can set-cookie then redirect to "/homepage.html". Seems to be working great so far. But I wonder how compatible this approach may be across browsers? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276750,276750#msg-276750 From nginx-forum at forum.nginx.org Sat Oct 7 06:14:26 2017 From: nginx-forum at forum.nginx.org (joseph-pg) Date: Sat, 07 Oct 2017 02:14:26 -0400 Subject: Secure connection failed on Firefox In-Reply-To: <5276999C-07C3-4C25-B405-DB86FDBC33CC@nginx.com> References: <5276999C-07C3-4C25-B405-DB86FDBC33CC@nginx.com> Message-ID: Sergey Kandaurov Wrote: ------------------------------------------------------- > > On 6 Oct 2017, at 18:40, joseph-pg > wrote: > > > > Hi, > > I'm currently testing nginx 1.13.6 x64 on my development machine, > which is > > Windows 10 1703 x64, and sometimes I got a "Secure connection > failed" error > > on Firefox (55.x and 56). If I hit the reload button (F5) > repeatedly, the > > page will eventually load. > > > > Dev tools shows: 200 OK, size 0, and transferred -. nginx debug log > doesn't > > show anything weird. > > > > Things that I observed: > > 1. 1.13.5 works fine > > 2. Chrome on Android works fine > > > > I've tested 5a3ab1b5804b, 46ddff109e72, and 924b6ef942bf and they > have the > > same problem. > > > > Configuration is pretty much default with HTTPS and HTTP/2 server > blocks. > > Please check if reverting 12cadc4669a7 helps. > > -- > Sergey Kandaurov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Looks like it's the culprit. It happens at random times for a few seconds, so it's rather difficult to reproduce.
Here's the output of the debug log when the error occured (I was wrong, weird things do happen here, eg. an entry with alert flag below): > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 idle handler > 2017/10/07 11:28:05 [debug] 18764#15836: *15 reusable connection: 0 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 malloc: 00000287AAEDD260:4096 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 read handler > 2017/10/07 11:28:05 [debug] 18764#15836: *15 SSL_read: 60 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 SSL_read: -1 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 SSL_get_error: 2 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 process http2 frame type:1 f:25 l:38 sid:29 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 HEADERS frame sid:29 on 13 excl:0 weight:42 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 malloc: 00000287AAE35390:1024 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 malloc: 00000287AAED8210:4096 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 malloc: 00000287AAEE22B0:4096 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 2 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 pseudo-header: ":method: GET" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header: 5 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 hpack encoded string length: 20 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http uri: "/" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http args: "somequerystring" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http exten: "" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 pseudo-header: ":path: /?somequerystring" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 78 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 pseudo-header: ":authority: my-internaldomain" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 7 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 pseudo-header: ":scheme: https" > 2017/10/07 
11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 77 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 http header: "user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 76 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 http header: "accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 75 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 http header: "accept-language: en-US,en;q=0.5" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 74 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 http header: "accept-encoding: gzip, deflate, br" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 73 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 http header: "dnt: 1" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 72 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 http header: "upgrade-insecure-requests: 1" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 63 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 http header: "if-modified-since: Sat, 07 Oct 2017 04:26:43 GMT" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 get indexed header name: 62 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 http header: "if-none-match: W/"59d85783-3f5"" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 http request line: "GET /?somequerystring HTTP/2.0" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 0 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 rewrite phase: 1 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 test location: "/" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 using configuration "/" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http cl:-1 max:1048576 > 2017/10/07 11:28:05 [debug] 
18764#15836: *15 rewrite phase: 3 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 post rewrite phase: 4 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 5 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 6 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 7 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 access phase: 8 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 access phase: 9 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 access phase: 10 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 post access phase: 11 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 12 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 13 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 content phase: 14 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 open index "path/to/website/index.html" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 add cleanup: 00000287AAED91F8 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 cached open file: path/to/website/index.html, fd:560, c:1, e:0, u:15 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 internal redirect: "/index.html?somequerystring" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 rewrite phase: 1 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 test location: "/" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 using configuration "/" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http cl:-1 max:1048576 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 rewrite phase: 3 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 post rewrite phase: 4 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 5 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 6 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 7 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 access phase: 8 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 access phase: 9 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 access phase: 10 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 post access phase: 11 > 2017/10/07 
11:28:05 [debug] 18764#15836: *15 generic phase: 12 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 generic phase: 13 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 content phase: 14 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 content phase: 15 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 content phase: 16 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 content phase: 17 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http filename: "path/to/website/index.html" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 add cleanup: 00000287AAEE2B18 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 cached open file: path/to/website/index.html, fd:560, c:2, e:0, u:16 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http static fd: 560 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http ims:1507350403 lm:1507350443 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 filter http_cookie_flag is disabled > 2017/10/07 11:28:05 [debug] 18764#15836: *15 charset: "" > "utf-8" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 header filter > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: ":status: 200" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "server: nginx" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "date: Sat, 07 Oct 2017 04:28:05 GMT" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "content-type: text/html; charset=utf-8" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "last-modified: Sat, 07 Oct 2017 04:27:23 GMT" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "etag: W/"59d857ab-3f5"" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "expires: Sat, 07 Oct 2017 04:28:04 GMT" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "cache-control: no-cache" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "strict-transport-security: max-age=63072000; includeSubDomains" > 2017/10/07 11:28:05 [debug] 18764#15836: 
*15 http2 output header: "referrer-policy: no-referrer" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "x-content-type-options: nosniff" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "x-frame-options: DENY" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "x-xss-protection: 1; mode=block" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 output header: "content-encoding: gzip" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2:29 create HEADERS frame 00000287AAEE2FA0: len:293 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http cleanup add: 00000287AAEE30A4 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 frame out: 00000287AAEE2FA0 sid:29 bl:1 len:293 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 malloc: 00000287ACA000D0:16384 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 SSL buf copy: 9 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 SSL buf copy: 293 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2:29 HEADERS frame 00000287AAEE2FA0 was sent > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 frame sent: 00000287AAEE2FA0 sid:29 bl:1 len:293 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http output filter "/index.html?somequerystring" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http copy filter: "/index.html?somequerystring" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 malloc: 00000287AAEE0290:4096 > 2017/10/07 11:28:05 [alert] 18764#15836: *15 ReadFile() read only 993 of 1013 from "path/to/website/index.html" while sending response to client, client: 127.0.0.1, server: my-internaldomain, request: "GET /?somequerystring HTTP/2.0", host: "my-internaldomain" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http copy filter: -1 "/index.html?somequerystring" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http finalize request: -1, "/index.html?somequerystring" a:1, c:2 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http terminate request count:2 > 2017/10/07 11:28:05 [debug] 18764#15836: 
*15 http terminate cleanup count:2 blk:0 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http finalize request: -4, "/index.html?somequerystring" a:1, c:2 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http request count:2 blk:0 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http posted request: "/index.html?somequerystring" > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http terminate handler count:1 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http request count:1 blk:0 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 close stream 29, queued 0, processing 1 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 send RST_STREAM frame sid:29, status:2 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http close request > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http log handler > 2017/10/07 11:28:05 [debug] 18764#15836: *15 run cleanup: 00000287AAEE2B18 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 close cached open file: path/to/website/index.html, fd:560, c:1, u:16, 0 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 run cleanup: 00000287AAED91F8 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 close cached open file: path/to/website/index.html, fd:560, c:0, u:16, 0 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 free: 00000287AAED8210, unused: 0 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 free: 00000287AAEE22B0, unused: 300 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 free: 00000287AAEE0290, unused: 3051 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 free: 00000287AAE35390, unused: 546 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 frame complete pos:00000287AD2800AF end:00000287AD2800BC > 2017/10/07 11:28:05 [debug] 18764#15836: *15 process http2 frame type:8 f:0 l:4 sid:29 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 WINDOW_UPDATE frame sid:29 window:12451840 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 unknown http2 stream > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 frame complete pos:00000287AD2800BC end:00000287AD2800BC > 
2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 frame out: 00000287AAEDD520 sid:0 bl:0 len:4 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 SSL buf copy: 13 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 SSL to write: 315 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 SSL_write: 315 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http2 frame sent: 00000287AAEDD520 sid:0 bl:0 len:4 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 free: 00000287AAEDD260, unused: 3216 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 free: 00000287ACA000D0 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 reusable connection: 1 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 event timer del: 588: 1507350661972 > 2017/10/07 11:28:05 [debug] 18764#15836: *15 event timer add: 588: 180000:1507350665357 ------------------------------------------------------------------------------------------------------------------------------------------------------------ Compile options: nginx version: nginx/1.13.6 (12cadc4669a7) built by cl <-- Built with VS 2017, but somehow the compiler version doesn't appear here. 
built with OpenSSL 1.1.0f 25 May 2017 TLS SNI support enabled configure arguments: --with-cc=cl --with-cc-opt=-DFD_SETSIZE=8192 --builddir=nginx --build=12cadc4669a7 --prefix= --conf-path=conf/nginx.conf --http-log-path=logs/access.log --error-log-path=logs/error.log --pid-path=pid/nginx.pid --sbin-path=nginx.exe --http-client-body-temp-path=temp/client_body_temp --http-proxy-temp-path=temp/proxy_temp --http-fastcgi-temp-path=temp/fastcgi_temp --http-scgi-temp-path=temp/scgi_temp --http-uwsgi-temp-path=temp/uwsgi_temp --with-openssl=lib/openssl-1.1.0f --with-openssl-opt='no-asm no-dgram no-heartbeats no-nextprotoneg no-ssl3-method no-weak-ssl-ciphers no-deprecated no-blake2 no-camellia no-des no-dsa no-gost no-idea no-psk no-seed no-scrypt no-sctp no-srp no-cms no-comp no-whirlpool' --with-pcre=lib/pcre-8.41 --with-pcre-jit --with-zlib=lib/zlib-1.2.11 --with-debug --with-select_module --with-http_addition_module --with-http_auth_request_module --with-http_gzip_static_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-stream --with-stream_ssl_module Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276738,276751#msg-276751 From smart.imran003 at gmail.com Sat Oct 7 09:14:27 2017 From: smart.imran003 at gmail.com (Syed Imran) Date: Sat, 7 Oct 2017 14:44:27 +0530 Subject: Nginx configuration with artifactory In-Reply-To: <20171006190449.GF20907@daoine.org> References: <20171006190449.GF20907@daoine.org> Message-ID: I have a working configuration already, using the configuration given in the JFrog wiki. But when it comes to multiple IPs, as mentioned earlier, I am getting a bit confused. Below is the error that I get. root at ip-172-31-10-127:~# docker login XXX.XXX.XXX.XXX Username: test Password: Error response from daemon: Login: 404 Not Found

404 Not Found
nginx/1.10.2
(Code: 404; Headers: map[Server:[nginx/1.10.2] Date:[Sat, 07 Oct 2017 09:11:39 GMT] Content-Type:[text/html] Content-Length:[169]]) root at ip-172-31-10-127:~# On 7 October 2017 at 00:34, Francis Daly wrote: > On Fri, Oct 06, 2017 at 07:30:26PM +0530, Syed Imran wrote: > Hi there, > > > My artifactory application is deployed using docker on port 8081. > > > But when I do a docker login, nginx always returns 405 not found. But the > > web UI works fine with both the IPs. > > Does nginx return 405, or does nginx return "not found"? The distinction > might matter. > > > Is there anything wrong with my nginx configuration? Can someone help? > > There appear to be two "artifactory" documents relating to nginx -- > https://www.jfrog.com/confluence/display/RTF/Configuring+a+Reverse+Proxy > and https://www.jfrog.com/confluence/display/RTF/Configuring+NGINX > > Do things work if you use exactly the configuration that they suggest? > > If so, then you can start making changes to see where it starts to break. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Oct 7 09:23:27 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 7 Oct 2017 10:23:27 +0100 Subject: Directive inheritance In-Reply-To: <59D7E9DE.4010707@lucasrolff.com> References: <3058D818-89A6-47B2-92E2-3AB3287AE536@lucasrolff.com> <20171006202252.GH20907@daoine.org> <59D7E9DE.4010707@lucasrolff.com> Message-ID: <20171007092327.GI20907@daoine.org> On Fri, Oct 06, 2017 at 10:38:54PM +0200, Lucas Rolff wrote: Hi there, > I do use some of the openresty modules such as the ngx_headers_more > module, and it's pretty explicit about its inheritance. That's probably a good example to look at.
Compare the ngx_http_headers_more_merge_loc_conf() function with ngx_http_headers_merge_conf() (or with pretty much any of the stock nginx equivalents). In the external headers_more module, the decision is taken to "merge" by adding the "parent" directive config to the "child" one. In the internal modules, the decision is (mostly) taken to "merge" by only using the "parent" directive config if the "child" is empty. > And thank you for the pointer regarding the _module_ctx and > _merge_loc_conf functions, it gave me enough information regarding > the http_proxy module as an example - it seems as long as there is a > "offsetof(ngx_http_proxy_loc_conf_t" - then it can be inherited, or > it's a coincidence that it's missing the "offsetoff" for all > directives that don't inherit in that module from top of my head. It's not so much the "offsetof(", as the fact that in the _merge function, the config struct member that corresponds to this directive is either mentioned (and therefore probably inherits from the parent) or is not (and does not inherit). See how "expires" is handled in src/http/modules/ngx_http_headers_filter_module.c -- it's the first member of the struct and therefore does not need an offsetof() to identify the position. And, as always, the source is the ultimate documentation for how things are. This mail is just "how I think things might have been intended to be". Cheers, f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Sat Oct 7 11:24:56 2017 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Sat, 7 Oct 2017 16:24:56 +0500 Subject: Block specific request pattern !! In-Reply-To: <20171006191813.GG20907@daoine.org> References: <20171006191813.GG20907@daoine.org> Message-ID: Hi Francis, First of all please accept my gratitude for helping on this matter, this really worked for me and we're seeing a lot of leechers blocked now. Thanks a lot again :) Shahzaib
On Sat, Oct 7, 2017 at 12:18 AM, Francis Daly wrote: > On Fri, Oct 06, 2017 at 07:17:39PM +0500, shahzaib mushtaq wrote: > Hi there, > > > We're serving mp4 files over NGINX with added security hash+ttl but there's > > some kind of leechers accessing videos with following pattern but not > > getting blocked: > > You seem to suggest that you have some blocking configured, and that it > does not do all that you want. > > What blocking do you have configured? It may be simplest to adjust that, > rather than try to add something new. > > > https://domain.com/files/videos/2017/10/04/15071356364fc6b-720.mp4?h=n_ Saa78MV6BJTcoRHwHelA&ttl=1507303734& > > ?*/WhileYouWereSleeping56.mp4* > > > https://domain.com/files/videos/2017/10/04/15071356364fc6b-720.mp4?h=n_ Saa78MV6BJTcoRHwHelA&ttl=1507303734& > > ? > > The two question marks in the url looks odd to me; but it is in both your > "bad" and "good" ones, so maybe it is normal. > > > Is there a way we can block the requests not ending up on ttl value ? > > You have $query_string, which is the same as $args. If you can define a > regex pattern which matches everything you want, you can return failure > for everything else. > > Perhaps ($args !~ "&ttl=[0-9]*&\?$") is a suitable test condition in > your environment. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Oct 7 13:16:08 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 7 Oct 2017 16:16:08 +0300 Subject: Secure connection failed on Firefox In-Reply-To: References: <5276999C-07C3-4C25-B405-DB86FDBC33CC@nginx.com> Message-ID: <20171007131607.GA75166@mdounin.ru> Hello!
On Sat, Oct 07, 2017 at 02:14:26AM -0400, joseph-pg wrote: [...] > Looks like it's the culprit. It happens at random times for a few seconds, > so it's rather difficult to reproduce. > > Here's the output of the debug log when the error occured (I was wrong, > weird things do happen here, eg. an entry with alert flag below): [...] > > 2017/10/07 11:28:05 [debug] 18764#15836: *15 cached open file: > path/to/website/index.html, fd:560, c:2, e:0, u:16 [...] > > 2017/10/07 11:28:05 [debug] 18764#15836: *15 http copy filter: > "/index.html?somequerystring" > > 2017/10/07 11:28:05 [debug] 18764#15836: *15 malloc: > 00000287AAEE0290:4096 > > 2017/10/07 11:28:05 [alert] 18764#15836: *15 ReadFile() read only 993 of > 1013 from "path/to/website/index.html" while sending response to client, > client: 127.0.0.1, server: my-internaldomain, request: "GET > /?somequerystring HTTP/2.0", host: "my-internaldomain" The message suggests that the file in question was non-atomically modified while being served. It is expected that such a modification will lead to a fatal error if nginx is able to detect the problem. If it isn't, the client will likely get garbage with a mix of original and new contents of the file. The only safe approach is to modify files atomically, that is, create a new file and then use mv (the rename() syscall) to move it atomically to the appropriate place. It might not be trivial or even possible to do this correctly on Windows though[1]. Additionally, it looks like you are using open_file_cache. It is actually a very bad idea if you modify files in-place, as it greatly expands the race window between opening and stat()'ing the file and serving its contents. Remove open_file_cache from the configuration unless you are sure all file modifications are atomic.
[1] https://stackoverflow.com/questions/167414/is-an-atomic-file-rename-with-overwrite-possible-on-windows -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Sat Oct 7 16:27:30 2017 From: nginx-forum at forum.nginx.org (joseph-pg) Date: Sat, 07 Oct 2017 12:27:30 -0400 Subject: AW: Secure connection failed on Firefox In-Reply-To: References: Message-ID: <28e494f08b55ab4683898684c5f41980.NginxMailingListEnglish@forum.nginx.org> Lukas Tribus Wrote: ------------------------------------------------------- > Hello, > > > > I'm currently testing nginx 1.13.6 x64 on my development machine, > which is > > There is no 1.13.6. > > > > I've tested 5a3ab1b5804b, 46ddff109e72, and 924b6ef942bf and they > have the > > same problem. > > Ah so you are running directly from the development tree. In that > case, I suggest > to bisect it to the offending commit. > > Try 12cadc4669a7 first of all, if that also fails try 019b91bd21cc, if > that also > fails try a0e472a2c4f1. > > > > > Configuration is pretty much default with HTTPS and HTTP/2 server > blocks. > > If that turns out to be a regression, I assume the configuration will > still be > necessary, even if it is pretty much default. > > > regards, > lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thank you for your suggestions. Turns out it's a problem with an incorrect usage of open_file_cache as pointed out by Maxim after reading my debug log (I was wrong when I said nothing weird happened in the log. When the same error occurred this morning, I dug through the log and found an alert and some signs of the problem). I removed the directive from my config and now nginx works flawlessly.
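As a practical footnote, the atomic-update approach Maxim recommends above (write the new content to a temporary file on the same filesystem, then rename it over the old file) can be sketched like this on a POSIX system; the paths here are invented for the demo:

```shell
# Build the new file next to the target, then mv (rename) it into place.
# rename(2) is atomic on POSIX, so a reader only ever sees the complete
# old file or the complete new file, never a partially written mix.
dir=$(mktemp -d)                          # stand-in for the docroot
echo "old content" > "$dir/index.html"

tmp=$(mktemp "$dir/.index.html.XXXXXX")   # temp file beside the target
echo "new content" > "$tmp"
mv -f "$tmp" "$dir/index.html"            # atomic rename into place
```

On Windows an overwriting atomic rename is harder to get right, as the stackoverflow link above discusses, which is why dropping open_file_cache ended up being the simpler fix in this thread.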
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276738,276760#msg-276760 From nginx-forum at forum.nginx.org Sat Oct 7 16:27:33 2017 From: nginx-forum at forum.nginx.org (joseph-pg) Date: Sat, 07 Oct 2017 12:27:33 -0400 Subject: Secure connection failed on Firefox In-Reply-To: <20171007131607.GA75166@mdounin.ru> References: <20171007131607.GA75166@mdounin.ru> Message-ID: <13b1fd71af9ac5c3ffa8ef2acf68dd0f.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! [...] > The message suggests that the file in question was non-atomically > modified while being served. It is expected that such a > modification will lead to a fatal error if nginx will be able to > detect the problem. If it won't, likely the client will get a > garbage with a mix of original and new contents of the file. > > The only safe approach is to modify files atomically, that is, > create a new file and then use mv (the rename() syscall) to move > it atomically to the appropriate place. It might not be trivial > or even possible to do this correctly on Windows though[1]. > > Additionally, it looks like you are using open_file_cache. It is > actually a very bad idea if you modify files in-place, as it > greatly expands the race window between opening and stat()'ing the > file and serving its contents. Remove open_file_cache from the > configuration unless you are sure all file modifications are > atomic. > > [1] > https://stackoverflow.com/questions/167414/is-an-atomic-file-rename-wi > th-overwrite-possible-on-windows > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks, Maxim. I removed it and the problem disappears. -- Joseph Aditya P. G. 
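For reference, the directive family that was removed looks like the block below. The parameter values here are illustrative examples, not taken from joseph-pg's configuration; per Maxim's advice, keeping such a block is only safe when every file update is an atomic rename:

```nginx
# open_file_cache keeps open file descriptors, sizes and modification
# times cached across requests.  If files are rewritten in place, this
# widens the race window between stat()'ing a file and serving it, so
# it should only be used when all updates are atomic rename()s.
open_file_cache          max=1000 inactive=20s;
open_file_cache_valid    30s;
open_file_cache_min_uses 2;
open_file_cache_errors   on;
```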
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276738,276761#msg-276761

From rdocter at gmail.com  Mon Oct 9 07:33:55 2017
From: rdocter at gmail.com (Ruben)
Date: Mon, 9 Oct 2017 09:33:55 +0200
Subject: Selection of server in upstream directive using hash
Message-ID:

I was wondering what the selection algorithm is for choosing a server in
the upstream directive using hash. Is the selection based on the IP of the
server, or is it based on the position in the list?

So if I have for example the following configuration:

upstream test {
    hash $arg_test;
    server 10.0.0.10;
    server 10.0.0.9;
    server 10.0.0.8;
}

or (IPs in a different order)

upstream chat {
    hash $arg_test;
    server 10.0.0.8;
    server 10.0.0.9;
    server 10.0.0.10;
}

If someone targets a URL with ?test=1, is the request directed to the same
IP in both configs or not? That is, is the selection based on the IP or on
the position in the list?
-------------- next part --------------
An HTML attachment was scrubbed...

From rtwk.das at gmail.com  Mon Oct 9 09:20:48 2017
From: rtwk.das at gmail.com (Ritwik Das)
Date: Mon, 9 Oct 2017 14:50:48 +0530
Subject: Nginx not accepting requests after configuring SSL with Let's Encrypt
Message-ID:

Hello,

I used to run multiple nodejs apps with Nginx on an Ubuntu server. Those
apps used to receive Ajax requests from another domain. Nginx used to
receive those requests on different ports and then nodejs used to produce
the output. After I configured SSL using Let's Encrypt on both of the
domains, the apps are running (I can see them running using forever list),
but they are not accepting those requests (Error: connection closed).

I would be thankful if you experts could help me fix the issue. I am
attaching the Nginx configuration file, the nodejs code and the client-side
Ajax code. I can't send the files as-is because the email attachment is
getting blocked, so I added a .txt extension after the original file names.

Regards
Ritwik Das
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
server {
    listen 80;
    server_name w3rpractice.com;

    location /home/w3r/editor/server.js {
        proxy_pass http://0.0.0.0/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor.js {
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            #
            # Custom headers and headers various browsers *should* be OK with but aren't
            #
            add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            #
            # Tell client that this pre-flight info is valid for 20 days
            #
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        if ($request_method = 'POST') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            add_header 'Access-Control-Expose-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
        }
        if ($request_method = 'GET') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            add_header 'Access-Control-Expose-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
        }
        proxy_pass http://0.0.0.0:9117/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_pyhon.js {
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            #
            # Custom headers and headers various browsers *should* be OK with but aren't
            #
            add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            #
            # Tell client that this pre-flight info is valid for 20 days
            #
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        if ($request_method = 'POST') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            add_header 'Access-Control-Expose-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
        }
        if ($request_method = 'GET') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            add_header 'Access-Control-Expose-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
        }
        proxy_pass http://0.0.0.0:9118/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_php.js {
        proxy_pass http://0.0.0.0:9130/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_ruby.js {
        proxy_pass http://0.0.0.0:9119/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_java.js {
        proxy_pass http://0.0.0.0:9121/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_cs.js {
        proxy_pass http://0.0.0.0:9122/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_swift.js {
        proxy_pass http://0.0.0.0:9124/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_cpp.js {
        proxy_pass http://0.0.0.0:9125/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_mongo.js {
        proxy_pass http://0.0.0.0:11000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_mysql.js {
        proxy_pass http://0.0.0.0:11100/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /home/w3r/editor/editor_mysql2.js {
        proxy_pass http://0.0.0.0:11200/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 43200000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/w3rpractice.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/w3rpractice.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    ssl_dhparam /etc/ssl/certs/dhparam.pem;
}
-------------- next part --------------
var express = require("express");
var bodyParser = require("body-parser");
var app = express();
var fs = require('fs');
var path = require('path');
var random_port = require('random-port');
var mkdirp = require('mkdirp');
var chmod = require('chmod');

app.use(bodyParser.urlencoded({ extended: false }));
app.use(express.static(path.join(__dirname, '/')));

app.listen(9121, function(){
    console.log("Hi how r u doing?");
});

app.all('/', function(req, res, next) {
    res.header("Access-Control-Allow-Origin", "*");
    res.header("Access-Control-Allow-Headers", "X-Requested-With");
    next();
});

app.post('/', function(req, res){
    var topic = req.body.topic;
    var user_code = req.body.code_sent;
    user_code = user_code.replace(/w3rplus/g, "+");
    user_code = user_code.replace(/w3rminus/g, "&");
    var fname = req.body.fname;
    var fname_java = fname + ".java";
    var uuid = require('node-uuid');
    var newDir = uuid.v1();
    mkdirp('/home/students/' + newDir, function (err) {
        if (err) console.error(err)
        else console.log('pow!')
    });
    // chmod('home/students/'+ newDir,777');
    var dir = "/home/students/" + newDir;
    fs.writeFile(dir + "/" + fname_java, user_code, function (err) {
        if (err) throw err;
        console.log('It\'s saved!');
    });
    var sys = require('sys');
    random_port({from:3800, range:99}, function(port) {
        console.log(port);
        var exec = require('child_process').exec;
        var term = "timeout -s KILL 40 web-term -c /home/students/ -H 45.55.241.251 -p " + port + " -s \" cd " + newDir + " && javac " + fname_java + " && timeout -s KILL 30 java " + fname + "\"";
        var child = exec(term);
        if (child) {
            //res.end("http://162.243.243.22:" + port);
            res.end(port + "W3R" + fname);
        }
    });
});
-------------- next part --------------

From arut at nginx.com Mon Oct 9 09:25:09 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 9 Oct 2017 12:25:09 +0300 Subject: Selection of server in upstream directive using hash In-Reply-To: References: Message-ID: <20171009092509.GI60372@Romans-MacBook-Air.local> Hi Ruben, On Mon, Oct 09, 2017 at 09:33:55AM +0200, Ruben wrote: > I was wondering what the selection algorithm is for choosing a server in > the upstream directive using hash. Is the selection based on the ip of the > server or is it based on the position of the list. > > So if I have for example the following configuration: > > upstream test { > hash $arg_test; > server 10.0.0.10; > server 10.0.0.9; > server 10.0.0.8; > } > > or (ip's in different order) > > upstream chat { > hash $arg_test; > server 10.0.0.8; > server 10.0.0.9; > server 10.0.0.10; > } > > If someone is targeting an url with ?test=1, is it in both configs directed > to the same ip or not. So is the selection based on the ip or based omn the > position in the list. The regular (non-consistent) hash balancer selects a server based on the position in the list. However, the consistent hash balancer (hash $arg_test consistent) makes a selection based on the server name/ip specified in the "server" directive. -- Roman Arutyunyan From rdocter at gmail.com Mon Oct 9 09:51:54 2017 From: rdocter at gmail.com (Ruben) Date: Mon, 9 Oct 2017 11:51:54 +0200 Subject: Selection of server in upstream directive using hash In-Reply-To: <20171009092509.GI60372@Romans-MacBook-Air.local> References: <20171009092509.GI60372@Romans-MacBook-Air.local> Message-ID: First of all thanks for your reply. But what happens if I have for example a hostname: test, which resolves to a randomized list of multiple ip's. Such that when I do: dig test 10.0.0.8 10.0.0.9 10.0.0.10 and a few moments later dig test 10.0.0.10 10.0.0.8 10.0.0.9 Is it then still consistent on the ip's or is the consistency just on the name "test" in this case? 
From the docs I read that it resolves the hostname and injects the IPs as
servers when multiple IPs are returned. But it's not completely clear on
which of these the hash is acting.

I am asking this because docker constantly returns a randomized fixed list
of IPs (for DNS load balancing), but I always want to route a user
targeting some URL to the same container.

2017-10-09 11:25 GMT+02:00 Roman Arutyunyan :

> Hi Ruben,
>
> On Mon, Oct 09, 2017 at 09:33:55AM +0200, Ruben wrote:
> > I was wondering what the selection algorithm is for choosing a server in
> > the upstream directive using hash. Is the selection based on the ip of
> > the server or is it based on the position of the list.
> >
> > So if I have for example the following configuration:
> >
> > upstream test {
> >     hash $arg_test;
> >     server 10.0.0.10;
> >     server 10.0.0.9;
> >     server 10.0.0.8;
> > }
> >
> > or (ip's in different order)
> >
> > upstream chat {
> >     hash $arg_test;
> >     server 10.0.0.8;
> >     server 10.0.0.9;
> >     server 10.0.0.10;
> > }
> >
> > If someone is targeting an url with ?test=1, is it in both configs
> > directed to the same ip or not. So is the selection based on the ip or
> > based on the position in the list.
>
> The regular (non-consistent) hash balancer selects a server based on the
> position in the list. However, the consistent hash balancer
> (hash $arg_test consistent) makes a selection based on the server name/ip
> specified in the "server" directive.
>
> --
> Roman Arutyunyan
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From arut at nginx.com Mon Oct 9 10:23:49 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 9 Oct 2017 13:23:49 +0300 Subject: Selection of server in upstream directive using hash In-Reply-To: References: <20171009092509.GI60372@Romans-MacBook-Air.local> Message-ID: <20171009102349.GJ60372@Romans-MacBook-Air.local> Hi Ruben, On Mon, Oct 09, 2017 at 11:51:54AM +0200, Ruben wrote: > First of all thanks for your reply. But what happens if I have for example > a hostname: test, which resolves to a randomized list of multiple ip's. > Such that when I do: > > dig test > 10.0.0.8 > 10.0.0.9 > 10.0.0.10 > > and a few moments later > > dig test > 10.0.0.10 > 10.0.0.8 > 10.0.0.9 > > Is it then still consistent on the ip's or is the consistency just on the > name "test" in this case? The consistency is only about choosing the name "test". Once the name is chosen (consistently), its ips are balanced by the round-robin balancer. With this approach you can change ips of your server or add new addresses to it and everything will keep working as before. However, you cannot stick to a particular ip address of a server. > From the docs I read that it resolves the hostname and injects the ip's as > server when multiple ip's are returned. But it's not completely clear on > which the hash is acting. > > I am asking this, because docker constantly returns a randomized fixed list > of ip's (for dns load balancing). But I always want to route a user, > targetting for some url, to the same container. In the commercial version of nginx we have the sticky module, which can be used to solve your issue: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky > 2017-10-09 11:25 GMT+02:00 Roman Arutyunyan : > > > Hi Ruben, > > > > On Mon, Oct 09, 2017 at 09:33:55AM +0200, Ruben wrote: > > > I was wondering what the selection algorithm is for choosing a server in > > > the upstream directive using hash. 
Is the selection based on the ip of > > the > > > server or is it based on the position of the list. > > > > > > So if I have for example the following configuration: > > > > > > upstream test { > > > hash $arg_test; > > > server 10.0.0.10; > > > server 10.0.0.9; > > > server 10.0.0.8; > > > } > > > > > > or (ip's in different order) > > > > > > upstream chat { > > > hash $arg_test; > > > server 10.0.0.8; > > > server 10.0.0.9; > > > server 10.0.0.10; > > > } > > > > > > If someone is targeting an url with ?test=1, is it in both configs > > directed > > > to the same ip or not. So is the selection based on the ip or based omn > > the > > > position in the list. > > > > The regular (non-consistent) hash balancer selects a server based on the > > position in the list. However, the consistent hash balancer > > (hash $arg_test consistent) makes a selection based on the server name/ip > > specified in the "server" directive. > > > > -- > > Roman Arutyunyan > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From nginx-forum at forum.nginx.org Mon Oct 9 13:45:26 2017 From: nginx-forum at forum.nginx.org (bobykus) Date: Mon, 09 Oct 2017 09:45:26 -0400 Subject: NGINX IMAP proxy and outlook ios/android app Message-ID: <1cc99d4c05f6ffec603d9177b170112c.NginxMailingListEnglish@forum.nginx.org> Looks like since mid of Sept we can not use nginx as an imap(s) proxy for mobile outlook apps (both IOS and Android ). 
SSL handshake is just dropping like 2017/10/09 15:32:01 [debug] 30391#0: *184 accept: 52.166.246.73 fd:44 2017/10/09 15:32:01 [info] 30391#0: *184 client 52.166.246.73 connected to 0.0.0.0:993 2017/10/09 15:32:01 [debug] 30391#0: *184 SSL_do_handshake: -1 2017/10/09 15:32:01 [debug] 30391#0: *184 SSL_get_error: 2 2017/10/09 15:32:01 [debug] 30391#0: *184 epoll add event: fd:44 op:1 ev:80000001 2017/10/09 15:32:01 [debug] 30391#0: *184 event timer add: 44: 60000:1507555981777 2017/10/09 15:32:31 [debug] 30391#0: *184 SSL handshake handler: 0 2017/10/09 15:32:31 [debug] 30391#0: *184 SSL_do_handshake: 0 2017/10/09 15:32:31 [debug] 30391#0: *184 SSL_get_error: 5 2017/10/09 15:32:31 [info] 30391#0: *184 peer closed connection in SSL handshake while SSL handshaking, client: 52.166.246.73, server: 0.0.0.0:993 2017/10/09 15:32:31 [debug] 30391#0: *184 close mail connection: 44 2017/10/09 15:32:31 [debug] 30391#0: *184 SSL_shutdown: 1 2017/10/09 15:32:31 [debug] 30391#0: *184 event timer del: 44: 1507555981777 2017/10/09 15:32:31 [debug] 30391#0: *184 reusable connection: 0 2017/10/09 15:32:31 [debug] 30391#0: *184 free: 00000000024C12A0 2017/10/09 15:32:31 [debug] 30391#0: *184 free: 00000000024C1190, unused: 8 Wonder how can I figure out what happened, MS support is not any helpful in this case. tcpdump does not show much also... openssl s_client -connect mail.server.com:993 show no errors too... 
CONNECTED(00000003)
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
verify return:1
depth=1 C = US, O = GeoTrust Inc., CN = RapidSSL SHA256 CA
verify return:1
depth=0 CN = *.server.com
verify return:1
---
Certificate chain
 0 s:/CN=*.server.com
   i:/C=US/O=GeoTrust Inc./CN=RapidSSL SHA256 CA
 1 s:/C=US/O=GeoTrust Inc./CN=RapidSSL SHA256 CA
   i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
 2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
   i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
---
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: DH, 2048 bits
---
SSL handshake has read 4717 bytes and written 417 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : DHE-RSA-AES128-GCM-SHA256
....

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276771,276771#msg-276771

From citrin at citrin.ru  Tue Oct 10 02:15:49 2017
From: citrin at citrin.ru (Anton Yuzhaninov)
Date: Mon, 9 Oct 2017 22:15:49 -0400
Subject: Inconsistent variable caching (in SSI or in nginScript)
Message-ID: <6604d67b-17f0-b10a-cc38-b4a87bead9e1@citrin.ru>

Hello,

I've encountered a bug (or an unexpected and not well documented feature)
in SSI (ngx_http_ssi_module) or in nginScript (ngx_http_js_module).

If a variable which is set via js_set is used only in SSI, it is not
cached, and the nginScript function is called several times for one HTTP
request. If the same variable is used anywhere else in the config file,
then the variable value is cached (per request).

Test config:

js_include /home/citrin/w/nginx/random.js;
js_set $random_int randomInt;

server {
    listen 8081;

    location / {
        ssi on;
        ssi_types text/plain;
        default_type text/plain;
        return 200 '<!--# echo var="random_int" -->\n<!--# echo var="random_int" -->\n';
    }
    location /unrelated {
        add_header X-rnd $random_int;
    }
}

With the config above I see the same random number printed twice (as
expected).
But when the add_header directive is commented out, I see two different
random numbers (and this change is not expected).

nginx version: nginx/1.12.1
nginx-module-njs 0.1.10

--
Best Regards,
Anton Yuzhaninov

From citrin at citrin.ru  Tue Oct 10 02:45:28 2017
From: citrin at citrin.ru (Anton Yuzhaninov)
Date: Mon, 9 Oct 2017 22:45:28 -0400
Subject: duplicate upstream server marked as down
In-Reply-To: <7ba22baf58c0e9b6984e1f96796f4c86.NginxMailingListEnglish@forum.nginx.org>
References: <7ba22baf58c0e9b6984e1f96796f4c86.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <45ed9f2f-e9e4-0b59-071b-bc5ad99d4e7d@citrin.ru>

On 10/03/17 14:00, halfpastjohn wrote:
> Can i have two, identical, server hostnames in an upstream, with one of them
> marked as "down"? Like this:
>
> resolver 10.0.0.8;
>
> upstream backend {
>     server backend.example.com down resolve;
>     server backend.example.com/api/v2/;
> }

A server in an upstream block can contain only host:port, not a URI.

> The reason being is that i need to route to the second one (with the longer
> path), but i also need to resolve the hostname. Unfortunately it won't
> resolve when there is additional pathing tacked onto the end. So, i'm hoping
> that this will allow the hostname to be resolved but only send traffic to
> the full path.
If you need to resolve a server hostname at runtime, try this config:

resolver 10.0.0.8;

upstream backend {
    zone z_backend 8k;
    server backend.example.com resolve;
}

server {
    location /foo/ {
        proxy_pass http://backend/api/v2/;
    }
}

From citrin at citrin.ru  Tue Oct 10 02:53:04 2017
From: citrin at citrin.ru (Anton Yuzhaninov)
Date: Mon, 9 Oct 2017 22:53:04 -0400
Subject: NGINX IMAP proxy and outlook ios/android app
In-Reply-To: <1cc99d4c05f6ffec603d9177b170112c.NginxMailingListEnglish@forum.nginx.org>
References: <1cc99d4c05f6ffec603d9177b170112c.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

On 10/09/17 09:45, bobykus wrote:
> Looks like since mid of Sept we can not use nginx as an imap(s) proxy for
> mobile outlook apps (both IOS and Android ).
> SSL handshake is just dropping like

Try to set up a test https site on the same nginx with the same ssl
settings and access this site using the default phone browser. Maybe the
problem is not specific to imap (and mobile outlook).

> 2017/10/09 15:32:01 [debug] 30391#0: *184 accept: 52.166.246.73 fd:44
> 2017/10/09 15:32:01 [info] 30391#0: *184 client 52.166.246.73 connected to
> 0.0.0.0:993
> 2017/10/09 15:32:01 [debug] 30391#0: *184 SSL_do_handshake: -1
> 2017/10/09 15:32:01 [debug] 30391#0: *184 SSL_get_error: 2
> 2017/10/09 15:32:01 [debug] 30391#0: *184 epoll add event: fd:44 op:1
> ev:80000001
> 2017/10/09 15:32:01 [debug] 30391#0: *184 event timer add: 44:
> 60000:1507555981777
> 2017/10/09 15:32:31 [debug] 30391#0: *184 SSL handshake handler: 0
> 2017/10/09 15:32:31 [debug] 30391#0: *184 SSL_do_handshake: 0
> 2017/10/09 15:32:31 [debug] 30391#0: *184 SSL_get_error: 5
> 2017/10/09 15:32:31 [info] 30391#0: *184 peer closed connection in SSL
> handshake while SSL handshaking, client: 52.166.246.73, server: 0.0.0.0:993

...

> Wonder how can I figure out what happened, MS support is not any helpful in
> this case.
> tcpdump does not show much also...

Did you try saving a traffic dump to a file and studying it in Wireshark?
Wireshark can decode Client/Server Hello messages and it can give a useful hint. From nginx-forum at forum.nginx.org Tue Oct 10 06:31:26 2017 From: nginx-forum at forum.nginx.org (bobykus) Date: Tue, 10 Oct 2017 02:31:26 -0400 Subject: NGINX IMAP proxy and outlook ios/android app In-Reply-To: References: Message-ID: I used tcpdump and ssldump ssldump -nr /var/tmp/www-ssl-client.cap New TCP connection #1: 52.166.193.38(14104) <-> 10.32.20.102(993) 1 1 0.0187 (0.0187) C>S Handshake ClientHello Version 3.3 resume [32]= 39 27 b8 51 63 0d 88 f5 47 fa 05 41 d0 b7 ac 3e 17 93 05 1b 24 47 a6 77 41 bd 45 07 42 25 de 25 cipher suites Unknown value 0xc024 Unknown value 0xc028 Unknown value 0x3d Unknown value 0xc026 Unknown value 0xc02a Unknown value 0x6b Unknown value 0x6a Unknown value 0xc00a Unknown value 0xc014 TLS_RSA_WITH_AES_256_CBC_SHA Unknown value 0xc005 Unknown value 0xc00f TLS_DHE_RSA_WITH_AES_256_CBC_SHA TLS_DHE_DSS_WITH_AES_256_CBC_SHA Unknown value 0xc023 Unknown value 0xc027 Unknown value 0x3c Unknown value 0xc025 Unknown value 0xc029 TLS_DHE_DSS_WITH_NULL_SHA Unknown value 0x40 Unknown value 0xc009 Unknown value 0xc013 TLS_RSA_WITH_AES_128_CBC_SHA Unknown value 0xc004 Unknown value 0xc00e TLS_DHE_RSA_WITH_AES_128_CBC_SHA TLS_DHE_DSS_WITH_AES_128_CBC_SHA Unknown value 0xc02c Unknown value 0xc02b Unknown value 0xc030 Unknown value 0x9d Unknown value 0xc02e Unknown value 0xc032 Unknown value 0x9f Unknown value 0xa3 Unknown value 0xc02f Unknown value 0x9c Unknown value 0xc02d Unknown value 0xc031 Unknown value 0x9e Unknown value 0xa2 Unknown value 0xc008 Unknown value 0xc012 TLS_RSA_WITH_3DES_EDE_CBC_SHA Unknown value 0xc003 Unknown value 0xc00d TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA Unknown value 0xff compression methods NULL 1 2 0.0190 (0.0003) S>C Handshake ServerHello Version 3.3 session_id[32]= 39 27 b8 51 63 0d 88 f5 47 fa 05 41 d0 b7 ac 3e 17 93 05 1b 24 47 a6 77 41 bd 45 07 42 25 de 25 cipherSuite Unknown value 0x3d 
compressionMethod NULL 1 3 0.0190 (0.0000) S>C ChangeCipherSpec 1 4 0.0190 (0.0000) S>C Handshake 1 5 0.0370 (0.0180) C>S ChangeCipherSpec 1 6 0.0949 (0.0578) C>S Handshake 1 7 0.0951 (0.0002) S>C application_data 1 8 0.1147 (0.0195) C>S application_data 1 9 0.1148 (0.0001) S>C application_data 1 10 0.1330 (0.0182) C>S application_data 1 11 0.1332 (0.0001) S>C application_data 1 12 0.1515 (0.0183) C>S Alert 1 0.1516 (0.0000) C>S TCP FIN 1 0.1516 (0.0000) S>C TCP FIN New TCP connection #2: 52.166.193.38(14105) <-> 10.32.20.102(993) 2 30.0452 (30.0452) C>S TCP FIN 2 30.0453 (0.0000) S>C TCP FIN Not much that can say what is the reason of connection drop Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276771,276782#msg-276782 From nginx-forum at forum.nginx.org Tue Oct 10 06:32:23 2017 From: nginx-forum at forum.nginx.org (bobykus) Date: Tue, 10 Oct 2017 02:32:23 -0400 Subject: NGINX IMAP proxy and outlook ios/android app In-Reply-To: References: Message-ID: <761ccc224e236ffd66110064d227a7cf.NginxMailingListEnglish@forum.nginx.org> How exactly I can do a wireshark against ssl chart? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276771,276783#msg-276783 From jspies at sun.ac.za Tue Oct 10 08:04:50 2017 From: jspies at sun.ac.za (Johann Spies) Date: Tue, 10 Oct 2017 10:04:50 +0200 Subject: Cookie security for nginx Message-ID: A security scan on our server showed : Vulnerability Detection Method Details: SSL/TLS: Missing `secure` Cookie Attribute OID:1.3.6.1.4.1.25623.1.0.902661 Version used: $Revision: 5543 This is on Debian 8.9. and nginx 1.6.2-5+deb8u5. I am uncertain on how to fix this using standard debian packages. Can you help me fixing this please? Regards Johann -- Johann Spies Telefoon: 021-808 4699 Databestuurder / Data manager Faks: 021-883 3691 Sentrum vir Navorsing oor Evaluasie, Wetenskap en Tegnologie Centre for Research on Evaluation, Science and Technology Universiteit Stellenbosch. 
The integrity and confidentiality of this email is governed by these terms / Hierdie terme bepaal die integriteit en vertroulikheid van hierdie epos. http://www.sun.ac.za/emaildisclaimer From r1ch+nginx at teamliquid.net Tue Oct 10 11:04:18 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 10 Oct 2017 13:04:18 +0200 Subject: Cookie security for nginx In-Reply-To: <59dca50f.8378630a.a60df.a794SMTPIN_ADDED_MISSING@mx.google.com> References: <59dca50f.8378630a.a60df.a794SMTPIN_ADDED_MISSING@mx.google.com> Message-ID: This is something you should fix on whatever application is setting the cookie. It probably isn't nginx. On Tue, Oct 10, 2017 at 10:04 AM, Johann Spies wrote: > A security scan on our server showed : > > Vulnerability Detection Method > Details: SSL/TLS: > Missing `secure` Cookie Attribute > OID:1.3.6.1.4.1.25623.1.0.902661 > Version used: > $Revision: 5543 > > This is on Debian 8.9. and nginx 1.6.2-5+deb8u5. > > I am uncertain on how to fix this using standard debian packages. > > Can you help me fixing this please? > > Regards > Johann > > > -- > Johann Spies Telefoon: 021-808 4699 > Databestuurder / Data manager Faks: 021-883 3691 > > Sentrum vir Navorsing oor Evaluasie, Wetenskap en Tegnologie > Centre for Research on Evaluation, Science and Technology > Universiteit Stellenbosch. > > The integrity and confidentiality of this email is governed by these terms > / Hierdie terme bepaal die integriteit en vertroulikheid van hierdie epos. > http://www.sun.ac.za/emaildisclaimer > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jspies at sun.ac.za Tue Oct 10 12:24:16 2017 From: jspies at sun.ac.za (Johann Spies) Date: Tue, 10 Oct 2017 14:24:16 +0200 Subject: Cookie security for nginx In-Reply-To: References: <59dca50f.8378630a.a60df.a794SMTPIN_ADDED_MISSING@mx.google.com> Message-ID: <20171010122416.ufzetst6gexfpv2d@sun.ac.za> > This is something you should fix on whatever application is setting the > cookie. It probably isn't nginx. > Thanks. That helped. Regards Johann -- Johann Spies Telefoon: 021-808 4699 Databestuurder / Data manager Faks: 021-883 3691 Sentrum vir Navorsing oor Evaluasie, Wetenskap en Tegnologie Centre for Research on Evaluation, Science and Technology Universiteit Stellenbosch. The integrity and confidentiality of this email is governed by these terms / Hierdie terme bepaal die integriteit en vertroulikheid van hierdie epos. http://www.sun.ac.za/emaildisclaimer From citrin at citrin.ru Tue Oct 10 14:22:50 2017 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Tue, 10 Oct 2017 10:22:50 -0400 Subject: Inconsistent variable caching (in SSI or in nginScript) In-Reply-To: <6604d67b-17f0-b10a-cc38-b4a87bead9e1@citrin.ru> References: <6604d67b-17f0-b10a-cc38-b4a87bead9e1@citrin.ru> Message-ID: <8eef7d12-9bd9-07ed-c0ae-c9b3b723b0ea@citrin.ru> On 10/09/17 22:15, Anton Yuzhaninov wrote: > > I've encountered a bug (or unexpected and not well documented feature) > in SSI (ngx_http_ssi_module) or in nginScript (ngx_http_js_module). > > If a variable, which set via js_set is used only in SSI it is not cached > and nginScript function called several times for one HTTP request. This problem is not related to nginScript. More simple test case: log_format unused '$request_id'; # when this line is commented variable is not cached server { listen 8082; location / { ssi on; ssi_types text/plain; default_type text/plain; return 200 '\n\n'; } } If $request_id is used only in SSI two different $request_id values are shown. 
If $request_id is used anywhere in the config file, the variable value is cached and the same value is shown twice. From mdounin at mdounin.ru Tue Oct 10 14:41:57 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Oct 2017 17:41:57 +0300 Subject: Inconsistent variable caching (in SSI or in nginScript) In-Reply-To: <8eef7d12-9bd9-07ed-c0ae-c9b3b723b0ea@citrin.ru> References: <6604d67b-17f0-b10a-cc38-b4a87bead9e1@citrin.ru> <8eef7d12-9bd9-07ed-c0ae-c9b3b723b0ea@citrin.ru> Message-ID: <20171010144157.GJ75166@mdounin.ru> Hello! On Tue, Oct 10, 2017 at 10:22:50AM -0400, Anton Yuzhaninov wrote: > On 10/09/17 22:15, Anton Yuzhaninov wrote: > > > > I've encountered a bug (or an unexpected and not well documented feature) > > in SSI (ngx_http_ssi_module) or in nginScript (ngx_http_js_module). > > > > If a variable which is set via js_set is used only in SSI, it is not cached > > and the nginScript function is called several times for one HTTP request. > > This problem is not related to nginScript. > > A simpler test case: > > log_format unused '$request_id'; # when this line is commented out the > variable is not cached > > server { > listen 8082; > > location / { > ssi on; > ssi_types text/plain; > default_type text/plain; > return 200 '\n\n'; > } > } > > If $request_id is used only in SSI, two different $request_id values are > shown. If $request_id is used anywhere in the config file, the variable > value is cached and the same value is shown twice. This is a universal problem which is the result of how variable caching works: nginx is only able to cache variables which are indexed. So variables which aren't in the configuration but are only resolved dynamically, as in SSI's echo command, are not cached. Obviously enough this doesn't look good, especially taking into account the $request_id variable. Probably a solution would be to cache non-indexed variables separately. 
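The indexing behaviour described above can be demonstrated with a variant of Anton's test case (the SSI echo directives, which the quoted config appears to have lost to HTML scrubbing, are written out here). This is an untested sketch; the "set" line exists only to reference $request_id somewhere in the configuration so that it becomes indexed, and therefore cached, for the whole request:

```nginx
server {
    listen 8082;

    location / {
        ssi on;
        ssi_types text/plain;
        default_type text/plain;

        # Referencing $request_id anywhere in the configuration indexes
        # it. With this line present, both echoes below should print the
        # same cached value; without it, they may print two different
        # values, as reported in the thread.
        set $force_index $request_id;

        return 200 '<!--#echo var="request_id" -->\n<!--#echo var="request_id" -->\n';
    }
}
```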
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Oct 10 15:39:58 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Oct 2017 18:39:58 +0300 Subject: nginx-1.13.6 Message-ID: <20171010153958.GM75166@mdounin.ru> Changes with nginx 1.13.6 10 Oct 2017 *) Bugfix: switching to the next upstream server in the stream module did not work when using the "ssl_preread" directive. *) Bugfix: in the ngx_http_v2_module. Thanks to Piotr Sikora. *) Bugfix: nginx did not support dates after the year 2038 on 32-bit platforms with 64-bit time_t. *) Bugfix: in handling of dates prior to the year 1970 and after the year 10000. *) Bugfix: in the stream module timeouts waiting for UDP datagrams from upstream servers were not logged or logged at the "info" level instead of "error". *) Bugfix: when using HTTP/2 nginx might return the 400 response without logging the reason. *) Bugfix: in processing of corrupted cache files. *) Bugfix: cache control headers were ignored when caching errors intercepted by error_page. *) Bugfix: when using HTTP/2 client request body might be corrupted. *) Bugfix: in handling of client addresses when using unix domain sockets. *) Bugfix: nginx hogged CPU when using the "hash ... consistent" directive in the upstream block if large weights were used and all or most of the servers were unavailable. -- Maxim Dounin http://nginx.org/ From thresh at nginx.com Tue Oct 10 16:49:35 2017 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 10 Oct 2017 19:49:35 +0300 Subject: OpenSSL 1.0.2 - CentOS 7.4 In-Reply-To: References: Message-ID: Hello, On 15/09/2017 10:29, Rodrigo Gomes wrote: > ??Hello guys, > > RedHat released yesterday (September 14) CentOS 7.4, which includes version 1.0.2 of OpenSSL. > > Now all it takes is the Nginx RPM to be compiled with the latest version of OpenSSL to work with ALPN. > > Do you have a forecast when we will have these features supported in the official repository?? 
nginx 1.13.6 packages from the official nginx.org mainline repository are now built on CentOS 7.4 using openssl 1.0.2. That also means those packages are impossible to install on CentOS/RHEL 7.0-7.3, if the operating system is explicitly configured to use minor version repos, instead of the "rolling" 7. The error observed on such systems is something like: Error: Package: 1:nginx-1.13.6-1.el7_4.ngx.x86_64 (nginx) Requires: libcrypto.so.10(OPENSSL_1.0.2)(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Unfortunately there is only one solution - get the systems upgraded to 7.4; one should also keep in mind that 7.0-7.3 are not supported by the distributions vendors anymore (including security fixes). -- Konstantin Pavlov www.nginx.com From kworthington at gmail.com Tue Oct 10 17:48:16 2017 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 10 Oct 2017 13:48:16 -0400 Subject: [nginx-announce] nginx-1.13.6 In-Reply-To: References: <20171010154005.GN75166@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.13.6 for Windows https://kevinworthington.com/nginxwin1136 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Oct 10, 2017 at 1:47 PM, Kevin Worthington wrote: > Hello Nginx users, > > Now available: Nginx 1.13.6 for Windows https://kevinworthington.com/ > nginxwin1136 (32-bit and 64-bit versions) > > These versions are to support legacy users who are already using Cygwin > based builds of Nginx. 
Officially supported native Windows binaries are at > nginx.org. > > Announcements are also available here: > Twitter http://twitter.com/kworthington > Google+ https://plus.google.com/+KevinWorthington/ > > Thank you, > Kevin > -- > Kevin Worthington > kworthington *@* (gmail] [dot} {com) > https://kevinworthington.com/ > https://twitter.com/kworthington > https://plus.google.com/+KevinWorthington/ > > > On Tue, Oct 10, 2017 at 11:40 AM, Maxim Dounin wrote: > >> Changes with nginx 1.13.6 10 Oct >> 2017 >> >> *) Bugfix: switching to the next upstream server in the stream module >> did not work when using the "ssl_preread" directive. >> >> *) Bugfix: in the ngx_http_v2_module. >> Thanks to Piotr Sikora. >> >> *) Bugfix: nginx did not support dates after the year 2038 on 32-bit >> platforms with 64-bit time_t. >> >> *) Bugfix: in handling of dates prior to the year 1970 and after the >> year 10000. >> >> *) Bugfix: in the stream module timeouts waiting for UDP datagrams >> from >> upstream servers were not logged or logged at the "info" level >> instead of "error". >> >> *) Bugfix: when using HTTP/2 nginx might return the 400 response >> without >> logging the reason. >> >> *) Bugfix: in processing of corrupted cache files. >> >> *) Bugfix: cache control headers were ignored when caching errors >> intercepted by error_page. >> >> *) Bugfix: when using HTTP/2 client request body might be corrupted. >> >> *) Bugfix: in handling of client addresses when using unix domain >> sockets. >> >> *) Bugfix: nginx hogged CPU when using the "hash ... consistent" >> directive in the upstream block if large weights were used and all >> or >> most of the servers were unavailable. >> >> >> -- >> Maxim Dounin >> http://nginx.org/ >> _______________________________________________ >> nginx-announce mailing list >> nginx-announce at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-announce >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Oct 10 23:06:40 2017 From: nginx-forum at forum.nginx.org (omar_h) Date: Tue, 10 Oct 2017 19:06:40 -0400 Subject: Unable to escape $ sign in subs_filter Message-ID: <5ad00be3925db15bbcbef5535fe22295.NginxMailingListEnglish@forum.nginx.org> Hello, I am trying to add some code to a page which contains a $ symbol. However, I am getting the following error: nginx: [emerg] invalid variable name Here is a simple example of the config: subs_filter 'example' '$'; Is there any way to escape the $ symbol? The issue occurs even with single brackets, which normally should take strings as literal without any variable substitution. It seems like a bug since this behavior should only occur with double brackets. This issue was also discussed in the past, however, the workarounds don't work in my case https://forum.nginx.org/read.php?29,243808,243808#msg-243808 https://forum.nginx.org/read.php?2,218536,218536#msg-218536 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276801,276801#msg-276801 From mailinglist at unix-solution.de Wed Oct 11 07:58:13 2017 From: mailinglist at unix-solution.de (basti) Date: Wed, 11 Oct 2017 09:58:13 +0200 Subject: index.php and autoindex on Message-ID: <0f0765fd-7f65-6570-6717-5ca5bd33f2b4@unix-solution.de> Hello, i have a config look like server { servername example.com .... location /foo { index index.php; proxy_pass ... ... } root /some/path; autoindex on; ... } I want Autoindex if url is example.com/ and want to run index.php if url is example.com/foo but in this dir I can also see all files. Best Regards, Basti From smart.imran003 at gmail.com Wed Oct 11 08:13:33 2017 From: smart.imran003 at gmail.com (Syed Imran) Date: Wed, 11 Oct 2017 13:43:33 +0530 Subject: docker login always returns success Message-ID: Hi, I have artifactory installed with docker & nginx integrated. What ever credentials i give, always returns "Login succeeded" Below is the log. 
Can this be something related to firewall block. Because application side log always shows up saying Login Denied. test at finland-artifactory:~$ docker login XXX.XXX.XXX.XXX Username (asdf): asdfasdf Password: Login Succeeded test at finland-artifactory:~$ docker login XXX.XXX.XXX.XXX Username (asdfasdf): thttht Password: Login Succeeded Below is the nginx configuration. ssl_certificate /etc/nginx/ssl/demo.pem; ssl_certificate_key /etc/nginx/ssl/demo.key; ssl_session_cache shared:SSL:1m; ssl_prefer_server_ciphers on; ## server configuration server { listen 443 ssl; listen 80 ; server_name XXX.XXX.XXX.XXX; if ($http_x_forwarded_proto = '') { set $http_x_forwarded_proto $scheme; } ## Application specific logs ## access_log /var/log/nginx/artifactory.net.nokia.com-access.log timing; ## error_log /var/log/nginx/artifactory.net.nokia.com-error.log; rewrite ^/$ /artifactory/webapp/ redirect; rewrite ^/artifactory/?(/webapp)?$ /artifactory/webapp/ redirect; rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/fnms_docker_virtual/$1/$2; chunked_transfer_encoding on; client_max_body_size 0; location /artifactory/ { proxy_read_timeout 900; proxy_pass_header Server; proxy_cookie_path ~*^/.* /; proxy_pass http://XXX.XXX.XXX.XXX:8081/artifactory/; proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Thanks, Syed -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mailinglist at unix-solution.de Wed Oct 11 08:46:36 2017 From: mailinglist at unix-solution.de (basti) Date: Wed, 11 Oct 2017 10:46:36 +0200 Subject: index.php and autoindex on In-Reply-To: <0f0765fd-7f65-6570-6717-5ca5bd33f2b4@unix-solution.de> References: <0f0765fd-7f65-6570-6717-5ca5bd33f2b4@unix-solution.de> Message-ID: <05cb11fd-dde7-03d1-b1a7-2f1620fb5ac2@unix-solution.de> Hello again, OK i have fixed it. On 11.10.2017 09:58, basti wrote: > Hello, > > i have a config look like > > server { > servername example.com > .... > > location /foo { > index index.php; > proxy_pass ... > ... > } > > root /some/path; > autoindex on; > ... > } > > I want Autoindex if url is > example.com/ > > and want to run index.php if url is example.com/foo but in this dir I > can also see all files. > > Best Regards, > Basti > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From rdocter at gmail.com Wed Oct 11 09:47:25 2017 From: rdocter at gmail.com (Ruben) Date: Wed, 11 Oct 2017 11:47:25 +0200 Subject: Selection of server in upstream directive using hash In-Reply-To: <20171009102349.GJ60372@Romans-MacBook-Air.local> References: <20171009092509.GI60372@Romans-MacBook-Air.local> <20171009102349.GJ60372@Romans-MacBook-Air.local> Message-ID: Well I'd expect that when using "server chat" as per the docs: "A domain name that resolves to several IP addresses defines multiple servers at once." the hash wouldn't be consistent on choosing the name, but on the server list, because it defines multiple servers. 2017-10-09 12:23 GMT+02:00 Roman Arutyunyan : > > Hi Ruben, > > On Mon, Oct 09, 2017 at 11:51:54AM +0200, Ruben wrote: > > First of all thanks for your reply. But what happens if I have for example > > a hostname: test, which resolves to a randomized list of multiple ip's. 
> > Such that when I do: > > > > dig test > > 10.0.0.8 > > 10.0.0.9 > > 10.0.0.10 > > > > and a few moments later > > > > dig test > > 10.0.0.10 > > 10.0.0.8 > > 10.0.0.9 > > > > Is it then still consistent on the ip's or is the consistency just on the > > name "test" in this case? > > The consistency is only about choosing the name "test". Once the name is > chosen (consistently), its ips are balanced by the round-robin balancer. > With this approach you can change ips of your server or add new addresses to > it and everything will keep working as before. However, you cannot stick > to a particular ip address of a server. > > > From the docs I read that it resolves the hostname and injects the ip's as > > server when multiple ip's are returned. But it's not completely clear on > > which the hash is acting. > > > > I am asking this, because docker constantly returns a randomized fixed list > > of ip's (for dns load balancing). But I always want to route a user, > > targetting for some url, to the same container. > > In the commercial version of nginx we have the sticky module, which can be used > to solve your issue: > > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky > > > 2017-10-09 11:25 GMT+02:00 Roman Arutyunyan : > > > > > Hi Ruben, > > > > > > On Mon, Oct 09, 2017 at 09:33:55AM +0200, Ruben wrote: > > > > I was wondering what the selection algorithm is for choosing a server in > > > > the upstream directive using hash. Is the selection based on the ip of > > > the > > > > server or is it based on the position of the list. 
> > > > > > > > So if I have for example the following configuration: > > > > > > > > upstream test { > > > > hash $arg_test; > > > > server 10.0.0.10; > > > > server 10.0.0.9; > > > > server 10.0.0.8; > > > > } > > > > > > > > or (ip's in different order) > > > > > > > > upstream chat { > > > > hash $arg_test; > > > > server 10.0.0.8; > > > > server 10.0.0.9; > > > > server 10.0.0.10; > > > > } > > > > > > > > If someone is targeting an url with ?test=1, is it in both configs > > > directed > > > > to the same ip or not. So is the selection based on the ip or based omn > > > the > > > > position in the list. > > > > > > The regular (non-consistent) hash balancer selects a server based on the > > > position in the list. However, the consistent hash balancer > > > (hash $arg_test consistent) makes a selection based on the server name/ip > > > specified in the "server" directive. > > > > > > -- > > > Roman Arutyunyan > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Roman Arutyunyan > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Oct 11 13:18:09 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Oct 2017 16:18:09 +0300 Subject: Unable to escape $ sign in subs_filter In-Reply-To: <5ad00be3925db15bbcbef5535fe22295.NginxMailingListEnglish@forum.nginx.org> References: <5ad00be3925db15bbcbef5535fe22295.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171011131808.GU75166@mdounin.ru> Hello! 
On Tue, Oct 10, 2017 at 07:06:40PM -0400, omar_h wrote: > Hello, > > I am trying to add some code to a page which contains a $ symbol. However, > I am getting the following error: > > nginx: [emerg] invalid variable name > > Here is a simple example of the config: > > subs_filter 'example' '$'; > > Is there any way to escape the $ symbol? The issue occurs even with single > brackets, which normally should take strings as literal without any variable > substitution. It seems like a bug since this behavior should only occur > with double brackets. In nginx, there is no difference between single and double quotes, they are handled equally. > This issue was also discussed in the past, however, the workarounds don't > work in my case > https://forum.nginx.org/read.php?29,243808,243808#msg-243808 > https://forum.nginx.org/read.php?2,218536,218536#msg-218536 The workaround with using geo{} still applies. -- Maxim Dounin http://nginx.org/ From r1ch+nginx at teamliquid.net Wed Oct 11 13:58:28 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 11 Oct 2017 15:58:28 +0200 Subject: [alert] epoll_ctl(1, 575) failed (17: File exists) Message-ID: Hello, I have a location that proxies to a websocket server. Clients connect over HTTPS (HTTP2, wss://). 
Sometimes clients generate the following alerts in the error log when hitting the websocket location: 2017/10/11 21:03:23 [alert] 34381#34381: *1020125 epoll_ctl(1, 603) failed (17: File exists) while proxying upgraded connection, client: x.158, server: www.example.com, request: "GET /websocketpath HTTP/2.0", upstream: "http:///", host: "www.example.com" 2017/10/11 21:44:15 [alert] 34374#34374: *1274194 epoll_ctl(1, 1131) failed (17: File exists) while proxying upgraded connection, client: x.42, server: www.example.com, request: "GET /websocketpath HTTP/2.0", upstream: "http:///", host: "www.example.com" Here's the location excerpt: location /websocketpath { proxy_read_timeout 300; proxy_next_upstream off; proxy_buffering off; proxy_http_version 1.1; proxy_set_header Host $host; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Upgrade $http_upgrade; proxy_pass http://; } Config is otherwise pretty straightforward (static content, fastcgi backends, no AIO). nginx is from the nginx.org Debian repository. 
nginx version: nginx/1.13.6 built by gcc 6.3.0 20170516 (Debian 6.3.0-18) built with OpenSSL 1.1.0f 25 May 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.13.6/debian/debuild-base/nginx-1.13.6=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-specs=/usr/share/dpkg/no-pie-link.specs -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' This seems to have started after upgrading to Debian 9 (which upgraded the OpenSSL library, allowing ALPN and thus HTTP2 to be usable). Previously the connections were mostly HTTP/1.1 and I didn't notice any such messages. Despite the alerts, the access log shows the clients with a 101 status code. 
Any idea if this is something on my end I should start looking at, or is this a possible issue with http2 and websockets? Thanks, Rich. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Oct 11 14:14:45 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 11 Oct 2017 17:14:45 +0300 Subject: [alert] epoll_ctl(1, 575) failed (17: File exists) In-Reply-To: References: Message-ID: <2366620.n283BbeWT0@vbart-workstation> On Wednesday 11 October 2017 15:58:28 Richard Stanway via nginx wrote: > Hello, > I have a location that proxies to a websocket server. Clients connect over > HTTPS (HTTP2, wss://). Sometimes clients generate the following alerts in > the error log when hitting the websocket location: > > 2017/10/11 21:03:23 [alert] 34381#34381: *1020125 epoll_ctl(1, 603) failed > (17: File exists) while proxying upgraded connection, client: x.158, > server: www.example.com, request: "GET /websocketpath HTTP/2.0", upstream: > "http:///", host: "www.example.com" > > 2017/10/11 21:44:15 [alert] 34374#34374: *1274194 epoll_ctl(1, 1131) failed > (17: File exists) while proxying upgraded connection, client: x.42, server: > www.example.com, request: "GET /websocketpath HTTP/2.0", upstream: > "http:///", > host: "www.example.com" > [..] > > This seems to have started after upgrading to Debian 9 (which upgraded the > OpenSSL library, allowing ALPN and thus HTTP2 to be usable). Previously the > connections were mostly HTTP/1.1 and I didn't notice any such messages. > > Despite the alerts, the access log shows the clients with a 101 status code. > > Any idea if this is something on my end I should start looking at, or is > this a possible issue with http2 and websockets? > [..] Websockets cannot work over HTTP/2. wbr, Valentin V. 
Bartenev From r1ch+nginx at teamliquid.net Wed Oct 11 14:54:56 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 11 Oct 2017 16:54:56 +0200 Subject: [alert] epoll_ctl(1, 575) failed (17: File exists) In-Reply-To: <2366620.n283BbeWT0@vbart-workstation> References: <2366620.n283BbeWT0@vbart-workstation> Message-ID: On Wed, Oct 11, 2017 at 4:14 PM, Valentin V. Bartenev wrote: > > Websockets cannot work over HTTP/2. > > So it appears, I guess I should have checked that! Upon closer examination, all the 101 responses I was seeing in the access log were from HTTP/1.1 clients, the HTTP 2 requests never even got logged in the access log. I'll see if I can rework my application to avoid using websockets. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Oct 12 05:57:34 2017 From: nginx-forum at forum.nginx.org (joseph-pg) Date: Thu, 12 Oct 2017 01:57:34 -0400 Subject: Unable to escape $ sign in subs_filter In-Reply-To: <5ad00be3925db15bbcbef5535fe22295.NginxMailingListEnglish@forum.nginx.org> References: <5ad00be3925db15bbcbef5535fe22295.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6d114a0b9511cd8a2399d43a5994d60b.NginxMailingListEnglish@forum.nginx.org> omar_h Wrote: ------------------------------------------------------- > Hello, > > I am trying to add some code to a page which contains a $ symbol. > However, I am getting the following error: > > nginx: [emerg] invalid variable name > > Here is a simple example of the config: > > subs_filter 'example' '$'; > > Is there any way to escape the $ symbol? The issue occurs even with > single brackets, which normally should take strings as literal without > any variable substitution. It seems like a bug since this behavior > should only occur with double brackets. 
> > This issue was also discussed in the past, however, the workarounds > don't work in my case > https://forum.nginx.org/read.php?29,243808,243808#msg-243808 > https://forum.nginx.org/read.php?2,218536,218536#msg-218536 You need a backslash (\). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276801,276829#msg-276829 From nginx-forum at forum.nginx.org Thu Oct 12 08:44:47 2017 From: nginx-forum at forum.nginx.org (Dingo) Date: Thu, 12 Oct 2017 04:44:47 -0400 Subject: Reverse cache not working on start pages In-Reply-To: <20171004153900.GR16067@mdounin.ru> References: <20171004153900.GR16067@mdounin.ru> Message-ID: I believe that there is sometimes a problem with the cache when i connect through a private IP-address instead of always using the public address. Since I started to always use the public address and a hairpin-nat, it always works. Maybe the cache has a problem when seeing me coming from different IP addresses. This didn't however help me with my start page problem, but I have found the solution now. I will write a new post in this thread about that. Thanks for the help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276670,276831#msg-276831 From nginx-forum at forum.nginx.org Thu Oct 12 08:45:55 2017 From: nginx-forum at forum.nginx.org (Dingo) Date: Thu, 12 Oct 2017 04:45:55 -0400 Subject: Reverse cache not working on start pages In-Reply-To: References: Message-ID: <4fb9a789f06a01e0f47ac1a2e4c7b50b.NginxMailingListEnglish@forum.nginx.org> Thanks for the help, and I have found the solution now, so I will post it in this thread. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276670,276833#msg-276833 From peter_booth at me.com Thu Oct 12 09:32:52 2017 From: peter_booth at me.com (Peter Booth) Date: Thu, 12 Oct 2017 05:32:52 -0400 Subject: Reverse cache not working on start pages (solution founD) In-Reply-To: References: <20171004153900.GR16067@mdounin.ru> Message-ID: Sounds like the problem is that you don't have nginx configured to enforce canonical urls. What do I mean by this? Imagine that every page on the site has one and only one "correct URL". So someone might type http://www.mydomain.com http://mydomain.com http://www.mydomain.com/index.html and expect to see the same page. A site that enforces canonical URLs would do a redirect from the non-canonical URL, so the web server ends up being queried for the canonical URL, which would be cached correctly. There is one good and one blah reason to do this. The first (good) reason is about predictability, and making it easy to solve problems. The second reason is for better SEO, though there are other techniques to solve it. There are so many things that can go wrong or trip us up on websites, which is why ensuring predictability whenever possible reduces the population of potential error causes. 
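In nginx terms, enforcing a canonical URL as described above usually means a blanket redirect from every non-canonical host to the canonical one. The sketch below is illustrative only, reusing the hypothetical mydomain.com from the post above and assuming a proxy_cache zone is defined elsewhere:

```nginx
# Non-canonical host: redirect everything to the canonical name.
server {
    listen 80;
    server_name mydomain.com;
    return 301 http://www.mydomain.com$request_uri;
}

# Canonical host: with the redirect in place, the cache only ever
# sees one host per page; including $host in the key (as in the
# earlier post) additionally keeps entries for different hosts apart.
server {
    listen 80;
    server_name www.mydomain.com;

    location / {
        proxy_cache_key "$host$uri$is_args$args";
        proxy_pass http://backend;   # placeholder upstream
    }
}
```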
Peter > On Oct 12, 2017, at 4:52 AM, Dingo wrote: > > I found the solution, but I don't understand what it does. When I add: > > proxy_cache_key "$host$uri$is_args$args"; > > To a location block it magically works. I have no clue what happens, it was > just a snippet I found on the Internet used by some other guy setting up a > reverse proxy with cache. > > And thanks to Maxim and pbooth for trying to help me. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276670,276833#msg-276833 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Oct 12 11:26:03 2017 From: nginx-forum at forum.nginx.org (Dingo) Date: Thu, 12 Oct 2017 07:26:03 -0400 Subject: Reverse cache not working on start pages (solution founD) In-Reply-To: References: Message-ID: <33f4bb82495696be868dd83c78b5fdcd.NginxMailingListEnglish@forum.nginx.org> You are right. I didn't know what canonical urls were, but now I know. Yes, there are in fact two servers. One server is running Apache with a website that has maybe 10 different DNS domains pointing to it, and then there is another server running IIS with lots of websites but usually only one DNS domain pointing to each of them. The IIS server has control panel software that enables customers to add both websites and DNS records, so I don't want to change the configuration in my nginx proxy every time someone adds or changes something on that server, so there needs to be a bit of compromising. I have very limited knowledge about how to configure and protect webservers, and the reason all this is happening now is that the IIS server has been hacked due to an old wordpress vulnerability in a plugin called revslider, so I have had to do things in a bit of a hurry. 
I have now installed modsecurity, which seems to have stopped the problem. I am seriously considering using nginx plus, but it's not entirely my decision and my colleagues are already upset over all the costs surrounding the web servers at the moment.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276670,276836#msg-276836

From lucas at lucasrolff.com Thu Oct 12 11:39:59 2017
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Thu, 12 Oct 2017 11:39:59 +0000
Subject: Reverse cache not working on start pages (solution found)
In-Reply-To: <33f4bb82495696be868dd83c78b5fdcd.NginxMailingListEnglish@forum.nginx.org>
References: <33f4bb82495696be868dd83c78b5fdcd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <5CF39CE4-D1F2-40EC-8C33-4D62DE46D346@lucasrolff.com>

If your server gets hacked because of a single website, you have bigger problems, and mod_security won't fix the issue. Consult with security professionals, or give the task of managing your infrastructure to someone who can properly secure the environment.

On 12/10/2017, 13.26, "nginx on behalf of Dingo" wrote:

    You are right. I didn't know what canonical url:s where, but now I know. Yes there is in fact two servers. One server is running Apache with a website that has maybe 10 different DNS-domains pointing to it and then there is another server running IIS with lots of websites but usually only one DNS-domain pointing to each of them. The IIS server has a control panel software that enables customers to add both websites and DNS-records, so I don't want to change the configuration in my nginx proxy every time someone adds or changes something on that server, so there needs to be a bit of compromising. I have very limited knowledge about how to configure and protect webservers and the reason all this is happening now, is that the IIS server has been hacked due to an old wordpress vulnerability in a plugin called revslider, so I have had to do things in a bit of a hurry.
    When I installed nginx i didn't know that it was revslider, so nginx didn't fix the problem, so the server got hacked once again. I have now installed modsecurity, which seems to have stopped the problem. I am seriously considering using nginx plus, but it's not entirely my decision and my colleagues are already upset over all cost surrounding the web-servers at the moment.

    Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276670,276836#msg-276836

    _______________________________________________
    nginx mailing list
    nginx at nginx.org
    http://mailman.nginx.org/mailman/listinfo/nginx

From c.schoepplein at musin.de Thu Oct 12 12:41:58 2017
From: c.schoepplein at musin.de (Christian Schoepplein)
Date: Thu, 12 Oct 2017 14:41:58 +0200
Subject: WebDAV behind a nginx reverse proxy
Message-ID: <20171012124158.jnngibjrehe7xagg@cs.outpost.musin.de>

Hi,

I've installed nginx as a reverse proxy in front of an Apache WebDAV server. Everything seems to be OK so far, but renaming or moving files fails. This is my current vhost for the webdav access on the nginx rev.
proxy:

server {
    listen 443 ssl;
    server_name "~^(?<part1>(webdav|schulweb))\-ca(?<part2>\d{4})\-(?<part3>(muenchen|augsburg))\.musin\.de$";

    dav_methods PUT DELETE MKCOL COPY MOVE;
    dav_ext_methods PROPFIND OPTIONS;

    location / {
        resolver ns.musin.de;
        set $target $part1.ca$part2.$part3.musin.de;
        proxy_pass http://$target:80;
        client_max_body_size 0;

        set $dest $http_destination;
        if ($http_destination ~ "^https://(.+)") {
            set $dest http://$1;
        }
        proxy_set_header Destination $dest;

        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
        proxy_pass_request_headers on;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header X-Forwarded-User $http_authorization;
        proxy_set_header Host $target;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_pass_header Date;
        proxy_pass_header Server;
        proxy_pass_header Authorization;
        more_set_input_headers 'Authorization: $http_authorization';
        more_set_headers -s 401 'WWW-Authenticate: Basic realm="$target"';
    }

    access_log /var/log/nginx/webdav-access.log upstreamlog;
    error_log /var/log/nginx/webdav-error.log warn;
}

If I switch the vhost to listen on port 80 without ssl, everything is fine and files can be renamed or moved via WebDAV.

I think the problem is related to this thread, but the solution described there unfortunately does not help:

http://mailman.nginx.org/pipermail/nginx/2007-January/000504.html

I am absolutely no nginx, proxy or HTTP expert, so maybe some settings in the config above are wrong. If you see anything that is strange, can be removed, or should be changed, please let me know. Every idea on how to solve my problem would be great, and every hint on how to debug this kind of problem is highly welcome :-).
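One way to make the Destination rewriting visible at runtime (a sketch; the log format name and file path are arbitrary, and $dest is the variable set in the configuration above) is to log both the incoming header and the rewritten value:

```nginx
# Sketch: log the Destination header as received from the client next to
# the rewritten $dest that is forwarded upstream, to spot mismatches
# during MOVE/COPY requests. "dav_debug" is a hypothetical name.
log_format dav_debug '$remote_addr "$request" '
                     'destination_in="$http_destination" '
                     'destination_out="$dest"';
```

with a matching `access_log /var/log/nginx/webdav-debug.log dav_debug;` inside the server block (log_format itself belongs at the http level).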
Cheers and thanks,

  Schoepp

-- 
Christian Schoepplein
Landeshauptstadt Muenchen
Referat fuer Bildung und Sport
Zentrum fuer Informationstechnologie im Bildungsbereich (ZIB)
- Netze und Servermanagement

Postal address:                    Office address:
Landeshauptstadt Muenchen          Landeshauptstadt Muenchen
Referat fuer Bildung und Sport     Referat fuer Bildung und Sport
Postfach                           Bayerstr. 28 (Raum 5.326)
80313 Muenchen                     80335 Muenchen

T: +49 (0)89 233-85906
E: c.schoepplein (at) musin.de
I: http://www.zib.musin.de

For electronic communication with the City of Munich, see: http://www.muenchen.de/ekomm

Please consider the environment before printing this e-mail. Each sheet not printed saves on average 15 g of wood, 260 ml of water, 0.05 kWh of electricity and 5 g of CO2.

From r at roze.lv Thu Oct 12 19:37:11 2017
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 12 Oct 2017 22:37:11 +0300
Subject: WebDAV behind a nginx reverse proxy
In-Reply-To: <20171012124158.jnngibjrehe7xagg@cs.outpost.musin.de>
References: <20171012124158.jnngibjrehe7xagg@cs.outpost.musin.de>
Message-ID: 

> This is my current vhost for the webdav access on the nginx rev. proxy:
[..]
> If I switch the vhost to listen on port 80 without ssl, everything is
> fine and files can be renamed or moved via webdav.

If it works on http but not with ssl, it might indicate that either this configuration part doesn't work as expected:

set $dest $http_destination;

if ($http_destination ~ "^https://(.+)") {
    set $dest http://$1;
}
proxy_set_header Destination $dest;

or, depending on the backend application, statically setting

proxy_set_header X-Forwarded-Proto http;

is wrong, as usually you need to pass the actual protocol used for the application to respond correctly and construct the URLs using the right schema.
I would try changing it to:

proxy_set_header X-Forwarded-Proto $scheme;

> Also every hint how to debug such kind of problems are highly wellcome

One way to debug would be to use something like tcpdump, either on the nginx or the Apache host, to inspect the HTTP headers passed, and/or to add them to the access logs to see what goes wrong.

But some parts you can also check on the frontend with a browser - for example the Destination header, by adding it to the nginx configuration:

add_header Destination $dest;

As far as I understand you are using nginx as an SSL offloader? Is there anything else you do on the proxy? If not, it may be easier to use the stream module ( http://nginx.org/en/docs/stream/ngx_stream_core_module.html ) and offload SSL at the TCP level rather than deal with the HTTP headers in between (though there is the drawback of not getting the real client IP in an HTTP header on the backend server / hope for PROXY protocol support for backends one day).

In short, something like:

stream {
    upstream stream_backend {
        server your.apache.backend:80;
    }

    server {
        listen 443 ssl;
        proxy_pass stream_backend;
        ssl_certificate cert.crt;
        ssl_certificate_key cert.key;
    }
}

Also https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/

rr

From sovrevage at gmail.com Fri Oct 13 05:47:11 2017
From: sovrevage at gmail.com (Stian Øvrevåge)
Date: Fri, 13 Oct 2017 00:47:11 -0500
Subject: auth_request off; ignored when combined with auth_basic;
Message-ID: 

Hi list,

I have a server {} block that is protected with auth_request; at the top level.

auth_request is used for an interactive login process.

I have some endpoints that will receive data from other software and must instead be protected by auth_basic. However, "auth_request off;" is ignored in these location{} blocks IF there is also an auth_basic statement in the block.
This works without logging in:

location /test/ {
    auth_request off;
    proxy_pass http://localhost:88/;
}

This is automatically redirected back to /security/ for login (as defined by auth_request in the server{} block):

location /api/ {
    auth_request "off";
    auth_basic "Restricted access";
    auth_basic_user_file /etc/htpasswd;
    proxy_pass http://localhost:88/;
}

I see online references to a "satisfy any" directive that apparently worked a few years ago, but it does not anymore, and others are reporting similar problems:
https://stackoverflow.com/questions/42301559/nginx-with-auth-request-and-auth-basic

Brgds,
Stian Øvrevåge

From mdounin at mdounin.ru Fri Oct 13 09:14:09 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 13 Oct 2017 12:14:09 +0300
Subject: auth_request off; ignored when combined with auth_basic;
In-Reply-To: 
References: 
Message-ID: <20171013091408.GM75166@mdounin.ru>

Hello!

On Fri, Oct 13, 2017 at 12:47:11AM -0500, Stian Øvrevåge wrote:

> Hi list,
>
> I have a server {} block that is protected with auth_request; on the top level.
>
> auth_request is used for an interactive login process.
>
> I have some endpoints that will receive data from other software, and
> must instead be protected by auth_basic. However, "auth_request off;"
> is ignored in these location{} blocks IF there is also a auth_basic
> statement in the block.
>
> This works without logging in:
> location /test/ {
>     auth_request off;
>     proxy_pass http://localhost:88/;
> }
>
> This is automatically redirected back to /security/ for login (as
> defined by auth_request in server{} block.
> location /api/ {
>     auth_request "off";
>     auth_basic "Restricted access";
>     auth_basic_user_file /etc/htpasswd;
>     proxy_pass http://localhost:88/;
> }
>
> I see online references to a "satisfy any" directive that apparently
> worked a few years ago, but it does not anymore, and others are
> reporting similar problems:
> https://stackoverflow.com/questions/42301559/nginx-with-auth-request-and-auth-basic

Works fine here:

$ curl http://127.0.0.1:8080/
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.13.7</center>
</body>
</html>
$ curl http://127.0.0.1:8080/test/
ok
$ curl http://127.0.0.1:8080/api/
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.13.7</center>
</body>
</html>
$ curl --basic --user foo:foo http://127.0.0.1:8080/api/
ok

Just tested with the following configuration:

server {
    listen 8080;

    auth_request /auth;

    location / {
        proxy_pass http://localhost:8082;
    }

    location /test/ {
        auth_request off;
        proxy_pass http://localhost:8082;
    }

    location /api/ {
        auth_request "off";
        auth_basic "Restricted access";
        auth_basic_user_file /path/to/htpasswd;
        proxy_pass http://localhost:8082;
    }

    location = /auth {
        return 403;
    }
}

server {
    listen 8082;
    return 200 "ok\n";
}

Note that in the request to /api/, where auth_basic is configured, you have to specify a username and password, or the request will be rejected by auth_basic.

-- 
Maxim Dounin
http://nginx.org/

From c.schoepplein at musin.de Fri Oct 13 12:47:51 2017
From: c.schoepplein at musin.de (Christian Schoepplein)
Date: Fri, 13 Oct 2017 14:47:51 +0200
Subject: WebDAV behind a nginx reverse proxy
In-Reply-To: 
References: <20171012124158.jnngibjrehe7xagg@cs.outpost.musin.de>
Message-ID: <20171013124751.btsbgsuwxwuzyjce@cs.outpost.musin.de>

On Thu, Oct 12, 2017 at 10:37:11PM +0300, Reinis Rozitis wrote:
>> This is my current vhost for the webdav access on the nginx rev. proxy:
>[..]
>> If I switch the vhost to listen on port 80 without ssl, everything is
>> fine and files can be renamed or moved via webdav.
>
>If it works on http but not with ssl it might indicate that either this
>configuration part doesn't work as expected:
>
>set $dest $http_destination;
>
>if ($http_destination ~ "^https://(.+)") {
>    set $dest http://$1;
>}
>proxy_set_header Destination $dest;

[...]

>> Also every hint how to debug such kind of problems are highly wellcome
>
>One way to debug would be using something like tcpdump either on the nginx or
>apache host to inspect the http headers passed and/or adding them to access
>logs to see what goes wrong.

I've checked the headers with tcpdump now, and I think I've found the problem, but I do not know how to solve it.
The Destination header looks like this:

Destination: http://webdav-ca0609-muenchen.musin.de

This is the address that users are using from the internet. But the Destination header should look like this:

Destination: http://webdav.ca0609.muenchen.musin.de

This is the internal address of the server.

Can I put some rewrite rule inside the block where I set the http_destination? Something like this:

set $dest $http_destination;

if ($http_destination ~ "^https://(.+)") {
    rewrite ...
    set $dest http://...
}
proxy_set_header Destination $dest;

Does someone have a hint on how the block should look? Currently I just cannot find the right rewrite rule, and I also have no idea how to pass the right URI to the $dest variable.

>As far as I understand you are using nginx as an SSL offloader?

Yes. nginx is the reverse proxy in front of 300 servers that offer different services; one service is WebDAV. From the internet to nginx HTTPS is used, and to the backends just HTTP. Because we want to use a wildcard certificate for *.musin.de, the addresses reachable from the internet are a-b-muenchen.musin.de, but the internal addresses are a.b.muenchen.musin.de. That's the reason why I need to rewrite the addresses :-(. The whole setup is not the best, I know, but it's evolved over the years and some things cannot be changed easily :-(.

>Is there anything else you do on the proxy?

Yes, I do additional reverse proxying for some other http(s) services which are running on internal machines; fortunately all those services do not need rewriting of addresses and are working fine. I can move all those services to another machine, if that makes things easier.
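One possible shape for such a rewrite, as an untested sketch based on the host pattern described above (the map variable and capture names h1, h2, h3, rest are arbitrary):

```nginx
# Untested sketch: translate the external hyphenated host in the
# Destination header into the internal dotted form, e.g.
#   https://webdav-ca0609-muenchen.musin.de/some/file
#   -> http://webdav.ca0609.muenchen.musin.de/some/file
# A map avoids if/rewrite and keeps the path ($rest) intact.
map $http_destination $dav_destination {
    default $http_destination;
    "~^https?://(?<h1>[a-z]+)-ca(?<h2>\d{4})-(?<h3>[a-z]+)\.musin\.de(?<rest>.*)$"
        "http://$h1.ca$h2.$h3.musin.de$rest";
}
```

The map block goes at the http level; the location block would then use `proxy_set_header Destination $dav_destination;` instead of the if/set pair.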
>If not maybe it's more easy to use the stream module (
>http://nginx.org/en/docs/stream/ngx_stream_core_module.html ) and ssl offload
>on tcp level rather than deal with the http/headers between (though there is
>a drawback of not getting the real client ip (in a http header) on the
>backend server / hope for PROXY protocol support for backends one day).

Hmm, I'll take a look and think about it; maybe having a separate machine and using the stream module is really the better solution.

Cheers,

  Schoepp

-- 
Christian Schoepplein
Landeshauptstadt Muenchen, Referat fuer Bildung und Sport
Zentrum fuer Informationstechnologie im Bildungsbereich (ZIB) - Netze und Servermanagement

From sovrevage at gmail.com Fri Oct 13 16:14:45 2017
From: sovrevage at gmail.com (Stian Øvrevåge)
Date: Fri, 13 Oct 2017 11:14:45 -0500
Subject: auth_request off; ignored when combined with auth_basic;
In-Reply-To: <20171013091408.GM75166@mdounin.ru>
References: <20171013091408.GM75166@mdounin.ru>
Message-ID: 

Thanks a bunch. While still being redirected, I found the culprit:

location @error401 {
    return 302 /security/;
}

which of course redirects before auth_basic can work.

Thanks again, and pardon my ignorance :o

Br,
Stian

On 13 October 2017 at 04:14, Maxim Dounin wrote:
> Hello!
> > On Fri, Oct 13, 2017 at 12:47:11AM -0500, Stian ?vrev?ge wrote: > >> Hi list, >> >> I have a server {} block that is protected with auth_request; on the top level. >> >> auth_request is used for a interactive login process. >> >> I have some endpoints that will receive data from other software, and >> must instead be protected by auth_basic. However, "auth_request off;" >> is ignored in these location{} blocks IF there is also a auth_basic >> statement in the block. >> >> This works without logging in: >> location /test/ { >> auth_request off; >> proxy_pass http://localhost:88/; >> } >> >> This is automatically redirected back to /security/ for login (as >> defined by auth_request in server{} block. >> location /api/ { >> auth_request "off"; >> auth_basic "Restricted access"; >> auth_basic_user_file /etc/htpasswd; >> proxy_pass http://localhost:88/; >> } >> >> I see online references to a "satisfy any" directive that apparently >> worked a few years ago, but it does not anymore, and others are >> reporting similar problems: >> https://stackoverflow.com/questions/42301559/nginx-with-auth-request-and-auth-basic > > Works fine here: > > $ curl http://127.0.0.1:8080/ > > 403 Forbidden > >

> 403 Forbidden
> nginx/1.13.7
> > > $ curl http://127.0.0.1:8080/test/ > ok > $ curl http://127.0.0.1:8080/api/ > > 401 Authorization Required > >

> 401 Authorization Required
> nginx/1.13.7
>
> $ curl --basic --user foo:foo http://127.0.0.1:8080/api/
> ok
>
> Just tested with the following configuration:
>
> server {
>     listen 8080;
>
>     auth_request /auth;
>
>     location / {
>         proxy_pass http://localhost:8082;
>     }
>
>     location /test/ {
>         auth_request off;
>         proxy_pass http://localhost:8082;
>     }
>
>     location /api/ {
>         auth_request "off";
>         auth_basic "Restricted access";
>         auth_basic_user_file /path/to/htpasswd;
>         proxy_pass http://localhost:8082;
>     }
>
>     location = /auth {
>         return 403;
>     }
> }
>
> server {
>     listen 8082;
>     return 200 "ok\n";
> }
>
> Note that in the request to /api/, where auth_basic is configured,
> you have to specify a username and password, or the request
> will be rejected by auth_basic.
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From sovrevage at gmail.com Fri Oct 13 16:39:50 2017
From: sovrevage at gmail.com (Stian Øvrevåge)
Date: Fri, 13 Oct 2017 11:39:50 -0500
Subject: auth_request off; ignored when combined with auth_basic;
In-Reply-To: 
References: <20171013091408.GM75166@mdounin.ru>
Message-ID: 

Quick follow-up. I tried negating the 401 redirect at the /api/ endpoint but got an error:

named location "@error401" can be on the server level only

Any suggestion on how I can enforce a site-wide redirect to /security/ except for the 2-3 endpoints that have basic auth?

Br,
Stian

On 13 October 2017 at 11:14, Stian Øvrevåge wrote:
> Thanks a bunch. When still being redirected now I found the culprit:
>
> location @error401 {
>     return 302 /security/;
> }
>
> Which of course will redirect before auth basic will work.
>
> Thanks again and pardon my ignorance :o
>
> Br,
> Stian
>
> On 13 October 2017 at 04:14, Maxim Dounin wrote:
>> Hello!
>> >> On Fri, Oct 13, 2017 at 12:47:11AM -0500, Stian ?vrev?ge wrote: >> >>> Hi list, >>> >>> I have a server {} block that is protected with auth_request; on the top level. >>> >>> auth_request is used for a interactive login process. >>> >>> I have some endpoints that will receive data from other software, and >>> must instead be protected by auth_basic. However, "auth_request off;" >>> is ignored in these location{} blocks IF there is also a auth_basic >>> statement in the block. >>> >>> This works without logging in: >>> location /test/ { >>> auth_request off; >>> proxy_pass http://localhost:88/; >>> } >>> >>> This is automatically redirected back to /security/ for login (as >>> defined by auth_request in server{} block. >>> location /api/ { >>> auth_request "off"; >>> auth_basic "Restricted access"; >>> auth_basic_user_file /etc/htpasswd; >>> proxy_pass http://localhost:88/; >>> } >>> >>> I see online references to a "satisfy any" directive that apparently >>> worked a few years ago, but it does not anymore, and others are >>> reporting similar problems: >>> https://stackoverflow.com/questions/42301559/nginx-with-auth-request-and-auth-basic >> >> Works fine here: >> >> $ curl http://127.0.0.1:8080/ >> >> 403 Forbidden >> >>

>> 403 Forbidden
>> nginx/1.13.7
>> >> >> $ curl http://127.0.0.1:8080/test/ >> ok >> $ curl http://127.0.0.1:8080/api/ >> >> 401 Authorization Required >> >>

>> 401 Authorization Required
>> nginx/1.13.7
>> >> >> $ curl --basic --user foo:foo http://127.0.0.1:8080/api/ >> ok >> >> Just tested with the following configuration: >> >> server { >> listen 8080 >> >> auth_request /auth; >> >> location / { >> proxy_pass http://localhost:8082; >> } >> >> location /test/ { >> auth_request off; >> proxy_pass http://localhost:8082; >> } >> >> location /api/ { >> auth_request "off"; >> auth_basic "Restricted access"; >> auth_basic_user_file /path/to/htpasswd; >> proxy_pass http://localhost:8082; >> } >> >> location = /auth { >> return 403; >> } >> } >> >> server { >> listen 8082; >> return 200 ok\n; >> } >> >> Note that in the request to /api/, where auth_basic is configured, >> you have to request specify username and password, or the request >> will be rejected by auth_basic. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Fri Oct 13 22:12:32 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 13 Oct 2017 23:12:32 +0100 Subject: auth_request off; ignored when combined with auth_basic; In-Reply-To: References: <20171013091408.GM75166@mdounin.ru> Message-ID: <20171013221232.GL20907@daoine.org> On Fri, Oct 13, 2017 at 11:39:50AM -0500, Stian ?vrev?ge wrote: Hi there, > Quick follow-up. I tried negating the 401 redirect at the /api/ > endpoint but got an error: > > named location "@error401" can be on the server level only > > Any suggestion on how I can enforce a side-wide redirect to /security/ > except the 2-3 endpoints that have basic auth? It will probably be easier if you show the config that does not do what you want; but my guess is that using "error_page" in the location{}s with basic auth to override the (presumed) server-level "error_page" setting might work. That is: add something like "error_page 444 =444;" where you don't want the 401 redirect to happen. 
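A sketch of that suggestion applied to the earlier /api/ block (untested; the dummy status code follows the "error_page 444 =444;" idea above, and the exact arguments may need adjusting, since error_page normally takes a URI):

```nginx
location /api/ {
    auth_request      off;
    auth_basic        "Restricted access";
    auth_basic_user_file /etc/htpasswd;

    # Declaring any error_page here replaces the whole inherited set, so
    # the (presumed) server-level "error_page 401 = @error401" no longer
    # applies and auth_basic's own 401 challenge reaches the client.
    error_page 444 =444 /unused;   # /unused is a hypothetical placeholder URI

    proxy_pass http://localhost:88/;
}
```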
f
-- 
Francis Daly        francis at daoine.org

From rdocter at gmail.com Sun Oct 15 07:51:27 2017
From: rdocter at gmail.com (Ruben)
Date: Sun, 15 Oct 2017 09:51:27 +0200
Subject: max_fails=0 for server directive
Message-ID: 

When setting max_fails=0 for all server directives used in the upstream module, so for example:

upstream chat-servers {
    hash $arg_chatName;
    server chat-1 max_fails=0;
    server chat-2 max_fails=0;
    server chat-3 max_fails=0;
}

Assume a certain ?chatName=xxx is directed to the chat-2 server, and this server fails. Do I get an error for that connection, or does it try the chat-3 server?

I want it not to go to the next server but to just fail. Is this the correct config?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charles.orth at oath.com Mon Oct 16 15:29:57 2017
From: charles.orth at oath.com (Charles Orth)
Date: Mon, 16 Oct 2017 11:29:57 -0400
Subject: Stable version release date for 1.13.X
Message-ID: 

Hi Folks,

Do we know when the next stable version is scheduled to be released using the 1.13 code base?

Charles

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From maxim at nginx.com Mon Oct 16 15:34:17 2017
From: maxim at nginx.com (Maxim Konovalov)
Date: Mon, 16 Oct 2017 18:34:17 +0300
Subject: Stable version release date for 1.13.X
In-Reply-To: 
References: 
Message-ID: 

On 16/10/2017 18:29, Charles Orth via nginx wrote:
> Hi Folks,
>
> Do we know when the next stable version is scheduled to be released
> using the 1.13 code base?
>
You can use April 2018 as a good estimate for the 1.13 -> 1.14 switchover.

Just in case: it is OK to use -mainline in production.

-- 
Maxim Konovalov
"I'm not a software developer, but it doesn't seem as rocket science"

From mdounin at mdounin.ru Mon Oct 16 15:49:57 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 16 Oct 2017 18:49:57 +0300
Subject: Stable version release date for 1.13.X
In-Reply-To: 
References: 
Message-ID: <20171016154957.GS75166@mdounin.ru>

Hello!
On Mon, Oct 16, 2017 at 11:29:57AM -0400, Charles Orth via nginx wrote:

> Do we know when the next stable version is scheduled to be released using
> the 1.13 code base?

Next stable branch, 1.14.x, based on the 1.13.x mainline, is expected to appear around next April, as usual.

Next stable version, 1.12.2, based on 1.12.1 with some bugfixes merged from 1.13.x, is expected to be released tomorrow.

-- 
Maxim Dounin
http://nginx.org/

From vbart at nginx.com Mon Oct 16 15:50:29 2017
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 16 Oct 2017 18:50:29 +0300
Subject: max_fails=0 for server directive
In-Reply-To: 
References: 
Message-ID: <1886261.Rehekh8TW7@vbart-workstation>

On Sunday 15 October 2017 09:51:27 Ruben wrote:
> When setting max_fails=0 for all server directives used in the upstream module,
> so for example:
>
> upstream chat-servers {
>     hash $arg_chatName;
>     server chat-1 max_fails=0;
>     server chat-2 max_fails=0;
>     server chat-3 max_fails=0;
> }
>
> Assume a certain ?chatName=xxx is directed to the chat-2 server, and this
> server fails. Do I get an error for that connection or does it try the chat-3
> server?
>
> I want it not to go to the next server but to just fail. Is this the correct
> config?

No.

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
| The zero value disables the accounting of attempts.
| What is considered an unsuccessful attempt is defined by the
| proxy_next_upstream, fastcgi_next_upstream, uwsgi_next_upstream,
| scgi_next_upstream, and memcached_next_upstream directives.

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
| off
|     disables passing a request to the next server.

wbr, Valentin V. Bartenev

From nginx-forum at forum.nginx.org Mon Oct 16 16:37:35 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Mon, 16 Oct 2017 12:37:35 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
Message-ID: 

I am struggling to handle traffic of 10K+, and it will grow a lot.
But my bad configuration is limiting them.

www.conf:

pm.max_children = 400
pm.start_servers = 40
pm.min_spare_servers = 40
pm.max_spare_servers = 70
pm.max_requests = 800

nginx.conf:

worker_processes 3;
events {
    worker_connections 8096;
    accept_mutex on;
    accept_mutex_delay 500ms;
    multi_accept on;
    use epoll;
}

sendfile on;
tcp_nodelay on;
tcp_nopush on;

client_body_buffer_size 128K;
client_max_body_size 8m;

client_body_timeout 15;
client_header_timeout 15;
send_timeout 10;
keepalive_timeout 15;

open_file_cache max=5000 inactive=20s;
open_file_cache_valid 60s;
open_file_cache_min_uses 4;
open_file_cache_errors on;

If I change the values, it hangs with 3k or 5k visitors. This configuration handles 5k to 8k, but CPU and RAM are still available on the server: quad core, 32GB RAM, SSD.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276892#msg-276892

From medvedev.yp at gmail.com Mon Oct 16 16:49:47 2017
From: medvedev.yp at gmail.com (Iurii Medvedev)
Date: Mon, 16 Oct 2017 19:49:47 +0300
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: 

How many cores do you have?

On Oct 16, 2017 19:37, "agriz" wrote:

> I am struggling to handle a traffic of 10K+ It will reach a lot. But my bad
> configuration is limiting them.
>
> www.conf
>
> pm.max_children = 400
> pm.start_servers = 40
> pm.min_spare_servers = 40
> pm.max_spare_servers = 70
> pm.max_requests = 800
>
> nginx.conf
>
> worker_processes 3;
> events {
>     worker_connections 8096;
>     accept_mutex on;
>     accept_mutex_delay 500ms;
>     multi_accept on;
>     use epoll;
> }
>
> sendfile on;
> tcp_nodelay on;
> tcp_nopush on;
>
> client_body_buffer_size 128K;
> client_max_body_size 8m;
>
> client_body_timeout 15;
> client_header_timeout 15;
> send_timeout 10;
> keepalive_timeout 15;
>
> open_file_cache max=5000 inactive=20s;
> open_file_cache_valid 60s;
> open_file_cache_min_uses 4;
> open_file_cache_errors on;
>
> If i change the values, it hangs with 3k or 5k visitors.
> This one handle 5k to 8k > > But cpu and ram are available in server. > Quad core and 32GB Ram SSD > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,276892,276892#msg-276892 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Oct 16 16:59:07 2017 From: nginx-forum at forum.nginx.org (agriz) Date: Mon, 16 Oct 2017 12:59:07 -0400 Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server In-Reply-To: References: Message-ID: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22093 nginx 20 0 393060 11848 3828 S 31.9 0.0 10:17.70 php-fpm: pool www 1495 mysql 20 0 4793852 318444 9824 S 23.6 1.0 796:41.59 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin -+ 3135 nginx 20 0 393108 12112 3832 R 7.3 0.0 10:30.35 php-fpm: pool www 6839 nginx 20 0 392804 11832 3828 R 7.3 0.0 10:37.57 php-fpm: pool www 14311 nginx 20 0 392800 11820 3828 S 7.3 0.0 10:39.68 php-fpm: pool www 889 nginx 20 0 393072 11832 3828 R 7.0 0.0 10:38.91 php-fpm: pool www 1153 nginx 20 0 393100 12100 3832 S 7.0 0.0 10:38.73 php-fpm: pool www 5768 nginx 20 0 392736 11708 3836 R 7.0 0.0 10:49.05 php-fpm: pool www 6675 nginx 20 0 393100 11892 3832 S 7.0 0.0 10:38.87 php-fpm: pool www 6840 nginx 20 0 393108 12136 3832 S 7.0 0.0 10:35.61 php-fpm: pool www 12767 nginx 20 0 393156 12092 3832 R 7.0 0.0 10:23.34 php-fpm: pool www 21948 nginx 20 0 393108 12132 3828 S 7.0 0.0 10:18.22 php-fpm: pool www 888 nginx 20 0 392800 11848 3848 S 6.6 0.0 10:41.74 php-fpm: pool www 1152 nginx 20 0 393092 11928 3836 R 6.6 0.0 10:37.03 php-fpm: pool www 5036 nginx 20 0 393076 11852 3848 S 6.6 0.0 10:41.52 php-fpm: pool www 12692 nginx 20 0 393056 11832 3828 S 6.6 0.0 10:25.90 php-fpm: pool www 22033 nginx 20 0 393076 11904 3832 S 6.6 
0.0 10:09.92 php-fpm: pool www 22034 nginx 20 0 393092 11864 3832 S 6.6 0.0 10:14.02 php-fpm: pool www 22092 nginx 20 0 392800 11832 3832 S 6.6 0.0 10:22.43 php-fpm: pool www 22184 nginx 20 0 393108 12100 3832 S 6.6 0.0 10:17.56 php-fpm: pool www 22185 nginx 20 0 393104 12100 3832 S 6.6 0.0 10:14.99 php-fpm: pool www 27712 nginx 20 0 393100 12116 3848 S 6.6 0.0 10:47.98 php-fpm: pool www 790 nginx 20 0 393108 12096 3832 S 6.3 0.0 10:41.45 php-fpm: pool www 1063 nginx 20 0 392548 11584 3836 S 6.3 0.0 10:47.35 php-fpm: pool www 3058 nginx 20 0 393124 12100 3832 R 6.3 0.0 10:35.90 php-fpm: pool www 5933 nginx 20 0 392800 11832 3836 S 6.3 0.0 10:43.31 php-fpm: pool www 6737 nginx 20 0 393056 11840 3828 S 6.3 0.0 10:36.62 php-fpm: pool www 6838 nginx 20 0 393056 11932 3832 S 6.3 0.0 10:37.22 php-fpm: pool www 13061 nginx 20 0 393140 11896 3836 R 6.3 0.0 10:33.85 php-fpm: pool www 13146 nginx 20 0 392820 11832 3828 R 6.3 0.0 10:39.73 php-fpm: pool www 22183 nginx 20 0 392924 11724 3828 S 6.3 0.0 10:18.64 php-fpm: pool www 3134 nginx 20 0 393108 12104 3828 S 6.0 0.0 10:38.90 php-fpm: pool www 6736 nginx 20 0 393100 12112 3828 S 6.0 0.0 10:30.51 php-fpm: pool www 22091 nginx 20 0 392800 11832 3832 S 6.0 0.0 10:15.83 php-fpm: pool www 10880 nginx 20 0 392804 11844 3844 S 5.6 0.0 10:40.13 php-fpm: pool www 22090 nginx 20 0 393076 11876 3828 S 5.6 0.0 10:21.79 php-fpm: pool www 10430 nginx 20 0 53984 7700 1200 S 1.7 0.0 0:33.71 nginx: worker process Tasks: 197 total, 4 running, 192 sleeping, 0 stopped, 1 zombie %Cpu(s): 25.6 us, 2.3 sy, 0.0 ni, 71.9 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st KiB Mem : 32740464 total, 29158440 free, 892028 used, 2689996 buff/cache KiB Swap: 8191996 total, 8191996 free, 0 used. 
31303316 avail Mem

grep -c ^processor /proc/cpuinfo
8

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276894#msg-276894

From rdocter at gmail.com  Mon Oct 16 17:25:08 2017
From: rdocter at gmail.com (Ruben D)
Date: Mon, 16 Oct 2017 19:25:08 +0200
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: <01C4A1F9-9D77-4EAE-AD37-F8E45B78E7AE@gmail.com>

Maybe lower the php-fpm specs and raise the nginx specs?

Sent from my iPhone

> On 16 Oct 2017, at 18:59, agriz wrote:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 22093 nginx 20 0 393060 11848 3828 S 31.9 0.0 10:17.70 php-fpm: pool www
> [...]
>
> grep -c ^processor /proc/cpuinfo
> 8
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276894#msg-276894
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org  Mon Oct 16 17:32:08 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Mon, 16 Oct 2017 13:32:08 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: <01C4A1F9-9D77-4EAE-AD37-F8E45B78E7AE@gmail.com>
References: <01C4A1F9-9D77-4EAE-AD37-F8E45B78E7AE@gmail.com>
Message-ID: <7d36538739cf444b209e0fd432cc22ab.NginxMailingListEnglish@forum.nginx.org>

Sir,

Can you give me some rough values?
I will play with them.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276896#msg-276896

From rdocter at gmail.com  Mon Oct 16 17:34:38 2017
From: rdocter at gmail.com (Ruben D)
Date: Mon, 16 Oct 2017 19:34:38 +0200
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: <7d36538739cf444b209e0fd432cc22ab.NginxMailingListEnglish@forum.nginx.org>
References: <01C4A1F9-9D77-4EAE-AD37-F8E45B78E7AE@gmail.com> <7d36538739cf444b209e0fd432cc22ab.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Just start with the default ones.

Sent from my iPhone

> On 16 Oct 2017, at 19:32, agriz wrote:
>
> Sir,
>
> Can you give me some rough values?
> I will play with them.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276896#msg-276896
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org  Mon Oct 16 17:52:23 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Mon, 16 Oct 2017 13:52:23 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: <552b000705bb39efe2ea5b84cf61bb74.NginxMailingListEnglish@forum.nginx.org>

Reduced to 4k visitors from 10k :(

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276898#msg-276898

From medvedev.yp at gmail.com  Mon Oct 16 18:19:44 2017
From: medvedev.yp at gmail.com (Iurii Medvedev)
Date: Mon, 16 Oct 2017 21:19:44 +0300
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: <552b000705bb39efe2ea5b84cf61bb74.NginxMailingListEnglish@forum.nginx.org>
References: <552b000705bb39efe2ea5b84cf61bb74.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Try raising pm.max_requests to 2048. pm.max_children = 400 is too much.

2017-10-16 20:52 GMT+03:00 agriz :

> Reduced to 4k visitors from 10k :(
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276898#msg-276898
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
With best regards
Iurii Medvedev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org  Mon Oct 16 19:04:27 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Mon, 16 Oct 2017 15:04:27 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: <77083dc22a5e62afc00cf952a7b8003b.NginxMailingListEnglish@forum.nginx.org>

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 4096;
    # accept_mutex on;
    # accept_mutex_delay 500ms;
    multi_accept on;
    use epoll;
}

pm.max_children = 50
pm.start_servers = 4
pm.min_spare_servers = 4
pm.max_spare_servers = 32
pm.max_requests = 2500 //modified
rlimit_files = 131072 //modified
rlimit_core = unlimited //modified

# TCP Stack changes
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.netdev_max_backlog = 10000
net.core.somaxconn = 2048
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.ip_local_port_range = 15000 65000

But still not efficient.
Losing visitors abnormally

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276900#msg-276900

From peter_booth at me.com  Mon Oct 16 19:30:30 2017
From: peter_booth at me.com (Peter Booth)
Date: Mon, 16 Oct 2017 15:30:30 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: <77083dc22a5e62afc00cf952a7b8003b.NginxMailingListEnglish@forum.nginx.org>
References: <77083dc22a5e62afc00cf952a7b8003b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Advice - instead of tweaking values, first work out what is happening and locate
the bottleneck, then try adjusting things once you have a theory.

The first question you need to answer: for your test, is your system as a whole
overloaded? As in, for the duration of the test, is the number of requests per
second supported constant? Is the request time shown in the nginx log increasing?
If you capture the output of netstat -ant | grep -i tcp > aa, is the number of
TCP connections changing with time?

Some other key questions:

Does every PHP request involve a call to mysql?
Is there a connection pool, or does every PHP instance have its own connection to mysql?
When you do your test, are you ramping up the workload, or do you have a consistent workload?
How many requests per second are you seeing from the nginx logs?
How are you driving the test traffic, and from what host?
Are you logging the request execution time in the nginx log?
The ps output that you pasted only showed 36 PHP processes, but your initial config specified 400 max_children.
If consistent, how many virtual agents / independent request sources do you have?
What do you mean by "losing visitors abnormally"? How are you seeing this?
Do you realize that your PHP process is configured to die after serving 800 (now 2500) requests and then needs to be restarted?

> On Oct 16, 2017, at 3:04 PM, agriz wrote:
>
> worker_processes 4;
> worker_rlimit_nofile 40000;
>
> events {
>     worker_connections 4096;
>     # accept_mutex on;
>     # accept_mutex_delay 500ms;
>     multi_accept on;
>     use epoll;
> }
>
> pm.max_children = 50
> pm.start_servers = 4
> pm.min_spare_servers = 4
> pm.max_spare_servers = 32
> pm.max_requests = 2500 //modified
> rlimit_files = 131072 //modified
> rlimit_core = unlimited //modified
>
> # TCP Stack changes
> net.ipv4.tcp_fin_timeout = 20
> net.ipv4.tcp_tw_reuse = 1
> net.core.netdev_max_backlog = 10000
> net.core.somaxconn = 2048
> net.ipv4.tcp_max_syn_backlog = 2048
> net.ipv4.ip_local_port_range = 15000 65000
>
> But still not efficient.
> Losing visitors abnormally
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276900#msg-276900
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From peter_booth at me.com  Mon Oct 16 19:35:25 2017
From: peter_booth at me.com (Peter Booth)
Date: Mon, 16 Oct 2017 15:35:25 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: <77083dc22a5e62afc00cf952a7b8003b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <917526AF-7F9A-4C38-AD0B-05642241BA44@me.com>

You said this:

> On Oct 16, 2017, at 3:30 PM, Peter Booth wrote:
>
> If i change the values, it hangs with 3k or 5k visitors.
> This one handle 5k to 8k

What hangs? The host, the nginx worker processes, the PHP, or the mysql?

You need to capture some diagnostic information over time to see what's going on
here, e.g. ps, netstat, sar -A, and pidstat -h -r -u -w (both with and without
the -t switch), every ten or so seconds for a few minutes. How long does your
test run before hanging?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org  Mon Oct 16 19:55:48 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Mon, 16 Oct 2017 15:55:48 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: <917526AF-7F9A-4C38-AD0B-05642241BA44@me.com>
References: <917526AF-7F9A-4C38-AD0B-05642241BA44@me.com>
Message-ID: <89bb401779332590bf41e2f9636ff1cb.NginxMailingListEnglish@forum.nginx.org>

Sir,

Thank you for your reply.

This is a live server. It is an NPO (non-profit organisation). I pay for the
server and maintain it. We can't afford an admin. It would be a great help if
you can solve this.

People are visiting for registering complaints and viewing our activity.
All PHP pages are connected with mysql.

I try my best to learn about the server, but it seems difficult to me.
I will work on your questions now and let you know.

"Do you realize that your PHP process is configured to die after serving 800
(now 2500) requests and then needs to be restarted?"

Is this bad or good?
How many PHP processes are there?

Thanks again

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276903#msg-276903

From crazibri at gmail.com  Mon Oct 16 20:05:19 2017
From: crazibri at gmail.com (Brian)
Date: Mon, 16 Oct 2017 15:05:19 -0500
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: <77083dc22a5e62afc00cf952a7b8003b.NginxMailingListEnglish@forum.nginx.org>
References: <77083dc22a5e62afc00cf952a7b8003b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

We should see memory and cpu for this server. I suspect a variety of issues.

Is php-fpm using a Unix socket with Nginx? That would help you remove TCP
sockets internally.

This likely needs to be much higher, but depends on your hardware (cpu/memory).
Maybe 500?

pm.max_children = 50

The setting below should be commented out, as I understand it, because you're
limiting your PHP workers to handle 2500 requests and then die.

pm.max_requests = 2500 //modified

This likely needs to be much higher too:

net.core.somaxconn = 2048

I also wonder what the application is caching and how MySQL is doing. MySQL
might need better tuning too.
Brian (Sent via Mobile)

> On Oct 16, 2017, at 2:04 PM, agriz wrote:
>
> worker_processes 4;
> worker_rlimit_nofile 40000;
>
> [...]
>
> But still not efficient.
> Losing visitors abnormally
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276900#msg-276900
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org  Tue Oct 17 03:38:03 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Mon, 16 Oct 2017 23:38:03 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: <1b201d9513966eebc548b49acf6ecbbf.NginxMailingListEnglish@forum.nginx.org>

I am in the field now. I asked my friend to get the info you asked for.
This is what I received:

php-fpm is using unix sockets (I know this.
I configured this)

tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 ESTABLISHED
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 ESTABLISHED
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT
tcp 0 0 TIME_WAIT

CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 60
Model name: Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
Stepping: 3
CPU MHz: 3574.781
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7

MemTotal: 32740464 kB
MemFree: 29960736 kB
MemAvailable: 31298872 kB

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276905#msg-276905

From peter_booth at me.com  Tue Oct 17 03:49:30 2017
From: peter_booth at me.com (Peter Booth)
Date: Mon, 16 Oct 2017 23:49:30 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: <89bb401779332590bf41e2f9636ff1cb.NginxMailingListEnglish@forum.nginx.org>
References: <917526AF-7F9A-4C38-AD0B-05642241BA44@me.com> <89bb401779332590bf41e2f9636ff1cb.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <4EB0C18E-EC99-4CC0-B47C-805B6455FD71@me.com>

Agreed. Can you email me offline? I might have a few ideas on how to assist.

Peter
peter _ booth @ me.com

> On Oct 16, 2017, at 3:55 PM, agriz wrote:
>
> Sir,
>
> Thank you for your reply.
>
> This is a live server.
> It is an NPO (non-profit organisation).
> I pay for the server and maintain it. We can't afford an admin.
> It would be a great help if you can solve this.
>
> People are visiting for registering complaints and viewing our activity.
> All PHP pages are connected with mysql.
>
> I try my best to learn about the server, but it seems difficult to me.
> I will work on your questions now and let you know.
>
> "Do you realize that your PHP process is configured to die after serving 800
> (now 2500) requests and then needs to be restarted?"
>
> Is this bad or good?
> How many PHP processes are there?
>
> Thanks again
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276903#msg-276903
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org  Tue Oct 17 04:39:47 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Tue, 17 Oct 2017 00:39:47 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: <35dff87f450572ff7db1474cd6ce2de9.NginxMailingListEnglish@forum.nginx.org>

Sir, reading some info, I guess I can't pick any numbers blindly without testing.

I think I can first try these values:

max_children = 100
start_servers = 34
min/max spare servers = 20 & 50

We have around 20GB of free RAM all the time. Why can't we use it for php-fpm?
Are those values safe to try?

But max_requests - why should we limit the number to 500 or 2500? Why can't we
have it unlimited? What is wrong with setting it to zero?
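[For concreteness, the values proposed above would look roughly like this in the www pool file. The numbers are the poster's own suggestions, not tuned recommendations, and pm.max_requests = 0 disables worker recycling entirely, which is only safe if worker memory stays flat over time.]

```ini
; Sketch of the proposed dynamic pool sizing (numbers from the post above).
; php-fpm requires:
;   pm.min_spare_servers <= pm.start_servers <= pm.max_spare_servers
pm = dynamic
pm.max_children = 100
pm.start_servers = 34
pm.min_spare_servers = 20
pm.max_spare_servers = 50
; 0 means workers are never recycled after N requests; keep a non-zero
; value if worker RSS grows over time (i.e. something leaks memory)
pm.max_requests = 0
```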
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276907#msg-276907

From nginx-forum at forum.nginx.org  Tue Oct 17 05:31:26 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Tue, 17 Oct 2017 01:31:26 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: 

php-fpm status:

pool: www
process manager: dynamic
start time: 17/Oct/2017:00:55:44 -0400
start since: 2077
accepted conn: 183360
listen queue: 0
max listen queue: 0
listen queue len: 0
idle processes: 28
active processes: 3
total processes: 31
max active processes: 31
max children reached: 0
slow requests: 0

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276908#msg-276908

From nginx-forum at forum.nginx.org  Tue Oct 17 05:48:50 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Tue, 17 Oct 2017 01:48:50 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: 

I am getting this kind of warning in the php-fpm log:

NOTICE: [pool www] child 25826 exited with code 0 after 864.048588 seconds
from start
NOTICE: [pool www] child 6580 started

Sir, I am still looking for a way to monitor nginx performance, but I am not
able to find a solution on the internet.
Can you please guide me on how to monitor nginx?

Something is still limiting visitors from accessing the page after a certain
limit.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276909#msg-276909

From nginx-forum at forum.nginx.org  Tue Oct 17 05:50:50 2017
From: nginx-forum at forum.nginx.org (agriz)
Date: Tue, 17 Oct 2017 01:50:50 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: 

I forgot to mention: during the night there won't be any visitors. It is only
for specific regions. The website will be busy during the morning time.
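[On the monitoring question: one low-effort starting point is to log nginx's own timing variables in the access log. $request_time and $upstream_response_time are standard nginx variables; the format name below is arbitrary.]

```nginx
# $request_time: total time nginx spent on the request (seconds, with
# millisecond resolution); $upstream_response_time: time spent waiting on
# the upstream (here, php-fpm).
log_format timed '$remote_addr [$time_local] "$request" $status '
                 '$body_bytes_sent rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx/access.log timed;
```

If rt climbs while urt stays flat, the slowness is in nginx or the client connection; if both climb together, the backend is the bottleneck.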
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276910#msg-276910

From peter_booth at me.com  Tue Oct 17 05:58:09 2017
From: peter_booth at me.com (Peter Booth)
Date: Tue, 17 Oct 2017 01:58:09 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: 
References: 
Message-ID: <9D0E680A-437E-42AC-92FF-F5E99201F73A@me.com>

So this message can be interpreted:

> NOTICE: [pool www] child 25826 exited with code 0 after 864.048588 seconds
> from start

The code 0 means that the child exited normally, 864 seconds after it had
started. In other words, it chose to die (probably after serving 800 or 2500
requests). Now if your access.log indicates *which* php process serves a
request, then you should be able to work out whether 864 seconds is a
reasonable time for it to have received the appropriate number of requests.

Monitoring nginx performance:

The first thing is to configure nginx so that it logs how many milliseconds
are spent building every request. See
https://lincolnloop.com/blog/tracking-application-response-time-nginx/

It's better that you email me off-list for further discussion.

Peter
peter _ booth @ me.com

> On Oct 17, 2017, at 1:48 AM, agriz wrote:
>
> I am getting this kind of warning in the php-fpm log:
>
> NOTICE: [pool www] child 25826 exited with code 0 after 864.048588 seconds
> from start
> NOTICE: [pool www] child 6580 started
>
> Sir, I am still looking for a way to monitor nginx performance, but I am
> not able to find a solution on the internet.
> Can you please guide me on how to monitor nginx?
>
> Something is still limiting visitors from accessing the page after a
> certain limit.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276909#msg-276909
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org  Tue Oct 17 12:01:26 2017
From: nginx-forum at forum.nginx.org (rnmx18)
Date: Tue, 17 Oct 2017 08:01:26 -0400
Subject: Periodic external HTTP exchange from NGINX for dynamic control of upstream server list
Message-ID: <110b0d342544434d0c7ea71c3e8ff741.NginxMailingListEnglish@forum.nginx.org>

Hi,

I have a requirement to de-select or exclude one or more servers from an
upstream group, if that server is either just newborn or exhausted (overloaded
or showing reduced performance).

So, consider there are servers A, B and C in the upstream group. An external
program periodically checks A, B and C and keeps track of different parameters
(uptime, average response time etc.), and assigns a "good" or "bad" status flag
to each of them.

I would like to implement a mechanism in NGINX, in which it can periodically
(say every 5 minutes) communicate with this external program and collect the
status flag for A, B and C. This would be an HTTP communication with a response
in a parse-able text format. Assume if B is reported as "bad", I would like to
exclude B temporarily, till I get a "good" value back for it.

I am aware of the native upstream healthcheck mechanism in NGINX. However, the
requirement here is to bring some additional parameters into consideration for
the selection of upstream.

I would like to know whether this is feasible to realize in NGINX. Could
someone please explain some design insights for this problem?

Going a bit deeper, I could identify the following potential requirements or
questions:

a) Should I depend on both (a) the passive upstream check native to NGINX and
(b) the status value to be obtained from the external process? I think since
the native health check is passive in nature (like when NGINX tries to connect
to an upstream when it wants to proxy the request), it is beyond my
programmable control. So, I needn't worry about that.
What I should worry about is enabling NGINX to dynamically exclude a particular
server, based on "bad" feedback from the external process.

b) How to communicate periodically with the external process over HTTP? How can
I get HTTP client functionality in NGINX to send an external request and
process the response? Can I build this functionality as an NGINX module?

c) Last but not least, I would need some mechanism by which I should be able to
map the server(s) reported as "bad" to the configured upstream servers,
temporarily exclude them (mark them down), and later include them again when
the status changes to "good".

Thanks
Rajesh

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276912,276912#msg-276912

From philip.walenta at gmail.com  Tue Oct 17 13:19:31 2017
From: philip.walenta at gmail.com (Philip Walenta)
Date: Tue, 17 Oct 2017 08:19:31 -0500
Subject: Periodic external HTTP exchange from NGINX for dynamic control of upstream server list
In-Reply-To: <110b0d342544434d0c7ea71c3e8ff741.NginxMailingListEnglish@forum.nginx.org>
References: <110b0d342544434d0c7ea71c3e8ff741.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

It sounds like you want to control which upstream servers are available based
on a set of criteria. This setup might do what you want, just not implemented
in the exact way you desire:

https://medium.com/@sigil66/dynamic-nginx-upstreams-from-consul-via-lua-nginx-module-2bebc935989b

On Tue, Oct 17, 2017 at 7:01 AM, rnmx18 wrote:

> Hi,
>
> I have a requirement to de-select or exclude one or more servers from an
> upstream group, if that server is either just newborn or exhausted
> (overloaded or showing reduced performance).
>
> So, consider there are servers A, B and C in the upstream group. An
> external program periodically checks A, B and C and keeps track of
> different parameters (uptime, average response time etc.), and assigns a
> "good" or "bad" status flag to each of them.
>
> I would like to implement a mechanism in NGINX, in which it can periodically
> (say every 5 minutes) communicate with this external program and collect the
> status flag for A, B and C. This would be an HTTP communication with a
> response in a parse-able text format. Assume if B is reported as "bad", I
> would like to exclude B temporarily, till I get a "good" value back for it.
>
> I am aware of the native upstream healthcheck mechanism in NGINX. However,
> the requirement here is to bring some additional parameters into
> consideration for the selection of upstream.
>
> I would like to know whether this is feasible to realize in NGINX. Could
> someone please explain some design insights for this problem?
>
> Going a bit deeper, I could identify the following potential requirements
> or questions:
>
> a) Should I depend on both (a) the passive upstream check native to NGINX
> and (b) the status value to be obtained from the external process? I think
> since the native health check is passive in nature (like when NGINX tries
> to connect to an upstream when it wants to proxy the request), it is beyond
> my programmable control. So, I needn't worry about that. What I should
> worry about is enabling NGINX to dynamically exclude a particular server,
> based on "bad" feedback from the external process.
>
> b) How to communicate periodically with the external process over HTTP? How
> can I get HTTP client functionality in NGINX to send an external request
> and process the response? Can I build this functionality as an NGINX
> module?
>
> c) Last but not least, I would need some mechanism by which I should be
> able to map the server(s) reported as "bad" to the configured upstream
> servers, temporarily exclude them (mark them down), and later include
> them again when the status changes to "good".
>
> Thanks
> Rajesh
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,276912,276912#msg-276912
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru  Tue Oct 17 13:34:59 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 17 Oct 2017 16:34:59 +0300
Subject: nginx-1.12.2
Message-ID: <20171017133459.GA26836@mdounin.ru>

Changes with nginx 1.12.2                                        17 Oct 2017

    *) Bugfix: client SSL connections were immediately closed if deferred
       accept and the "proxy_protocol" parameter of the "listen" directive
       were used.

    *) Bugfix: client connections might be dropped during configuration
       testing when using the "reuseport" parameter of the "listen"
       directive on Linux.

    *) Bugfix: incorrect response length was returned on 32-bit platforms
       when requesting more than 4 gigabytes with multiple ranges.

    *) Bugfix: switching to the next upstream server in the stream module
       did not work when using the "ssl_preread" directive.

    *) Bugfix: when using HTTP/2 client request body might be corrupted.

    *) Bugfix: in handling of client addresses when using unix domain
       sockets.

-- 
Maxim Dounin
http://nginx.org/

From kworthington at gmail.com  Tue Oct 17 19:14:27 2017
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 17 Oct 2017 15:14:27 -0400
Subject: [nginx-announce] nginx-1.12.2
In-Reply-To: <20171017133504.GB26836@mdounin.ru>
References: <20171017133504.GB26836@mdounin.ru>
Message-ID: 

Hello Nginx users,

Now available: Nginx 1.12.2 for Windows
https://kevinworthington.com/nginxwin1122 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin based
builds of Nginx. Officially supported native Windows binaries are at nginx.org.
Announcements are also available here:

Twitter: http://twitter.com/kworthington
Google+: https://plus.google.com/+KevinWorthington/

Thank you,
Kevin

-- 
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Oct 17, 2017 at 9:35 AM, Maxim Dounin wrote:

> Changes with nginx 1.12.2                                        17 Oct 2017
>
> [...]
>
> -- 
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From peter_booth at me.com  Wed Oct 18 00:16:41 2017
From: peter_booth at me.com (Peter Booth)
Date: Tue, 17 Oct 2017 20:16:41 -0400
Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server
In-Reply-To: <35dff87f450572ff7db1474cd6ce2de9.NginxMailingListEnglish@forum.nginx.org>
References: <35dff87f450572ff7db1474cd6ce2de9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <91DE096C-F445-4E26-BFB9-B614FB5C4E59@me.com>

Agreed. I work as a performance architect, specializing in improving the
performance of trading applications and high-traffic web sites. When I first
began tuning Apache (and then nginx) I realized that the internet was full of
"helpful suggestions" about why you should set configuration X to this number.
What took me more than ten years to learn was that 95% of these tips are
useless, because they are ideas that made sense in one setting at one time
that got copied and passed down year after year without people understanding
them. So be skeptical about anything that anyone suggests, including me.

Regarding one of these settings: max_requests. This is a very old setting
inherited from Apache that originally allowed you to recycle Apache worker
processes after they had handled N requests. Why would you do this? If your
worker included a module that leaked memory, then over time worker processes
would grow in size. This setting would mean that each worker would get killed
eventually, at different times, so a new fresh process could be started to
take its place, avoiding a catastrophe where all of your workers consume all
the memory on the host.

In 2017 you can absolutely set it to zero, provided you keep track of the size
of your processes. In fact we can confirm this idea just from your top output.
ID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3135 nginx 20 0 393108 12112 3832 R 7.3 0.0 10:30.35 php-fpm: pool www 6839 nginx 20 0 392804 11832 3828 R 7.3 0.0 10:37.57 php-fpm: pool www See how process 3135 has been running fro 10 minutes 30 seconds whilst process 6839 has been running for 10 minutes 37 seconds. But the longer running process has a smaller resident set size (RSS = memory in use) If you look at all of the lines its easier to see that there is no trend of memory increasing over time: NewiMac:Records peter$ cat phpOutput.txt | grep php-fpm | awk '{print $11,$6}' | head -15 | awk '{print $2}' | average -M 11852 NewiMac:Records peter$ cat phpOutput.txt | grep php-fpm | awk '{print $11,$6}' | tail -15 | awk '{print $2}' | average -M 11876 > On Oct 17, 2017, at 12:39 AM, agriz wrote: > > Sir reading some info, i guess i cant tell any number blindly without test. > > I think > I can first try to give these values > > max_children = 100 > start server = 34 > spareserver min and max = 20 & 50 > > We have around 20GB free Ram all the time. Why can't we use it for php-fpm? > Are those values safe to check? > > But max_requests, why should we limit the numbers to 500 or 2500? Why cant > we have unlimited? What is wrong in setting it to zero? > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276907#msg-276907 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
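The `average` helper used in the pipelines above is not a standard utility; the same mean-RES check can be done with grep and awk alone. This is only a sketch: the file name and the two sample rows are taken from the top output quoted above, everything else is an assumption.

```shell
# Recreate a small phpOutput.txt from the two top rows quoted above.
printf '%s\n' \
  '3135 nginx 20 0 393108 12112 3832 R 7.3 0.0 10:30.35 php-fpm: pool www' \
  '6839 nginx 20 0 392804 11832 3828 R 7.3 0.0 10:37.57 php-fpm: pool www' \
  > phpOutput.txt

# Mean resident set size (RES, field 6) across php-fpm workers.
# A value that climbs between repeated samples suggests a leak;
# a flat one suggests leaving max_requests at zero is safe.
grep 'php-fpm' phpOutput.txt | awk '{sum += $6; n++} END {print int(sum / n)}'
# prints 11972
```

Running the same pipeline over `head` and `tail` slices of a longer capture, as in the message above, compares early samples against late ones.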
URL: From steve at greengecko.co.nz Wed Oct 18 07:35:02 2017 From: steve at greengecko.co.nz (steve at greengecko.co.nz) Date: Wed, 18 Oct 2017 07:35:02 +0000 Subject: E3-1240 with 32GB Ram - Unable to set the optimal value for the server In-Reply-To: <91DE096C-F445-4E26-BFB9-B614FB5C4E59@me.com> References: <91DE096C-F445-4E26-BFB9-B614FB5C4E59@me.com> <35dff87f450572ff7db1474cd6ce2de9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <97b9608275d7e94aa8b21ceee6e03e60@webmail.greengecko.co.nz> You can't say that. Which fpm model are you using? dynamic, ondemand? Makes a huge difference. If you have a memory leak, ensure your workers are killed on a regular basis. Don't want you near my servers for sure! Steve October 18, 2017 1:17 PM, "Peter Booth" wrote: Agree, I work as performance architect , specializing in improving the performance of trading applications and high traffic web sites. When I first began tuning Apache (and then nginx) I realized the the internet was full of ?helpful suggestions? about why you should set configuration X to this number. What took me more than ten years to learn was that 95% of these tips are useless, because they are ideas that made sense in one setting at one time that got copied and passed down year after year without people understanding them. So be skeptical about anything that anyone suggests, including me. Regarding one of these settings: max_requests. This is a very old setting inherited from apache that originally allowed you to recycle apache worker processes after they had handled N requests. Why would you do this? If your worker included a module that leaked memory then over time worker processes would grow in size - this setting would mean that each worker would get killed eventually, at different time, so a new fresh process could be started to take its place, avoiding a catastrophe where all of your your workers consume all the memory on the host. 
In 2017 you can absolutely set iit to zero, provided you keep track of the size of your processes. In fact we can confirm this idea just from your top output. ID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3135 nginx 20 0 393108 12112 3832 R 7.3 0.0 10:30.35 php-fpm: pool www 6839 nginx 20 0 392804 11832 3828 R 7.3 0.0 10:37.57 php-fpm: pool www See how process 3135 has been running fro 10 minutes 30 seconds whilst process 6839 has been running for 10 minutes 37 seconds. But the longer running process has a smaller resident set size (RSS = memory in use) If you look at all of the lines its easier to see that there is no trend of memory increasing over time: NewiMac:Records peter$ cat phpOutput.txt | grep php-fpm | awk '{print $11,$6}' | head -15 | awk '{print $2}' | average -M 11852 NewiMac:Records peter$ cat phpOutput.txt | grep php-fpm | awk '{print $11,$6}' | tail -15 | awk '{print $2}' | average -M 11876 On Oct 17, 2017, at 12:39 AM, agriz wrote: Sir reading some info, i guess i cant tell any number blindly without test. I think I can first try to give these values max_children = 100 start server = 34 spareserver min and max = 20 & 50 We have around 20GB free Ram all the time. Why can't we use it for php-fpm? Are those values safe to check? But max_requests, why should we limit the numbers to 500 or 2500? Why cant we have unlimited? What is wrong in setting it to zero? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276892,276907#msg-276907 (https://forum.nginx.org/read.php?2,276892,276907#msg-276907) _______________________________________________ nginx mailing list nginx at nginx.org (mailto:nginx at nginx.org) http://mailman.nginx.org/mailman/listinfo/nginx (http://mailman.nginx.org/mailman/listinfo/nginx) -------------- next part -------------- An HTML attachment was scrubbed... 
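To make the pm-model point concrete, here is a minimal php-fpm pool sketch; the pool name and every number are illustrative assumptions, not recommendations for the original poster's hardware. With pm = ondemand, idle workers are reaped after a timeout, so even a slowly leaking worker is bounded without recycling by request count.

```ini
; Hypothetical pool - values are examples only.
[www]
pm = ondemand                  ; spawn workers only when requests arrive
pm.max_children = 50           ; hard cap on concurrent workers
pm.process_idle_timeout = 10s  ; kill workers idle for this long
pm.max_requests = 0            ; 0 = never recycle; set e.g. 500 if a module leaks
```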
URL: From nginx-forum at forum.nginx.org Wed Oct 18 11:46:53 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Wed, 18 Oct 2017 07:46:53 -0400 Subject: Periodic external HTTP exchange from NGINX for dynamic control of upstream server list In-Reply-To: References: Message-ID: <5a33b957cb4e39b255a47f8cb12c6c24.NginxMailingListEnglish@forum.nginx.org> Hi Philip, Yes. The link which you referred to, definitely tries to address quite a similar problem as what I had described earlier. I need to get more familiar with Lua and its usage with NGINX. Thanks for your reply. Regards, Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276912,276944#msg-276944 From nginx-forum at forum.nginx.org Wed Oct 18 15:39:22 2017 From: nginx-forum at forum.nginx.org (halfpastjohn) Date: Wed, 18 Oct 2017 11:39:22 -0400 Subject: basic rewrite question Message-ID: <9de5520efaad960d2628a53c08c69a50.NginxMailingListEnglish@forum.nginx.org> I'm trying to setup rewrites so I can automate this more efficiently. Some of my locations require a rewrite and some do not. I currently have it hardcoded into the proxy_pass: location ~* /v1/device/(.*)/ { proxy_pass http://api.domain.com/api/v1.0/download/$1; } Would this accomplish the same thing? location ~* /v1/device/(.*)/ { rewrite ^/(.*) /api/v1.0/download/$1 break; proxy_pass http://api.domain.com; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276946,276946#msg-276946 From nziada at pivotal.io Wed Oct 18 15:54:50 2017 From: nziada at pivotal.io (Nader Ziada) Date: Wed, 18 Oct 2017 11:54:50 -0400 Subject: workaround to handle HTTP and HTTPS on same port? Message-ID: Hi, In order to support HTTP and HTTPS on the same port we had to enable SSL for that listen port in the nginx config and remap the 497 nginx error code to a permanent redirect. To be transparent to the client making the request (written in go), we used "error_page 497 301 =200 $request_uri;". 
This seems to work for the GET requests, but not for PUT requests. This is a temporary workaround that we need while the more permanent solution is being worked on, so we're wondering if anybody here had any workarounds or tips on how to achieve that. Thanks! Nader -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Oct 18 15:58:38 2017 From: nginx-forum at forum.nginx.org (halfpastjohn) Date: Wed, 18 Oct 2017 11:58:38 -0400 Subject: basic rewrite question In-Reply-To: <9de5520efaad960d2628a53c08c69a50.NginxMailingListEnglish@forum.nginx.org> References: <9de5520efaad960d2628a53c08c69a50.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9ce3c1be8528f775bf845156d272039f.NginxMailingListEnglish@forum.nginx.org> Also - our standardization is not the greatest, so I actually want to rewrite the entire URI, which is why I have ^/(.*) as the regex. However I don't think the $1 in the replacement string will still apply to the original URI. Would this work? 
location ~* /v1/device/(.*)/ { rewrite $uri /api/v1.0/download/$1 break; proxy_pass http://api.domain.com; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276946,276948#msg-276948 From nginx-forum at forum.nginx.org Wed Oct 18 16:02:52 2017 From: nginx-forum at forum.nginx.org (halfpastjohn) Date: Wed, 18 Oct 2017 12:02:52 -0400 Subject: basic rewrite question In-Reply-To: <9ce3c1be8528f775bf845156d272039f.NginxMailingListEnglish@forum.nginx.org> References: <9de5520efaad960d2628a53c08c69a50.NginxMailingListEnglish@forum.nginx.org> <9ce3c1be8528f775bf845156d272039f.NginxMailingListEnglish@forum.nginx.org> Message-ID: rather, $request_uri Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276946,276949#msg-276949 From nginx-forum at forum.nginx.org Wed Oct 18 19:03:42 2017 From: nginx-forum at forum.nginx.org (user82) Date: Wed, 18 Oct 2017 15:03:42 -0400 Subject: Request help on a "SSL_read() failed (SSL: error:14094438:)" error Message-ID: <9ac498157f70999ba04fc1c6c3ec43db.NginxMailingListEnglish@forum.nginx.org> Hello all, We are seeing this error in NGINX logs, when the response is being read back from the upstream servers. SSL_read() failed (SSL: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:SSL alert number 80) while reading response header from upstream" Could you please suggest us a possible cause of this error. We have nginx setup to have only TLS 1.2 as we want only TLS1.2: Proxy_ssl_protocols: TLSV1.2 Proxy_ssl_ciphers: HIGH:!aNULL:!MD5 This is happening only for random requests during a load test and other requests with similar payload, or even the same type of request is going through fine after some time. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276950,276950#msg-276950 From eliezer at ngtech.co.il Wed Oct 18 20:57:53 2017 From: eliezer at ngtech.co.il (Eliezer Croitoru) Date: Wed, 18 Oct 2017 23:57:53 +0300 Subject: workaround to handle HTTP and HTTPS on same port? 
In-Reply-To: References: Message-ID: <0d3e01d34853$c44d6bd0$4ce84370$@ngtech.co.il> Have you tried using 307 or 308 redirect codes? It works for any request which contains a body like POST\PUT\OTHERS. Eliezer ---- http://ngtech.co.il/lmgtfy/ Linux System Administrator Mobile: +972-5-28704261 Email: eliezer at ngtech.co.il From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Nader Ziada Sent: Wednesday, October 18, 2017 18:55 To: nginx at nginx.org Subject: workaround to handle HTTP and HTTPS on same port? Hi, In order to support HTTP and HTTPS on the same port we had to enable SSL for that listen port in the nginx config and remap the 497 nginx error code to a permanent redirect. To be transparent to the client making the request (written in go), we used "error_page 497 301 =200 $request_uri;". This seems to work for the GET requests, but not for PUT requests. This is a temporary workaround that we need while the more permanent solution is being worked on, so we're wondering if anybody here had any workarounds or tips on how to achieve that. Thanks! Nader From nginx-forum at forum.nginx.org Wed Oct 18 22:05:10 2017 From: nginx-forum at forum.nginx.org (eax) Date: Wed, 18 Oct 2017 18:05:10 -0400 Subject: max_ranges not working Message-ID: <8224b24dc9063e53173ecd689f91f994.NginxMailingListEnglish@forum.nginx.org> hi , we are under heavy request storm . i seted max_ranges 0; to stop multirange requests "multipart download" but it not working i see 10 concurrent download follow in each file . i need to limit this value to 2 maximum range request per file. i usig aio threads. any way any body help me to fix the problem? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276952,276952#msg-276952 From lists at lazygranch.com Wed Oct 18 22:13:15 2017 From: lists at lazygranch.com (Gary) Date: Wed, 18 Oct 2017 15:13:15 -0700 Subject: max_ranges not working In-Reply-To: <8224b24dc9063e53173ecd689f91f994.NginxMailingListEnglish@forum.nginx.org> Message-ID: I know max connections will solve this, but the drawback is you could have some large user behind a NAT, which would lock out users. I used a multiple connection download manager to verify this. This ranges feature sounds great. I look forward to you getting it to work. ;-) ? Original Message ? From: nginx-forum at forum.nginx.org Sent: October 18, 2017 3:05 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: max_ranges not working hi , we are under heavy request storm . i seted max_ranges 0; to stop multirange requests "multipart download" but it not working i see 10 concurrent download follow in each file . i need to limit this value to 2 maximum range request per file. i usig aio threads. any way any body help me to fix the problem? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276952,276952#msg-276952 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Wed Oct 18 22:15:29 2017 From: lists at lazygranch.com (Gary) Date: Wed, 18 Oct 2017 15:15:29 -0700 Subject: max_ranges not working In-Reply-To: <20171018221324.8419E2C56B56@mail.nginx.com> Message-ID: This needs further explaining. If you rate limit, a multiple connection download manager won't download any faster. ? Original Message ? From: lists at lazygranch.com Sent: October 18, 2017 3:13 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: max_ranges not working I know max connections will solve this, but the drawback is you could have some large user behind a NAT, which would lock out users.? 
I used a multiple connection download manager to verify this. This ranges feature sounds great. I look forward to you getting it to work. ;-) ? Original Message ? From: nginx-forum at forum.nginx.org Sent: October 18, 2017 3:05 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: max_ranges not working hi , we are under heavy request storm . i seted max_ranges 0; to stop multirange requests "multipart download" but it not working i see 10 concurrent download follow in each file . i need to limit this value to 2 maximum range request per file. i usig aio threads. any way any body help me to fix the problem? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276952,276952#msg-276952 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Wed Oct 18 23:02:53 2017 From: lists at lazygranch.com (Gary) Date: Wed, 18 Oct 2017 16:02:53 -0700 Subject: max_ranges not working In-Reply-To: <20171018221533.559692C56B69@mail.nginx.com> Message-ID: Isn't multipart the means to speed up downloading with multiple streams? So wouldn't rate limiting solve the problem? ? Original Message ? From: lists at lazygranch.com Sent: October 18, 2017 3:15 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: max_ranges not working This needs further explaining. If you rate limit, a multiple connection download manager won't download any faster. ? Original Message ? From: lists at lazygranch.com Sent: October 18, 2017 3:13 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: max_ranges not working I know max connections will solve this, but the drawback is you could have some large user behind a NAT, which would lock out users.? I used a multiple connection download manager to verify this. 
This ranges feature sounds great. I look forward to you getting it to work. ;-) -- Original Message -- From: nginx-forum at forum.nginx.org Sent: October 18, 2017 3:05 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: max_ranges not working hi , we are under heavy request storm . i seted max_ranges 0; to stop multirange requests "multipart download" but it not working i see 10 concurrent download follow in each file . i need to limit this value to 2 maximum range request per file. i usig aio threads. any way any body help me to fix the problem? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276952,276952#msg-276952 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Oct 19 05:41:32 2017 From: nginx-forum at forum.nginx.org (eax) Date: Thu, 19 Oct 2017 01:41:32 -0400 Subject: max_ranges not working In-Reply-To: <20171018230257.DE66A2C56B47@mail.nginx.com> References: <20171018230257.DE66A2C56B47@mail.nginx.com> Message-ID: <83c0b912ab436a40b61b77354c0f402c.NginxMailingListEnglish@forum.nginx.org> The main problem is here: when I change max_ranges to 0, for example, I want to disable the Range request header (multipart download, or resumable download), but nginx ignores that and I don't know why. I need to limit it to 2 connections, but no value has any effect on this directive :( nginx ignores it ... 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276952,276957#msg-276957 From contact at simonbernard.eu Thu Oct 19 09:42:23 2017 From: contact at simonbernard.eu (Simon Bernard) Date: Thu, 19 Oct 2017 11:42:23 +0200 Subject: DTLS Load Balancing Message-ID: <481794e6-f29f-b9ef-8a1b-5af6ff22b634@simonbernard.eu> Hi,

There is a draft[1] at the IETF about connection IDs for DTLS. This is a way to identify a "DTLS connection" by an ID instead of the classical IP address/port tuple. The objective is to reduce the need for a DTLS full handshake when the client address/port changes.

I would like to know if it makes sense to do load balancing based on this connection ID.

Here is the use case:
- You have a cluster of servers behind a unique IP address.
- You do load balancing using the IP address.
- You use UDP/DTLS.
- Some clients are behind NAT and so their IP/port can change.
- DTLS connection state is stored on each server and so is not shared.

So if clients keep the same address/port, there is no issue, as traffic will always be redirected to the same server. The server already has a connection for this peer, so no full handshake is needed.

If the address/port changes, there are 2 possibilities:
- by chance the load balancer sends traffic to the same server, and thanks to the CID the server can reuse its connection, so no full handshake is needed
- bad luck: traffic is redirected to a server which does not know this peer, so it will need to do a full handshake.

It seems to me that doing load balancing on this connection ID could solve the problem. [2]

Does it make sense to you? Would it be a way to create a kind of 3rd-party module for nginx?

Thx Simon

[1]https://tools.ietf.org/html/draft-rescorla-tls-dtls-connection-id-00 [2]https://www.ietf.org/mail-archive/web/tls/current/msg24619.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pluknet at nginx.com Thu Oct 19 11:48:44 2017 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 19 Oct 2017 14:48:44 +0300 Subject: Request help on a "SSL_read() failed (SSL: error:14094438:)" error In-Reply-To: <9ac498157f70999ba04fc1c6c3ec43db.NginxMailingListEnglish@forum.nginx.org> References: <9ac498157f70999ba04fc1c6c3ec43db.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9F7EF6C2-A86E-415D-9444-2991D7CAC52A@nginx.com> > On 18 Oct 2017, at 22:03, user82 wrote: > > Hello all, > > We are seeing this error in NGINX logs, when the response is being read back > from the upstream servers. > > SSL_read() failed (SSL: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 > alert internal error:SSL alert number 80) while reading response header from > upstream? > You received an ?internal_error? TLS alert from the peer. You may want to check error logs on upstream side. -- Sergey Kandaurov From john.w.baird at gmail.com Thu Oct 19 18:59:15 2017 From: john.w.baird at gmail.com (John Baird) Date: Thu, 19 Oct 2017 18:59:15 +0000 Subject: Nginx Auth Request By Source IP Message-ID: I have been doing some reading an googling, and I am wondering if someone can help. I have an oauth2 service successfully authenticating nginx visitors. Because Nginx is fronting a web application on the backend, the web application does NOT have valid domain credentials to interact with the nginx layer. Goal: I would like to be able to do something like the following: geo $localhost { default 0; 127.0.0.1/32 1; } server { location / { if ($localhost = 0) { auth_request = /oauth2/callback .... } } } Is this possible? TL;DR -> bypass nginx oauth2 auth_request module when source ip is localhost -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gk at leniwiec.biz Thu Oct 19 19:08:29 2017 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Thu, 19 Oct 2017 21:08:29 +0200 Subject: Nginx Auth Request By Source IP In-Reply-To: References: Message-ID: <159313e8-9b62-9641-130e-8890319b8384@leniwiec.biz> W dniu 19.10.2017 o?20:59, John Baird pisze: > I have been doing some reading an googling, and I am wondering if someone can help. > > I have an oauth2 service successfully authenticating nginx visitors.? Because Nginx is fronting a web application on the backend, the web application does NOT have valid domain credentials to interact with the nginx layer. > > Goal: > I would like to be able to do something like the following: > > geo $localhost { > ? default 0; > ? 127.0.0.1/32 1; > } > > server { > ? location / { > ? ? if ($localhost = 0) { > ? ? ? auth_request = /oauth2/callback > ? ? ? .... > ? ? } > ? } > } > > Is this possible? > TL;DR -> bypass nginx oauth2 auth_request module when source ip is localhost If I understood correctly something like that should work: satisfy any; allow 127.0.0.1; deny all; auth_request ...; -- Grzegorz Kulewski From john.w.baird at gmail.com Thu Oct 19 19:15:26 2017 From: john.w.baird at gmail.com (John Baird) Date: Thu, 19 Oct 2017 19:15:26 +0000 Subject: Nginx Auth Request By Source IP In-Reply-To: <159313e8-9b62-9641-130e-8890319b8384@leniwiec.biz> References: <159313e8-9b62-9641-130e-8890319b8384@leniwiec.biz> Message-ID: That definitely helped! I didn't realize I could stack like that exactly. Getting a 502 from localhost queries now, I can work on that. Thanks for the quick reply! On Thu, Oct 19, 2017 at 2:08 PM Grzegorz Kulewski wrote: > W dniu 19.10.2017 o 20:59, John Baird pisze: > > I have been doing some reading an googling, and I am wondering if > someone can help. > > > > I have an oauth2 service successfully authenticating nginx visitors. 
> Because Nginx is fronting a web application on the backend, the web > application does NOT have valid domain credentials to interact with the > nginx layer. > > > > Goal: > > I would like to be able to do something like the following: > > > > geo $localhost { > > default 0; > > 127.0.0.1/32 1; > > } > > > > server { > > location / { > > if ($localhost = 0) { > > auth_request = /oauth2/callback > > .... > > } > > } > > } > > > > Is this possible? > > TL;DR -> bypass nginx oauth2 auth_request module when source ip is > localhost > > If I understood correctly something like that should work: > > satisfy any; > allow 127.0.0.1; > deny all; > auth_request ...; > > -- > Grzegorz Kulewski > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Oct 19 19:23:22 2017 From: nginx-forum at forum.nginx.org (user82) Date: Thu, 19 Oct 2017 15:23:22 -0400 Subject: Request help on a "SSL_read() failed (SSL: error:14094438:)" error In-Reply-To: <9F7EF6C2-A86E-415D-9444-2991D7CAC52A@nginx.com> References: <9F7EF6C2-A86E-415D-9444-2991D7CAC52A@nginx.com> Message-ID: <3c88aa5f6ad572009f52e0326982e160.NginxMailingListEnglish@forum.nginx.org> Hi Sergey, Thank you. After enabling the "-Djava.net.debug=ssl" on the upstream, we are seeing the following SSL error in upstream: Thread-7, fatal error: 80: problem unwrapping net record javax.net.ssl.SSLException: Unsupported record version Unknown-126.133 %% Invalidated: [Session-20, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384] Thread-7, SEND TLSv1.2 ALERT: fatal, description = internal_error And we are still trying to find the cause of this SSL error. Thanks. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276950,276963#msg-276963 From igor at sysoev.ru Fri Oct 20 07:42:08 2017 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 20 Oct 2017 10:42:08 +0300 Subject: unit-0.2 beta release Message-ID: <7E15C9B5-8004-4BC8-9FC8-24022DA0812A@sysoev.ru> http://unit.nginx.org Changes with Unit 0.2 19 Oct 2017 *) Feature: Go package improvements. *) Feature: configuration persistence. *) Feature: improved handling of configuration errors. *) Feature: application "timeout" property. *) Bugfix: Go application crashed under load. *) Bugfix: POST request for PHP were handled incorrectly. *) Bugfix: the router exited abnormally if all listeners had been deleted. *) Bugfix: the router crashed under load. *) Bugfix: memory leak in the router. -- Igor Sysoev http://nginx.com From nginx-forum at forum.nginx.org Fri Oct 20 22:37:45 2017 From: nginx-forum at forum.nginx.org (anon59682) Date: Fri, 20 Oct 2017 18:37:45 -0400 Subject: Help with locations and regex Message-ID: Hi, I'm trying to serve up static files for some paths and proxy_pass for others and I can't figure it out. === Requirements === Location 1: the following paths should all load the requested file from /home/user/project/dist/ and fall back to /home/user/project/dist/index.html if the explicit filename was not found (ideally, without a redirect): /admin /admin/ /admin/favicon.ico /admin/foo /admin/foo/logo.png etc. - basically, anything beginning with /admin/ or /admin itself (but not /admin123) Location 2: the root path should proxy_pass to a process running on another port. This is separate from location 3 because it has different location settings (such as auth). 
Location 3: all first-level paths off the root should proxy_pass to the process on the other port, for example: /foo /bar123 /foo/baz # this would NOT be proxy passed, it can just fall through and 404 or whatever === End of Requirements === So far, the closest I have come is a situation where location 2 and 3 are working, but location 1 only partially works. In my current configuration for location 1, it will accept anything beginning with /admin (including /admin123) and it will 301 redirect /admin to /admin/ (note the trailing slash). Here is that config: server { listen 80 default_server; server_name my.domain; root /home/user/project/dist; index index.html; # Location 1 location ^~ /admin { alias /home/user/project/dist; try_files $uri $uri/ /admin/index.html; } # Location 2 location = / { # other stuff omitted for brevity proxy_pass http://127.0.0.1:8080; } # Location 3 location ~ /([^\/]+)$ { # other stuff omitted for brevity proxy_pass http://127.0.0.1:8080; } } I thought I should be able to use ^/admin(?:/.*)?$ for the location 1 match, but that causes /admin/ to fall through to the default nginx 404 while /admin and /admin/foo are caught by location 3. Using that new match and changing the location modifier from ^~ to ~ catches everything correctly, but then every location 1 request results in the following error: rewrite or internal redirection cycle while internally redirecting to "/admin/index.html" I assume that's because I shouldn't be using alias and/or the try_files directive is wrong when using that regex, but I haven't been able to find the magic combination. Is there any saint out there who knows how to satisfy all 3 requirements? 
Thanks, Anonymous Coward Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277012,277012#msg-277012 From francis at daoine.org Sat Oct 21 08:22:31 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 21 Oct 2017 09:22:31 +0100 Subject: max_ranges not working In-Reply-To: <8224b24dc9063e53173ecd689f91f994.NginxMailingListEnglish@forum.nginx.org> References: <8224b24dc9063e53173ecd689f91f994.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171021082231.GM20907@daoine.org> On Wed, Oct 18, 2017 at 06:05:10PM -0400, eax wrote: Hi there, > max_ranges 0; > to stop multirange requests "multipart download" > but it not working i see 10 concurrent download follow in each file . max_ranges controls the number of accepted Range values in a single request. It is unrelated to multiple requests. So it probably is working for its intended purpose. > i need to limit this value to 2 maximum range request per file. > i usig aio threads. > > any way any body help me to fix the problem? I think that it can't be done reliably; but if unreliable is good enough, then you might be able to use limit_conn with limit_conn_zone based on the browser (whatever combination of $remote_addr, $http_user_agent, your chosen $cookie_X, and any other variables you think is good enough) and the part of the request that corresponds to the file ($request_uri, $uri, $args, your chosen $arg_X -- whatever matches your expectations). "unreliable" there means "you probably won't block a determined unwanted user; and you probably will block some wanted users; but you probably will block the bulk of casual unwanted users". Only you can decide what balance is correct for you. If you need access control, use access control -- only allow authenticated users and actively manage the ones who don't do what you want. 
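A sketch of the limit_conn approach described above, assuming the goal is at most two simultaneous connections per client per file; the zone name, zone size, and location prefix are all invented for illustration:

```nginx
# The key concatenates client address and URI, so the limit applies
# per (client, file) pair rather than per client overall.
limit_conn_zone $binary_remote_addr$uri zone=perfile:10m;

server {
    location /downloads/ {
        limit_conn perfile 2;    # at most 2 concurrent requests per key
        limit_conn_status 429;   # reply 429 instead of the default 503
        max_ranges 1;            # and at most one Range per request
    }
}
```

Keying on $uri rather than $request_uri keeps query-string variations from creating fresh keys, but as noted above this remains best-effort rather than real access control.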
f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Oct 21 09:07:42 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 21 Oct 2017 10:07:42 +0100 Subject: Help with locations and regex In-Reply-To: References: Message-ID: <20171021090742.GN20907@daoine.org> On Fri, Oct 20, 2017 at 06:37:45PM -0400, anon59682 wrote: Hi there, > Location 1: the following paths should all load the requested file from > /home/user/project/dist/ and fall back to /home/user/project/dist/index.html > if the explicit filename was not found (ideally, without a redirect): > > /admin > /admin/ > /admin/favicon.ico > /admin/foo > /admin/foo/logo.png > etc. - basically, anything beginning with /admin/ or /admin itself (but not > /admin123) That sounds like "location ^~ /admin/ {}", possibly with an extra "location = /admin {}". > Location 2: the root path should proxy_pass to a process running on another > port. This is separate from location 3 because it has different location > settings (such as auth). "location = / {}". > Location 3: all first-level paths off the root should proxy_pass to the > process on the other port, for example: > > /foo > /bar123 "location ~ ^/[^/]*$ {}" > /foo/baz # this would NOT be proxy passed, it can just fall through and 404 > or whatever "location / {}" > === End of Requirements === > > So far, the closest I have come is a situation where location 2 and 3 are > working, but location 1 only partially works. In my current configuration > for location 1, it will accept anything beginning with /admin (including > /admin123) and it will 301 redirect /admin to /admin/ (note the trailing > slash). 
Here is that config: > > server { > listen 80 default_server; > server_name my.domain; > root /home/user/project/dist; > index index.html; > > # Location 1 > location ^~ /admin { > alias /home/user/project/dist; > try_files $uri $uri/ /admin/index.html; > } Make that be "location ^~ /admin/" and "alias /home/user/project/dist/" (extra / on the end of both). I'm not certain what you mean by the combination of "ideally, without a redirect" and the suggestion that a request for "/admin/foo" (a directory) does what you want, because I think (without testing) that the latter does a redirect. Possibly you want location = /admin { return 301 /admin/; } or possibly location = /admin { alias /home/user/project/dist/index.html; } If you don't want any redirect, and you are happy that the content of the url /admin/index.html makes sense everywhere in the hierarchy, use the second one above and remove the "$uri/" from the try_files. > # Location 3 > location ~ /([^\/]+)$ { /foo/baz matches this location. Add "^" to the start of the regex to avoid that. > # other stuff omitted for brevity > proxy_pass http://127.0.0.1:8080; > } > } > > I thought I should be able to use ^/admin(?:/.*)?$ for the location 1 match, Right now, your location 1 is not a regex. I think that it is simplest to keep it that way. (If you really wanted just one location{}, you would want "starts with /admin, then is followed by / or end-of-string".) > I assume that's because I shouldn't be using alias and/or the try_files > directive is wrong when using that regex, but I haven't been able to find > the magic combination. "alias" in a regex location is not the same as "alias" in a prefix location. Also, there is a bug with alias and try_files and the fallback value which you might be hitting, but if you see the expected response then I guess it works for you. 
f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Oct 21 09:20:22 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 21 Oct 2017 10:20:22 +0100 Subject: Help with locations and regex In-Reply-To: <20171021090742.GN20907@daoine.org> References: <20171021090742.GN20907@daoine.org> Message-ID: <20171021092022.GO20907@daoine.org> On Sat, Oct 21, 2017 at 10:07:42AM +0100, Francis Daly wrote: > On Fri, Oct 20, 2017 at 06:37:45PM -0400, anon59682 wrote: Hi there, > > location ^~ /admin { > > alias /home/user/project/dist; > > try_files $uri $uri/ /admin/index.html; > > } > Also, there is a bug with alias and try_files and the fallback value > which you might be hitting, but if you see the expected response then > I guess it works for you. I'm wrong. That bug only applies if there is a $variable in the fallback value (and even then, not always). So your use of try_files here is fine. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Oct 21 17:41:52 2017 From: nginx-forum at forum.nginx.org (anon59682) Date: Sat, 21 Oct 2017 13:41:52 -0400 Subject: Help with locations and regex In-Reply-To: <20171021090742.GN20907@daoine.org> References: <20171021090742.GN20907@daoine.org> Message-ID: <1ad8c66f7225efbb1da133c3047d4d67.NginxMailingListEnglish@forum.nginx.org> Hi! With your suggestions, I was able to get it - thank you! Here is what I ended up doing: location = /admin { try_files /index.html =404; } location ^~ /admin/ { alias /home/user/project/dist/; try_files $uri $uri/ /admin/index.html; } location = / { # proxy_pass } location ~ ^/[^/]+$ { # proxy_pass } The results are perfect. I kinda wanted a single location to handle both admin cases, but it's not a big deal and the results are more important than the structure of the config, so I'm okay with it. Thank you so much, you're the best! 
-AC Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277012,277016#msg-277016
From nginx-forum at forum.nginx.org Mon Oct 23 06:43:13 2017 From: nginx-forum at forum.nginx.org (fengx) Date: Mon, 23 Oct 2017 02:43:13 -0400 Subject: 'real_ip_header proxy_protocol' don't change the client address In-Reply-To: <20170928140801.GG19617@mdounin.ru> References: <20170928140801.GG19617@mdounin.ru> Message-ID: <0bd666c1e3a4caf371dcdeef602227ce.NginxMailingListEnglish@forum.nginx.org>
OK. My nginx IP is 172.20.19.18. I needed to add 'set_real_ip_from 172.16.0.0/12', and now it replaces remote_addr with the proxy_protocol address. Thanks a lot.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276542,277033#msg-277033
From nginx-forum at forum.nginx.org Mon Oct 23 19:14:52 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Mon, 23 Oct 2017 15:14:52 -0400 Subject: Nginx Listen directive with reuseport; SO_REUSEPORT Message-ID: <66d78347630f3a22dd4a362d3639dcb7.NginxMailingListEnglish@forum.nginx.org>
So on each server you can add reuseport to your listen directive:
listen 8181 default bind reuseport;
Cloudflare use it and posted about it on their blog and GitHub (benchmark stats included):
GitHub : https://github.com/cloudflare/cloudflare-blog/tree/master/2017-10-accept-balancing
Cloudflare Blog : https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
I wonder whether using "reuseport" is beneficial in a high-traffic production environment, or whether it is best to just leave it out, as it is off by default anyway.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277041,277041#msg-277041
From lucas at lucasrolff.com Mon Oct 23 19:56:51 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Mon, 23 Oct 2017 19:56:51 +0000 Subject: Nginx Listen directive with reuseport; SO_REUSEPORT In-Reply-To: <66d78347630f3a22dd4a362d3639dcb7.NginxMailingListEnglish@forum.nginx.org> References: <66d78347630f3a22dd4a362d3639dcb7.NginxMailingListEnglish@forum.nginx.org> Message-ID:
What's high traffic? At a previous employer we used it across the infrastructure, and I'd say it's fairly high traffic (100s of gigabits of traffic).
On 23/10/2017, 21.15, "nginx on behalf of c0nw0nk" wrote: So on each server you can add reuseport to your listen directive. listen 8181 default bind reuseport; Cloudflare use it and posted about it on their blog and GitHub (benchmark stats included) GitHub : https://github.com/cloudflare/cloudflare-blog/tree/master/2017-10-accept-balancing Cloudflare Blog : https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/ I wonder whether using "reuseport" is beneficial in a high-traffic production environment, or whether it is best to just leave it out, as it is off by default anyway.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277041,277041#msg-277041 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
From nginx-forum at forum.nginx.org Tue Oct 24 09:38:12 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Tue, 24 Oct 2017 05:38:12 -0400 Subject: Crash observed while using nginx-upstream-carp along with zone directive Message-ID: <25eab088d88b9c8d9d45d2ffe040a533.NginxMailingListEnglish@forum.nginx.org>
Hi, I am trying to use the following "NGINX upstream carp" module to realize upstream load balancing with a certain server affinity for requests. https://github.com/olegloa/nginx-upstream-carp The following basic configuration works fine.
upstream mybackends {
    server 192.168.1.3:8720;
    server 192.168.1.3:8721;
    server 192.168.1.3:8722;
    carp;
}
However, if I add a "zone" directive within the upstream block (which also contains the carp directive), the configuration verification test (using the -t option) crashes.
upstream mybackends {
    zone zone_for_mybackends 1m;
    server 192.168.1.37:8720;
    server 192.168.1.37:8721;
    server 192.168.1.37:8722;
    carp;
}
The presence of the "carp" directive seems to be causing the issue here. From my analysis, the crash seems to be due to the setting of the peer.data field by the carp module, which leads to an invalid memory access in the upstream zone code. The carp-module code seems to have been developed 6-7 years ago, and I guess it is probably not fully compatible with the latest NGINX upstream constructs like zone. If someone else is using the carp module and has faced this problem, could you please explain how it can be resolved? Thanks Rajesh
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277045,277045#msg-277045
From nginx-forum at forum.nginx.org Tue Oct 24 15:09:21 2017 From: nginx-forum at forum.nginx.org (agriz) Date: Tue, 24 Oct 2017 11:09:21 -0400 Subject: How to force php-fpm to display slow log? Message-ID: <99d27c850d148983ed9c15c35421d052.NginxMailingListEnglish@forum.nginx.org>
slowlog = /var/log/php-fpm/slow.log
request_slowlog_timeout = 1s
The above two lines are added to the php-fpm config file. Once they are added and php-fpm is restarted, the slow.log file is created.
(request: "GET /index.php") executing too slow (1.072177 sec), logging
This error is displayed in php-fpm's error.log file, but there are no additional details. I am not able to trace which URL is causing the trouble. It is a CodeIgniter application, so I used the framework's benchmarking tool on all the methods and found the execution is fast (0.004 to 0.01 seconds) over multiple tests.
What could be the possible reason for slow.log being empty? Is there a way to get the complete URL in the error log for a slow request?
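As an aside, slow URLs can also be narrowed down from the nginx side by logging per-request timing with the stock log-module variables; a sketch (the log path and format name are illustrative, and log_format must sit in the http context):

```nginx
http {
    # $request_time is total request time; $upstream_response_time is the
    # time spent waiting on php-fpm (or any other upstream)
    log_format timing '$remote_addr "$request" '
                      'req_time=$request_time upstream_time=$upstream_response_time';

    server {
        access_log /var/log/nginx/timing.log timing;
        # ... rest of the server block unchanged ...
    }
}
```

Sorting that log by upstream_time shows which routed URLs actually hit the slowlog threshold, independent of what php-fpm manages to record.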
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277048,277048#msg-277048 From anoopalias01 at gmail.com Tue Oct 24 15:18:00 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 24 Oct 2017 20:48:00 +0530 Subject: How to force php-fpm to display slow log? In-Reply-To: <99d27c850d148983ed9c15c35421d052.NginxMailingListEnglish@forum.nginx.org> References: <99d27c850d148983ed9c15c35421d052.NginxMailingListEnglish@forum.nginx.org> Message-ID: /index.php -- is the URL as stated by the log you provided. On Tue, Oct 24, 2017 at 8:39 PM, agriz wrote: > slowlog = /var/log/php-fpm/slow.log > request_slowlog_timeout = 1s > This following two lines are added in the php config file. Once it is added > and restart php-fpm, the slow.log file is created. > > (request: "GET /index.php") executing too slow (1.072177 sec), logging > This error is displayed in error.log file of php-fpm. But there is no > additional details. > > I am not able to trace which url is causing the trouble. It is codeigniter > framework. So I used the framework's benchmarking tool on all the methods > and found the execution is fast 0.004 to 0.01 for multiple test > > What could be the possible reason for slow.log being empty? Is there a way > to get complete url in the error log for slow process? > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,277048,277048#msg-277048 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Oct 24 15:32:44 2017 From: nginx-forum at forum.nginx.org (agriz) Date: Tue, 24 Oct 2017 11:32:44 -0400 Subject: How to force php-fpm to display slow log? 
In-Reply-To: References: Message-ID: <2382e4d7aa192240efd90b317ebf6e38.NginxMailingListEnglish@forum.nginx.org>
Sir, CodeIgniter executes everything through index.php. The code is in the controller, model and view folders. The actual URL the user sees is different; it is routed. Can we get that URL?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277048,277050#msg-277050
From macko003 at gmail.com Wed Oct 25 10:58:43 2017 From: macko003 at gmail.com (Kiss Norbert) Date: Wed, 25 Oct 2017 12:58:43 +0200 Subject: nginx load-balancing latency problem Message-ID:
Hi Everyone, I would like to ask for a little help to understand and refine my config. The basic problem: We have 2 nginx frontends on the same site (the second is only for backup and sandbox). We have 2 graylog backend servers on different sites. The nginx servers and SERVER_B are on the same site. The nginx frontends forward the HTTP traffic and stream the log traffic. Unfortunately we have 0.5-0.6 sec latency between the sites, so nginx forwards almost all traffic to one backend server. For the log-forwarding function only safe arrival is important, not the latency. The HTTP traffic is low, so one server can serve it. How can I tell nginx that this latency is OK, so that it doesn't mark SERVER_A as unavailable? I tried the max_fails, fail_timeout and weight parameters without success.
My config part:
stream {
    upstream graylog_syslog {
        server SERVER_A:1514;
        server SERVER_B:1514;
    }
    server {
        listen 514;
        proxy_pass graylog_syslog;
    }
    upstream graylog_beats {
        server SERVER_A:5044;
        server SERVER_B:5044;
    }
    server {
        listen 5044;
        proxy_pass graylog_beats;
    }
    upstream graylog_gelf {
        server SERVER_A:12201;
        server SERVER_B:12201;
    }
    server {
        listen 12201;
        proxy_pass graylog_gelf;
    }
}
And the packets from nginx to the backend servers (OK, I know one log message can be fragmented into multiple TCP packets...):
tcpdump -nn -i any \(dst host SERVER_A or dst host SERVER_B\) and not port 9000 -r /tmp/log.debug | awk '{print $5}' | sort -n | uniq -c
485 SERVER_A.5044:
4157 SERVER_B.1514:
7424 SERVER_B.5044:
Latency:
64 bytes from SERVER_A: icmp_seq=1 ttl=64 time=0.521 ms
64 bytes from SERVER_A: icmp_seq=2 ttl=64 time=0.523 ms
....
64 bytes from SERVER_B: icmp_seq=1 ttl=64 time=0.136 ms
64 bytes from SERVER_B: icmp_seq=2 ttl=64 time=0.222 ms
Any idea? Thanks, Norbert
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From edigarov at qarea.com Wed Oct 25 15:30:08 2017 From: edigarov at qarea.com (Gregory Edigarov) Date: Wed, 25 Oct 2017 18:30:08 +0300 Subject: need help Message-ID: <8b020819-1084-a03d-e35b-bd8f3603d077@qarea.com>
hello, I have an app under /var/www/admin/dist: index.html bundle.js static/ and a bunch of files under static/. I need nginx to serve these files when I access https://somesite.net/admin/, not files from /admin. Is that possible? thanks.
From sca at andreasschulze.de Wed Oct 25 15:48:59 2017 From: sca at andreasschulze.de (A.
Schulze) Date: Wed, 25 Oct 2017 17:48:59 +0200 Subject: need help In-Reply-To: <8b020819-1084-a03d-e35b-bd8f3603d077@qarea.com> References: <8b020819-1084-a03d-e35b-bd8f3603d077@qarea.com> Message-ID: <80cd6d19-1d0d-2859-9034-4fb6d8646567@andreasschulze.de>
On 25.10.2017 at 17:30, Gregory Edigarov wrote: > hello, > > I have an app under /var/www/admin/dist: > > index.html > > bundle.js > > static/ > > and a bunch of files under static/ > > I need nginx to serve these files when I access https://somesite.net/admin/, not files from /admin. > > is that possible?
should be possible: https://nginx.org/r/alias Andreas
From nginx-forum at forum.nginx.org Wed Oct 25 17:07:46 2017 From: nginx-forum at forum.nginx.org (jlange) Date: Wed, 25 Oct 2017 13:07:46 -0400 Subject: Cookie renaming per server Message-ID: <4598ecb41ce6e619d905d15e76676f5b.NginxMailingListEnglish@forum.nginx.org>
I am looking for a way to rename a cookie in a Set-Cookie header on a per-server basis. Each server that I'm proxying generates a "Set-Cookie: sessionid=xxxxxx" header. I would like to create some sort of rule that would rename that cookie to "sessionid_xyz=xxxxxx" when sending to the client, then rename it back to sessionid when sending back to the proxy_pass location. Is this possible given the standard set of plugins, or will I have to write a completely new plugin to pull this off? Any help would be greatly appreciated. Thanks!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277061,277061#msg-277061
From helenomics at gmail.com Wed Oct 25 22:13:20 2017 From: helenomics at gmail.com (Helen Weng) Date: Wed, 25 Oct 2017 22:13:20 +0000 Subject: Can I update a custom dynamic module after the NGINX binary has been compiled? Message-ID:
Hello! Early last year, dynamic modules were released. In the notes, it was mentioned that eventually "the ability to compile modules after the NGINX binary has been compiled" would be added.
I was wondering if this has been added yet, and if so, which version? Thank you!
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From jaygooby at gmail.com Wed Oct 25 22:30:14 2017 From: jaygooby at gmail.com (Jay Caines-Gooby) Date: Wed, 25 Oct 2017 23:30:14 +0100 Subject: Tool to help build nginx with different modules and dependencies Message-ID:
Recently, I've had to build a variety of different versions of nginx with different combinations of core modules and third-party modules, so I wrote https://github.com/jaygooby/build-nginx to make it as easy as possible; e.g.
./build-nginx \
  -n https://github.com/nginx/nginx.git@release-1.12.2 \
  -d https://github.com/openssl/openssl.git@OpenSSL_1_0_2l \
  -o --with-http_v2_module
will clone nginx at 1.12.2, and will also clone and automatically configure openssl 1.0.2l as a dependency, and enable the http_v2 module. The README has more detail on the options available: https://github.com/jaygooby/build-nginx/blob/master/README.md PRs and issues welcome :)
-- Jay Caines-Gooby http://jay.gooby.org jay at gooby.org +44 (0)7956 182625 twitter, skype & aim: jaygooby gtalk: jaygooby at gmail.com
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From joel.parker.gm at gmail.com Wed Oct 25 22:55:07 2017 From: joel.parker.gm at gmail.com (Joel Parker) Date: Wed, 25 Oct 2017 17:55:07 -0500 Subject: [emerg] unknown directive "rewrite_by_lua_file" in /usr/local/nginx Message-ID:
I have configured nginx-1.9.2 to evaluate a third-party module and configured the source like this:
./configure --add-module=../ngx_http_proxy_connect_module-master/ --add-module=../lua-5.1.4/ --with-http_ssl_module
After compiling, the version shows what I configured:
# nginx -V
nginx version: nginx/1.9.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
configure arguments: --add-module=../ngx_http_proxy_connect_module-master/ --add-module=../lua-5.1.4/ --with-http_ssl_module
I have lua installed as well as compiled into nginx:
# lua
Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
When I try to load my config file, I get this error message:
# nginx -t
nginx: [emerg] unknown directive "rewrite_by_lua_file" in /usr/local/nginx/conf/nginx.conf:25
nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed
I am trying to figure out what I did wrong to receive this error.
https://github.com/openresty/lua-nginx-module#rewrite_by_lua_file
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277065,277067#msg-277067
From yuanm12 at 163.com Thu Oct 26 07:15:02 2017 From: yuanm12 at 163.com (=?GBK?B?sLK48Q==?=) Date: Thu, 26 Oct 2017 15:15:02 +0800 (CST) Subject: Performance issue of "ngx_http_mirror_module" Message-ID: <24d0db6e.8cb1.15f5788c5be.Coremail.yuanm12@163.com>
Dear All, I have faced an issue with the nginx "ngx_http_mirror_module" mirror function and want to discuss it with you. The situation is like below: while I copy requests from the original to the mirror side, the original application can process 600 requests per second, but the mirror environment can only process 100 requests per second. Normally, even if the mirror environment can't process all the requests in time, that should not impact nginx forwarding the requests to the original environment. But we observed that if the mirror environment can't process all the requests, nginx runs into an issue, and the original environment can't return results to the client in time. Then from the client side, it seems nginx is down. Have you faced the same issue before? Any suggestions?
-- Regards, Yuan Man (Angus) Trouble is a Friend.
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From jarkko.torppa at elisa.fi Thu Oct 26 11:07:09 2017 From: jarkko.torppa at elisa.fi (Torppa Jarkko) Date: Thu, 26 Oct 2017 11:07:09 +0000 Subject: when client->server socket is closed also server->client is closed and request is aborted ? Message-ID:
I have an old xmlrpc client that seems to close the client->server side of the socket immediately after it has sent the request to the server; the server seems to close the server->client side of the socket in response to this.
I have been trying to find a setting for this, but cannot find one. Also I have been trying to dig into the sources to see where this happens, but no luck so far.
From arut at nginx.com Thu Oct 26 12:22:13 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 26 Oct 2017 15:22:13 +0300 Subject: Performance issue of "ngx_http_mirror_module" In-Reply-To: <24d0db6e.8cb1.15f5788c5be.Coremail.yuanm12@163.com> References: <24d0db6e.8cb1.15f5788c5be.Coremail.yuanm12@163.com> Message-ID: <20171026122213.GA75960@Romans-MacBook-Air.local>
Hi, On Thu, Oct 26, 2017 at 03:15:02PM +0800, ?? wrote: > Dear All, > > > I have faced a issue with Nginx "ngx_http_mirror_module" mirror function. want to discuss with you about this. > > > The situation is like below: > While I try to copy the request form original to mirror side, if the original application can process 600 request per seconds, but the mirror environment can only process 100 requests per seconds. Normally, even the mirror environment can't process all the request in time. it's should not impact the nginx forward the request to the original environment. But we observed if the mirror environment can't process all the request, then the Nginx will fall in issue , and original environment can't feedback process result to client in time. Then from client side, it seems the Nginx is down. If you have faced same issue before ? any suggestion ?
A mirror request is executed in parallel with the main request and does not directly affect the main request execution time. However, if you send another request on the same client connection, it will not be processed until the previous request and all its subrequests (including mirror subrequests) finish. So if you use keep-alive client connections and your mirror subrequests are slow, you may experience some performance issues.
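For reference, a minimal mirror setup of the kind being discussed looks like this (the upstream names are illustrative; the mirror location pattern follows the ngx_http_mirror_module documentation):

```nginx
location / {
    mirror /mirror;                 # send a copy of every request
    proxy_pass http://original_backend;
}

location = /mirror {
    internal;
    # the mirror subrequest: its response is discarded, but the main
    # request is not considered finished until this subrequest completes
    proxy_pass http://mirror_backend$request_uri;
}
```

This makes the keep-alive interaction above concrete: the next request on the same client connection waits for the /mirror subrequest of the previous one.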
-- Roman Arutyunyan From r1ch+nginx at teamliquid.net Thu Oct 26 13:33:06 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Thu, 26 Oct 2017 15:33:06 +0200 Subject: when client->server socket is closed also server->client is closed and request is aborted ? In-Reply-To: References: Message-ID: Look at http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort or http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_ignore_client_abort etc depending on what you're doing with the request. On Thu, Oct 26, 2017 at 1:07 PM, Torppa Jarkko wrote: > I have an old xmlrpc client that seems to close the client->server side of > the socket immediately after it has sent the request to server, server > seems to close the server->client side of socket in response to this. > > I have been trying to find setting for this, cannot find one. > > Also have been trying do dig into the sources to see where this happens, > but no luck so far. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Oct 26 17:21:25 2017 From: nginx-forum at forum.nginx.org (rnmx18) Date: Thu, 26 Oct 2017 13:21:25 -0400 Subject: failed to run balancer_by_lua*: balancer_by_lua:2: module 'ngx.balancer' not found: Message-ID: <35cb98ac01d223e6e3aad431e74137b3.NginxMailingListEnglish@forum.nginx.org> Hi, I am getting the following error when I try to use balancer_by_lua with ngx.balancer in my NGINX configuration. 
2017/10/26 08:13:38 [error] 22447#22447: *10 failed to run balancer_by_lua*: balancer_by_lua:2: module 'ngx.balancer' not found: no field package.preload['ngx.balancer'] no file '/home/rajesh/lua-resty-http-master/lib/ngx/balancer.lua' no file '/home/rajesh/lua-resty-balancer-master/lib/ngx/balancer.lua' no file './ngx/balancer.lua' no file '/usr/local/share/luajit-2.0.5/ngx/balancer.lua' no file '/usr/local/share/lua/5.1/ngx/balancer.lua' no file '/usr/local/share/lua/5.1/ngx/balancer/init.lua' no file '/home/rajesh/lua-resty-balancer-master/ngx/balancer.so' no file './ngx/balancer.so' no file '/usr/local/lib/lua/5.1/ngx/balancer.so' no file '/usr/local/lib/lua/5.1/loadall.so' no file '/home/rajesh/lua-resty-balancer-master/ngx.so' no file './ngx.so' no file '/usr/local/lib/lua/5.1/ngx.so' no file '/usr/local/lib/lua/5.1/loadall.so' stack traceback: [C]: in function 'require' balancer_by_lua:2: in function while connecting to upstream, client: 192.168.1.3, server: myserver.com, request: "GET /images/6314.jpeg HTTP/1.1" The upstream config snippet is as follows: upstream backends { server 0.0.0.1; balancer_by_lua_block { local b = require "ngx.balancer" ..... 
[root at localhost rajesh]# /usr/sbin/nginx -V nginx version: nginx/1.12.1 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) built with OpenSSL 1.0.2l 25 May 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --with-openssl-opt=-fPIC --with-openssl=/home/rajesh/nginx_prod_build_test/openssl-1.0.2l --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/lock/subsys/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_perl_module=dynamic --with-threads --with-stream=dynamic --with-stream_ssl_module --with-http_auth_request_module --with-mail=dynamic --with-mail_ssl_module --with-file-aio --with-pcre --with-pcre-jit --with-google_perftools_module --with-http_v2_module --add-module=/home/rajesh/nginx_prod_build_test/headers-more-nginx-module-0.32 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -O2 -fPIC -g -pipe -Wno-error -fstack-protector -std=c++11 -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E -Wl,-rpath,/usr/local/lib' --with-debug 
--add-module=/home/rajesh/nginx-upstream-carp-master --add-module=/home/rajesh/ngx_devel_kit-0.3.0 --add-module=/home/rajesh/lua-nginx-module-0.10.11rc2
Any inputs regarding the reason for this error, and how to resolve this? Thanks Rajesh
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277076,277076#msg-277076
From yuanm12 at 163.com Fri Oct 27 01:24:01 2017 From: yuanm12 at 163.com (=?GBK?B?sLK48Q==?=) Date: Fri, 27 Oct 2017 09:24:01 +0800 (CST) Subject: Performance issue of "ngx_http_mirror_module" In-Reply-To: <20171026122213.GA75960@Romans-MacBook-Air.local> References: <24d0db6e.8cb1.15f5788c5be.Coremail.yuanm12@163.com> <20171026122213.GA75960@Romans-MacBook-Air.local> Message-ID: <591fff42.25c1.15f5b6dc6ca.Coremail.yuanm12@163.com>
Dear Roman, Thanks for your valuable response. So does that mean that if we tune the keep-alive parameters to avoid keep-alive connections, we can avoid this kind of performance issue? Or, if the mirror subrequest is slower than the original subrequest, can we not avoid this kind of performance issue? Thanks in advance.
-- Regards, Yuan Man Trouble is a Friend.
At 2017-10-26 20:22:13, "Roman Arutyunyan" wrote: >Hi, > >On Thu, Oct 26, 2017 at 03:15:02PM +0800, ?? wrote: >> Dear All, >> >> >> I have faced a issue with Nginx "ngx_http_mirror_module" mirror function. want to discuss with you about this. >> >> >> The situation is like below: >> While I try to copy the request form original to mirror side, if the original application can process 600 request per seconds, but the mirror environment can only process 100 requests per seconds. Normally, even the mirror environment can't process all the request in time. it's should not impact the nginx forward the request to the original environment. But we observed if the mirror environment can't process all the request, then the Nginx will fall in issue , and original environment can't feedback process result to client in time.
Then from client side, it seems the Nginx is down. If you have faced same issue before ? any suggestion ? > >A mirror request is executed in parallel with the main request and does not >directly affect the main request execution time. However, if you send another >request on the same client connection, it will not be processed until the >previous request and all its subrequest (including mirror subrequests) finish. >So if you use keep-alive client connections and your mirror subrequests are >slow, you may experience some performance issues. > >-- >Roman Arutyunyan >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From zchao1995 at gmail.com Fri Oct 27 01:42:14 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Thu, 26 Oct 2017 21:42:14 -0400 Subject: failed to run balancer_by_lua*: balancer_by_lua:2: module 'ngx.balancer' not found: In-Reply-To: <35cb98ac01d223e6e3aad431e74137b3.NginxMailingListEnglish@forum.nginx.org> References: <35cb98ac01d223e6e3aad431e74137b3.NginxMailingListEnglish@forum.nginx.org> Message-ID:
Hi! To use ngx.balancer, the lua-resty-core library is necessary.
On 27 October 2017 at 01:21:43, rnmx18 (nginx-forum at forum.nginx.org) wrote: Hi, I am getting the following error when I try to use balancer_by_lua with ngx.balancer in my NGINX configuration.
2017/10/26 08:13:38 [error] 22447#22447: *10 failed to run balancer_by_lua*: balancer_by_lua:2: module 'ngx.balancer' not found: no field package.preload['ngx.balancer'] no file '/home/rajesh/lua-resty-http-master/lib/ngx/balancer.lua' no file '/home/rajesh/lua-resty-balancer-master/lib/ngx/balancer.lua' no file './ngx/balancer.lua' no file '/usr/local/share/luajit-2.0.5/ngx/balancer.lua' no file '/usr/local/share/lua/5.1/ngx/balancer.lua' no file '/usr/local/share/lua/5.1/ngx/balancer/init.lua' no file '/home/rajesh/lua-resty-balancer-master/ngx/balancer.so' no file './ngx/balancer.so' no file '/usr/local/lib/lua/5.1/ngx/balancer.so' no file '/usr/local/lib/lua/5.1/loadall.so' no file '/home/rajesh/lua-resty-balancer-master/ngx.so' no file './ngx.so' no file '/usr/local/lib/lua/5.1/ngx.so' no file '/usr/local/lib/lua/5.1/loadall.so' stack traceback: [C]: in function 'require' balancer_by_lua:2: in function while connecting to upstream, client: 192.168.1.3, server: myserver.com, request: "GET /images/6314.jpeg HTTP/1.1" The upstream config snippet is as follows: upstream backends { server 0.0.0.1; balancer_by_lua_block { local b = require "ngx.balancer" ..... 
[root at localhost rajesh]# /usr/sbin/nginx -V nginx version: nginx/1.12.1 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) built with OpenSSL 1.0.2l 25 May 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --with-openssl-opt=-fPIC --with-openssl=/home/rajesh/nginx_prod_build_test/openssl-1.0.2l --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/lock/subsys/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_perl_module=dynamic --with-threads --with-stream=dynamic --with-stream_ssl_module --with-http_auth_request_module --with-mail=dynamic --with-mail_ssl_module --with-file-aio --with-pcre --with-pcre-jit --with-google_perftools_module --with-http_v2_module --add-module=/home/rajesh/nginx_prod_build_test/headers-more-nginx-module-0.32 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -O2 -fPIC -g -pipe -Wno-error -fstack-protector -std=c++11 -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E -Wl,-rpath,/usr/local/lib' --with-debug 
--add-module=/home/rajesh/nginx-upstream-carp-master --add-module=/home/rajesh/ngx_devel_kit-0.3.0 --add-module=/home/rajesh/lua-nginx-module-0.10.11rc2 Any inputs regarding the reason for this error, and how to resolve this? Thanks Rajesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277076,277076#msg-277076 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Fri Oct 27 06:54:19 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 27 Oct 2017 09:54:19 +0300 Subject: Performance issue of "ngx_http_mirror_module" In-Reply-To: <591fff42.25c1.15f5b6dc6ca.Coremail.yuanm12@163.com> References: <24d0db6e.8cb1.15f5788c5be.Coremail.yuanm12@163.com> <20171026122213.GA75960@Romans-MacBook-Air.local> <591fff42.25c1.15f5b6dc6ca.Coremail.yuanm12@163.com> Message-ID: <20171027065419.GB75960@Romans-MacBook-Air.local> Hi, On Fri, Oct 27, 2017 at 09:24:01AM +0800, ?? wrote: > Dear Roman, > > > Thanks for your valuable response. > So ,is that means, If we optimize the parameter of the keep-alive to avoid keep-alive connection ? then we can avoid this kind of performance issue ? > Or if the mirror subrequest is slow than original subrequest, then we can't avoid this kind of performance issue ? Thanks in advance. It should be ok unless you wait for the (non-keepalive) client connection to be closed. It won't until the mirror subrequests are done. > > > -- > > > > Regards, > Yuan Man > Trouble is a Friend. > > > > At 2017-10-26 20:22:13, "Roman Arutyunyan" wrote: > >Hi, > > > >On Thu, Oct 26, 2017 at 03:15:02PM +0800, ?? wrote: > >> Dear All, > >> > >> > >> I have faced a issue with Nginx "ngx_http_mirror_module" mirror function. want to discuss with you about this. 
> >> > >> > >> The situation is like below: > >> While I try to copy the request form original to mirror side, if the original application can process 600 request per seconds, but the mirror environment can only process 100 requests per seconds. Normally, even the mirror environment can't process all the request in time. it's should not impact the nginx forward the request to the original environment. But we observed if the mirror environment can't process all the request, then the Nginx will fall in issue , and original environment can't feedback process result to client in time. Then from client side, it seems the Nginx is down. If you have faced same issue before ? any suggestion ? > > > >A mirror request is executed in parallel with the main request and does not > >directly affect the main request execution time. However, if you send another > >request on the same client connection, it will not be processed until the > >previous request and all its subrequest (including mirror subrequests) finish. > >So if you use keep-alive client connections and your mirror subrequests are > >slow, you may experience some performance issues. > > > >-- > >Roman Arutyunyan > >_______________________________________________ > >nginx mailing list > >nginx at nginx.org > >http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From linuxmail at 4lin.net Fri Oct 27 07:28:55 2017 From: linuxmail at 4lin.net (Denny Fuchs) Date: Fri, 27 Oct 2017 09:28:55 +0200 Subject: Migrate from Apache2: Location_Alias + rewrite + Location + PHP Message-ID: Hello, I have standard Apache2 Vhost configuration and I try to understand, how to migrate to Nginx :-) Apache2 has simple Vhost: ... Alias /dispatcher "/foo/bin/dispatch.php" ... We need an alias for a file, which is not in the www_root ... 
In the "/opt/foo/web" we have a .htaccess, which contains: RewriteEngine On RewriteCond %{REQUEST_URI} !dispatcher$ RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule .* /dispatcher [QSA,L] So, every call on http://foobar/whatever is rewritten to the alias /dispatcher/dispatch.php which is located at /opt/foo/web/dispatch.php For example: curl --header 'Content-Type: application/json' --data '{"data" :{....}' http://foobar/go_and_save I'm a bit confused whether I need try_files or rewrite, and how that works in combination with the alias, outside the www_root and with PHP7. Can somebody help :-) cu denny From neuronetv at gmail.com Fri Oct 27 08:29:39 2017 From: neuronetv at gmail.com (Anthony Griffiths) Date: Fri, 27 Oct 2017 09:29:39 +0100 Subject: newbie: iptables gone, what about apache? Message-ID: I've installed nginx-1.12.2 on a centos 6.9 server and got it running and streaming live video. I compiled it from source using the instructions here: https://www.nginx.com/resources/admin-guide/installing-nginx-open-source/ My main aim is to get nginx streaming live video to phones but more of that later... After the install I noticed I no longer have /etc/sysconfig/iptables. What did nginx replace it with? Also, if I have apache and some websites already set up on the centos 6 machine how does nginx get along with that? (if it serves its own pages at /usr/local/nginx/html) Thanks for any advice.
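Returning to the Apache-to-nginx dispatcher question above: nginx has no .htaccess equivalent, but the `!-f` / `!-d` RewriteCond pair maps naturally onto try_files. A rough sketch, assuming dispatch.php is served through PHP-FPM (the socket path is hypothetical; the script path is taken from the posted Alias):

```nginx
location / {
    # Serve existing files and directories; everything else falls through
    # to the dispatcher, mirroring the !-f / !-d RewriteCond pair.
    try_files $uri $uri/ @dispatcher;
}

location @dispatcher {
    include fastcgi_params;
    # The dispatcher lives outside the document root, as with the Apache Alias.
    fastcgi_param SCRIPT_FILENAME /foo/bin/dispatch.php;
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;  # hypothetical PHP-FPM socket
}
```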
From peter_booth at me.com Fri Oct 27 14:02:21 2017 From: peter_booth at me.com (Peter Booth) Date: Fri, 27 Oct 2017 10:02:21 -0400 Subject: Performance issue of "ngx_http_mirror_module" In-Reply-To: <591fff42.25c1.15f5b6dc6ca.Coremail.yuanm12@163.com> References: <24d0db6e.8cb1.15f5788c5be.Coremail.yuanm12@163.com> <20171026122213.GA75960@Romans-MacBook-Air.local> <591fff42.25c1.15f5b6dc6ca.Coremail.yuanm12@163.com> Message-ID: <94D235B4-844E-4724-BFC8-DE422696878C@me.com> There are a few approaches to this but they depend upon what you're trying to achieve. Are your requests POSTs or GETs? Why do you have the mirroring configured? If the root cause is that your mirror site cannot support the same workload as your primary site, what do you want to happen when your mirror site is overloaded? One approach, using nginx, is to use rate limiting and connection limiting on your mirror server. This is described on the nginx website as part of the ddos mitigation section. Or, if your bursts of activity are typically for the same resource then you can use caching with the proxy_cache_use_stale directive. Another approach could be to use lua / openresty to implement a work-shedding interceptor (within nginx) that sits in front of your slow web server. Within lua you would need code that guesses whether or not your web server is overloaded and, if it is, it simply returns a 503 and doesn't forward the request. Sent from my iPhone > On Oct 27, 2017, at 2:24 AM, Yuan Man wrote: > > Dear Roman, > > Thanks for your valuable response. > So ,is that means, If we optimize the parameter of the keep-alive to avoid keep-alive connection ? then we can avoid this kind of performance issue ? > Or if the mirror subrequest is slow than original subrequest, then we can't avoid this kind of performance issue ? Thanks in advance. > > -- > > Regards, > Yuan Man > Trouble is a Friend. > > > At 2017-10-26 20:22:13, "Roman Arutyunyan" wrote: > >Hi, > > > >On Thu, Oct 26, 2017 at 03:15:02PM +0800, Yuan Man
wrote: > >> Dear All, > >> > >> > >> I have faced a issue with Nginx "ngx_http_mirror_module" mirror function. want to discuss with you about this. > >> > >> > >> The situation is like below: > >> While I try to copy the request form original to mirror side, if the original application can process 600 request per seconds, but the mirror environment can only process 100 requests per seconds. Normally, even the mirror environment can't process all the request in time. it's should not impact the nginx forward the request to the original environment. But we observed if the mirror environment can't process all the request, then the Nginx will fall in issue , and original environment can't feedback process result to client in time. Then from client side, it seems the Nginx is down. If you have faced same issue before ? any suggestion ? > > > >A mirror request is executed in parallel with the main request and does not > >directly affect the main request execution time. However, if you send another > >request on the same client connection, it will not be processed until the > >previous request and all its subrequest (including mirror subrequests) finish. > >So if you use keep-alive client connections and your mirror subrequests are > >slow, you may experience some performance issues. > > > >-- > >Roman Arutyunyan > >_______________________________________________ > >nginx mailing list > >nginx at nginx.org > >http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Fri Oct 27 18:58:16 2017 From: nginx-forum at forum.nginx.org (RKGood) Date: Fri, 27 Oct 2017 14:58:16 -0400 Subject: Resolver not re-resolving new ip address of an AW ELB Message-ID: Hello, We have been trying to find a solution for the past couple of days but nothing seems to work so far. Our server in prod goes down every day when the AWS ELB changes ip address. Ours is a slightly complex proxy (don't ask me why we have to do this :), there is a strong reason for it): our clients send requests to apache (legacy), apache proxies to nginx (new) and nginx decides whether to proxy back to apache or serve the request with new micro services. The resolver re-resolves the new micro services (internal alb) ip address but fails to re-resolve the legacy apache (which has an ELB with a route 53 entry in front). We are using an https endpoint to proxy apache requests. The request flows thru ELB (legacy) -> Apache -> ELB (new) -> nginx -> ELB (legacy) -> apache Can you please provide feedback on what we are doing wrong; this is only happening in production. Our load is a normal few hundred requests per second. We aren't able to simulate it in a test environment.
Here is the configuration: user nginx; worker_processes auto; worker_rlimit_nofile 5120; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 2048; } http { include /etc/nginx/mime.types; default_type text/plain; resolver 10....2 172.16.0.23 valid=30s ipv6=off; resolver_timeout 5s; log_format main '$proxy_protocol_addr - [$status] - [$request_time] - [$upstream_response_time] - $server_name $upstream_addr $request'; access_log /var/log/nginx/error.log main; rewrite_log on; client_body_timeout 60s; client_header_timeout 30s; send_timeout 60s; sendfile off; tcp_nodelay on; tcp_nopush on; reset_timedout_connection on; server_names_hash_bucket_size 128; client_body_buffer_size 64k; client_max_body_size 10m; server { listen 443 ssl proxy_protocol default_server; server_name mydomain.com; ssl_certificate mydomain.crt; ssl_certificate_key mydomain.key; set $alb_upstream aws-internal-alb; set $apache_upstream legacy.domain.com; proxy_buffers 8 24k; proxy_buffer_size 2k; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header X-Real-IP $proxy_protocol_addr; proxy_next_upstream off; location /services/(migrated1|migrated2)/ { proxy_set_header Host $host; proxy_connect_timeout 302; proxy_read_timeout 302; rewrite /services/(.*) /$1?$args break; proxy_pass http://alb_upstream; } location /services/ { proxy_set_header x-nginx-rejected true; proxy_set_header Host legacy.domain.com; proxy_connect_timeout 302; proxy_read_timeout 302; rewrite /services/(.*) /$1?$args break; proxy_pass https://$apache_upstream; } } } Thanks in advance. 
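One detail worth checking in the configuration above: with a resolver configured, nginx only re-resolves a proxied hostname per request (honoring the resolver's valid= TTL) when proxy_pass is given a variable; a literal name is looked up once when the configuration is loaded. The first location's `proxy_pass http://alb_upstream;` is missing the `$`, so that name is not resolved per request. A sketch of the variable form (resolver address hypothetical, endpoint name from the posted config):

```nginx
resolver 10.0.0.2 valid=30s ipv6=off;  # hypothetical VPC resolver address

set $apache_upstream legacy.domain.com;

location /services/ {
    rewrite /services/(.*) /$1 break;
    # Because proxy_pass contains a variable, nginx resolves the name at
    # request time (cached for valid=30s), so new ELB addresses are picked
    # up without a reload.
    proxy_pass https://$apache_upstream;
}
```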
RK Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277101,277101#msg-277101 From nginx-forum at forum.nginx.org Fri Oct 27 19:00:28 2017 From: nginx-forum at forum.nginx.org (RKGood) Date: Fri, 27 Oct 2017 15:00:28 -0400 Subject: Resolver not re-resolving new ip address of an AW ELB In-Reply-To: References: Message-ID: Forgot to mention important information: we are using dockerized nginx, deployed in an ECS cluster. Docker nginx image version is 1.11 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277101,277102#msg-277102 From nginx-forum at forum.nginx.org Sun Oct 29 01:49:12 2017 From: nginx-forum at forum.nginx.org (hpernaf) Date: Sat, 28 Oct 2017 21:49:12 -0400 Subject: Redirect of permalinks in Wordpress. Message-ID: <956edde6d4f0779b719ab47927518b02.NginxMailingListEnglish@forum.nginx.org> Hi everyone! I'm using NGINX with WordPress on my site and would like to do some 301 redirects. My site currently has permalinks: http://site.com/2017/10/29/post-example/ and would like to switch to http://site.com/post-example/ But I already have more than 120 articles published and ranked by Google and would not like to lose this ranking. I would like to know how to redirect these links without having to do it one by one. I did a search and found a website that says that you need to enter the following lines: location ~ "^/([0-9]{4})/([0-9]{2})/([0-9]{2})/(.*)$" { rewrite "^/([0-9]{4})/([0-9]{2})/([0-9]{2})/(.*)$" http://site.com/$4 permanent; } My question is: Is this code correct? Where should I enter these lines? I tried inserting into my nginx.conf file and then tried putting it in .htaccess. But I was not successful, because redirection is not being done after changing the permalinks. If anyone can help me, I will be very grateful.
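On the permalink question just above: the found snippet is essentially right, but nginx never reads .htaccess — the rules must live inside the site's server {} block in nginx.conf (or an included vhost file). A compact sketch, using the site name from the post:

```nginx
server {
    server_name site.com;

    # 301-redirect every dated permalink to the bare slug:
    # /2017/10/29/post-example/ -> /post-example/
    rewrite "^/[0-9]{4}/[0-9]{2}/[0-9]{2}/(.*)$" /$1 permanent;
}
```

The quotes around the regex are needed because it contains curly braces, which would otherwise confuse the nginx config parser.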
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277107,277107#msg-277107 From shanchuan04 at gmail.com Sun Oct 29 06:57:56 2017 From: shanchuan04 at gmail.com (yang chen) Date: Sun, 29 Oct 2017 14:57:56 +0800 Subject: why delta only include the execution time of ngx_process_events not ngx_event_process_posted Message-ID: hello, everyone. I have a problem when I read the nginx source code, why delta variable only include the execution time of ngx_process_events not ngx_event_process_posted, ngx_event_process_posted maybe takes more time. delta = ngx_current_msec; (void) ngx_process_events(cycle, timer, flags); delta = ngx_current_msec - delta; ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0, "timer delta: %M", delta); ngx_event_process_posted(cycle, &ngx_posted_accept_events); -------------- next part -------------- An HTML attachment was scrubbed... URL: From zchao1995 at gmail.com Sun Oct 29 07:20:48 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Sun, 29 Oct 2017 08:20:48 +0100 Subject: why delta only include the execution time of ngx_process_events not ngx_event_process_posted In-Reply-To: References: Message-ID: I think most of time delta is larger than zero, and i guess that the intention of the if statement for checking delta is non-zero is for ignoring some extreme case that the ngx_process_events handler returns so quickly that the millisecond precision is not enough to display the calling time(less than 1ms maybe). In this case, calling the ngx_event_expire_timers is unnecessary. Also , even in an extreme case that delta is zero for current cycle and events are posted to queue, calling for the ngx_event_expire_timers will be executed next cycle, presumably. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mailinglist at unix-solution.de Sun Oct 29 10:53:23 2017 From: mailinglist at unix-solution.de (basti) Date: Sun, 29 Oct 2017 11:53:23 +0100 Subject: Regex on Variable ($servername) Message-ID: <07ef84ca-f730-1279-d01a-6cf8febd53ef@unix-solution.de> Hello, I am trying to set up a catch-all proxy server with nginx. I want to catch domains like this but have only the domain name (without subdomain) in $domain. In this example from the nginx docs, $domain gets the full name. server { server_name ~^(www\.)?(?<domain>.+)$; root /sites/$domain; } servername: www.example.com -> $domain should be example.com Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From shanchuan04 at gmail.com Sun Oct 29 13:35:15 2017 From: shanchuan04 at gmail.com (yang chen) Date: Sun, 29 Oct 2017 21:35:15 +0800 Subject: why delta only include the execution time of ngx_process_events not ngx_event_process_posted (Zhang Chao) Message-ID: Thanks for your reply. Why is calling ngx_event_expire_timers unnecessary when the ngx_process_events handler returns so quickly that the millisecond precision is not enough to display the calling time (less than 1ms maybe)? The ngx_process_events handler returning quickly doesn't mean ngx_event_process_posted returns quickly; maybe it executes for 2 ms or more
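For the catch-all server_name question above: the stock docs pattern captures the full host, so stripping the subdomain needs a pattern that keeps only the last two labels. A quick way to sanity-check such a pattern before putting it into nginx — note that multi-label public suffixes like .co.uk are deliberately not handled by this simple form:

```python
import re

# PCRE-style equivalent for nginx: server_name ~^(?:.+\.)?(?<domain>[^.]+\.[^.]+)$;
# Python spells named groups (?P<domain>...); nginx/PCRE accepts (?<domain>...).
pattern = re.compile(r"^(?:.+\.)?(?P<domain>[^.]+\.[^.]+)$")

for name in ("www.example.com", "example.com", "a.b.example.com"):
    print(pattern.match(name).group("domain"))  # example.com in all three cases
```

All three hostnames yield `example.com`, with or without subdomains, which is the behaviour asked for.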
The build from source went through without any issues, but while starting Nginx I am receiving this error: *>>[emerg] 342#342: unknown directive "gzip" in /usr/local/apps/nginx/etc/conf.d/gzip.conf:2* GZIP was working fine on 1.10 so I am really confused what the issue is here, the GZIP module is enabled by default is my understanding. Can anybody guide me in the right direction what's the issue here. Here is my configure script for reference *LDFLAGS="-L$PPS_PATH/lib" CPPFLAGS="-I$PPS_PATH/include" ./configure \* * --prefix=$PATH_NGINX/etc \* * --sbin-path=$PATH_NGINX/sbin/nginx \* * --conf-path=$PATH_NGINX/etc/nginx.conf \* * --error-log-path=$PATH_NGINX/var/log/error.log \* * --http-log-path=$PATH_NGINX/var/log/access.log \* * --pid-path=$PATH_NGINX/var/run/nginx.pid \* * --lock-path=$PATH_NGINX/var/run/nginx.lock \* * --http-client-body-temp-path=$PATH_NGINX/var/cache/client_temp \* * --http-proxy-temp-path=$PATH_NGINX/var/cache/proxy_temp \* * --http-fastcgi-temp-path=$PATH_NGINX/var/cache/fastcgi_temp \* * --http-uwsgi-temp-path=$PATH_NGINX/var/cache/uwsgi_temp \* * --http-scgi-temp-path=$PATH_NGINX/var/cache/scgi_temp \* * --user=$thisuser \* * --group=$thisuser \* * --with-zlib=$MAIN_SRC/$ZLIB \* * --with-pcre=$MAIN_SRC/$pcre \* * --with-openssl=$MAIN_SRC/$OPENSSL \* * --with-http_ssl_module \* * --with-http_realip_module \* * --with-http_addition_module \* * --with-http_sub_module \* * --with-http_dav_module \* * --with-http_flv_module \* * --with-http_mp4_module \* * --with-http_gzip_static_module \* * --with-http_random_index_module \* * --with-http_secure_link_module \* * --with-http_stub_status_module \* * --with-http_v2_module \* * --with-mail_ssl_module \* * --with-file-aio \* * --add-module=$MAIN_SRC/nginx-rtmp-module \* * --add-module=$MAIN_SRC/ngx_pagespeed-release-$NPS_VERSION \* * --with-threads \* * --with-ipv6 && make && make install >> $LOG 2>&1* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zchao1995 at gmail.com Mon Oct 30 10:54:07 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Mon, 30 Oct 2017 03:54:07 -0700 Subject: [Module] ngx_http_gzip issue : unknown directive "gzip" In-Reply-To: References: Message-ID: Hi! gzip is a directive defined by ngx_http_gzip_filter_module while you only link the ngx_http_gzip_static_module. On 30 October 2017 at 18:34:15, nik mur (nikhil6018 at gmail.com) wrote: *http_ssl_module* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Oct 30 13:46:13 2017 From: nginx-forum at forum.nginx.org (stuwat) Date: Mon, 30 Oct 2017 09:46:13 -0400 Subject: Apply nginx rate limits to certain IP addresses, and another rate limit to others Message-ID: <66d76f29abd3c17632449678889d1707.NginxMailingListEnglish@forum.nginx.org> In our Nginx config we currently have this:- limit_req_zone $binary_remote_addr zone=two:10m rate=15r/m; limit_req zone=two burst=5 nodelay; Now we want to change this so that this rate limit applies to certain IP addresses, and then have another rate limit that applies to others that is slightly less restrictive. geo $limited_net { default 0; 111.222.333.444 1; } map $limited_net $addr_to_limit { 0 ""; 1 $binary_remote_addr; } limit_req_zone $addr_to_limit zone=two:10m rate=15r/m; geo $less_limited_net { default 1; 111.222.333.444 0; } map $less_limited_net $addr_to_limit_less { 0 ""; 1 $binary_remote_addr; } limit_req_zone $addr_to_limit_less zone=three:10m rate=25r/m; So the traffic from the IP 111.222.333.444 will be affected by the rate 1st more restrictive rate limit, and not by the second less restrictive one. Does this give me what I want? 
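On the two-tier rate-limit question above: limit_req_zone documents that requests whose key evaluates to an empty string are not accounted at all, and several limit_req directives may apply in the same context. So one geo plus two complementary maps is enough — each request is counted in exactly one zone. A sketch (using a documentation-range IP in place of 111.222.333.444, which is not a valid IPv4 address):

```nginx
geo $strict_net {
    default      0;
    203.0.113.4  1;   # addresses that get the stricter limit
}

# A request gets a non-empty key in exactly one of the two maps,
# so it is accounted in exactly one zone.
map $strict_net $strict_key  { 1 $binary_remote_addr; default ""; }
map $strict_net $relaxed_key { 0 $binary_remote_addr; default ""; }

limit_req_zone $strict_key  zone=strict:10m  rate=15r/m;
limit_req_zone $relaxed_key zone=relaxed:10m rate=25r/m;

server {
    location / {
        limit_req zone=strict  burst=5 nodelay;
        limit_req zone=relaxed burst=5 nodelay;
    }
}
```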
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277126,277126#msg-277126 From nginx-forum at forum.nginx.org Mon Oct 30 15:54:33 2017 From: nginx-forum at forum.nginx.org (RKGood) Date: Mon, 30 Oct 2017 11:54:33 -0400 Subject: Resolver not re-resolving new ip address of an AW ELB In-Reply-To: References: Message-ID: Can someone help, please? We have been having issues every day and are left with no other options to try Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277101,277132#msg-277132 From nginx-forum at forum.nginx.org Mon Oct 30 16:12:22 2017 From: nginx-forum at forum.nginx.org (stuwat) Date: Mon, 30 Oct 2017 12:12:22 -0400 Subject: Apply nginx rate limits to certain IP addresses, and another rate limit to others In-Reply-To: <66d76f29abd3c17632449678889d1707.NginxMailingListEnglish@forum.nginx.org> References: <66d76f29abd3c17632449678889d1707.NginxMailingListEnglish@forum.nginx.org> Message-ID: Or should it be more like this? geo $limited_net { default 0; 111.222.333.444 1; } map $limited_net $addr_to_limit { 0 ""; 1 $binary_remote_addr; } limit_req_zone $addr_to_limit zone=two:10m rate=15r/m; limit_req_zone $binary_remote_addr zone=three:10m rate=25r/m; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277126,277133#msg-277133 From minoru.nishikubo at lyz.jp Tue Oct 31 01:12:01 2017 From: minoru.nishikubo at lyz.jp (Nishikubo Minoru) Date: Tue, 31 Oct 2017 10:12:01 +0900 Subject: Resolver not re-resolving new ip address of an AW ELB In-Reply-To: References: Message-ID: Hello, I checked my old note and configured as follows, but I doubt myself that will help you... - Configured the resolver of internal reverse proxy to internal ALB/ELB to AWS dedicated DNS server. - Configured the resolver cache 30 seconds. - Set proxy_pass arguments to ALB/ELB endpoint names. Here is my old note: The resolver 172.31.0.2 is the address of AWS dedicated DNS server.
I checked the response time of resolving across the availability zone with dig; the AWS dedicated DNS server will respond without difference in spite of our cache resolver. The TTL of a DNS record in AWS is 60 seconds, and ALB (ELB) will scale in/out and update their addresses, so the internal resolver in nginx will cache for 30 seconds, that is, valid=30s on the resolver directive. resolver 172.31.0.2 valid=30s; resolver_timeout 10s; set $backendelb "backendelb-87654321.elasticloadbalancing.region.amazonaws.com"; location / { proxy_pass http://$backendelb; } On Sat, Oct 28, 2017 at 3:58 AM, RKGood wrote: > Hello, > > We are trying to find a solution from past couple of days but nothing seems > to work so far. Our server in prod going down everyday when the AWS ELB > changes ip address. Our's little complex proxy (don't ask me why we have to > do this :), there is a strong reason for it), our clients will send > requests > apache (legacy), apache proxies to nginx (new) and nginx decides whether > proxy back to apache or serve the request with new micro services. Resolver > re-resolves the new micro services (internal alb) ip address but fail to > re-resolve the legacy apache (has a ELB with route 53 entry in front). We > are using https endpoint to proxy apache request. The request flows thru > ELB > (legacy) -> Apache -> ELB (new) -> nginx -> ELB (legacy) -> apache > > Can you please provide feedback on what are we doing wrong, this is only > happening in production. Our load is normal few fundred requests per > second. > We aren't able to simulate it in test environment.
> > Here is the configuration: > > > user nginx; > worker_processes auto; > worker_rlimit_nofile 5120; > error_log /var/log/nginx/error.log warn; > pid /var/run/nginx.pid; > > events { > worker_connections 2048; > } > > http { > include /etc/nginx/mime.types; > default_type text/plain; > > resolver 10....2 172.16.0.23 valid=30s ipv6=off; > resolver_timeout 5s; > > log_format main '$proxy_protocol_addr - [$status] - [$request_time] - > [$upstream_response_time] - $server_name $upstream_addr $request'; > > access_log /var/log/nginx/error.log main; > > rewrite_log on; > > client_body_timeout 60s; > client_header_timeout 30s; > send_timeout 60s; > sendfile off; > > tcp_nodelay on; > tcp_nopush on; > reset_timedout_connection on; > > server_names_hash_bucket_size 128; > client_body_buffer_size 64k; > client_max_body_size 10m; > > server { > > listen 443 ssl proxy_protocol default_server; > server_name mydomain.com; > ssl_certificate mydomain.crt; > ssl_certificate_key mydomain.key; > > set $alb_upstream aws-internal-alb; > set $apache_upstream legacy.domain.com; > > proxy_buffers 8 24k; > proxy_buffer_size 2k; > proxy_http_version 1.1; > proxy_set_header Connection ""; > proxy_set_header X-Real-IP $proxy_protocol_addr; > proxy_next_upstream off; > > > location /services/(migrated1|migrated2)/ { > > proxy_set_header Host $host; > proxy_connect_timeout 302; > proxy_read_timeout 302; > > rewrite /services/(.*) /$1?$args break; > proxy_pass http://alb_upstream; > } > > location /services/ { > proxy_set_header x-nginx-rejected true; > proxy_set_header Host legacy.domain.com; > proxy_connect_timeout 302; > proxy_read_timeout 302; > > rewrite /services/(.*) /$1?$args break; > proxy_pass https://$apache_upstream; > } > > } > } > > Thanks in advance. > RK > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,277101,277101#msg-277101 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Oct 31 03:25:00 2017 From: nginx-forum at forum.nginx.org (pankaj@releasemanager.in) Date: Mon, 30 Oct 2017 23:25:00 -0400 Subject: Resolver not re-resolving new ip address of an AW ELB In-Reply-To: References: Message-ID: you have two choices if the variable use is not working and you specifically need to use upstream config option. 1. Subscribe to Nginx Plus. 2. Compile this module along with Nginx .. https://github.com/aziontech/nginx-upstream-dynamic-servers I have personally used this module and it works. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277101,277147#msg-277147 From nginx-forum at forum.nginx.org Tue Oct 31 03:59:44 2017 From: nginx-forum at forum.nginx.org (JoakimR) Date: Mon, 30 Oct 2017 23:59:44 -0400 Subject: Redirect of permalinks in Wordpress. In-Reply-To: <956edde6d4f0779b719ab47927518b02.NginxMailingListEnglish@forum.nginx.org> References: <956edde6d4f0779b719ab47927518b02.NginxMailingListEnglish@forum.nginx.org> Message-ID: It look to me like you are redirecting from 1 to 1 :/ but, i'll remember it like there is a wp plugin for this.... have you searched on wp? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277107,277152#msg-277152 From nikhil6018 at gmail.com Tue Oct 31 07:35:07 2017 From: nikhil6018 at gmail.com (nik mur) Date: Tue, 31 Oct 2017 07:35:07 +0000 Subject: [Module] ngx_http_gzip issue : unknown directive "gzip" In-Reply-To: References: Message-ID: Hi Zhang, If I pass* --with-http_gzip_filter_module* , the configure script throws me an error unknown option. Googling suggested that you don't need to add this module it's provided by Nginx by default. On Mon, Oct 30, 2017 at 4:24 PM Zhang Chao wrote: > Hi! 
> > gzip is a directive defined by ngx_http_gzip_filter_module while you only > link the ngx_http_gzip_static_module. > > > On 30 October 2017 at 18:34:15, nik mur (nikhil6018 at gmail.com) wrote: > > *http_ssl_module* > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Oct 31 13:51:31 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 31 Oct 2017 16:51:31 +0300 Subject: [Module] ngx_http_gzip issue : unknown directive "gzip" In-Reply-To: References: Message-ID: <4952394.jikkrLoDxI@vbart-workstation> On Monday 30 October 2017 10:33:55 nik mur wrote: > Hi, > > Recently I upgraded my nginx to 1.12 version from 1.10 branch. > > The build from source went through without any issues, but while starting > Nginx I am receiving this error: > > *>>[emerg] 342#342: unknown directive "gzip" in > /usr/local/apps/nginx/etc/conf.d/gzip.conf:2* > [..] You should check your full configuration. It's unclear where this "gzip" directive is included. Please note, there's no such directive in mail and stream modules. wbr, Valentin V. Bartenev From nikhil6018 at gmail.com Tue Oct 31 14:02:25 2017 From: nikhil6018 at gmail.com (nik mur) Date: Tue, 31 Oct 2017 14:02:25 +0000 Subject: [Module] ngx_http_gzip issue : unknown directive "gzip" In-Reply-To: <4952394.jikkrLoDxI@vbart-workstation> References: <4952394.jikkrLoDxI@vbart-workstation> Message-ID: Hi Valentin, My Nginx installs in /usr/local/apps So the gzip.conf file is included from the nginx.conf in the /usr/local/apps/nginx/etc/ directory, so no issues there. This is very strange behavior no errors while building and even the error logs are not helpful, the same structure and configure script works for 1.10 and below. 
I am really banging my head over this, would be really helpful if you or anyone can point me in the right direction to debug this. On Tue, Oct 31, 2017 at 7:21 PM Valentin V. Bartenev wrote: > On Monday 30 October 2017 10:33:55 nik mur wrote: > > Hi, > > > > Recently I upgraded my nginx to 1.12 version from 1.10 branch. > > > > The build from source went through without any issues, but while starting > > Nginx I am receiving this error: > > > > *>>[emerg] 342#342: unknown directive "gzip" in > > /usr/local/apps/nginx/etc/conf.d/gzip.conf:2* > > > [..] > > You should check your full configuration. It's unclear where this "gzip" > directive is included. > > Please note, there's no such directive in mail and stream modules. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Oct 31 14:13:33 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 31 Oct 2017 17:13:33 +0300 Subject: why delta only include the execution time of ngx_process_events not ngx_event_process_posted (Zhang Chao) In-Reply-To: References: Message-ID: <1551515.h7q2v3zecH@vbart-workstation> On Sunday 29 October 2017 21:35:15 yang chen wrote: > Thanks for your reply, why calling the ngx_event_expire_timers is > unnecessary when ngx_process_events handler returns so quickly that the > millisecond precision is not enough to display the calling time(less than > 1ms maybe). > > ngx_process_events handler returns quickly which doesn't > mean ngx_event_process_posted return quickly, maybe it excute for 2 ms or > more First of all, ngx_process_events() is the function that actually _waits_ and updates process time. So, there's no sense to move the calculation of delta elsewhere, since ngx_current_msec will be the same anyway. 
In most cases the time is spent waiting in the kernel for events, not
processing the events and executing handlers.

Normally ngx_event_process_posted() should not take 2 ms; otherwise that
would result in less than 500 rps, which can only indicate a blocking
issue or overload.

  wbr, Valentin V. Bartenev

From jarkko.torppa at elisa.fi  Tue Oct 31 14:25:48 2017
From: jarkko.torppa at elisa.fi (Torppa Jarkko)
Date: Tue, 31 Oct 2017 14:25:48 +0000
Subject: when client->server socket is closed also server->client is closed and request is aborted ?
In-Reply-To:
References:
Message-ID:

Indeed that was it (uwsgi_ignore_client_abort on).

I kinda feel that this should be on by default for HTTP/1.0 and
Connection: close clients.

The client that I had trouble with was the org.apache.xmlrpc.client Java
class.

From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Richard Stanway via nginx
Sent: 26. lokakuuta 2017 16:33
To: nginx at nginx.org
Cc: Richard Stanway
Subject: Re: when client->server socket is closed also server->client is closed and request is aborted ?

Look at
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort
or
http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_ignore_client_abort
etc depending on what you're doing with the request.

On Thu, Oct 26, 2017 at 1:07 PM, Torppa Jarkko wrote:

I have an old xmlrpc client that seems to close the client->server side of
the socket immediately after it has sent the request to the server; the
server seems to close the server->client side of the socket in response to
this.

I have been trying to find a setting for this, but cannot find one. I have
also been trying to dig into the sources to see where this happens, but no
luck so far.

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From peter_booth at me.com  Tue Oct 31 17:25:45 2017
From: peter_booth at me.com (Peter Booth)
Date: Tue, 31 Oct 2017 13:25:45 -0400
Subject: higher precision timings [ Re: why delta only include the execution time of ngx_process_events not ngx_event_process_posted (Zhang Chao)
In-Reply-To:
References:
Message-ID:

I think that this discussion touches on another question - are millisecond
timings still sufficient when monitoring web applications?

I think that in 2017, with the astounding increases in processing power we
have seen in the last decade, millisecond timings are too imprecise. The
cost of capturing a timestamp in Linux on recent hardware is about 30
nanos, and the precision of such a timestamp is also around 30 nanos.

I think that there is a good argument to be made for exposing timestamps
at the maximum level of precision possible, rather than hiding what could
be useful diagnostic data.

Are there any plans within nginx to report higher resolution timings?

Peter

> On Oct 29, 2017, at 9:35 AM, yang chen wrote:
>
> Thanks for your reply, why is calling ngx_event_expire_timers
> unnecessary when the ngx_process_events handler returns so quickly that
> millisecond precision is not enough to display the calling time (less
> than 1 ms, maybe)?
>
> The ngx_process_events handler returning quickly doesn't mean
> ngx_event_process_posted returns quickly; maybe it executes for 2 ms or
> more.
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org  Tue Oct 31 17:26:45 2017
From: nginx-forum at forum.nginx.org (RKGood)
Date: Tue, 31 Oct 2017 13:26:45 -0400
Subject: Resolver not re-resolving new ip address of an AWS ELB
In-Reply-To:
References:
Message-ID: <8ac79f05aa3424d54f5333ca2c0acf2e.NginxMailingListEnglish@forum.nginx.org>

Thank you for your replies.
I think we have found the root cause. We have found the following: when
you are using variables in a proxy_pass directive, nginx will use runtime
resolving, except if:

- the target server is declared as an IP address
- the target server name is part of an upstream server group
- the target server name has already been resolved (e.g. it matches a
  server name in another server block)

In our case we have one server block, with default set. Apache is sending
the server name as legacy.domain.com and nginx is also proxying to
legacy.domain.com, so I believe that because the server name and the proxy
host are the same, nginx does not try to re-resolve the IP address.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277101,277164#msg-277164

From nginx-forum at forum.nginx.org  Tue Oct 31 17:59:13 2017
From: nginx-forum at forum.nginx.org (blason)
Date: Tue, 31 Oct 2017 13:59:13 -0400
Subject: Nginx reverse proxy with Sharepoint web
Message-ID:

Hi Guys,

I am kinda facing an issue with SharePoint sub-site authentication with
nginx as a reverse proxy. Somehow the primary site gets authenticated
perfectly with upstream and ntlm, however sub-sites show 401 and 404
errors.

Does anyone have a use case or working configuration with SharePoint and
nginx as a reverse proxy?

Thanks and Regards,
Blason

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277165,277165#msg-277165
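For the SharePoint question above, a common starting point looks like the sketch below. This is a hedged sketch only, untested against SharePoint; all host names are placeholders, and note that the ntlm upstream directive is a commercial NGINX Plus feature, not part of open-source nginx:

```nginx
upstream sharepoint_backend {            # placeholder name
    server sp.internal.example:80;       # placeholder backend address
    ntlm;                                # NGINX Plus only
}

server {
    listen 443 ssl;
    server_name portal.example.com;      # placeholder

    location / {
        proxy_pass http://sharepoint_backend;

        # NTLM authenticates the TCP connection rather than each request,
        # so the upstream connection must be kept alive:
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
    }
}
```

If sub-sites still return 401/404 with a setup like this, comparing the Host header and URL paths that SharePoint's alternate access mappings expect against what the proxy sends is a reasonable next debugging step.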