From nginx-forum at forum.nginx.org Thu Feb 1 06:46:59 2018
From: nginx-forum at forum.nginx.org (prasanna)
Date: Thu, 01 Feb 2018 01:46:59 -0500
Subject: why my nginx so many waiting users ?
In-Reply-To: 
References: 
Message-ID: <6f4d5b999fb62aa46731d267eb561bed.NginxMailingListEnglish@forum.nginx.org>

Active connections: 8
server accepts handled requests
 1292 1292 3884
Reading: 0 Writing: 2 Waiting: 2

The status page always shows waiting connections. What is "waiting", and how can I bring it back to a normal value like "0"?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,49182,278314#msg-278314

From mdounin at mdounin.ru Thu Feb 1 13:03:03 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 1 Feb 2018 16:03:03 +0300
Subject: why my nginx so many waiting users ?
In-Reply-To: <6f4d5b999fb62aa46731d267eb561bed.NginxMailingListEnglish@forum.nginx.org>
References: <6f4d5b999fb62aa46731d267eb561bed.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180201130303.GL24410@mdounin.ru>

Hello!

On Thu, Feb 01, 2018 at 01:46:59AM -0500, prasanna wrote:

> Active connections: 8
> server accepts handled requests
> 1292 1292 3884
> Reading: 0 Writing: 2 Waiting: 2
>
> The status page always shows waiting connections. What is "waiting", and how can I bring it back to a normal value like "0"?

http://nginx.org/en/docs/http/ngx_http_stub_status_module.html#data

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Fri Feb 2 18:04:58 2018
From: nginx-forum at forum.nginx.org (rajuginne)
Date: Fri, 02 Feb 2018 13:04:58 -0500
Subject: why my nginx so many waiting users ?
In-Reply-To: <6f4d5b999fb62aa46731d267eb561bed.NginxMailingListEnglish@forum.nginx.org>
References: <6f4d5b999fb62aa46731d267eb561bed.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <6da3d7d59a85ed2bfb63956e4ad7830d.NginxMailingListEnglish@forum.nginx.org>

I was worried at first, but I realized these are keepalive connections. If you have any knowledge of LEMP configuration, please let me know. Anyway, thank you for the nginx forum.
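For reference, a sketch of the status endpoint behind these counters: per the stub_status documentation, "Waiting" counts idle keepalive connections, i.e. Active minus (Reading + Writing). The port, path, and allow list below are illustrative choices, not from the thread:

```nginx
# Sketch: expose the stub_status counters on a restricted endpoint.
# Requires nginx built with ngx_http_stub_status_module.
server {
    listen 8080;

    location = /basic_status {
        stub_status;         # use "stub_status on;" before nginx 1.7.5
        allow 127.0.0.1;     # monitoring host only
        deny all;
    }
}
```

A non-zero Waiting value is normal and usually healthy: it means clients are reusing connections. Lowering keepalive_timeout (or setting it to 0) drives Waiting toward 0, at the cost of a new TCP handshake for every request.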
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,49182,278322#msg-278322

From joel.parker.gm at gmail.com Mon Feb 5 01:26:22 2018
From: joel.parker.gm at gmail.com (Joel Parker)
Date: Sun, 4 Feb 2018 19:26:22 -0600
Subject: N00b: Forwarding the full request to upstream server
Message-ID: 

I have a situation where I receive a request like:

http://device.healthcheck.com/ready

I want this to be sent to a server upstream but keep the full request intact. For example:

server {
    resolver 8.8.8.8;
    listen 80;
    location / {
        # send this to 192.168.10.34
        proxy_pass $host:$server_port$uri$is_args$args;
    }
}

I know this is probably easy to do but I am not sure how to accomplish it.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From michael.friscia at yale.edu Mon Feb 5 01:33:13 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Mon, 5 Feb 2018 01:33:13 +0000
Subject: N00b: Forwarding the full request to upstream server
In-Reply-To: 
References: 
Message-ID: <16EC0A87-F532-43C6-A164-FCD73D41E6B1@yale.edu>

I'm interested in this answer as well but will offer what I've done so far. In your example, the only thing I've added are these three lines in the location block:

proxy_cache_valid 200 1y;
proxy_cache_bypass 1;
proxy_no_cache 1;

But I am not sure I am doing this correctly, because I am running into a situation where I don't think the original request is staying intact.
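A minimal sketch of what the question asks for, assuming the upstream really is 192.168.10.34 as in the comment in Joel's example: $request_uri carries the complete original URI including the query string, so the request reaches the upstream intact.

```nginx
# Sketch: pass the complete original request to a fixed upstream.
# 192.168.10.34 is taken from the comment in the question.
server {
    listen 80;

    location / {
        # $request_uri = full original URI with arguments,
        # e.g. /ready?probe=1 is forwarded unchanged.
        proxy_pass http://192.168.10.34$request_uri;
        proxy_set_header Host $host;   # keep the original Host header
    }
}
```

Note that when proxy_pass is given without any URI part (proxy_pass http://192.168.10.34;), nginx already forwards the original request URI in the same form the client sent it; the variable form is mainly needed when the scheme or host is built dynamically.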
Other settings that are common to both cache and bypass (also not sure these are all correct for bypassing): proxy_cache_valid any 3m; proxy_cache_revalidate on; proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_503 http_504; proxy_cache_background_update on; proxy_cache_lock on; proxy_cache_methods GET HEAD; proxy_cache_key "$scheme$host$request_uri"; proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie; proxy_hide_header X-Accel-Expires; proxy_hide_header Expires; proxy_hide_header Cache-Control; proxy_hide_header Set-Cookie; proxy_hide_header Pragma; proxy_hide_header Server; proxy_hide_header Request-Context; proxy_hide_header X-Powered-By; proxy_hide_header X-AspNet-Version; proxy_hide_header X-AspNetMvc-Version; proxy_set_header X-Forwarded-Server $hostname; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Accept-Encoding identity; server_tokens off; ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu From: nginx on behalf of Joel Parker Reply-To: "nginx at nginx.org" Date: Sunday, February 4, 2018 at 8:26 PM To: "nginx at nginx.org" Subject: N00b: Forwarding the full request to upstream server I have a situation where I receive a request like: http://device.healthcheck.com/ready I want this to be sent to a server upstream but keep the full request intact. For example: server { resolver 8.8.8.8; listen 80; location / { // send this too 192.168.10.34 proxy_pass $host:$server_port$uri$is_args$args; } } I know this is probably easy to do but I am not sure how to accomplish this. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Mon Feb 5 09:08:33 2018 From: nginx-forum at forum.nginx.org (loopback_proxy) Date: Mon, 05 Feb 2018 04:08:33 -0500 Subject: nginx cache issue (http upstream cache: -5) Message-ID: <5fa1805349bd35467aa19c60ee919fe0.NginxMailingListEnglish@forum.nginx.org> I am new to nginx caching but have worked with nginx a lot. I tried enabling caching feature in our repository but it never worked so I thought I will pull a fresh copy of nginx and turn it on. I ended with the same issue. For some reason, nginx is not able to create the cache file in the cache dir. I have already turned on proxy buffering and set full rw permission for all users on the cache dir. I also gdb'ed the code and it seems like it gets into ngx_open_and_stat_file from ngx_open_cached_file ( http://lxr.nginx.org/source/src/core/ngx_open_file_cache.c?v=nginx-1.12.0#0144 ) and it tries to open an non existent file in RDONLY mode if the of->log is not set ( http://lxr.nginx.org/source/src/core/ngx_open_file_cache.c?v=nginx-1.12.0#0869 ) Any help here would be awesome. I have also pasted my relevant config and debug log below. Here's my configuration.. (only the relevant part) http { proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g; server { listen 80; server_name localhost; proxy_buffering on; location / { proxy_pass http://52.216.66.40$request_uri; proxy_set_header Host images.clipartpanda.com; proxy_buffering on; proxy_cache STATIC; add_header X-Proxy-Cache $upstream_cache_status; root html; index index.html index.htm; } } And here's the debug log showing whats that nginx is failing to create the cache file. 
2018/02/05 08:57:26 [debug] 22509#0: *7 http cache key: "http://52.216.66.40" 2018/02/05 08:57:26 [debug] 22509#0: *7 http cache key: "/teddy-clip-art-teddy-md.png" 2018/02/05 08:57:26 [debug] 22509#0: *7 add cleanup: 00000000006D3560 2018/02/05 08:57:26 [debug] 22509#0: shmtx lock 2018/02/05 08:57:26 [debug] 22509#0: slab alloc: 120 slot: 4 2018/02/05 08:57:26 [debug] 22509#0: slab alloc: 00007FD2C31AA080 2018/02/05 08:57:26 [debug] 22509#0: shmtx unlock 2018/02/05 08:57:26 [debug] 22509#0: *7 http file cache exists: -5 e:0 2018/02/05 08:57:26 [debug] 22509#0: *7 cache file: "/tmp/nginx/cache/3/4f/ecbc29d001e852b40f09e913b5ced4f3" 2018/02/05 08:57:26 [debug] 22509#0: *7 add cleanup: 00000000006D35B0 2018/02/05 08:57:26 [debug] 22509#0: *7 http upstream cache: -5 And for now i have set "777" perm to the nginx cache dir. >> ls -lrt /tmp/nginx/ drwxrwxrwx 2 test root 4096 Feb 5 08:20 cache Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278346,278346#msg-278346 From nginx-forum at forum.nginx.org Mon Feb 5 09:17:58 2018 From: nginx-forum at forum.nginx.org (loopback_proxy) Date: Mon, 05 Feb 2018 04:17:58 -0500 Subject: N00b: Forwarding the full request to upstream server In-Reply-To: <16EC0A87-F532-43C6-A164-FCD73D41E6B1@yale.edu> References: <16EC0A87-F532-43C6-A164-FCD73D41E6B1@yale.edu> Message-ID: You could just do proxy_pass http://192.168.10.34$request_uri See this for more https://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_uri Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278344,278347#msg-278347 From pratyush at hostindya.com Mon Feb 5 10:02:00 2018 From: pratyush at hostindya.com (Pratyush Kumar) Date: Mon, 05 Feb 2018 15:32:00 +0530 Subject: nginx cache issue (http upstream cache: -5) In-Reply-To: <5fa1805349bd35467aa19c60ee919fe0.NginxMailingListEnglish@forum.nginx.org> Message-ID: An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Feb 5 12:56:58 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Feb 2018 15:56:58 +0300 Subject: nginx cache issue (http upstream cache: -5) In-Reply-To: <5fa1805349bd35467aa19c60ee919fe0.NginxMailingListEnglish@forum.nginx.org> References: <5fa1805349bd35467aa19c60ee919fe0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180205125658.GQ24410@mdounin.ru> Hello! On Mon, Feb 05, 2018 at 04:08:33AM -0500, loopback_proxy wrote: > I am new to nginx caching but have worked with nginx a lot. I tried enabling > caching feature in our repository but it never worked so I thought I will > pull a fresh copy of nginx and turn it on. I ended with the same issue. For > some reason, nginx is not able to create the cache file in the cache dir. I > have already turned on proxy buffering and set full rw permission for all > users on the cache dir. I also gdb'ed the code and it seems like it gets > into ngx_open_and_stat_file from ngx_open_cached_file ( > http://lxr.nginx.org/source/src/core/ngx_open_file_cache.c?v=nginx-1.12.0#0144 > ) and it tries to open an non existent file in RDONLY mode if the of->log is > not set ( > http://lxr.nginx.org/source/src/core/ngx_open_file_cache.c?v=nginx-1.12.0#0869 > ) The code in question is not related to your problem. Instead, you should check what your backend returns. There are number of cases when nginx won't cache a response, see description of the proxy_cache_valid directive for the details: http://nginx.org/r/proxy_cache_valid [...] > And here's the debug log showing whats that nginx is failing to create the > cache file. 
> 2018/02/05 08:57:26 [debug] 22509#0: *7 http cache key: > "http://52.216.66.40" > 2018/02/05 08:57:26 [debug] 22509#0: *7 http cache key: > "/teddy-clip-art-teddy-md.png" > 2018/02/05 08:57:26 [debug] 22509#0: *7 add cleanup: 00000000006D3560 > 2018/02/05 08:57:26 [debug] 22509#0: shmtx lock > 2018/02/05 08:57:26 [debug] 22509#0: slab alloc: 120 slot: 4 > 2018/02/05 08:57:26 [debug] 22509#0: slab alloc: 00007FD2C31AA080 > 2018/02/05 08:57:26 [debug] 22509#0: shmtx unlock > 2018/02/05 08:57:26 [debug] 22509#0: *7 http file cache exists: -5 e:0 > 2018/02/05 08:57:26 [debug] 22509#0: *7 cache file: > "/tmp/nginx/cache/3/4f/ecbc29d001e852b40f09e913b5ced4f3" > 2018/02/05 08:57:26 [debug] 22509#0: *7 add cleanup: 00000000006D35B0 > 2018/02/05 08:57:26 [debug] 22509#0: *7 http upstream cache: -5 Your assumption that it is "failing to create" is wrong. Instead, this debug log snippet shows that nginx checked if there is a cached response available, and the result is negative. -- Maxim Dounin http://mdounin.ru/ From kaushalshriyan at gmail.com Mon Feb 5 18:26:04 2018 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Mon, 5 Feb 2018 23:56:04 +0530 Subject: Allow and Deny IP's Message-ID: Hi, When i run this curl call -> curl -X GET http://13.127.165.226/ -H 'cache-control: no-cache' -H 'postman-token: 2494a4a7-6791-2426-cedf-d0bcaa1cd90a' -H 'x-forwarded-for: 12.12.12.13.11' Ideally the request should not be allowed and the access log should report 403 instead of 200 I get 200 OK in the access.log location / { proxy_set_header X-Forwarded-For $remote_addr; allow 182.76.214.126/32; allow 116.75.80.47/32; deny all; error_page 404 /404.html; location = /40x.html { } Please let me know if i am missing anything. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... 
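Picking up Maxim's point in configuration form: nginx only stores a response it considers cacheable, so if the backend sends no Expires/Cache-Control/X-Accel-Expires (or sends Set-Cookie), nothing is written to the cache directory. A hedged sketch based on the configuration from the question; the upstream name, paths, and times are illustrative:

```nginx
# Sketch: force a default cache lifetime when the backend sends
# no caching headers of its own.
proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=STATIC:10m
                 inactive=24h max_size=1g;

server {
    listen 80;

    location / {
        proxy_pass http://backend.example.com;   # placeholder upstream
        proxy_cache STATIC;
        # Cache 200/301/302 responses for 10 minutes even without
        # Expires/Cache-Control from the backend:
        proxy_cache_valid 200 301 302 10m;
        add_header X-Proxy-Cache $upstream_cache_status;
    }
}
```

With a setup like this, two requests for the same URL should show X-Proxy-Cache: MISS followed by HIT.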
URL: From nginx-forum at forum.nginx.org Mon Feb 5 19:11:49 2018 From: nginx-forum at forum.nginx.org (loopback_proxy) Date: Mon, 05 Feb 2018 14:11:49 -0500 Subject: nginx cache issue (http upstream cache: -5) In-Reply-To: <20180205125658.GQ24410@mdounin.ru> References: <20180205125658.GQ24410@mdounin.ru> Message-ID: <1c6c75f5de4f764a6e1c3fc53756a5d5.NginxMailingListEnglish@forum.nginx.org> Ahhh interesting, that did the trick. Thank you so much. I have been also trying to understand the internals of nginx caching and how it works. I read the nginx blog about the overall architecture and the nginx man page about proxy_cache_* directives. I am looking for the internal architecture of the how the caching subsystem works. If you guys have any documentation or article about it, that would be so useful. - Karthik Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278346,278370#msg-278370 From ph.gras at worldonline.fr Tue Feb 6 00:02:22 2018 From: ph.gras at worldonline.fr (Ph. Gras) Date: Tue, 6 Feb 2018 01:02:22 +0100 Subject: Allow and Deny IP's In-Reply-To: References: Message-ID: Hello there! 
location ~* wp-login\.php$ { allow 127.0.0.1; allow A.B.C.D; // My server's IP allow E.F.G.H/13; // The IP range where I am deny all; if ($http_user_agent = "-") { return 403;} if ($http_user_agent = "") { return 403;} if ($http_referer = "-") { return 403;} if ($http_referer = "") { return 403;} limit_conn limit 5; } 185.124.153.168 - - [05/Feb/2018:21:36:12 +0100] "GET /wp-login.php HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" 185.124.153.168 - - [05/Feb/2018:21:36:12 +0100] "POST /wp-login.php HTTP/1.1" 200 1688 "http://www.example.com/wp-login.php" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" 81.177.126.235 - - [05/Feb/2018:22:08:21 +0100] "GET /wp-login.php HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" 81.177.126.235 - - [05/Feb/2018:22:08:22 +0100] "POST /wp-login.php HTTP/1.1" 200 1688 "http://www.example.com/wp-login.php" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" 109.252.93.223 - - [06/Feb/2018:00:20:05 +0100] "GET /wp-login.php HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" 109.252.93.223 - - [06/Feb/2018:00:20:05 +0100] "POST /wp-login.php HTTP/1.1" 200 1688 "http://www.example.com/wp-login.php" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" 95.26.90.3 - - [06/Feb/2018:00:20:10 +0100] "GET /wp-login.php HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" 95.26.90.3 - - [06/Feb/2018:00:20:11 +0100] "POST /wp-login.php HTTP/1.1" 200 1688 "http://www.example.com/wp-login.php" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" Me too :-( Ph. 
Gras

> Hi,
>
> When i run this curl call -> curl -X GET http://13.127.165.226/ -H 'cache-control: no-cache' -H 'postman-token: 2494a4a7-6791-2426-cedf-d0bcaa1cd90a' -H 'x-forwarded-for: 12.12.12.13.11'
>
> Ideally the request should not be allowed and the access log should report 403 instead of 200
> I get 200 OK in the access.log
>
> location / {
> proxy_set_header X-Forwarded-For $remote_addr;
> allow 182.76.214.126/32;
> allow 116.75.80.47/32;
> deny all;
> error_page 404 /404.html;
> location = /40x.html {
> }
>
> Please let me know if i am missing anything.
>
> Best Regards,
>
> Kaushal
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From akhil.dangore at calsoftinc.com Tue Feb 6 12:21:44 2018
From: akhil.dangore at calsoftinc.com (Akhil Dangore)
Date: Tue, 6 Feb 2018 17:51:44 +0530
Subject: Help needed on Nginx plus configuration
In-Reply-To: <2f4d8228-d6f0-1226-8a38-e019f49c2bbb@calsoftinc.com>
References: <2f4d8228-d6f0-1226-8a38-e019f49c2bbb@calsoftinc.com>
Message-ID: 

Hello Team,

I am trying Nginx plus for our company product. Currently I am using a trial account to meet our requirement, but I am facing an issue.

A detailed explanation of the issue:

* I have configured Nginx plus with the attached file. Here I have enabled the nginx API to reconfigure Nginx plus at run time.
* I am facing an issue configuring it through the nginx API; below are some example requests:
  o curl localhost:80/api/2/http/requests
    + {"total":111562,"current":1} - working fine
  o curl localhost:80/api/2/http/upstreams
    + {} - empty dict - not working, even though I have configured upstreams in the nginx config file as below:

        upstream backend {
            server localhost:8080;
            server localhost:8090;
        }
        location / {
            proxy_pass http://backend;
        }

    + Why am I receiving an empty dict?

If you need more details, please let me know.
Regards, Akhil -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- user nginx; worker_processes auto; error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; upstream backend { server localhost:8080; server localhost:8090; } server { listen 80; server_name localhost; location /nginx_status { # Enable Nginx stats stub_status on; allow all; } location /api { api write=on; allow all; } location / { proxy_pass http://backend; } } #gzip on; include /etc/nginx/conf.d/*.conf; } # TCP/UDP proxy and load balancing block # #stream { # Example configuration for TCP load balancing # upstream stream_backend { # zone tcp_servers 64k; # server backend1.example.com:12345; # server backend2.example.com:12345; # } # server { # listen 12345; # status_zone tcp_server; # proxy_pass stream_backend; # } #} From iippolitov at nginx.com Tue Feb 6 12:55:07 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Tue, 6 Feb 2018 15:55:07 +0300 Subject: Help needed on Nginx plus configuration In-Reply-To: References: <2f4d8228-d6f0-1226-8a38-e019f49c2bbb@calsoftinc.com> Message-ID: <22874b9c-3810-cace-d0a3-f6461c2f5d72@nginx.com> Akhil, As a trial user you can ask evaluations at nginx.com to help you with this setup. API queries data in status zones. You should configure one for your upstream with 'zone backend 64k;' or similar statement inside upstream{} block. Regards. 
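Igor's suggestion, in configuration form: the NGINX Plus /api upstreams endpoint only reports upstream groups that have a shared memory zone. The zone name and size below are illustrative:

```nginx
upstream backend {
    zone backend 64k;        # makes this group visible to the Plus API
    server localhost:8080;
    server localhost:8090;
}
```

After a reload, curl localhost:80/api/2/http/upstreams should list the "backend" group instead of returning {}.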
On 06.02.2018 15:21, Akhil Dangore wrote: > > Hello Team, > > I am trying Nginx plus for our company product, Currently I am using > trail account to achieve our requirement but I am facing some issue. > > Details explanation of issue: > > * I have configured Nginx plus with attatched file, Here I have > enabled nginx API to reconfigure nginx plus run time > * I am facing some issue to configure using nginx API, below are > some examples of requests: > o curl localhost:80/api/2/http/requests > + {"total":111562,"current":1} - Working fine > o curl localhost:80/api/2/http/upstreams > + {} - empty dict - Not working fine, since I have > configured upstreams in nginx.config file as below: > + ??? ??? upstream backend { > ??? ??? ??? server localhost:8080; > ??? ??? ??? server localhost:8090; > > ??? ??? } > ??????? location / { > ??????????? proxy_pass http://backend; > ??????? } > + Why am i receiving empty dict ? > > If you need more details, please let me know. > > Regards, > Akhil > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Tue Feb 6 14:56:04 2018 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Tue, 6 Feb 2018 20:26:04 +0530 Subject: Allow and Deny IP's In-Reply-To: References: Message-ID: On Tue, Feb 6, 2018 at 5:32 AM, Ph. Gras wrote: > Hello there! 
> > > location ~* wp-login\.php$ { > allow 127.0.0.1; > allow A.B.C.D; // My server's IP > allow E.F.G.H/13; // The IP range where I am > deny all; > if ($http_user_agent = "-") { return 403;} > if ($http_user_agent = "") { return 403;} > if ($http_referer = "-") { return 403;} > if ($http_referer = "") { return 403;} > limit_conn limit 5; > } > > 185.124.153.168 - - [05/Feb/2018:21:36:12 +0100] "GET /wp-login.php > HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) > Gecko/20100101 Firefox/34.0" > 185.124.153.168 - - [05/Feb/2018:21:36:12 +0100] "POST /wp-login.php > HTTP/1.1" 200 1688 "http://www.example.com/wp-login.php" "Mozilla/5.0 > (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" > 81.177.126.235 - - [05/Feb/2018:22:08:21 +0100] "GET /wp-login.php > HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) > Gecko/20100101 Firefox/34.0" > 81.177.126.235 - - [05/Feb/2018:22:08:22 +0100] "POST /wp-login.php > HTTP/1.1" 200 1688 "http://www.example.com/wp-login.php" "Mozilla/5.0 > (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" > 109.252.93.223 - - [06/Feb/2018:00:20:05 +0100] "GET /wp-login.php > HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) > Gecko/20100101 Firefox/34.0" > 109.252.93.223 - - [06/Feb/2018:00:20:05 +0100] "POST /wp-login.php > HTTP/1.1" 200 1688 "http://www.example.com/wp-login.php" "Mozilla/5.0 > (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" > 95.26.90.3 - - [06/Feb/2018:00:20:10 +0100] "GET /wp-login.php HTTP/1.1" > 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 > Firefox/34.0" > 95.26.90.3 - - [06/Feb/2018:00:20:11 +0100] "POST /wp-login.php HTTP/1.1" > 200 1688 "http://www.example.com/wp-login.php" "Mozilla/5.0 (Windows NT > 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" > > Me too :-( > > Ph. 
Gras > > > Hi, > > > > When i run this curl call -> curl -X GET http://13.127.165.226/ -H > 'cache-control: no-cache' -H 'postman-token: 2494a4a7-6791-2426-cedf-d0bcaa1cd90a' > -H 'x-forwarded-for: 12.12.12.13.11' > > > > Ideally the request should not be allowed and the access log should > report 403 instead of 200 > > I get 200 OK in the access.log > > > > location / { > > proxy_set_header X-Forwarded-For $remote_addr; > > allow 182.76.214.126/32; > > allow 116.75.80.47/32; > > deny all; > > error_page 404 /404.html; > > location = /40x.html { > > } > > > > Please let me know if i am missing anything. > > > > Best Regards, > > > > Kaushal > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hi, Checking in if anyone can pitch in for help for my post to this mailing list. Thanks in Advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Feb 6 15:29:35 2018 From: nginx-forum at forum.nginx.org (Credo) Date: Tue, 06 Feb 2018 10:29:35 -0500 Subject: localhost works but server_name times out! Message-ID: I'm new to nginx and I'm trying to learn it fast so that I can use it in a work project. But I have a weird problem. I have a django project which I run using uwsgi and I'm trying to use nginx as a reverse proxy for it. It works fine as long as I access it through localhost:port, but when I use the server name, it just gets stuck until it times out. There is no error, not even in /var/log/nginx/error.log. 
These are my configurations: /etc/nginx/conf.d/default.conf: server { listen 9506; server_name localhost; charset utf-8; client_max_body_size 75M; location / { root /home/user/shayan/Desktop/djangoProjects/user_management; uwsgi_pass unix:/home/shayan/Desktop/djangoProjects/user_management/uwsgi-nginx.sock; include /etc/nginx/uwsgi_params; } } /home/shayan/Desktop/djangoProjects/user_management/uwsgi.ini: [uwsgi] ini=:base socket=%duwsgi-nginx.sock master=true processes=4 [dev] ini=:base socket= :8001 [local] init= :base http= :8000 [base] chmod-socket=666 and this is how I run uwsgi: uwsgi --wsgi-file user_management/wsgi.py --ini uwsgi.ini So...what's wrong here? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278385,278385#msg-278385 From Jason.Whittington at equifax.com Tue Feb 6 15:43:10 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Tue, 6 Feb 2018 15:43:10 +0000 Subject: [IE] localhost works but server_name times out! In-Reply-To: References: Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432AFA0B4@STLEISEXCMBX3.eis.equifax.com> Try adding the server name you are using to the server_name directive. You can specify multiple, e.g: server_name dog cat dogcat; Jason -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Credo Sent: Tuesday, February 06, 2018 9:30 AM To: nginx at nginx.org Subject: [IE] localhost works but server_name times out! I'm new to nginx and I'm trying to learn it fast so that I can use it in a work project. But I have a weird problem. I have a django project which I run using uwsgi and I'm trying to use nginx as a reverse proxy for it. It works fine as long as I access it through localhost:port, but when I use the server name, it just gets stuck until it times out. There is no error, not even in /var/log/nginx/error.log. 
These are my configurations: /etc/nginx/conf.d/default.conf: server { listen 9506; server_name localhost; charset utf-8; client_max_body_size 75M; location / { root /home/user/shayan/Desktop/djangoProjects/user_management; uwsgi_pass unix:/home/shayan/Desktop/djangoProjects/user_management/uwsgi-nginx.sock; include /etc/nginx/uwsgi_params; } } /home/shayan/Desktop/djangoProjects/user_management/uwsgi.ini: [uwsgi] ini=:base socket=%duwsgi-nginx.sock master=true processes=4 [dev] ini=:base socket= :8001 [local] init= :base http= :8000 [base] chmod-socket=666 and this is how I run uwsgi: uwsgi --wsgi-file user_management/wsgi.py --ini uwsgi.ini So...what's wrong here? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278385,278385#msg-278385 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax? is a registered trademark of Equifax Inc. All rights reserved. From francis at daoine.org Tue Feb 6 23:56:34 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 6 Feb 2018 23:56:34 +0000 Subject: Allow and Deny IP's In-Reply-To: References: Message-ID: <20180206235634.GH3063@daoine.org> On Tue, Feb 06, 2018 at 01:02:22AM +0100, Ph. Gras wrote: Hi there, > location ~* wp-login\.php$ { > 185.124.153.168 - - [05/Feb/2018:21:36:12 +0100] "GET /wp-login.php HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" > Me too :-( Have you any reason to believe that this location is used to handle this request? $ nginx -T | grep 'server\|location' will possibly give a useful hint in that direction. 
For what it is worth, if I use: == server { listen 8888; location /x/ { allow 127.0.0.1; deny all; } } == then $ curl -i http://127.0.0.1:8888/x/ gives me http 200 (html/x/index.html exists), while $ curl -i http://127.0.0.2:8888/x/ gives me http 403. So - "works for me". What do you see, when you test that? What parts of your current config do you have to add, in order for that test to fail for you? f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Feb 7 00:02:49 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 7 Feb 2018 00:02:49 +0000 Subject: Allow and Deny IP's In-Reply-To: References: Message-ID: <20180207000249.GI3063@daoine.org> On Mon, Feb 05, 2018 at 11:56:04PM +0530, Kaushal Shriyan wrote: Hi there, > When i run this curl call -> curl -X GET http://13.127.165.226/ -H > 'cache-control: no-cache' -H 'postman-token: > 2494a4a7-6791-2426-cedf-d0bcaa1cd90a' -H 'x-forwarded-for: 12.12.12.13.11' > > Ideally the request should not be allowed and the access log should report > 403 instead of 200 Why should it not be allowed? What IP address are you making the request from? > I get 200 OK in the access.log > > location / { > proxy_set_header X-Forwarded-For $remote_addr; > allow 182.76.214.126/32; > allow 116.75.80.47/32; > deny all; > error_page 404 /404.html; > location = /40x.html { > } > > Please let me know if i am missing anything. Your config fragment is incomplete. But when I use something similar, I get the expected http 200 from an address in the "allow" list, and the expected http 403 from an address not in the "allow" list. The output of "nginx -V" might be interesting, in case you are using a version that has broken allow/deny handling. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Feb 7 05:08:03 2018 From: nginx-forum at forum.nginx.org (Credo) Date: Wed, 07 Feb 2018 00:08:03 -0500 Subject: [IE] localhost works but server_name times out! 
In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A432AFA0B4@STLEISEXCMBX3.eis.equifax.com> References: <995C5C9AD54A3C419AF1C20A8B6AB9A432AFA0B4@STLEISEXCMBX3.eis.equifax.com> Message-ID: Sorry, I posted the wrong configuration. I did add the server_name I was using. I just changed it to localhost to see what happens. But when the server name was the one I was using, I only got time outs! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278386,278393#msg-278393 From vbart at nginx.com Wed Feb 7 13:39:19 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 07 Feb 2018 16:39:19 +0300 Subject: [IE] localhost works but server_name times out! In-Reply-To: References: <995C5C9AD54A3C419AF1C20A8B6AB9A432AFA0B4@STLEISEXCMBX3.eis.equifax.com> Message-ID: <2144462.SLfCsMbj12@vbart-workstation> On Wednesday 07 February 2018 00:08:03 Credo wrote: > Sorry, I posted the wrong configuration. I did add the server_name I was > using. I just changed it to localhost to see what happens. But when the > server name was the one I was using, I only got time outs! Are you sure that DNS record points to the right server? wbr, Valentin V. Bartenev From kaushalshriyan at gmail.com Wed Feb 7 16:27:04 2018 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Wed, 7 Feb 2018 21:57:04 +0530 Subject: Allow and Deny IP's In-Reply-To: <20180207000249.GI3063@daoine.org> References: <20180207000249.GI3063@daoine.org> Message-ID: On Wed, Feb 7, 2018 at 5:32 AM, Francis Daly wrote: > On Mon, Feb 05, 2018 at 11:56:04PM +0530, Kaushal Shriyan wrote: > > Hi there, > > > When i run this curl call -> curl -X GET http://13.127.165.226/ -H > > 'cache-control: no-cache' -H 'postman-token: > > 2494a4a7-6791-2426-cedf-d0bcaa1cd90a' -H 'x-forwarded-for: > 12.12.12.13.11' > > > > Ideally the request should not be allowed and the access log should > report > > 403 instead of 200 > > Why should it not be allowed? 
> Hi Francis, In the curl request I am adding http header -H 'x-forwarded-for: 12.12.12.13.11' curl -X GET http://13.127.165.226/ -H 'cache-control: no-cache' -H > 'postman-token: 2494a4a7-6791-2426-cedf-d0bcaa1cd90a' -H > 'x-forwarded-for: 12.12.12.13.11' IP :- 12.12.12.13.11 should be denied with 403 Please let me know if i am missing anything. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Feb 7 16:37:24 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 7 Feb 2018 16:37:24 +0000 Subject: Allow and Deny IP's In-Reply-To: References: <20180207000249.GI3063@daoine.org> Message-ID: <20180207163724.GJ3063@daoine.org> On Wed, Feb 07, 2018 at 09:57:04PM +0530, Kaushal Shriyan wrote: > On Wed, Feb 7, 2018 at 5:32 AM, Francis Daly wrote: > > On Mon, Feb 05, 2018 at 11:56:04PM +0530, Kaushal Shriyan wrote: Hi there, > In the curl request I am adding http header -H 'x-forwarded-for: > 12.12.12.13.11' > > curl -X GET http://13.127.165.226/ -H 'cache-control: no-cache' -H > > 'postman-token: 2494a4a7-6791-2426-cedf-d0bcaa1cd90a' -H > > 'x-forwarded-for: 12.12.12.13.11' > > > IP :- 12.12.12.13.11 should be denied with 403 > > Please let me know if i am missing anything. No part of your config that I can see says to use the contents of the x-forwarded-for header to determine whether the request should be allowed or denied. Is that in a part of the configuration that you did not show? (Also: 12.12.12.13.11 is not an IP address.) f -- Francis Daly francis at daoine.org From ph.gras at worldonline.fr Wed Feb 7 18:28:37 2018 From: ph.gras at worldonline.fr (Ph. 
Gras) Date: Wed, 7 Feb 2018 19:28:37 +0100 Subject: Allow and Deny IP's In-Reply-To: <20180206235634.GH3063@daoine.org> References: <20180206235634.GH3063@daoine.org> Message-ID: <74DAF65F-5675-473B-8474-567F4F119E81@worldonline.fr> Hi Francis, >> location ~* wp-login\.php$ { > >> 185.124.153.168 - - [05/Feb/2018:21:36:12 +0100] "GET /wp-login.php HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" > >> Me too :-( > > Have you any reason to believe that this location is used to handle this request? Yes, and this especially since before, it worked as expected :-( > > $ nginx -T | grep 'server\|location' > > will possibly give a useful hint in that direction. # nginx -T | grep "www.example.com/wp-login.php" nginx: invalid option: "T" Is something missing ? # apt-show-versions | grep nginx nginx:all/jessie 1.6.2-5+deb8u5 uptodate nginx-common:all/jessie 1.6.2-5+deb8u5 uptodate nginx-full:amd64/jessie 1.6.2-5+deb8u5 uptodate python-certbot-nginx:all/jessie-backports 0.10.2-1~bpo8+1 uptodate Thank you for your help, Ph. Gras From Jason.Whittington at equifax.com Wed Feb 7 20:11:57 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Wed, 7 Feb 2018 20:11:57 +0000 Subject: Allow and Deny IP's Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432AFAC0E@STLEISEXCMBX3.eis.equifax.com> I find that add_header always works well to verify that the location is being chosen the way you think. Try something like add_header X-NGINX-Route <some-value> always; to some of your location blocks and specify different distinct values for <some-value>. Then in your browser you can use F12 tools to verify that you are getting back the header you expected. Jason -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Ph.
Gras Sent: Wednesday, February 07, 2018 12:29 PM To: nginx at nginx.org Subject: [IE] Re: Allow and Deny IP's Hi Francis, >> location ~* wp-login\.php$ { > >> 185.124.153.168 - - [05/Feb/2018:21:36:12 +0100] "GET /wp-login.php HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" > >> Me too :-( > > Have you any reason to believe that this location is used to handle this request? Yes, and this especially since before, it worked as expected :-( > > $ nginx -T | grep 'server\|location' > > will possibly give a useful hint in that direction. # nginx -T | grep "www.example.com/wp-login.php" nginx: invalid option: "T" Is something missing ? # apt-show-versions | grep nginx nginx:all/jessie 1.6.2-5+deb8u5 uptodate nginx-common:all/jessie 1.6.2-5+deb8u5 uptodate nginx-full:amd64/jessie 1.6.2-5+deb8u5 uptodate python-certbot-nginx:all/jessie-backports 0.10.2-1~bpo8+1 uptodate Thank you for your help, Ph. Gras _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved. From ph.gras at worldonline.fr Wed Feb 7 22:46:40 2018 From: ph.gras at worldonline.fr (Ph. Gras) Date: Wed, 7 Feb 2018 23:46:40 +0100 Subject: Allow and Deny IP's In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A432AFAC0E@STLEISEXCMBX3.eis.equifax.com> References: <995C5C9AD54A3C419AF1C20A8B6AB9A432AFAC0E@STLEISEXCMBX3.eis.equifax.com> Message-ID: <037EB60A-9B9D-428B-AA5A-214D7B2418DE@worldonline.fr> Hmmm!
>>> location ~* wp-login\.php$ { >> >>> 185.124.153.168 - - [05/Feb/2018:21:36:12 +0100] "GET /wp-login.php HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" >> >>> Me too :-( >> >> Have you any reason to believe that this location is used to handle this request? > > Yes, and this especially since before, it worked as expected :-( You're right. It's working better with a / before path :-) location =/wp-login.php { # etc; } Thanks for all, Ph. Gras From nginx-forum at forum.nginx.org Thu Feb 8 02:56:15 2018 From: nginx-forum at forum.nginx.org (Credo) Date: Wed, 07 Feb 2018 21:56:15 -0500 Subject: [IE] localhost works but server_name times out! In-Reply-To: <2144462.SLfCsMbj12@vbart-workstation> References: <2144462.SLfCsMbj12@vbart-workstation> Message-ID: If by dns record, you mean uwsgi_pass, then yea, I'm sure. Otherwise localhost wouldn't work either, but it does. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278386,278408#msg-278408 From francis at daoine.org Thu Feb 8 08:24:01 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 8 Feb 2018 08:24:01 +0000 Subject: Allow and Deny IP's In-Reply-To: <74DAF65F-5675-473B-8474-567F4F119E81@worldonline.fr> References: <20180206235634.GH3063@daoine.org> <74DAF65F-5675-473B-8474-567F4F119E81@worldonline.fr> Message-ID: <20180208082401.GK3063@daoine.org> On Wed, Feb 07, 2018 at 07:28:37PM +0100, Ph. Gras wrote: Hi there, > >> location ~* wp-login\.php$ { > > > >> 185.124.153.168 - - [05/Feb/2018:21:36:12 +0100] "GET /wp-login.php HTTP/1.1" 200 1300 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" > > > >> Me too :-( > > > > Have you any reason to believe that this location is used to handle this request? 
> > Yes, and this especially since before, it worked as expected :-( I see in other mail that that has become fixed -- probably this location was not being used for this request, owing to an earlier "location ~ php", or something else that was changed since it had been working before. > > $ nginx -T | grep 'server\|location' > > > > will possibly give a useful hint in that direction. > > # nginx -T | grep "www.example.com/wp-login.php" > nginx: invalid option: "T" I actually meant literally "grep 'server\|location'", to show the server{} blocks (and server_name directives) and the location directives in your config, which might be enough to show which location{} is used for one request. But your nginx version is from before "-T" was added, so you would have to look in the config file (and any include:d files) directly, and there isn't a simple one-liner to do that. And now that it works for you, it is not important any more :-) f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Feb 8 08:46:00 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 8 Feb 2018 08:46:00 +0000 Subject: [IE] localhost works but server_name times out! In-Reply-To: References: <2144462.SLfCsMbj12@vbart-workstation> Message-ID: <20180208084600.GL3063@daoine.org> On Wed, Feb 07, 2018 at 09:56:15PM -0500, Credo wrote: Hi there, > If by dns record, you mean uwsgi_pass, then yea, I'm sure. Otherwise > localhost wouldn't work either, but it does. Your client (browser) should resolve the name "www.example.com" to an IP address that corresponds to the nginx server; and all network control devices (remote firewalls, local iptables or the like) should allow the traffic get from your client to the nginx IP:port. All of that has to happen before nginx gets involved. Since you can access http://localhost/, but not http://www.example.com/, the first thing to check is whether your client tries to talk to your nginx when it asks for http://www.example.com/. 
If it doesn't, you must fix things outside of nginx so that it does. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Feb 8 13:45:50 2018 From: nginx-forum at forum.nginx.org (beatnut) Date: Thu, 08 Feb 2018 08:45:50 -0500 Subject: HTTP/2 server push Message-ID: <5b88462dd011b38c2fba8e4fe2f95b75.NginxMailingListEnglish@forum.nginx.org> Hi everyone, According to the roadmap https://trac.nginx.org/nginx/roadmap server push will show up in a few days. My question is: How to mitigate the problem when we are pushing resources already being in the cache, for example pushed earlier? Do I have to make some workaround using cookies or can nginx handle that kind of situation? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278415,278415#msg-278415 From ru at nginx.com Thu Feb 8 19:41:03 2018 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 8 Feb 2018 22:41:03 +0300 Subject: HTTP/2 server push In-Reply-To: <5b88462dd011b38c2fba8e4fe2f95b75.NginxMailingListEnglish@forum.nginx.org> References: <5b88462dd011b38c2fba8e4fe2f95b75.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180208194103.GB75377@lo0.su> On Thu, Feb 08, 2018 at 08:45:50AM -0500, beatnut wrote: > Hi everyone, > According to the roadmap https://trac.nginx.org/nginx/roadmap server push > will show up in a few days. It's been committed today and will be available in the next release. > My question is: > How to mitigate the problem when we are pushing resources already being in the > cache, for example pushed earlier? Apache has this mechanism: http://httpd.apache.org/docs/2.4/mod/mod_http2.html#h2pushdiarysize But it doesn't mitigate a problem of wasted bandwidth with HTTP/2 push if a client opens a new connection, or a server is restarted. > Do I have to make some workaround using cookies or can nginx handle that kind > of situation? Happy you now also know that HTTP/2 push is mostly a myth when it comes to performance improvement.
In my opinion, it only has very limited application. From nginx-forum at forum.nginx.org Fri Feb 9 07:44:22 2018 From: nginx-forum at forum.nginx.org (aperfectman) Date: Fri, 09 Feb 2018 02:44:22 -0500 Subject: How is the progress to support DTLS Message-ID: <5508abbcf56baf9b296322606b885ffc.NginxMailingListEnglish@forum.nginx.org> Hello team, I am looking for a load balancer to support DTLS on UDP and found that there is experimental DTLS support in specific version 1.13.0 Nginx. http://nginx.org/patches/dtls/README.txt Just curious about the progress of releasing the official feature? And is it being supported in Nginx Plus? Thanks, Ted Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278434,278434#msg-278434 From nginx-forum at forum.nginx.org Fri Feb 9 08:58:00 2018 From: nginx-forum at forum.nginx.org (beatnut) Date: Fri, 09 Feb 2018 03:58:00 -0500 Subject: HTTP/2 server push In-Reply-To: <20180208194103.GB75377@lo0.su> References: <20180208194103.GB75377@lo0.su> Message-ID: Thank you for an answer. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278415,278435#msg-278435 From maxim at nginx.com Fri Feb 9 15:42:35 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 9 Feb 2018 18:42:35 +0300 Subject: How is the progress to support DTLS In-Reply-To: <5508abbcf56baf9b296322606b885ffc.NginxMailingListEnglish@forum.nginx.org> References: <5508abbcf56baf9b296322606b885ffc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7cb11a7f-6639-9725-4929-b3c40da26fdf@nginx.com> Hi Ted, On 09/02/2018 10:44, aperfectman wrote: > Hello team, > > I am looking for a load balancer to support DTLS on UDP and found that > there is experimental DTLS support in specific version 1.13.0 Nginx. > http://nginx.org/patches/dtls/README.txt > > Just curious about the progress of releasing the official feature? And is > it being supported in Nginx Plus? > have you tested the patch? Any feedback?
Thanks, Maxim -- Maxim Konovalov From nginx-forum at forum.nginx.org Fri Feb 9 16:53:05 2018 From: nginx-forum at forum.nginx.org (aperfectman) Date: Fri, 09 Feb 2018 11:53:05 -0500 Subject: How is the progress to support DTLS In-Reply-To: <7cb11a7f-6639-9725-4929-b3c40da26fdf@nginx.com> References: <7cb11a7f-6639-9725-4929-b3c40da26fdf@nginx.com> Message-ID: Hello Maxim, Yes, I tested it based on the instructions but it didn't work. The error was "DTLSv1_listen error -1 (SSL: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher) while SSL handshaking, udp client: 127.0.0.1..." However, with the same key, it worked with goldy https://developer.ibm.com/code/open/projects/goldy/ So I think my key pair should be good. Any suggestion? Thanks, Ted Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278434,278460#msg-278460 From vbart at nginx.com Fri Feb 9 18:27:05 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 09 Feb 2018 21:27:05 +0300 Subject: Unit 0.6 beta release Message-ID: <1841410.l7VLkUhVve@vbart-workstation> Hello, I'm glad to announce a new beta of NGINX Unit with advanced process management and Perl/PSGI support. One of the Perl applications that has been tested is Bugzilla, and it ran with Unit flawlessly. Here is the changelog for versions 0.5 and 0.6: Changes with Unit 0.5 08 Feb 2018 *) Change: the "workers" application option was removed, the "processes" application option should be used instead. *) Feature: the "processes" application option with prefork and dynamic process management support. *) Feature: Perl application module. *) Bugfix: in reading client request body; the bug had appeared in 0.3. *) Bugfix: some Python applications might not work due to missing "wsgi.errors" environ variable. *) Bugfix: HTTP chunked responses might be encoded incorrectly on 32-bit platforms. *) Bugfix: infinite looping in HTTP parser. *) Bugfix: segmentation fault in router.
Changes with Unit 0.6 09 Feb 2018 *) Bugfix: the main process died when the "type" application option contained version; the bug had appeared in 0.5. The announcement of 0.5 was skipped, as a serious regression was found right after the packages were built and published. Besides the precompiled packages for CentOS, RHEL, Debian, Ubuntu, and Amazon Linux, now you can try Unit using official Docker containers. See the links below for details: - Packages: https://unit.nginx.org/installation/#precompiled-packages - Docker: https://hub.docker.com/r/nginx/unit/tags/ wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Fri Feb 9 18:57:10 2018 From: nginx-forum at forum.nginx.org (spacerobot) Date: Fri, 09 Feb 2018 13:57:10 -0500 Subject: Different certificates and keys for server and client verification Message-ID: <3071e8233d365202f792acaada537ed3.NginxMailingListEnglish@forum.nginx.org> Hi, I want my nginx listener to use SSL and do both server and client verification. However, I want it to use different certificates and keys for server vs client verification. The reason is that I want to use a properly signed certificate for the server verification and a self-signed certificate for client verification (in order to manage allowed clients). Is there a way to achieve this? Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278466,278466#msg-278466 From Jason.Whittington at equifax.com Fri Feb 9 21:16:31 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Fri, 9 Feb 2018 21:16:31 +0000 Subject: Different certificates and keys for server and client verification Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432AFC9EE@STLEISEXCMBX3.eis.equifax.com> Yes - SSL and client certs are completely orthogonal. However nginx needs to know about whatever cert is used to sign the client certs. Each client can't create completely distinct self-signed certs; they have to be signed by an issuer that nginx trusts.
The blog posts at [1] and [2] do a pretty good job outlining the process for client certs. Notice that they both basically start with creating a CA you are going to use to issue client certs. [1] http://nategood.com/client-side-certificate-authentication-in-ngi [2] https://arcweb.co/securing-websites-nginx-and-client-side-certificate-authentication-linux/ Jason -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of spacerobot Sent: Friday, February 09, 2018 12:57 PM To: nginx at nginx.org Subject: [IE] Different certificates and keys for server and client verification Hi, I want my nginx listener to use SSL and do both server and client verification. However, I want it to use different certificates and keys for server vs client verification. The reason is that I want to use a properly signed certificate for the server verification and a self-signed certificate for client verification (in order to manage allowed clients). Is there a way to achieve this? Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278466,278466#msg-278466 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved.
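[List-archive note: as an illustration of the split Jason describes, the server's own certificate and the client-verification CA map onto separate directives, so the two concerns never share a key pair. A minimal, untested sketch; all file paths and the upstream name are hypothetical:]

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Server identity: the properly signed (public CA) certificate
    ssl_certificate     /etc/nginx/ssl/server-chain.crt;  # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/server.key;        # hypothetical path

    # Client verification: a separate, self-managed CA certificate that
    # issued the client certs; unrelated to the server certificate above
    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;  # hypothetical path
    ssl_verify_client on;

    location / {
        # Requests only reach here when the client presented a certificate
        # signed by client-ca.crt; $ssl_client_verify reports the result
        proxy_pass http://backend;  # hypothetical upstream
    }
}
```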
From nginx-forum at forum.nginx.org Sat Feb 10 16:31:21 2018 From: nginx-forum at forum.nginx.org (scoulibaly) Date: Sat, 10 Feb 2018 11:31:21 -0500 Subject: How is the progress to support DTLS In-Reply-To: References: <7cb11a7f-6639-9725-4929-b3c40da26fdf@nginx.com> Message-ID: <48f6e6555e2178d4edcc27a3be403a35.NginxMailingListEnglish@forum.nginx.org> Ted, I had a similar issue recently and found out that the NGINX patch for DTLS doesn't seem to support PSK. Depending on the client cipher negotiation at handshake time you might or might not encounter "no shared cipher". If you can, you should force your client to use an "SSL" cipher supported by nginx (and not a PSK one). Regards Sekine Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278434,278478#msg-278478 From nginx-forum at forum.nginx.org Sat Feb 10 16:34:25 2018 From: nginx-forum at forum.nginx.org (scoulibaly) Date: Sat, 10 Feb 2018 11:34:25 -0500 Subject: How is the progress to support DTLS In-Reply-To: <5508abbcf56baf9b296322606b885ffc.NginxMailingListEnglish@forum.nginx.org> References: <5508abbcf56baf9b296322606b885ffc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3aae7f4cec00c23d3de05855c5c1167d.NginxMailingListEnglish@forum.nginx.org> Ted, A patched version of NginxPlus is available on request from Nginx customer care (based on 1.18.0). AFAIK the DTLS feature is expected to be deployed in either the next release or the one after. Sekine Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278434,278479#msg-278479 From nginx-forum at forum.nginx.org Sat Feb 10 16:36:07 2018 From: nginx-forum at forum.nginx.org (scoulibaly) Date: Sat, 10 Feb 2018 11:36:07 -0500 Subject: How is the progress to support DTLS In-Reply-To: <7cb11a7f-6639-9725-4929-b3c40da26fdf@nginx.com> References: <7cb11a7f-6639-9725-4929-b3c40da26fdf@nginx.com> Message-ID: <143c0ae87a60ab503c2eaa1a84c0f851.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Tested the NginxPlus patch for DTLS.
UDP healthchecking doesn't work (proxy_timeout 1s, proxy_responses 1, my server answers every single request right away). Reproducible with Californium Scandium demos. Sekine Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278434,278480#msg-278480 From nginx-forum at forum.nginx.org Sat Feb 10 18:21:16 2018 From: nginx-forum at forum.nginx.org (George) Date: Sat, 10 Feb 2018 13:21:16 -0500 Subject: Nginx 1.13.9 HTTP/2 Server Push - non-compressed assets ? Message-ID: <67a56ac7eccb49fc5f009455ea40b338.NginxMailingListEnglish@forum.nginx.org> Hi, compiled Nginx 1.13.9 from master branch to try out HTTP/2 Server Push but noticed the pushed assets lose their gzip compression and are served as non-compressed assets ? Is that as intended ? I posted my findings at https://community.centminmod.com/threads/hurray-http-2-server-push-for-nginx.11910/#post-59411 http2_push_preload on; add_header Link "</css/bootstrap.min.css>; rel=preload; as=style"; add_header Link "</css/theme-style.css>; rel=preload; as=style"; push works as I see PUSH_PROMISE frames and chrome reports push nghttp -navs https://baremetal.domain.com/ [ 0.018] recv (stream_id=13) :method: GET [ 0.018] recv (stream_id=13) :path: /css/bootstrap.min.css [ 0.018] recv (stream_id=13) :authority: baremetal.domain.com [ 0.018] recv (stream_id=13) :scheme: https [ 0.018] recv PUSH_PROMISE frame ; END_HEADERS (padlen=0, promised_stream_id=2) [ 0.018] recv (stream_id=13) :method: GET [ 0.018] recv (stream_id=13) :path: /css/theme-style.css [ 0.018] recv (stream_id=13) :authority: baremetal.domain.com [ 0.018] recv (stream_id=13) :scheme: https [ 0.018] recv PUSH_PROMISE frame Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278481,278481#msg-278481 From c at vakantieland.nl Sat Feb 10 23:17:38 2018 From: c at vakantieland.nl (C. Jacobs) Date: Sun, 11 Feb 2018 00:17:38 +0100 Subject: How to resolve Nginx dav PUT request failed on rename() with (13: Permission denied)?
Message-ID: <63C40CFC-5AD6-41C0-8466-4D2080D7194C@vakantieland.nl> When trying to PUT an index.html file in the root of an already existing folder, nginx fails with: [crit] 1181#0: *1 rename() "/opt/spool/nginx/client_temp/1/0000000001" to "/opt/share/www/domain.tld/index.html-3hlCQ9iE" failed (13: Permission denied), client: 1.2.3.9, server: host.domain.tld, request: "PUT /www/domain.tld/index.html-3hlCQ9iE HTTP/1.1", host: "172.21.2.2" Environment - Nginx-extras 1.13.6-1 from entware-3x repo - running on Padavan firmware. - Nginx.conf: user www-rw www-w; - grep www-rw /etc/passwd www-rw:x:1000:1001:Linux User,,,:/opt/share/www:/bin/sh - grep www-w /etc/group www-w:x:1001: - ls -l /opt/share/www drw-rw-r-- 2 www-rw www-w 4096 Feb 9 13:51 domain.tld - ls -al /opt/share/www/domain.tld drw-rw-r-- 2 www-rw www-w 4096 Feb 9 13:51 . drwxr-xr-x 4 www-rw www-w 4096 Feb 9 13:51 .. - ls -l /opt/spool/nginx drwxrwxrwx 7 www-rw root 4096 Feb 9 22:46 client_temp - ls -l /opt/spool/nginx/client_temp drwx------ 2 www-rw www-w 4096 Feb 9 22:28 5 - client: Cyberduck/6.3.3.27341 - client-user: www-rw Nginx.conf --- server { location /www { root /opt/share; client_body_temp_path /opt/spool/nginx/client_temp 1; dav_methods PUT DELETE MKCOL COPY MOVE; dav_ext_methods PROPFIND OPTIONS; # allow creating directories create_full_put_path on; dav_access user:rw group:r all:r; autoindex on; } } --- Regression - user nobody nogroup; - dav_access user:rw group:r all:r; - #autoindex ... - client-user: admin What should I fix (in the permissions?) to resolve the Nginx dav permission denied errors? -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP using GPGMail URL: From francis at daoine.org Sun Feb 11 18:56:20 2018 From: francis at daoine.org (Francis Daly) Date: Sun, 11 Feb 2018 18:56:20 +0000 Subject: How to resolve Nginx dav PUT request failed on rename() with (13: Permission denied)? In-Reply-To: <63C40CFC-5AD6-41C0-8466-4D2080D7194C@vakantieland.nl> References: <63C40CFC-5AD6-41C0-8466-4D2080D7194C@vakantieland.nl> Message-ID: <20180211185620.GM3063@daoine.org> On Sun, Feb 11, 2018 at 12:17:38AM +0100, C. Jacobs wrote: Hi there, Not specifically tested, but... > [crit] 1181#0: *1 rename() "/opt/spool/nginx/client_temp/1/0000000001? to "/opt/share/www/domain.tld/index.html-3hlCQ9iE" failed (13: Permission denied), client: 1.2.3.9, server: host.domain.tld, request: "PUT /www/domain.tld/index.html-3hlCQ9iE HTTP/1.1", host: "172.21.2.2" > ? ls -l /opt/share/www > drw-rw-r-- 2 www-rw www-w 4096 Feb 9 13:51 domain.tld ...add "x" permissions there. That is, "chmod 775 (or 755) /opt/share/www/domain.tld" > What should I fix (in the permissions?) to resolve the Nginx dav permission denied errors? "x" on a directory is needed for some things that you might naively think should only need "w". f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Feb 11 21:16:05 2018 From: nginx-forum at forum.nginx.org (George) Date: Sun, 11 Feb 2018 16:16:05 -0500 Subject: Nginx 1.13.9 HTTP/2 Server Push - non-compressed assets ? 
In-Reply-To: <67a56ac7eccb49fc5f009455ea40b338.NginxMailingListEnglish@forum.nginx.org> References: <67a56ac7eccb49fc5f009455ea40b338.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0807523c972e0ca9a71eac609109565b.NginxMailingListEnglish@forum.nginx.org> Reported bug at https://trac.nginx.org/nginx/ticket/1478 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278481,278488#msg-278488 From nginx-forum at forum.nginx.org Mon Feb 12 01:57:04 2018 From: nginx-forum at forum.nginx.org (entpneur) Date: Sun, 11 Feb 2018 20:57:04 -0500 Subject: Mail Proxy for two domains behind NAT Message-ID: <15aa4b47d506505e6d6e6df8eee57bf9.NginxMailingListEnglish@forum.nginx.org> Hello, I tried to setup NGiNX as a Mail Proxy for two domains behind NAT. Users will be diverted to the right server based on domain (e.g. user at domainA.com will be diverted to Server A and user at domainB.com will be diverted to Server B). When I test the nginx.conf it gives me the following: "nginx: [emerg] duplicate "0.0.0.0:143" address and port pair ..." After some digging, I know that the listen directive needs to have a different address:port pair, but the reuseport option is not allowed in the mail context. Is there a way to overcome or achieve that? Please Help! TIA! Regards, YSC Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278489,278489#msg-278489 From mdounin at mdounin.ru Mon Feb 12 12:50:31 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Feb 2018 15:50:31 +0300 Subject: Mail Proxy for two domains behind NAT In-Reply-To: <15aa4b47d506505e6d6e6df8eee57bf9.NginxMailingListEnglish@forum.nginx.org> References: <15aa4b47d506505e6d6e6df8eee57bf9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180212125031.GV24410@mdounin.ru> Hello! On Sun, Feb 11, 2018 at 08:57:04PM -0500, entpneur wrote: > Hello, > > I tried to setup NGiNX as a Mail Proxy for two domains behind NAT.
Users > will be diverted to the right server base on domain (eg./ user at domainA.com > will be diverted to Server A and user at domainB.com will be diverted to Server > B). > > When I test the nginx.conf it gives me the following: > > "nginx: [emerg] duplicate "0.0.0.0:143" address and port pair ..." > > After some digging, I know that the listen directive need to have different > address:port pair but reuseport option is not allow in mail context. > > Is there a way to overcome or achieve that? Please Help! TIA! If you want to distinguish different clients based on what they provide during authentication, you have to use single server{} block for this, and then introduce required distinction at authentication level in your auth_http script. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Feb 12 17:19:56 2018 From: nginx-forum at forum.nginx.org (clintmiller) Date: Mon, 12 Feb 2018 12:19:56 -0500 Subject: Is post_action skipped when an upstream app server returns a X-Accel-Redirect header is used? Message-ID: The post_action in the config for my Nginx 1.12.1 instance is not firing (or does not appear to be). I'm wondering if it is because my app server is returning a X-Accel-Redirect header. The ultimate goal is track when downloads hosted on S3 have completed. **** Step 1: The request hits nginx at http://host/download/... 
**** location ~ /download/ { proxy_pass http://rails-app-upstream; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; post_action @finished; } **** Step 2: After authenticating the request for the download file and marking the start of the download in the database, the app server returns the request with the following headers: **** { 'X-Accel-Redirect' => '/s3-zip/my-bucket.s3.amazonaws.com/downloads/0001.jpg', 'Content-Disposition' => "attachment; filename=download.jpg", 'X-Download-Log-Id' => log.id.to_s } **** Step 3: Nginx sees the x-accel-redirect header and performs an internal request to this location that is defined: **** location ~ "^/s3-zip/(?<s3_bucket>[a-z0-9][a-z0-9-.]*.s3.amazonaws.com)/(?<path>.*)$" { # examples: # s3_bucket = my-bucket.s3.amazonaws.com # path = downloads/0001.mpg # args = X-Amz-Credentials=...&X-Amz-Date=... internal; access_log /var/log/nginx/s3_assets-access.log main; error_log /var/log/nginx/s3_assets-error.log warn; resolver 8.8.8.8 valid=30s; # Google DNS resolver_timeout 10s; proxy_http_version 1.1; proxy_set_header Host $s3_bucket; proxy_set_header Authorization ''; # remove amazon headers proxy_hide_header x-amz-id-2; proxy_hide_header x-amz-request-id; proxy_hide_header Set-Cookie; proxy_ignore_headers "Set-Cookie"; # no file buffering proxy_buffering off; # bubble errors up proxy_intercept_errors on; proxy_pass https://$s3_bucket/$path?$args; } **** Missed Step: The following location called by the post_action in Step 1 is never fired. Is this because of the x-accel-redirect header, or because Step 3 uses a proxy_pass, or something else? This last location calls the app server's endpoint once nginx has completed the request to mark the download as completed. **** location @finished { internal; rewrite ^ /download/finish/$sent_http_x_download_log_id?bytes=$body_bytes_sent; } For what it's worth, this basic technique (admittedly, it has a few steps!)
works (properly executes the post_action) for zip-on-the-fly downloads, using the mod_zip plugin and fetching zip file component contents from S3. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278529,278529#msg-278529 From nginx-forum at forum.nginx.org Tue Feb 13 07:39:10 2018 From: nginx-forum at forum.nginx.org (Azusa Taroura) Date: Tue, 13 Feb 2018 02:39:10 -0500 Subject: Mail proxy the destination server by ssl (Postfix) Message-ID: <3d9d340bad7d410ace2cab38ba34a87c.NginxMailingListEnglish@forum.nginx.org> Hi everyone, I'm trying to mail-proxy by ssl connection from the nginx server to the postfix server. Please let me ask some questions. SMTPS(465)->| nginx |--SMTPS(465)->| Postfix | Question1: I found this issue. The mail module cannot proxy to the destination server by ssl, right? https://forum.nginx.org/read.php?2,232147,232466#msg-232466 Question2: I tried another way using the stream server, but I could not proxy (a connection timeout occurs). How can I fix it? SMTPS(465)->| mail -> upstream(20465)| --SMTPS(465)->| Postfix | load_module "modules/ngx_stream_module.so"; worker_processes auto; error_log /var/log/nginx/error.log warn; events { worker_connections 1024; } stream { upstream smtps_server { server postfix_server:465; } server { listen 20465; proxy_pass smtps_server; proxy_ssl on; proxy_ssl_certificate /etc/nginx/ssl/server.crt; proxy_ssl_certificate_key /etc/nginx/ssl/server.key; error_log /var/log/nginx/mail-tcp-proxy.log info; } } mail { auth_http localhost:80/auth/smtp; proxy_pass_error_message on; proxy on; smtp_auth login plain; xclient on; server_name nginx_server; server { listen 25; protocol smtp; } server { listen 465; protocol smtp; ssl on; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; } } Thank you for your time.
Azusa Taroura Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278532,278532#msg-278532 From nginx-forum at forum.nginx.org Tue Feb 13 09:55:27 2018 From: nginx-forum at forum.nginx.org (stout) Date: Tue, 13 Feb 2018 04:55:27 -0500 Subject: proxy_cache overflows max_size Message-ID: <15c4e39660ecda4da06323015fb0c2ed.NginxMailingListEnglish@forum.nginx.org> Hi, I have: nginx version: nginx/1.12.2 (github.com/istenrot/centos-nginx-http2: openssl-1.0.2m, PCRE JIT, PushStream, HeadersMore, LUA, Brotli) built by gcc 4.9.2 20150212 (Red Hat 4.9.2-6) (GCC) working as reverse proxy for ceph with media files Here's my configuration.. (only the relevant part) http { proxy_temp_path /mnt/nginx-cache/temp_proxy 1 2; proxy_cache_lock on; proxy_cache_path /mnt/nginx-cache/ceph levels=1:2 keys_zone=myCache:256m inactive=8h max_size=30g; proxy_cache_revalidate on; proxy_request_buffering off; proxy_buffering on; proxy_buffers 64 16k; #for lua sendfile on; sendfile_max_chunk 512k; aio threads; directio 512k; output_buffers 1 128k; server { listen 80; server_name _; location / { proxy_pass http://ceph$uri; proxy_set_header Host $host; proxy_cache myCache; } } cache is tmpfs: tmpfs /mnt/nginx-cache tmpfs rw,size=40960m,noatime,uid=335,context="system_u:object_r:httpd_tmp_t:s0" 0 2 Now df -h shows: tmpfs 40G 35G 5.7G 86% /mnt/nginx-cache and deleted files in /mnt/nginx-cache/temp_proxy are about 200M Strace of cache manager shows: epoll_wait(9, [], 512, 1000) = 0 epoll_wait(9, [], 512, 1000) = 0 epoll_wait(9, [], 512, 1000) = 0 epoll_wait(9, [], 512, 1000) = 0 for most of the time How to improve this situation? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278534,278534#msg-278534 From mdounin at mdounin.ru Tue Feb 13 12:41:41 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Feb 2018 15:41:41 +0300 Subject: Mail proxy the destination server by ssl (Postfix) In-Reply-To: <3d9d340bad7d410ace2cab38ba34a87c.NginxMailingListEnglish@forum.nginx.org> References: <3d9d340bad7d410ace2cab38ba34a87c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180213124141.GI24410@mdounin.ru> Hello! On Tue, Feb 13, 2018 at 02:39:10AM -0500, Azusa Taroura wrote: > Hi everyone,? > I?m trying to mail-proxy by ssl connection from the nginx server to the > postfix server. > Please let me ask some question. > > SMTPS(465)->| nginx |--SMTPS(465)->| Postfix | > > Question1: > I found this issue. The mail module cannot proxy to the destination server > by ssl, right? > https://forum.nginx.org/read.php?2,232147,232466#msg-232466 Yes, only non-SSL backends are supported. > Question2: > I tried the another way to use the stream server, but I could not proxy > (The connection timeout is occurred.) > > How can i fix it? > > SMTPS(465)->| mail -> upstream(20465)| --SMTPS(465)->| Postfix | [...] The configuration provided looks fine, at least I see no obvious errors. Try looking into more details on where the timeout occurs. -- Maxim Dounin http://mdounin.ru/ From dkewley at uci.edu Tue Feb 13 21:01:03 2018 From: dkewley at uci.edu (David Kewley) Date: Tue, 13 Feb 2018 13:01:03 -0800 Subject: no access_log logging for UDP streams Message-ID: I'm using nginx 1.12.1 to proxy TCP and UDP streams. I have in my stream {} stanza: log_format test '$time_local'; access_log /var/log/nginx/stream-access.log test buffer=64k flush=1s; error_log /var/log/nginx/stream-info.log info; Both TCP and UDP streams log to /var/log/nginx/stream-info.log. Only TCP streams are logged in /var/log/nginx/stream-access.log; UDP streams are not. 
FWIW, client application behavior and tcpdump both show that the upstream UDP server is sending a response that makes it through nginx proxy to the client just fine. Is this lack of UDP stream access_log logging expected, or should I open a new bug? If you need to see more details to show exactly what I'm doing, let me know which details would be helpful. Thanks! David -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at keepschtum.win Wed Feb 14 01:46:05 2018 From: tom at keepschtum.win (Tom) Date: Wed, 14 Feb 2018 14:46:05 +1300 Subject: ip address masking Message-ID: <1072191518572765@web44o.yandex.ru> An HTML attachment was scrubbed... URL: From alex at samad.com.au Wed Feb 14 03:07:49 2018 From: alex at samad.com.au (Alex Samad) Date: Wed, 14 Feb 2018 14:07:49 +1100 Subject: ip address masking In-Reply-To: <1072191518572765@web44o.yandex.ru> References: <1072191518572765@web44o.yandex.ru> Message-ID: Why not just change the log format to exclude the ip address or put in static ip On 14 February 2018 at 12:46, Tom wrote: > Hi, > > I'm wondering if anyone has successfully masked ip addresses in nginx > before they are written to a log file. > > I understand there are reasons why you would and would not do this. > > Anyway, my config so far, which I believe works for ipv4 addresses, but > probably on only a few formats of ipv6 addresses. I've used secondary map > directives to append text to the short ip address as I couldn't work out > how to concatenate the variable with text, so concatenated two variables > instead. (Hope that makes sense). 
>
>
> log_format ipmask '$remote_addr $ip_anon';
>
> map $remote_addr $ip_anon {
>     default $remote_addr;
>     "~^(?P<ipv4>[0-9]{1,3}\.[0-9]{1,3}.)(?P<junkv4>.*)" $ipv4$ipv4suffix;
>     "~^(?P<ipv6>[^:]+:[^:]+)(?P<junkv6>.*$)" '$ipv6 $junkv6';
> }
>
> map - $ipv4suffix {
>     default 0.0;
> }
> map - $ipv6suffix {
>     default XX;
> }
> server {
>     listen 8080;
>     listen [::]:8080;
>     server_name _;
>     access_log /tmp/ngn-ip.log ipmask;
>     allow all;
> }
>
> Anyone got any thoughts on this?
> Thanks
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at keepschtum.win Wed Feb 14 03:58:08 2018 From: tom at keepschtum.win (Tom) Date: Wed, 14 Feb 2018 16:58:08 +1300 Subject: ip address masking In-Reply-To: References: <1072191518572765@web44o.yandex.ru> Message-ID: <366551518580688@web9g.yandex.ru> An HTML attachment was scrubbed... URL: From arut at nginx.com Wed Feb 14 13:59:49 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 14 Feb 2018 16:59:49 +0300 Subject: no access_log logging for UDP streams In-Reply-To: References: Message-ID: <20180214135949.GM4346@Romans-MacBook-Air.local>

Hi David,

On Tue, Feb 13, 2018 at 01:01:03PM -0800, David Kewley wrote:
> I'm using nginx 1.12.1 to proxy TCP and UDP streams. I have in my stream {}
> stanza:
>
> log_format test '$time_local';
>
> access_log /var/log/nginx/stream-access.log test buffer=64k flush=1s;
> error_log /var/log/nginx/stream-info.log info;
>
>
> Both TCP and UDP streams log to /var/log/nginx/stream-info.log. Only TCP
> streams are logged in /var/log/nginx/stream-access.log; UDP streams are not.
> > If you need to see more details to show exactly what I'm doing, let me know > which details would be helpful. Probably your UDP session is not considered finished by nginx. Did you specify proxy_responses and proxy_timeout in your config? If proxy_responses is unspecified and proxy_timeout is large, it may take a long time for a UDP session to expire and be logged. -- Roman Arutyunyan From michael.friscia at yale.edu Wed Feb 14 14:09:57 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 14 Feb 2018 14:09:57 +0000 Subject: Response Header IF statement problem Message-ID: <4F6078C5-8652-4EE0-9556-D06574891E97@yale.edu> I?m at a loss on this. I am working on a cache problem where some pages need to be bypassed and others will be cached. So the web server is adding a response header (X-Secured-Page). I?ve tried multiple combinations of $http_x_secured_page and $sent_http_x_secured_page and even though I see the header when I inspect the page, the IF statements inside the location block are not getting fired off. What could I possibly be doing wrong? ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Wed Feb 14 15:00:32 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 14 Feb 2018 18:00:32 +0300 Subject: Response Header IF statement problem In-Reply-To: <4F6078C5-8652-4EE0-9556-D06574891E97@yale.edu> References: <4F6078C5-8652-4EE0-9556-D06574891E97@yale.edu> Message-ID: <20180214150032.GN4346@Romans-MacBook-Air.local> Hi Michael, On Wed, Feb 14, 2018 at 02:09:57PM +0000, Friscia, Michael wrote: > I?m at a loss on this. I am working on a cache problem where some pages need to be bypassed and others will be cached. So the web server is adding a response header (X-Secured-Page). 
I?ve tried multiple combinations of > $http_x_secured_page and $sent_http_x_secured_page and even though I see the header when I inspect the page, the IF statements inside the location block are not getting fired off. > > What could I possibly be doing wrong? If you want to disable caching for a specific response, you can use the proxy_no_cache directive. Pass it $upstream_http_x_secured_page if you want to disable caching of responses having this HTTP header. Using "if" directive for analyzing output headers like $sent_http_x_secured_page will not work since "if" is evaluated at an early request processing stage (rewrite phase) and no output is normally created by this time. -- Roman Arutyunyan From michael.friscia at yale.edu Wed Feb 14 15:01:49 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 14 Feb 2018 15:01:49 +0000 Subject: Response Header IF statement problem In-Reply-To: <20180214150032.GN4346@Romans-MacBook-Air.local> References: <4F6078C5-8652-4EE0-9556-D06574891E97@yale.edu> <20180214150032.GN4346@Romans-MacBook-Air.local> Message-ID: <216E7162-573F-40C1-AC93-99CCDADA35E7@yale.edu> Thank you Roman, but this raises a different question, if I want to base this on the value and not the existence, is that still possible? ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu On 2/14/18, 10:00 AM, "nginx on behalf of Roman Arutyunyan" wrote: Hi Michael, On Wed, Feb 14, 2018 at 02:09:57PM +0000, Friscia, Michael wrote: > I?m at a loss on this. I am working on a cache problem where some pages need to be bypassed and others will be cached. So the web server is adding a response header (X-Secured-Page). I?ve tried multiple combinations of > $http_x_secured_page and $sent_http_x_secured_page and even though I see the header when I inspect the page, the IF statements inside the location block are not getting fired off. 
> > What could I possibly be doing wrong? If you want to disable caching for a specific response, you can use the proxy_no_cache directive. Pass it $upstream_http_x_secured_page if you want to disable caching of responses having this HTTP header. Using "if" directive for analyzing output headers like $sent_http_x_secured_page will not work since "if" is evaluated at an early request processing stage (rewrite phase) and no output is normally created by this time. -- Roman Arutyunyan _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwIGaQ&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=fbY3_x6ACbtIV55mcZsfJMVepTuuqXtt2QkwBQ_DlOg&s=yQYgAxzpG-gYD_SClb9BufTDkAIZfHQ2POVAXyIeCno&e= From nginx-forum at forum.nginx.org Wed Feb 14 15:03:02 2018 From: nginx-forum at forum.nginx.org (webopsx) Date: Wed, 14 Feb 2018 10:03:02 -0500 Subject: Response Header IF statement problem In-Reply-To: <4F6078C5-8652-4EE0-9556-D06574891E97@yale.edu> References: <4F6078C5-8652-4EE0-9556-D06574891E97@yale.edu> Message-ID: <635765e0227de4b735a4ceae2c8480ed.NginxMailingListEnglish@forum.nginx.org> Hi, If I understand correctly you actually don't want to cache specific responses (not bypass). The proxy_cache_bypass is only for if the response has already been cached and defines the behavior in which NGINX should serve the cached version to a client. Therefore if I understand correctly, you should be using the upstream module for your origin definition and the proper variable will be available as $upstream_http_x_secured_page - http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_http_ Then use the proxy_no_cache directive to determine what should and should not be cached in your configuration. 
If you want to simply check if the header exists and then not cache the response, you can add the upstream_http_ variable as a parameter. - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_no_cache

If you need to inspect the header, then look into using map instead to define the conditions you need in order to set or not set proxy_no_cache to a value.

Please let me know if my understanding is correct.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278558,278561#msg-278561 From nginx-forum at forum.nginx.org Wed Feb 14 15:08:07 2018 From: nginx-forum at forum.nginx.org (webopsx) Date: Wed, 14 Feb 2018 10:08:07 -0500 Subject: Response Header IF statement problem In-Reply-To: <216E7162-573F-40C1-AC93-99CCDADA35E7@yale.edu> References: <216E7162-573F-40C1-AC93-99CCDADA35E7@yale.edu> Message-ID: <25ca24d1d25ee3d14a1368cac16a90ef.NginxMailingListEnglish@forum.nginx.org>

You can use map for this... - http://nginx.org/en/docs/http/ngx_http_map_module.html#map

map $upstream_http_x_secured_page $nocache {
    "search string" "1";
    default "";
}

location /foo {
    ...
    proxy_no_cache $nocache;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278558,278563#msg-278563 From michael.friscia at yale.edu Wed Feb 14 15:17:20 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 14 Feb 2018 15:17:20 +0000 Subject: Response Header IF statement problem In-Reply-To: <635765e0227de4b735a4ceae2c8480ed.NginxMailingListEnglish@forum.nginx.org> References: <4F6078C5-8652-4EE0-9556-D06574891E97@yale.edu> <635765e0227de4b735a4ceae2c8480ed.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Ok, I think this sends me in the correct direction. Thanks for posting the links and explaining the _bypass; I was setting both _bypass and _no_cache because I wasn't sure.
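[To summarize the distinction this thread keeps circling - a sketch with illustrative names: $nocache as mapped from the upstream header above, and $arg_nocache purely as an example request-side condition, not something from this thread. proxy_no_cache decides whether a response may be stored; proxy_cache_bypass decides whether the cache is consulted for a request:]

```nginx
# Sketch only; variable names are illustrative.
location / {
    proxy_cache        myCache;
    proxy_no_cache     $nocache;        # response side: when non-empty, the
                                        # response is NOT saved to the cache
    proxy_cache_bypass $arg_nocache;    # request side: when non-empty, the cache
                                        # is NOT consulted (evaluated before the
                                        # upstream responds, so it cannot see
                                        # $upstream_http_* values)
    proxy_pass         http://backend;
}
```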
___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu On 2/14/18, 10:03 AM, "nginx on behalf of webopsx" wrote: Hi, If I understand correctly you actually don't want to cache specific responses (not bypass). The proxy_cache_bypass is only for if the response has already been cached and defines the behavior in which NGINX should serve the cached version to a client. Therefore if I understand correctly, you should be using the upstream module for your origin definition and the proper variable will be available as $upstream_http_x_secured_page - https://urldefense.proofpoint.com/v2/url?u=http-3A__nginx.org_en_docs_http_ngx-5Fhttp-5Fupstream-5Fmodule.html-23var-5Fupstream-5Fhttp-5F&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=uDTsErKM0Hidc5iw_7R5NJrlwOfkoJbgLxL4BJmCY5I&s=QEt-4cOG5TsUmuMUu8UNGiNid6whJR-CBqlmzBdUC78&e= Then use the proxy_no_cache directive to determine what should and should not be cached in your configuration. If you want to simply check if the header exists and then not cache the response, you can add the upstream_http_ variable as a parameter. - https://urldefense.proofpoint.com/v2/url?u=http-3A__nginx.org_en_docs_http_ngx-5Fhttp-5Fproxy-5Fmodule.html-23proxy-5Fno-5Fcache&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=uDTsErKM0Hidc5iw_7R5NJrlwOfkoJbgLxL4BJmCY5I&s=5JotmehkLwuEdHep18iz8JzWFD5LRfgDzAOuV_y8DwA&e= If you need to inspect the header, then look into using map instead to define the conditions you need in other to set or not set the proxy_no_cache to a value or not. Please let me know if my understanding is correct. 
Posted at Nginx Forum: https://urldefense.proofpoint.com/v2/url?u=https-3A__forum.nginx.org_read.php-3F2-2C278558-2C278561-23msg-2D278561&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=uDTsErKM0Hidc5iw_7R5NJrlwOfkoJbgLxL4BJmCY5I&s=UrVYW4V-vb5FY0oXMikgzaPAKrNuFPAz897blNCH_p8&e= _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=uDTsErKM0Hidc5iw_7R5NJrlwOfkoJbgLxL4BJmCY5I&s=AIsbNagy7Mrq1Xd-D4En3kAzGjRXaS05L_e68spvhhE&e= From ru at nginx.com Wed Feb 14 19:49:33 2018 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 14 Feb 2018 22:49:33 +0300 Subject: Nginx 1.13.9 HTTP/2 Server Push - non-compressed assets ? In-Reply-To: <0807523c972e0ca9a71eac609109565b.NginxMailingListEnglish@forum.nginx.org> <67a56ac7eccb49fc5f009455ea40b338.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180214194933.GM67691@lo0.su> On Sat, Feb 10, 2018 at 01:21:16PM -0500, George wrote: > Hi compiled Nginx 1.13.9 from master branch to try out HTTP/2 Server Push > but noticed the pushed assets loose their gzip compression and are served as > non-compressed assets ? Is that as intended ? 
I posted my findings at
> https://community.centminmod.com/threads/hurray-http-2-server-push-for-nginx.11910/#post-59411
>
> http2_push_preload on;
> add_header Link "</css/bootstrap.min.css>; rel=preload; as=style";
> add_header Link "</css/theme-style.css>; rel=preload; as=style";
>
> push works as I see PUSH_PROMISE frames and chrome reports push
>
> nghttp -navs https://baremetal.domain.com/
>
> [ 0.018] recv (stream_id=13) :method: GET
> [ 0.018] recv (stream_id=13) :path: /css/bootstrap.min.css
> [ 0.018] recv (stream_id=13) :authority: baremetal.domain.com
> [ 0.018] recv (stream_id=13) :scheme: https
> [ 0.018] recv PUSH_PROMISE frame
> ; END_HEADERS
> (padlen=0, promised_stream_id=2)
> [ 0.018] recv (stream_id=13) :method: GET
> [ 0.018] recv (stream_id=13) :path: /css/theme-style.css
> [ 0.018] recv (stream_id=13) :authority: baremetal.domain.com
> [ 0.018] recv (stream_id=13) :scheme: https
> [ 0.018] recv PUSH_PROMISE frame

On Sun, Feb 11, 2018 at 04:16:05PM -0500, George wrote:
> Reported bug at https://trac.nginx.org/nginx/ticket/1478

The fix is underway.

From michael.friscia at yale.edu Wed Feb 14 21:02:35 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 14 Feb 2018 21:02:35 +0000 Subject: Response Header IF statement problem In-Reply-To: <25ca24d1d25ee3d14a1368cac16a90ef.NginxMailingListEnglish@forum.nginx.org> References: <216E7162-573F-40C1-AC93-99CCDADA35E7@yale.edu> <25ca24d1d25ee3d14a1368cac16a90ef.NginxMailingListEnglish@forum.nginx.org> Message-ID: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu>

Ok, so I did this and it worked, then it stopped working, then it worked again, and then stopped working. I literally used the code below; the map appears right above my server {} block. When it worked, I was passing a header with the $nocache value set and it was consistently returning the correct value. What I don't understand is that I did not change the code that runs this; I made some other unrelated changes and then this stopped working.
What would cause the map directive to stop working? Or maybe my question is what would cause the map directive to cache a value to that variable and then not change it even with nginx restarts? ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu On 2/14/18, 10:35 AM, "nginx on behalf of webopsx" wrote: You can use map for this... - https://urldefense.proofpoint.com/v2/url?u=http-3A__nginx.org_en_docs_http_ngx-5Fhttp-5Fmap-5Fmodule.html-23map&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=csqe7S6wi1lA0kyhdVj1SfeveAOmZWbSQEv2optF3DM&s=T2K9F3KlskBzpELc7oj9imZ4p2BYuKpKw_5q-LFmuNI&e= map $upstream_http_x_secured_page $nocache { "search string" "1" default ""; } location /foo { ... proxy_no_cache $nocache; } Posted at Nginx Forum: https://urldefense.proofpoint.com/v2/url?u=https-3A__forum.nginx.org_read.php-3F2-2C278558-2C278563-23msg-2D278563&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=csqe7S6wi1lA0kyhdVj1SfeveAOmZWbSQEv2optF3DM&s=165qUkBDlSIz9uoSMkUjKmbUovmlJdt3fBGdYfNEKcU&e= _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=csqe7S6wi1lA0kyhdVj1SfeveAOmZWbSQEv2optF3DM&s=8K1QknyjZyQW-6l2izUg26ej0rKE49EN2u5NQ-vGEuI&e= From nginx-forum at forum.nginx.org Wed Feb 14 21:39:18 2018 From: nginx-forum at forum.nginx.org (webopsx) Date: Wed, 14 Feb 2018 16:39:18 -0500 Subject: Response Header IF statement problem In-Reply-To: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu> References: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu> Message-ID: <5010c3e0f211ae204ed53e2094f88e46.NginxMailingListEnglish@forum.nginx.org> Hi, The map is processed on each request and should be very 
consistent. I thought you wanted to disable cache on the existence of a response header, not a request header. Otherwise I think we need more information to understand, such as how are you testing? Perhaps paste your full configuration here after removing any sensitive text. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278558,278576#msg-278576 From michael.friscia at yale.edu Wed Feb 14 21:44:02 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 14 Feb 2018 21:44:02 +0000 Subject: Response Header IF statement problem In-Reply-To: <5010c3e0f211ae204ed53e2094f88e46.NginxMailingListEnglish@forum.nginx.org> References: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu> <5010c3e0f211ae204ed53e2094f88e46.NginxMailingListEnglish@forum.nginx.org> Message-ID: Maybe that?s the problem, I want to disable cache if the response header is true but not do anything if it is false. I can change my logic in creating this header to only have it on pages where cache should be disabled if it is not possible to use an IF statement around it. I will post my config here once I get rid of the sensitive things, but the confusing thing is that it worked, then it stopped working. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 ? office (203) 931-5381 ? mobile http://web.yale.edu On 2/14/18, 4:39 PM, "nginx on behalf of webopsx" wrote: Hi, The map is processed on each request and should be very consistent. I thought you wanted to disable cache on the existence of a response header, not a request header. Otherwise I think we need more information to understand, such as how are you testing? Perhaps paste your full configuration here after removing any sensitive text. 
Posted at Nginx Forum: https://urldefense.proofpoint.com/v2/url?u=https-3A__forum.nginx.org_read.php-3F2-2C278558-2C278576-23msg-2D278576&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=x0V3h5hp-I18hCxDQuN0U7f_JYUQYISVWlygl1QK-FU&s=sUMR29LFjCI97P-LSb-RwuojlvhtDhvfRiX_YAio19E&e= _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=x0V3h5hp-I18hCxDQuN0U7f_JYUQYISVWlygl1QK-FU&s=833NZxJsLHFd1YlQmOtXp22oR4K0OXPS-xz0mYs6r04&e= From nginx-forum at forum.nginx.org Wed Feb 14 22:07:46 2018 From: nginx-forum at forum.nginx.org (webopsx) Date: Wed, 14 Feb 2018 17:07:46 -0500 Subject: Response Header IF statement problem In-Reply-To: <4F6078C5-8652-4EE0-9556-D06574891E97@yale.edu> References: <4F6078C5-8652-4EE0-9556-D06574891E97@yale.edu> Message-ID: Hi, Yes NGINX can inspect the header, See the following full example. It will check for the match of "true" case-insensitive. I am simulating your backend on port 81. Does this make sense? 
map $upstream_http_x_secured_page $nocache {
    ~*true "1";
    default "";
}

upstream backend {
    server 127.0.0.1:81;
}

server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        proxy_no_cache $nocache;
        add_header X-No-Cache-Status $nocache;
        proxy_pass http://backend;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

server {
    listen 81;
    location / {
        add_header X-Secured-Page "True";
        return 200 "OK\n";
    }
}

# testing
root@dev:/etc/nginx/conf.d# curl -I localhost
HTTP/1.1 200 OK
Server: nginx/1.13.7
Date: Wed, 14 Feb 2018 21:59:55 GMT
Content-Type: application/octet-stream
Content-Length: 3
Connection: keep-alive
X-Secured-Page: True
X-No-Cache-Status: 1

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278558,278578#msg-278578 From dkewley at uci.edu Wed Feb 14 22:09:48 2018 From: dkewley at uci.edu (David Kewley) Date: Wed, 14 Feb 2018 14:09:48 -0800 Subject: no access_log logging for UDP streams In-Reply-To: <20180214135949.GM4346@Romans-MacBook-Air.local> References: <20180214135949.GM4346@Romans-MacBook-Air.local> Message-ID:

On Wed, Feb 14, 2018 at 5:59 AM, Roman Arutyunyan wrote:
> Hi David,
>
> On Tue, Feb 13, 2018 at 01:01:03PM -0800, David Kewley wrote:
> > I'm using nginx 1.12.1 to proxy TCP and UDP streams. I have in my stream {}
> > stanza:
> >
> > log_format test '$time_local';
> >
> > access_log /var/log/nginx/stream-access.log test buffer=64k flush=1s;
> > error_log /var/log/nginx/stream-info.log info;
> >
> > Both TCP and UDP streams log to /var/log/nginx/stream-info.log. Only TCP
> > streams are logged in /var/log/nginx/stream-access.log; UDP streams are not.
> >
> > FWIW, client application behavior and tcpdump both show that the upstream
> > UDP server is sending a response that makes it through nginx proxy to the
> > client just fine.
> > > > Is this lack of UDP stream access_log logging expected, or should I open > a > > new bug? > > > > If you need to see more details to show exactly what I'm doing, let me > know > > which details would be helpful. > > Probably your UDP session is not considered finished by nginx. > Did you specify proxy_responses and proxy_timeout in your config? > If proxy_responses is unspecified and proxy_timeout is large, it may take a > long time for a UDP session to expire and be logged. Thank you, Roman! You pointed me in the right direction. Now that I do more careful tests, I see the action of the default setting of "proxy_timeout 10m" in the two logs (access and info). Namely, info-level error_log shows client and proxy connections at the time they happen, as well as the disconnect when the 10m timeout happens. Meanwhile the access_log logs the connection at the time of disconnection (after the 10m timeout). I'm thinking that in many circumstances, I may not know how many response packets to expect from the upstream, so proxy_responses may not be optimal. But setting proxy_timeout shorter may in many cases help me by logging in access_log sooner. I've started by setting proxy_timeout 1s in a situation where each application message stream is expected to be quick and very short-lived. Now I see entries in access_log quickly. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Wed Feb 14 22:13:37 2018 From: peter_booth at me.com (Peter Booth) Date: Wed, 14 Feb 2018 17:13:37 -0500 Subject: Response Header IF statement problem In-Reply-To: References: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu> <5010c3e0f211ae204ed53e2094f88e46.NginxMailingListEnglish@forum.nginx.org> Message-ID: I think that part of the power and challenge of using nginx?s caching is that there are many different ways of achieving the same or similar results, but some of the approaches will be more awkward than others. 
I think that it might help if you could express what the issue is that you are trying to solve, as opposed to the mechanism that you are wanting to use to solve your problem. For example, "Don't cache personalized pages", or "don't cache pages where the user is outside the US", or "don't cache pages if the user has logged in", or ...

> On Feb 14, 2018, at 4:44 PM, Friscia, Michael wrote:
>
> Maybe that's the problem, I want to disable cache if the response header is true but not do anything if it is false. I can change my logic in creating this header to only have it on pages where cache should be disabled if it is not possible to use an IF statement around it. I will post my config here once I get rid of the sensitive things, but the confusing thing is that it worked, then it stopped working.
>
> ___________________________________________
> Michael Friscia
> Office of Communications
> Yale School of Medicine
> (203) 737-7932 - office
> (203) 931-5381 - mobile
> http://web.yale.edu
>
> On 2/14/18, 4:39 PM, "nginx on behalf of webopsx" wrote:
>
> Hi,
>
> The map is processed on each request and should be very consistent.
>
> I thought you wanted to disable cache on the existence of a response header,
> not a request header.
>
> Otherwise I think we need more information to understand, such as how are
> you testing? Perhaps paste your full configuration here after removing any
> sensitive text.
> > Posted at Nginx Forum: https://urldefense.proofpoint.com/v2/url?u=https-3A__forum.nginx.org_read.php-3F2-2C278558-2C278576-23msg-2D278576&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=x0V3h5hp-I18hCxDQuN0U7f_JYUQYISVWlygl1QK-FU&s=sUMR29LFjCI97P-LSb-RwuojlvhtDhvfRiX_YAio19E&e= > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=x0V3h5hp-I18hCxDQuN0U7f_JYUQYISVWlygl1QK-FU&s=833NZxJsLHFd1YlQmOtXp22oR4K0OXPS-xz0mYs6r04&e= > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Feb 14 23:19:01 2018 From: nginx-forum at forum.nginx.org (George) Date: Wed, 14 Feb 2018 18:19:01 -0500 Subject: Nginx 1.13.9 HTTP/2 Server Push - non-compressed assets ? In-Reply-To: <20180214194933.GM67691@lo0.su> References: <20180214194933.GM67691@lo0.su> Message-ID: <10bd234bd16e6b47e1a2e80c63722feb.NginxMailingListEnglish@forum.nginx.org> thanks Ruslan for the update appreciate all your work and looking forward to playing with HTTP/2 Push finally ! :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278481,278582#msg-278582 From ru at nginx.com Thu Feb 15 04:09:00 2018 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 15 Feb 2018 07:09:00 +0300 Subject: Nginx 1.13.9 HTTP/2 Server Push - non-compressed assets ? 
In-Reply-To: <10bd234bd16e6b47e1a2e80c63722feb.NginxMailingListEnglish@forum.nginx.org> References: <20180214194933.GM67691@lo0.su> <10bd234bd16e6b47e1a2e80c63722feb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180215040900.GQ67691@lo0.su> On Wed, Feb 14, 2018 at 06:19:01PM -0500, George wrote: > thanks Ruslan for the update appreciate all your work and looking forward to > playing with HTTP/2 Push finally ! :) Start off right today: https://www.youtube.com/watch?v=wR1gF5Lhcq0 From nginx-forum at forum.nginx.org Thu Feb 15 05:09:36 2018 From: nginx-forum at forum.nginx.org (George) Date: Thu, 15 Feb 2018 00:09:36 -0500 Subject: Nginx 1.13.9 HTTP/2 Server Push - non-compressed assets ? In-Reply-To: <20180215040900.GQ67691@lo0.su> References: <20180215040900.GQ67691@lo0.su> Message-ID: Thanks for that video link :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278481,278585#msg-278585 From nginx-forum at forum.nginx.org Thu Feb 15 09:03:51 2018 From: nginx-forum at forum.nginx.org (Elifish4) Date: Thu, 15 Feb 2018 04:03:51 -0500 Subject: Nginx QPS rate limit Message-ID: Hi All, We are using nginx cluster to get lots of request. All the nginx under ELB, We want to limit the QPS requests by device name/id/mac/whatever I found that i can do it with Limit upstream module for nginx, but i wondering if there is better way to achieved the goal. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278588,278588#msg-278588 From nginx-forum at forum.nginx.org Thu Feb 15 09:33:19 2018 From: nginx-forum at forum.nginx.org (Andrzej Walas) Date: Thu, 15 Feb 2018 04:33:19 -0500 Subject: Files still on disc after inactive time Message-ID: Hi, I use nginx 1.12.2. I have settings like this: proxy_cache_path /ephemeral/nginx/cache levels=1:2 keys_zone=proxy-cache:4000m max_size=40g inactive=1d; proxy_temp_path /ephemeral/nginx/tmp; And I have two problems with this: 1. 
If during the download to the cache the connection is disconnected, the files that are not completely downloaded are left as junk in tmp and block the re-download of the given file to the cache.

2. Often, there are individual files that remain in the cache despite being inactive for more than one day. Restarting nginx does not help.

Br
Andrzej Walas

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,278589#msg-278589

From akhil.dangore at calsoftinc.com  Thu Feb 15 11:50:20 2018
From: akhil.dangore at calsoftinc.com (Akhil Dangore)
Date: Thu, 15 Feb 2018 17:20:20 +0530
Subject: How to distinguish between nginx and nginx plus ?
In-Reply-To: <5aabe28d-fb9d-d2c7-5bbc-ba0818e31eea@calsoftinc.com>
References: <5aabe28d-fb9d-d2c7-5bbc-ba0818e31eea@calsoftinc.com>
Message-ID: <9d40bb90-4908-f1d7-0f00-31a2a820438e@calsoftinc.com>

Hello Team,

Is there any way to distinguish between nginx and nginx plus except the "version check" (nginx -v)?

Below is the output of the ps aux command:

ubuntu@ubuntu-VirtualBox:~/Documents$ ps aux | grep nginx
root      1436  0.0  0.0  43068    80 ?  Ss  Feb12  0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx     1437  0.0  0.0  43596  1572 ?  S   Feb12  0:31 nginx: worker process
nginx     1438  0.0  0.0  43280   108 ?  S   Feb12  0:02 nginx: cache manager process
root     11063  0.0  0.2  32428  4788 ?  Ss  14:48  0:00 nginx: master process nginx -g daemon off;
systemd+ 11100  0.0  0.1  32900  3180 ?  S   14:48  0:00 nginx: worker process

Regards,
Akhil Dangore

From maxim at nginx.com  Thu Feb 15 11:53:13 2018
From: maxim at nginx.com (Maxim Konovalov)
Date: Thu, 15 Feb 2018 14:53:13 +0300
Subject: How to distinguish between nginx and nginx plus ?
In-Reply-To: <9d40bb90-4908-f1d7-0f00-31a2a820438e@calsoftinc.com>
References: <5aabe28d-fb9d-d2c7-5bbc-ba0818e31eea@calsoftinc.com> <9d40bb90-4908-f1d7-0f00-31a2a820438e@calsoftinc.com>
Message-ID: <0d3bb8b8-3418-e651-9122-c7a6f09d7882@nginx.com>

On 15/02/2018 14:50, Akhil Dangore wrote:
> Hello Team,
>
> Is there any way to distinguish between nginx and nginx plus except
> the "version check" (nginx -v)?
>
Yes, there is. This is -plus:

$ nginx -v
nginx version: nginx/1.13.7 (nginx-plus-r14-p1)

--
Maxim Konovalov

From akhil.dangore at calsoftinc.com  Thu Feb 15 12:32:20 2018
From: akhil.dangore at calsoftinc.com (Akhil Dangore)
Date: Thu, 15 Feb 2018 18:02:20 +0530
Subject: How to distinguish between nginx and nginx plus ?
In-Reply-To: <0d3bb8b8-3418-e651-9122-c7a6f09d7882@nginx.com>
References: <5aabe28d-fb9d-d2c7-5bbc-ba0818e31eea@calsoftinc.com> <9d40bb90-4908-f1d7-0f00-31a2a820438e@calsoftinc.com> <0d3bb8b8-3418-e651-9122-c7a6f09d7882@nginx.com>
Message-ID: <4430f1c3-04e8-8b3b-f026-40b74cfde346@calsoftinc.com>

Hello Maxim,

I am looking for some alternate way. Is it stored in any directory or file?

Regards,
Akhil Dangore

On 2/15/2018 5:23 PM, Maxim Konovalov wrote:
> On 15/02/2018 14:50, Akhil Dangore wrote:
>> Hello Team,
>>
>> Is there any way to distinguish between nginx and nginx plus except
>> the "version check" (nginx -v)?
>>
> Yes, there is. This is -plus:
>
> $ nginx -v
> nginx version: nginx/1.13.7 (nginx-plus-r14-p1)
>

From maxim at nginx.com  Thu Feb 15 12:39:33 2018
From: maxim at nginx.com (Maxim Konovalov)
Date: Thu, 15 Feb 2018 15:39:33 +0300
Subject: How to distinguish between nginx and nginx plus ?
In-Reply-To: <4430f1c3-04e8-8b3b-f026-40b74cfde346@calsoftinc.com>
References: <5aabe28d-fb9d-d2c7-5bbc-ba0818e31eea@calsoftinc.com> <9d40bb90-4908-f1d7-0f00-31a2a820438e@calsoftinc.com> <0d3bb8b8-3418-e651-9122-c7a6f09d7882@nginx.com> <4430f1c3-04e8-8b3b-f026-40b74cfde346@calsoftinc.com>
Message-ID:

Hi Akhil,

On 15/02/2018 15:32, Akhil Dangore wrote:
> Hello Maxim,
>
> I am looking for some alternate way.
>
> Is it stored in any directory or file?
>
I am not sure I understand your intentions fully. Would you mind elaborating?

By the way, if you are a -plus customer, it makes sense to open a support ticket with your questions.

Thanks,
Maxim

--
Maxim Konovalov

From michael.friscia at yale.edu  Thu Feb 15 13:02:50 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Thu, 15 Feb 2018 13:02:50 +0000
Subject: Response Header IF statement problem
In-Reply-To:
References: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu> <5010c3e0f211ae204ed53e2094f88e46.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8CFBA9FC-9864-4B5E-A86F-BF67E9616CF4@yale.edu>

Yes, I should have explained the problem up front. I made the wrong assumption that I was asking a simple question, and quickly realized I was getting good answers but my approach was likely flawed from the start.

We are using Nginx just as a cache mechanism across many custom DNS names. As a result we have many server {} blocks handling a variety of different DNS names, vanity URLs and, in a few cases, SSL redirects. Where possible, a single server {} block handles a wildcard DNS name, but there are reasons we separate some of them out. Given the number of DNS entries, I have also created a system for creating *.conf files, so I do not use a single conf file for all the servers; instead I divide them out into many conf files.
For special configuration items that are only included sometimes, I use *.inc files, the idea being that nginx.conf includes *.conf but does not include *.inc; that way I can make my server{} blocks cleaner and put common configurations into an inc file. The main benefit is that almost all the config files fit on the screen without much scrolling in VIM.

Within each of these DNS instances we have a mix of login-required content. We use three different login methods: IP whitelists, local user/password set in our CMS, and NetID via CAS/Shibboleth. We don't want any of the login pages cached. The way we differentiate this relates to the mechanics on the backend of the web application, so we created a custom header called X-Secured-Page which is either true or false. In the event the value is true, we do not want to cache. To go a step further, I have made use of add_header to set different headers for when we cache a page versus when we do not cache a page, the idea being that Tier1 support could view these headers to deflect support calls when things are working as intended, but also to create Jira tickets with the custom header content when things do not seem to be working correctly.

In addition to this, when the secured page is true I use this setting:

proxy_pass_header Set-Cookie;

but when the secured page is false I use this setting:

proxy_hide_header Set-Cookie;

As for why, it is actually the CAS/Shibboleth authentication that fails to work without this setting change. I could easily just always pass the cookie, but I was trying to eliminate any noise I could. We have a fairly complicated stack, given that Nginx passes most traffic to a server cluster that handles URL rewrites and vanity URLs at a level we did not want to approach in Nginx. We may in the future, but right now that is off the table given the complexity and work that went into the management interfaces the content owners use to handle the URL rewrites.
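[The *.conf / *.inc split described above can be sketched roughly as follows; the paths and file names here are illustrative assumptions, not taken from the actual setup:

```nginx
# nginx.conf -- only *.conf files are picked up automatically
http {
    include /etc/nginx/sites/*.conf;     # one server{} block per DNS name
}

# sites/example.conf -- a server{} block pulls shared *.inc snippets in explicitly
server {
    server_name example.med.yale.edu;    # hypothetical name
    include /etc/nginx/snippets/cache-defaults.inc;
}
```

Because the glob in nginx.conf matches only *.conf, the *.inc snippets are never loaded on their own; they only take effect where a server{} block includes them by name.]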
So my thought process started with a test for $http_x_secured_page, which for obvious reasons would not work because I was failing to distinguish request versus response headers, and I am very new to Nginx.

Using the responses yesterday, I was 99% sure I had a working model nailed down. I had a map block that looked like this:

map $upstream_http_x_secured_page $nocache {
    default "0";
    "true"  "1";
    "false" "2";
}

Then in my server block I had logic that looks like this:

server {
    location / {
        if ($nocache = "1") { return 418; }
        if ($nocache = "2") { return 419; }
    }
}

Then I had a custom location for each; again, this was all just to test, and it worked. I used a combination of cURL and the inspector in Firefox to see the headers, and I was consistently getting the correct code from either the 418 or 419 block. So I moved the same identical code into production but used a new server block. In all this testing I was working with a DNS name not yet put into Nginx, so I was using all HOSTS file manipulation, but the content of the server block was accurate, so the responses were correct. This is the same method I used to test every DNS name moved in, and it seemed like the simplest way to test in production; not to mention we run 4 nginx servers as part of a globally redundant solution, and using the HOSTS file allows me to easily test against a single production server before rolling my configs to all servers.

In trying to figure out the problem, I read about how the map{} block worked and was confused by it saying that it stores the last request made, which caused me to wonder if the design I had, with many server{} blocks all sharing the same map{} directive, was somehow flawed, so my testing in production was doomed from the start. I also really do not understand the order of operations taking place. If the map{} took place inside a server block, then I would understand it better and see it encapsulated from other server{} requests.
Or if I was to use upstream{} blocks and it went inside that, again I would understand it more. I did try implementing upstream{} sections, but it did not seem to make sense: I only had one server defined inside of it, I did not see a direct relationship with map{}, and in testing it made no difference whether I had this defined or not, so I removed it.

The bottom line is that I need a solution that will cache almost all pages, and then in some cases, when X-Secured-Page is true, I don't want to cache. I am totally flexible; we could easily set it up so that the X-Secured-Page header only exists when the value would be true and does not exist at all when it would be false, if that would make things easier or more reliable. I'm also open to doing just about anything else to make this happen. On the code side, this value is determined in a security class that was already setting some headers we used for debugging at the application level, and it would not be hard to set up a service that could be called (even cached behind nginx) where a URL is cURL'd to see if the page should be secure or not. I'm not sure if having that sort of logic inside a server{} block is possible, good practice, or the stupidest idea in the world.

But like I said, I'm willing to approach this problem however necessary, with one exception. As it stands now, I have a server block that we moved in, which is when we realized there was going to be a problem with login pages. So inside the server{} block I set up location{} blocks for the folders that were secure and handled them with the alternative add_header and cookie settings. Security on folders can be a little more dynamic than I'd like and also not predictable. We have hundreds of web content editors that can make secure content easily in our CMS, and they make significant use of this option. So creating a system that would use location{} blocks has to be off the table; it would be a tremendous amount of work to maintain that setup.
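[A minimal sketch of the response-header-driven bypass being discussed in this thread. The upstream address and cache zone name are placeholders; X-Secured-Page is the poster's custom header. The key point is that proxy_no_cache takes a variable and is evaluated after the response arrives, when $upstream_http_* variables are populated, so no if() block is needed:

```nginx
map $upstream_http_x_secured_page $nocache {
    default 0;
    "true"  1;    # X-Secured-Page: true -> do not store the response
}

server {
    location / {
        proxy_pass      http://127.0.0.1:8080;   # placeholder upstream
        proxy_cache     proxy-cache;             # placeholder zone name
        proxy_no_cache  $nocache;                # skip storing when header is "true"
    }
}
```

Note that proxy_no_cache only prevents the response from being saved; an already-cached copy would still be served unless proxy_cache_bypass (which cannot see response headers) also applies.]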
___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

On 2/14/18, 5:13 PM, "nginx on behalf of Peter Booth" wrote:

I think that part of the power and challenge of using nginx's caching is that there are many different ways of achieving the same or similar results, but some of the approaches will be more awkward than others. I think that it might help if you could express what the issue is that you are trying to solve, as opposed to the mechanism that you are wanting to use to solve your problem. For example, "Don't cache personalized pages", or "don't cache pages where the user is outside the US", or "don't cache pages if the user has logged in", or ...

> On Feb 14, 2018, at 4:44 PM, Friscia, Michael wrote:
>
> Maybe that's the problem. I want to disable cache if the response header is true but not do anything if it is false. I can change my logic in creating this header to only have it on pages where cache should be disabled, if it is not possible to use an IF statement around it. I will post my config here once I get rid of the sensitive things, but the confusing thing is that it worked, then it stopped working.
>
> ___________________________________________
> Michael Friscia
> Office of Communications
> Yale School of Medicine
> (203) 737-7932 - office
> (203) 931-5381 - mobile
> http://web.yale.edu
>
> On 2/14/18, 4:39 PM, "nginx on behalf of webopsx" wrote:
>
> Hi,
>
> The map is processed on each request and should be very consistent.
>
> I thought you wanted to disable cache on the existence of a response header,
> not a request header.
>
> Otherwise I think we need more information to understand, such as how are
> you testing? Perhaps paste your full configuration here after removing any
> sensitive text.
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278558,278576#msg-278576

From michael.friscia at yale.edu  Thu Feb 15 13:22:04 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Thu, 15 Feb 2018 13:22:04 +0000
Subject: Response Header IF statement problem
In-Reply-To: <8CFBA9FC-9864-4B5E-A86F-BF67E9616CF4@yale.edu>
References: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu> <5010c3e0f211ae204ed53e2094f88e46.NginxMailingListEnglish@forum.nginx.org> <8CFBA9FC-9864-4B5E-A86F-BF67E9616CF4@yale.edu>
Message-ID: <2A4BBE83-264E-49E8-885D-1ECB45019A36@yale.edu>

To add one more thing. I mentioned that my testing failed.
Exactly what was failing was the map{} block that worked and then stopped working: the $nocache variable would always return the default value no matter what I did. So, from a previous post, the suggested code was:

proxy_no_cache $nocache;

I also included:

add_header X-NoCacheValue $nocache;

My initial tests worked well; I saw the new header, and the $nocache value that was being decided was being passed just fine. But I don't know if this sort of use of the $nocache value was incorrect. I tried this using the if() block I mentioned below, as well as eliminating the if() blocks entirely and handling it all within the server{} block, to rule out problems with if(), which I am well versed in.

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

On 2/15/18, 8:03 AM, "nginx on behalf of Friscia, Michael" wrote:

> [...]

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru  Thu Feb 15 13:39:15 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 15 Feb 2018 16:39:15 +0300
Subject: Files still on disc after inactive time
In-Reply-To:
References:
Message-ID:
<20180215133915.GU24410@mdounin.ru>

Hello!

On Thu, Feb 15, 2018 at 04:33:19AM -0500, Andrzej Walas wrote:

> Hi,
>
> I use nginx 1.12.2. I have settings like this:
> proxy_cache_path /ephemeral/nginx/cache levels=1:2
> keys_zone=proxy-cache:4000m max_size=40g inactive=1d;
> proxy_temp_path /ephemeral/nginx/tmp;
>
> And I have two problems with this:
> 1. If during the download to the cache the connection is disconnected, the
> files that are not completely downloaded are left as junk in tmp and block
> the re-download to the cache of the given file.

If you are able to reproduce this, please provide a debug log. See http://nginx.org/en/docs/debugging_log.html for details.

> 2. Often, there are individual files that remain in the cache despite being
> inactive for more than one day. Restart nginx does not help.

Restart implies that all files in the cache become active. Please make sure to wait for at least the "inactive=" time after a restart. Also, please make sure that there are no alerts in the logs (notably, no process crashes). It might also be a good idea to test without 3rd-party modules if there are any.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Thu Feb 15 13:48:11 2018
From: nginx-forum at forum.nginx.org (Andrzej Walas)
Date: Thu, 15 Feb 2018 08:48:11 -0500
Subject: Files still on disc after inactive time
In-Reply-To: <20180215133915.GU24410@mdounin.ru>
References: <20180215133915.GU24410@mdounin.ru>
Message-ID: <2dcafe204a64a55372948c82d15408c2.NginxMailingListEnglish@forum.nginx.org>

Thanks for the answer.
In the logs I see a couple of entries like this:

[alert] 11373#0: ignore long locked inactive cache entry dba676c7ebb90e210efc04d51aaa4858, count:11

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,278600#msg-278600

From mdounin at mdounin.ru  Thu Feb 15 14:38:54 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 15 Feb 2018 17:38:54 +0300
Subject: Files still on disc after inactive time
In-Reply-To: <2dcafe204a64a55372948c82d15408c2.NginxMailingListEnglish@forum.nginx.org>
References: <20180215133915.GU24410@mdounin.ru> <2dcafe204a64a55372948c82d15408c2.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180215143853.GV24410@mdounin.ru>

Hello!

On Thu, Feb 15, 2018 at 08:48:11AM -0500, Andrzej Walas wrote:

> Thanks for the answer. In the logs I see a couple of entries like this:
> [alert] 11373#0: ignore long locked inactive cache entry
> dba676c7ebb90e210efc04d51aaa4858, count:11

This message indicates that the cache entry in question was locked a long time ago and was never unlocked. The next question is how this happened. Unfortunately, this is not something easy to answer unless you have debug logs since nginx start. The most likely reason is a process crash; check the logs since the last restart to see if there are any. Also, as already suggested, try testing without 3rd-party modules if there are any.

There were also reports that long locked entries might appear with http2 enabled; see https://trac.nginx.org/nginx/ticket/1163 for details. If this is the case, consider updating to the latest nginx version (1.13.8 right now) to see if it helps.
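[As a rough aid for the checks suggested above (worker crashes and long-locked cache entries), a short script can tally the relevant alert lines from an error log. This is an editorial sketch, not part of the original thread; the sample lines are modeled on the alerts quoted here:

```python
import re

# Patterns for the two symptoms discussed in this thread:
# long-locked cache entries, and worker processes killed by a signal.
LOCKED = re.compile(r"ignore long locked inactive cache entry")
CRASH = re.compile(r"worker process \d+ exited on signal (\d+)")

def scan_error_log(lines):
    """Return (locked_entry_alerts, crash_signals) found in the given log lines."""
    locked = 0
    signals = []
    for line in lines:
        if LOCKED.search(line):
            locked += 1
        m = CRASH.search(line)
        if m:
            signals.append(int(m.group(1)))
    return locked, signals

if __name__ == "__main__":
    # Sample lines based on the alerts quoted in this thread.
    sample = [
        "[alert] 11373#0: ignore long locked inactive cache entry dba676c7, count:11",
        "[alert] 11371#0: worker process 24870 exited on signal 9",
    ]
    print(scan_error_log(sample))  # -> (1, [9])
```

If crashes show up alongside the long-locked alerts, that supports the process-crash explanation above.]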
--
Maxim Dounin
http://mdounin.ru/

From rpaprocki at fearnothingproductions.net  Thu Feb 15 17:30:42 2018
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Thu, 15 Feb 2018 09:30:42 -0800
Subject: ip address masking
In-Reply-To: <1072191518572765@web44o.yandex.ru>
References: <1072191518572765@web44o.yandex.ru>
Message-ID:

Hi,

On Tue, Feb 13, 2018 at 5:46 PM, Tom wrote:

> Hi,
>
> I'm wondering if anyone has successfully masked ip addresses in nginx
> before they are written to a log file.
>
> I understand there are reasons why you would and would not do this.
>
> Anyway, my config so far, which I believe works for ipv4 addresses, but
> probably on only a few formats of ipv6 addresses. I've used secondary map
> directives to append text to the short ip address as I couldn't work out
> how to concatenate the variable with text, so concatenated two variables
> instead. (Hope that makes sense).
>
> log_format ipmask '$remote_addr $ip_anon';
>
> map $remote_addr $ip_anon {
>     default $remote_addr;
>     "~^(?P<ipv4>[0-9]{1,3}\.[0-9]{1,3}.)(?P<junkv4>.*)" $ipv4$ipv4suffix;
>     "~^(?P<ipv6>[^:]+:[^:]+)(?P<junkv6>.*$)" '$ipv6 $junkv6';
> }
>
> map - $ipv4suffix {
>     default 0.0;
> }
> map - $ipv6suffix {
>     default XX;
> }
>
> server {
>     listen 8080;
>     listen [::]:8080;
>     server_name _;
>     access_log /tmp/ngn-ip.log ipmask;
>     allow all;
> }
>
> Anyone got any thoughts on this?
> Thanks

I suspect it might be a bit more efficient to do this with a simple module than trying to play around with more variables, maps, and regular expressions. I hacked together a quick module to do this: https://github.com/p0pr0ck5/ngx_http_ip_mask_module. You could also do the same thing with a little bit of Lua scripting (simply AND-ing off the unwanted bits). I'd guess extending out the same logic for IPv6 wouldn't be too hard, but that's left as an exercise for the reader :p
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org  Thu Feb 15 17:36:07 2018
From: nginx-forum at forum.nginx.org (George)
Date: Thu, 15 Feb 2018 12:36:07 -0500
Subject: Nginx 1.13.9 HTTP/2 Server Push - non-compressed assets ?
In-Reply-To: <67a56ac7eccb49fc5f009455ea40b338.NginxMailingListEnglish@forum.nginx.org>
References: <67a56ac7eccb49fc5f009455ea40b338.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <501395253cf590e912a2ed08658c3948.NginxMailingListEnglish@forum.nginx.org>

Thanks Ruslan, just tested your committed fixes for this in the master branch and they are working nicely: https://community.centminmod.com/threads/hurray-http-2-server-push-for-nginx.11910/page-2#post-59602 :)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278481,278608#msg-278608

From nginx-forum at forum.nginx.org  Fri Feb 16 08:16:27 2018
From: nginx-forum at forum.nginx.org (Andrzej Walas)
Date: Fri, 16 Feb 2018 03:16:27 -0500
Subject: Files still on disc after inactive time
In-Reply-To: <20180215143853.GV24410@mdounin.ru>
References: <20180215143853.GV24410@mdounin.ru>
Message-ID: <8adb8244f87408ee1b1e81607ecd8acf.NginxMailingListEnglish@forum.nginx.org>

After these inactive logs I have other logs:

[alert] 11371#0: worker process 24870 exited on signal 9

Can you explain what happens when I have logs like this:

[error] 9045#0: *1138267 readv() failed (104: Connection reset by peer) while reading upstream...

I have a problem like https://forum.nginx.org/read.php?2,278534 too. Max size is max_size=40g, but often I have more than 100 GB of cache on disk and a problem with empty space to download new files to the cache.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,278611#msg-278611 From peter_booth at me.com Fri Feb 16 09:25:58 2018 From: peter_booth at me.com (Peter Booth) Date: Fri, 16 Feb 2018 04:25:58 -0500 Subject: Files still on disc after inactive time In-Reply-To: <8adb8244f87408ee1b1e81607ecd8acf.NginxMailingListEnglish@forum.nginx.org> References: <20180215143853.GV24410@mdounin.ru> <8adb8244f87408ee1b1e81607ecd8acf.NginxMailingListEnglish@forum.nginx.org> Message-ID: 100 GB of cached files sounds enormous. What kinds of files are you caching? How large are they? How many do you have? If you look at your access log, what hit rate is your cache seeing? Sent from my iPad > On Feb 16, 2018, at 3:16 AM, Andrzej Walas wrote: > > After this inactive logs I have anther logs: > [alert] 11371#0: worker process 24870 exited on signal 9 > > Can you explain whats happend when I have logs like this: > [error] 9045#0: *1138267 readv() failed (104: Connection reset by peer) > while reading upstream... > > I have problem like https://forum.nginx.org/read.php?2,278534 too. Max size > is max_size=40g but offen I have more than 100 GB cache on disc and problem > with empty space to download new files to cache. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,278611#msg-278611 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Feb 16 09:39:50 2018 From: nginx-forum at forum.nginx.org (Andrzej Walas) Date: Fri, 16 Feb 2018 04:39:50 -0500 Subject: Files still on disc after inactive time In-Reply-To: References: Message-ID: <7798563d5b0310e91cb345e8333e169c.NginxMailingListEnglish@forum.nginx.org> For me a 40-50 GB cache is OK, because I have multiple files of 2-5 GB.
The problem, to my mind, is that I have these settings: proxy_cache_path /ephemeral/nginx/cache levels=1:2 keys_zone=proxy-cache:4000m max_size=40g inactive=1d; yet I have over 40 GB on disk, including files older than the 1-day inactive window. Can you tell me what happens to the downloaded part of a file when I get: [error] 16082#0: *1264804 upstream prematurely closed connection while reading upstream [crit] 16082#0: *1264770 pwritev() has written only 49152 of 151552 while reading upstream Is that partial file still on disk, not deleted after the error? On most of my proxies the rate is 90% HIT to 9% MISS and 1% ERROR, but a couple of them have stats like 10% HIT, 60% MISS, 30% ERROR. Sometimes I have a problem with a MISS on an existing file: in the logs I see 1 MISS, then 10-20 HITs, and after that only repeated MISSes on the same file. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,278613#msg-278613 From r1ch+nginx at teamliquid.net Fri Feb 16 10:36:09 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 16 Feb 2018 11:36:09 +0100 Subject: Files still on disc after inactive time In-Reply-To: <7798563d5b0310e91cb345e8333e169c.NginxMailingListEnglish@forum.nginx.org> References: <7798563d5b0310e91cb345e8333e169c.NginxMailingListEnglish@forum.nginx.org> Message-ID: > [alert] 11371#0: worker process 24870 exited on signal 9 This is almost certainly the cause of your problems - you need to figure out why the nginx processes are crashing and resolve that. Most likely a 3rd party module is responsible. On Fri, Feb 16, 2018 at 10:39 AM, Andrzej Walas wrote: > For me 40-50 GB cache is ok, because I have multiple files like 2-5GB. > Problem in my mind is this that I have settings: > proxy_cache_path /ephemeral/nginx/cache levels=1:2 > keys_zone=proxy-cache:4000m max_size=40g inactive=1d; > but I have over 40GB on disc and files older than 1 day inactive.
> Can you tell me what happend with downloaded part of files when I have: > [error] 16082#0: *1264804 upstream prematurely closed connection while > reading upstream > [crit] 16082#0: *1264770 pwritev() has written only 49152 of 151552 > while reading upstream > This part of file is still on disc and don't deleted after error? > > On most of my proxy rate is 90% HIT to 9% MISS and 1% ERROR. But couple of > them have stats like 10% HIT, 60% MISS, 30% ERROR. > > Sometimes I have problem with MISS on existing file. In logs I see 1 MISS, > after that 10-20 HIT and after that only multiple MISS on this same file. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,278589,278613#msg-278613 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Feb 16 11:15:06 2018 From: nginx-forum at forum.nginx.org (Andrzej Walas) Date: Fri, 16 Feb 2018 06:15:06 -0500 Subject: Files still on disc after inactive time In-Reply-To: References: Message-ID: <2abfbbee818e3036fdcb015cae7cd9a0.NginxMailingListEnglish@forum.nginx.org> Can you answer this: > Can you tell me what happend with downloaded part of files when I have: > [error] 16082#0: *1264804 upstream prematurely closed connection while > reading upstream > [crit] 16082#0: *1264770 pwritev() has written only 49152 of 151552 > while reading upstream > This part of file is still on disc and don't deleted after error? ?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,278617#msg-278617 From mdounin at mdounin.ru Fri Feb 16 11:44:50 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 Feb 2018 14:44:50 +0300 Subject: Files still on disc after inactive time In-Reply-To: <8adb8244f87408ee1b1e81607ecd8acf.NginxMailingListEnglish@forum.nginx.org> References: <20180215143853.GV24410@mdounin.ru> <8adb8244f87408ee1b1e81607ecd8acf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180216114450.GZ24410@mdounin.ru> Hello! On Fri, Feb 16, 2018 at 03:16:27AM -0500, Andrzej Walas wrote: > After this inactive logs I have anther logs: > [alert] 11371#0: worker process 24870 exited on signal 9 Signal 9 is SIGKILL sent to the nginx worker process. Killing nginx worker processes will obviously result in various problems with caches, no surprise here. You have to investigate further who sent it and why, and eliminate those conditions. Most likely it was something like the OOM killer (check system logs), though it may be something different. Note that in nginx 1.13.0+ the log message in question will contain the PID of the process which sent the signal. If you are unable to find the source of the signal, upgrading to nginx 1.13.x might help with the investigation. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Fri Feb 16 12:35:30 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Feb 2018 12:35:30 +0000 Subject: Response Header IF statement problem In-Reply-To: <2A4BBE83-264E-49E8-885D-1ECB45019A36@yale.edu> References: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu> <5010c3e0f211ae204ed53e2094f88e46.NginxMailingListEnglish@forum.nginx.org> <8CFBA9FC-9864-4B5E-A86F-BF67E9616CF4@yale.edu> <2A4BBE83-264E-49E8-885D-1ECB45019A36@yale.edu> Message-ID: <20180216123530.GO3063@daoine.org> On Thu, Feb 15, 2018 at 01:22:04PM +0000, Friscia, Michael wrote: Hi there, > To add one more thing. I mentioned that my testing failed.
> Exactly what was failing is that the map{} block that worked and then stopped working was the problem: the $nocache variable would always return the default value no matter what I did. There is a lot of information here, and your design seems quite complex. I think that you are close to having an nginx config that does what you want; but I suspect that it will help you to have a clear picture of how things are intended to work within nginx, so that you understand the building blocks available to you, so that you can put them together in the way that you want. Very briefly and hand-wavily (and check the proper documentation to fill in all the pieces I am leaving out): In your common case, a request comes to nginx; it is handled in one server{} in one location{}, which does proxy_pass to an upstream http server. nginx checks the proxy_cache_key corresponding to the request, and if that key is in the proxy_cache, then the content is returned from there directly. If that key is not in the proxy_cache, then the request is made to upstream, the response is written to the proxy_cache, and the response is also returned to the client. Not everything is cached from upstream. There are http rules for when things can and cannot be cached, and for how long they can be cached, and nginx (in general) obeys them. So: the simplest thing on the nginx side is for your upstream http server to use http rules to say "this response is not cacheable", and nginx will deal with it without extra config on your part. If you want nginx not to look in the proxy_cache for the proxy_cache_key for this request, you can use proxy_cache_bypass. If you want nginx not to write the response from upstream into the proxy_cache, you can use proxy_no_cache. "map" is defined outside all server{}s, and takes the form "map current$thing $new_variable {}".
$new_variable gets a value the first time it is used in this request -- if $new_variable is not used, the map is never consulted; so having multiple "map"s that are not used by this request has zero overhead for this request. And: unless you understand it, don't use "if", especially not inside a location{}. If you want to, for example, add a debug-response-header with a certain value, do exactly that. Do not try to do "if the value is non-zero, add the header; else don't add the header". Just add the header with the value, or with a concatenated set of values, and let the reading side decide how to interpret it. So: putting all that back in to what you seem to want to do... * do use a map to read $upstream_http_x_secured_page and to set $your_variable to non-blank and non-zero only when the $upstream_ variable is true * do use a proxy_no_cache which includes $your_variable * do not read $your_variable anywhere early -- such as in a proxy_cache_bypass directive, or anywhere within an if() block -- because it is unnecessary, and it will break things * do make sure your proxy_cache is empty of things that you now newly want to "proxy_no_cache", before you start And if you have any behaviour you do not want or do not understand, it is very helpful if you can show the exact config that shows that behaviour, so that someone else can reproduce it without having to guess what you might have meant. Ideally, a test config will have a few tens of lines, with most of the unnecessary parts removed and with all of the private details replaced with information that was tested and works on something like "localhost". copy-paste is better than re-type. 
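As a minimal sketch of those bullet points (the header name X-Secured-Page and the names my_cache/backend are assumptions taken from this thread, not a verified config):

```nginx
# Non-empty/non-zero only when upstream marks the response as secured.
map $upstream_http_x_secured_page $no_cache {
    default 0;
    ~*true  1;
}

server {
    location / {
        proxy_pass     http://backend;
        proxy_cache    my_cache;
        # Evaluated when the response arrives, so the upstream header is
        # available; a non-empty, non-"0" value keeps the response out
        # of the cache.
        proxy_no_cache $no_cache;
    }
}
```

Note that $no_cache is read only by proxy_no_cache here, late in the request, which is exactly the "do not read the variable anywhere early" point above.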
Good luck with it, f -- Francis Daly francis at daoine.org From michael.friscia at yale.edu Fri Feb 16 13:49:59 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 16 Feb 2018 13:49:59 +0000 Subject: Response Header IF statement problem In-Reply-To: <20180216123530.GO3063@daoine.org> References: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu> <5010c3e0f211ae204ed53e2094f88e46.NginxMailingListEnglish@forum.nginx.org> <8CFBA9FC-9864-4B5E-A86F-BF67E9616CF4@yale.edu> <2A4BBE83-264E-49E8-885D-1ECB45019A36@yale.edu> <20180216123530.GO3063@daoine.org> Message-ID: <6B309F41-E843-457C-B329-9BDE3A5D7216@yale.edu> Thank you, this was incredibly useful and helped me think this through in Nginx terms and I have everything working now. Thank you again! Minor side question, is there a variable I can use to post to a debug header to indicate if a page was newly written to the cache versus a page that was read from cache? If I had this information, then from a tier2/3 support side it could speed up debugging. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu On 2/16/18, 7:35 AM, "nginx on behalf of Francis Daly" wrote: On Thu, Feb 15, 2018 at 01:22:04PM +0000, Friscia, Michael wrote: Hi there, > To add one more thing. I mentioned that my testing failed. Exactly what was failing is that the map{} block that worked and then stopped working was the problem, the $nocache variable would always return the default value no matter what I did. there is a lot of information here, and your design seems quite complex. I think that you are close to having an nginx config that does what you want; but I suspect that it will help you to have a clear picture of how things are intended to work within nginx, so that you understand the building blocks available to you, so that you can put them together in the way that you want. 
Very briefly and hand-wavily (and check the proper documentation to fill in all the pieces I am leaving out): In your common case, a request comes to nginx; it is handled in one server{} in one location{}, which does proxy_pass to an upstream http server. nginx checks the proxy_cache_key corresponding to the request, and if that key is in the proxy_cache, then the content is returned from there directly. If that key is not in the proxy_cache, then the request is made to upstream, the response is written to the proxy_cache, and the response is also returned to the client. Not everything is cached from upstream. There are http rules for when things can and cannot be cached, and for how long they can be cached, and nginx (in general) obeys them. So: the simplest thing on the nginx side is for your upstream http server to use http rules to say "this response is not cacheable", and nginx will deal with it without extra config on your part. If you want nginx not to look in the proxy_cache for the proxy_cache_key for this request, you can use proxy_cache_bypass. If you want nginx not to write the response from upstream into the proxy_cache, you can use proxy_no_cache. "map" is defined outside all server{}s, and takes the form "map current$thing $new_variable {}". $new_variable gets a value the first time it is used in this request -- if $new_variable is not used, the map is never consulted; so having multiple "map"s that are not used by this request has zero overhead for this request. And: unless you understand it, don't use "if", especially not inside a location{}. If you want to, for example, add a debug-response-header with a certain value, do exactly that. Do not try to do "if the value is non-zero, add the header; else don't add the header". Just add the header with the value, or with a concatenated set of values, and let the reading side decide how to interpret it. So: putting all that back in to what you seem to want to do... 
* do use a map to read $upstream_http_x_secured_page and to set $your_variable to non-blank and non-zero only when the $upstream_ variable is true * do use a proxy_no_cache which includes $your_variable * do not read $your_variable anywhere early -- such as in a proxy_cache_bypass directive, or anywhere within an if() block -- because it is unnecessary, and it will break things * do make sure your proxy_cache is empty of things that you now newly want to "proxy_no_cache", before you start And if you have any behaviour you do not want or do not understand, it is very helpful if you can show the exact config that shows that behaviour, so that someone else can reproduce it without having to guess what you might have meant. Ideally, a test config will have a few tens of lines, with most of the unnecessary parts removed and with all of the private details replaced with information that was tested and works on something like "localhost". copy-paste is better than re-type. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=a54DkMNy8szWJBw16l-IByb_d7_KsXWo75h2opft0C0&s=MAdD_ml0ZQjhTod2iWoNxS4zKvzsZt2nw2l1N4uTGfk&e= From vfclists at gmail.com Fri Feb 16 14:32:59 2018 From: vfclists at gmail.com (vfclists .) Date: Fri, 16 Feb 2018 14:32:59 +0000 Subject: How to create a reverse and SSL proxy for xpra. Message-ID: I want to create a reverse and SSL proxy for xpra - https://xpra.org/, a remote desktop facility for X windows, like x2go and VNC. The proxy is targeted at the HTML5 option which allows the info to be transferred via websockets. The way to connect directly to an xpra HTML5 server is to enter the address and the port directly to the browser eg. http://1.2.3.4:5000. 
A page appears prompting for the target server and port, login credentials and a few others, and when filled in properly the desktop comes up. This is my first attempt to create an nginx proxy from scratch and I have already hit a snag. My aim is to have different locations connecting to server:port combinations, so I have something like this:

location /xpra {
    proxy_pass http://111.222.213.221:14003;
    # proxy_pass http://127.0.0.1:14003;
    proxy_http_version 1.1;
    proxy_buffering off;
}

proxy_pass is the main option whose relevance I have checked; the other two are options which seem to be useful. Whenever I try to open the page http://111.222.111.221/xpra, the following results: Error response Error code 404. Message: File not found. Error code explanation: 404 = Nothing matches the given URI. ================= The code for the client is at - https://xpra.org/html5/connect.html -- Frank Church ======================= http://devblog.brahmancreations.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Feb 16 15:37:56 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Feb 2018 15:37:56 +0000 Subject: Response Header IF statement problem In-Reply-To: <6B309F41-E843-457C-B329-9BDE3A5D7216@yale.edu> References: <19638EFE-5B04-464A-A07E-55BEDB2B1C3D@yale.edu> <5010c3e0f211ae204ed53e2094f88e46.NginxMailingListEnglish@forum.nginx.org> <8CFBA9FC-9864-4B5E-A86F-BF67E9616CF4@yale.edu> <2A4BBE83-264E-49E8-885D-1ECB45019A36@yale.edu> <20180216123530.GO3063@daoine.org> <6B309F41-E843-457C-B329-9BDE3A5D7216@yale.edu> Message-ID: <20180216153756.GP3063@daoine.org> On Fri, Feb 16, 2018 at 01:49:59PM +0000, Friscia, Michael wrote: Hi there, > Thank you, this was incredibly useful and helped me think this through in Nginx terms and I have everything working now. Thank you again! You're welcome. Good to hear that you have things working the way you want them to.
> Minor side question, is there a variable I can use to post to a debug header to indicate if a page was newly written to the cache versus a page that was read from cache? If I had this information, then from a tier2/3 support side it could speed up debugging. Possibly $upstream_cache_status? http://nginx.org/r/$upstream_cache_status, or a section in https://www.nginx.com/blog/nginx-caching-guide/ for more information. If the seven states of that variable reveal more information than you want, you could use a map to turn three values into "it came from upstream", and four into "it came from cache", and just send that binary value in the debug header. Note that that variable is not exactly what you asked for; it might be close enough for what you want. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Feb 19 09:38:00 2018 From: nginx-forum at forum.nginx.org (Andrzej Walas) Date: Mon, 19 Feb 2018 04:38:00 -0500 Subject: Files still on disc after inactive time In-Reply-To: <20180216114450.GZ24410@mdounin.ru> References: <20180216114450.GZ24410@mdounin.ru> Message-ID: <21babf6881a18f3c29fd4536d1308efd.NginxMailingListEnglish@forum.nginx.org> Thanks for your reply, I will update nginx and search again. Can you answer a couple of questions: 1. Can you tell me what happens to the downloaded part of a file when I get: 1.1 [error] 16082#0: *1264804 upstream prematurely closed connection while reading upstream 1.2 [crit] 16082#0: *1264770 pwritev() has written only 49152 of 151552 while reading upstream Is that partial file still on disk, not deleted after the error? 2. Why, when I set 40g for the cache, do I have 90-100 GB in the cache? 3. Can you tell me when the official repo for Red Hat 7 will get nginx 1.13.+? Currently the newest is 1.12.2.
Br Andrzej Walas Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,278641#msg-278641 From wiktor at metacode.biz Mon Feb 19 11:02:06 2018 From: wiktor at metacode.biz (Wiktor Kwapisiewicz) Date: Mon, 19 Feb 2018 12:02:06 +0100 Subject: Routing based on ALPN Message-ID: Hello, I'm looking for a way to route traffic on port 443 based on ALPN value without SSL termination. ssl_preread_module [1] does something similar but the only exposed variable ($ssl_preread_server_name) is for SNI, not ALPN. A bit of context. I'd like to use nginx to host regular HTTPS server on port 443 but if the ALPN value is 'xmpp-client' transparently proxy the traffic to my local Jabber server. This feature [2] is already supported by several XMPP clients. Is there a way to access and save ALPN value to a variable? Thank you for your time. Kind regards, Wiktor [1]: https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html [2]: https://xmpp.org/extensions/xep-0368.html -- */metacode/* From vl at nginx.com Mon Feb 19 11:45:15 2018 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 19 Feb 2018 14:45:15 +0300 Subject: Routing based on ALPN In-Reply-To: References: Message-ID: <20180219114514.GA15746@vlpc> On Mon, Feb 19, 2018 at 12:02:06PM +0100, Wiktor Kwapisiewicz via nginx wrote: > Hello, > > I'm looking for a way to route traffic on port 443 based on ALPN value > without SSL termination. > > ssl_preread_module [1] does something similar but the only exposed > variable ($ssl_preread_server_name) is for SNI, not ALPN. > > A bit of context. I'd like to use nginx to host regular HTTPS server on port > 443 but if the ALPN value is 'xmpp-client' transparently proxy the traffic > to my local Jabber server. This feature [2] is already supported by several > XMPP clients. > > Is there a way to access and save ALPN value to a variable? Hello, currently this is not possible; as you correctly noted, ssl_preread module only processes SNI extension. 
To achieve what you want, the ssl_preread module would need to be extended to process the ALPN extension as well and export the result as a variable that could be used to make the routing decision. From mdounin at mdounin.ru Mon Feb 19 12:44:37 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Feb 2018 15:44:37 +0300 Subject: Files still on disc after inactive time In-Reply-To: <21babf6881a18f3c29fd4536d1308efd.NginxMailingListEnglish@forum.nginx.org> References: <20180216114450.GZ24410@mdounin.ru> <21babf6881a18f3c29fd4536d1308efd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180219124437.GC24410@mdounin.ru> Hello! On Mon, Feb 19, 2018 at 04:38:00AM -0500, Andrzej Walas wrote: > Thanks for your replay I will update nginx I search again. > > Can you anwser on my couple questions: > 1. Can you tell me what happend with downloaded part of files when I have: > 1.1 [error] 16082#0: *1264804 upstream prematurely closed connection while > reading upstream > 1.2 [crit] 16082#0: *1264770 pwritev() has written only 49152 of > 151552 while reading upstream > This part of file is still on disc and don't deleted after error? In both cases temporary files will be removed. > 2. Why when I set 40g for cache I have 90-100GB in cache? As previously explained, your problems with cache are clearly due to nginx processes being killed. You have to resolve this first. > 3. Can you tell me when in offcial repo for redhat 7 will be nginx 1.13.+? > Currently the newest is 1.12.2. Most likely redhat package maintainers will either wait until the 1.14.x branch, or won't update packages for redhat 7 at all.
You can install 1.13.x either from source, or via the Linux packages provided at nginx.org: http://nginx.org/en/download.html http://nginx.org/en/linux_packages.html#mainline -- Maxim Dounin http://mdounin.ru/ From thresh at nginx.com Mon Feb 19 13:14:51 2018 From: thresh at nginx.com (Konstantin Pavlov) Date: Mon, 19 Feb 2018 16:14:51 +0300 Subject: Routing based on ALPN In-Reply-To: References: Message-ID: On 19/02/2018 14:02, Wiktor Kwapisiewicz via nginx wrote: > Hello, > > I'm looking for a way to route traffic on port 443 based on ALPN value > without SSL termination. > > ssl_preread_module [1] does something similar but the only exposed > variable ($ssl_preread_server_name) is for SNI, not ALPN. > > A bit of context. I'd like to use nginx to host regular HTTPS server on port > 443 but if the ALPN value is 'xmpp-client' transparently proxy the traffic > to my local Jabber server. This feature [2] is already supported by several > XMPP clients. > > Is there a way to access and save ALPN value to a variable? It should be possible to parse the incoming buffer with https://nginx.org/r/js_filter and create a variable to make a routing decision on. -- Konstantin Pavlov www.nginx.com From garbage at gmx.de Mon Feb 19 15:12:23 2018 From: garbage at gmx.de (Gbg) Date: Mon, 19 Feb 2018 16:12:23 +0100 Subject: Clientcertificate authentication only for a single URL In-Reply-To: References: Message-ID: <7CD30D46-D176-4EB1-864B-FF0E750E03BD@gmx.de> I need to secure only a single URL on my server by demanding or enforcing client certificate based authentication. My application is called by opening "myapp.local" and if necessary it logs in a user by issuing a call to "myapp.local/login". I cannot create a second hostname to do the login, so specifying a second `server` with `server_name myapplogin.local` does not work.
Because the login is not necessary all the time, I do not want to enforce ssl_verify_client for `/`, because then the user would be prompted with a certificate selection dialog even before he can see the start page of my application. This is my current setup, which does not work because the first `server` definition block has higher priority. I tried to keep the example short, which is why you see some `...`; the ssl/tls stuff is in my config file but is not repeated here because I think it is not part of the problem. Replacing `server_name localhost` with `server_name myapp.local` didn't make any difference. I am on mainline 1.13.8

http {
    server {
        listen 443 ssl http2;
        server_name localhost;

        ssl_certificate ...
        ssl_certificate_key ...
        ssl_session_cache shared:SSL:1m;
        include templates/ssl_setup.conf;

        location / {
            root /var/www/...;
        }
    }

    server {
        listen 443 ssl http2;
        server_name localhost;

        ssl_certificate ...
        ssl_certificate_key ...
        ssl_session_cache shared:SSL:1m;

        ssl_client_certificate /.../acceptedcas.pem;
        ssl_verify_depth 2;
        ssl_verify_client on;

        location /login {
            proxy_set_header X-SSL-Client-Serial $ssl_client_serial;
            proxy_set_header X-SSL-Client-...
            proxy_pass http://localhost:8080;
        }
    }
}

From Jason.Whittington at equifax.com Mon Feb 19 15:35:59 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Mon, 19 Feb 2018 15:35:59 +0000 Subject: Clientcertificate authentication only for a single URL Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432B09D1F@STLEISEXCMBX3.eis.equifax.com> I would think "location=" would solve this. What about something like the following? server { listen 443 ssl http2; server_name localhost; ssl_certificate ... ssl_certificate_key ... ssl_session_cache shared:SSL:1m; include templates/ssl_setup.conf; location = /login { proxy_set_header X-SSL-Client-Serial $ssl_client_serial; proxy_set_header X-SSL-Client-...
proxy_pass http://localhost:8080; } location / { root /var/www/...; } } Jason -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Gbg Sent: Monday, February 19, 2018 9:12 AM To: nginx at nginx.org Subject: [IE] Clientcertificate authentication only for a single URL I need to secure only a single URL on my server by demanding or enforcing client certificate based authentication. My application is called by opening "myapp.local" and if necessary it logs in a user by issuing a call to "myapp.local/login". I can not create a second hostname to do the login, so specifying a second `server` with `server_name myapplogin.local` does not work. Because the login is not necessary all the time I do not want to encorce ssl_verify for `/` because then the user would be prompted with a certificate selection dialog even before he can see the start page of my application. This is my current setup which does not work because the first `server` definition block has higher priority. I tried to keep the example short, because of this you see some `...`, the ssl/tls stuff is in my config file but is not repeated here because I think it is not part of the problem. Replacing `server_name localhost` with `server_name myapp.local` didn't make any difference. I am on mainline 1.13.8 http { server { listen 443 ssl http2; server_name localhost; ssl_certificate ... ssl_certificate_key ... ssl_session_cache shared:SSL:1m; include templates/ssl_setup.conf; location / { root /var/www/...; } } server { listen 443 ssl http2; server_name localhost; ssl_certificate ... ssl_certificate_key ... ssl_session_cache shared:SSL:1m; ssl_client_certificate /.../acceptedcas.pem; ssl_verify_depth 2; ssl_verify_client on; location /login { proxy_set_header X-SSL-Client-Serial $ssl_client_serial; proxy_set_header X-SSL-Client-... 
proxy_pass http://localhost:8080; } } } _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved. From garbage at gmx.de Mon Feb 19 15:51:00 2018 From: garbage at gmx.de (Gbg) Date: Mon, 19 Feb 2018 16:51:00 +0100 Subject: Clientcertificate authentication only for a single URL In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A432B09D1F@STLEISEXCMBX3.eis.equifax.com> References: <995C5C9AD54A3C419AF1C20A8B6AB9A432B09D1F@STLEISEXCMBX3.eis.equifax.com> Message-ID: <2DA7CAD7-5F70-42AB-8587-F48A6C4E36A1@gmx.de> I think this will set the headers only for the login URL but still ask for the certificate on all URLs. And this is not what I need; I only want to have to present a certificate for a single URL. On 19 February 2018, 16:35:59 CET, Jason Whittington wrote: >I would think "location=" would solve this. What about something like >the following? > > server { > listen 443 ssl http2; > server_name localhost; > > ssl_certificate ... > ssl_certificate_key ... > ssl_session_cache shared:SSL:1m; > include templates/ssl_setup.conf; > > location = /login { > proxy_set_header X-SSL-Client-Serial $ssl_client_serial; > proxy_set_header X-SSL-Client-...
> > proxy_pass http://localhost:8080; > } > > location / { > root /var/www/...; > } > } > >Jason > > >-----Original Message----- >From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Gbg >Sent: Monday, February 19, 2018 9:12 AM >To: nginx at nginx.org >Subject: [IE] Clientcertificate authentication only for a single URL > > > >I need to secure only a single URL on my server by demanding or >enforcing client certificate based authentication. My application is >called by opening "myapp.local" and if necessary it logs in a user by >issuing a call to "myapp.local/login". I can not create a second >hostname to do the login, so specifying a second `server` with >`server_name myapplogin.local` does not work. >Because the login is not necessary all the time I do not want to >encorce ssl_verify for `/` because then the user would be prompted with >a certificate selection dialog even before he can see the start page of >my application. > >This is my current setup which does not work because the first `server` >definition block has higher priority. I tried to keep the example >short, because of this you see some `...`, the ssl/tls stuff is in my >config file but is not repeated here because I think it is not part of >the problem. >Replacing `server_name localhost` with `server_name myapp.local` didn't >make any difference. I am on mainline 1.13.8 > >http { > server { > listen 443 ssl http2; > server_name localhost; > > ssl_certificate ... > ssl_certificate_key ... > ssl_session_cache shared:SSL:1m; > include templates/ssl_setup.conf; > > location / { > root /var/www/...; > } > > } > > server { > listen 443 ssl http2; > server_name localhost; > > ssl_certificate ... > ssl_certificate_key ... > ssl_session_cache shared:SSL:1m; > > ssl_client_certificate /.../acceptedcas.pem; > ssl_verify_depth 2; > ssl_verify_client on; > > location /login { > proxy_set_header X-SSL-Client-Serial $ssl_client_serial; > proxy_set_header X-SSL-Client-... 
> > proxy_pass http://localhost:8080; > } > } >} >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx > >This message contains proprietary information from Equifax which may be >confidential. If you are not an intended recipient, please refrain from >any disclosure, copying, distribution or use of this information and >note that such actions are prohibited. If you have received this >transmission in error, please notify by e-mail postmaster at equifax.com. >Equifax® is a registered trademark of Equifax Inc. All rights reserved. >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- This message was sent from my Android device with K-9 Mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pythoncontrol at gmail.com Mon Feb 19 20:55:25 2018 From: pythoncontrol at gmail.com (bukow bukowiec) Date: Mon, 19 Feb 2018 21:55:25 +0100 Subject: Fwd: Question about wildcard nginx entry. In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: bukow bukowiec Date: Mon, Feb 19, 2018 at 9:44 PM Subject: Question about wildcard nginx entry. To: nginx at nginx.org Hello, first time I am writing here, I don't know if my message will go through :) Anyway, can anyone tell me why nginx does not support wildcard type like: "*.domain.tld" ? Is it possible for nginx to treat this as a valid wildcard entry? The reason I ask is that there are a lot of tools and software, PaaS and others, that work on nginx. We can't simply add "*.domain.tld" in the software because a custom rule must be built. ( ~^(.*)\.domain\.tld$ ) Hope that makes sense. Can someone tell me what the truth about this is, as I have no idea? Regards to you Thank you Mateusz Kurowski -------------- next part -------------- An HTML attachment was scrubbed... 
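For reference, nginx does accept a leading wildcard in server_name out of the box; a minimal sketch (hostnames and paths are placeholders):

```nginx
http {
    server {
        listen 80;
        # "*.domain.tld" is a valid wildcard server name; it matches
        # www.domain.tld as well as multi-level names like a.b.domain.tld
        server_name *.domain.tld;

        location / {
            root /var/www/example;
        }
    }
}
```

A regular expression such as ~^(.*)\.domain\.tld$ is only needed when the matched label has to be captured into a variable.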
URL: From nginx-forum at forum.nginx.org Tue Feb 20 07:32:34 2018 From: nginx-forum at forum.nginx.org (Azusa Taroura) Date: Tue, 20 Feb 2018 02:32:34 -0500 Subject: Mail proxy the destination server by ssl (Postfix) In-Reply-To: <20180213124141.GI24410@mdounin.ru> References: <20180213124141.GI24410@mdounin.ru> Message-ID: Thank you for your reply! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278532,278655#msg-278655 From nginx-forum at forum.nginx.org Tue Feb 20 07:56:40 2018 From: nginx-forum at forum.nginx.org (Azusa Taroura) Date: Tue, 20 Feb 2018 02:56:40 -0500 Subject: Optimizing nginx mail proxy Message-ID: <6dfde98d8df78ec2e81443929e27929b.NginxMailingListEnglish@forum.nginx.org> Hi everyone, I'm trying to optimize mail-proxy. My performance test is 1 client sends many requests to 1 nginx server. These are my current settings: worker_processes auto; worker_rlimit_nofile 100000; #error_log /var/log/nginx/error.log debug; #error_log /var/log/nginx/error.log warn; error_log /var/log/nginx/error.log crit; events { worker_connections 1024; #worker_connections 4000; #multi_accept on; #use epoll; } mail { auth_http localhost:80/auth/smtp; proxy_pass_error_message on; proxy on; smtp_auth login plain; xclient on; server { listen 25; protocol smtp; } server { listen 465; protocol smtp; ssl on; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; ssl_session_cache shared:SSL:20m; ssl_session_timeout 180m; #ssl_protocols TLSv1 TLSv1.1 TLSv1.2; #ssl_prefer_server_ciphers on; #ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5; #ssl_dhparam /etc/nginx/cert/dhparam.pem; #ssl_stapling on; #ssl_stapling_verify on; #ssl_trusted_certificate /etc/nginx/cert/trustchain.crt; #resolver 8.8.8.8 8.8.4.4; } } Question>> Low cpu usage, but the performance result is not good. Do you know how to take full advantage of nginx? Thank you for your time. 
Azusa Taroura Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278656,278656#msg-278656 From francis at daoine.org Tue Feb 20 08:02:10 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Feb 2018 08:02:10 +0000 Subject: Fwd: Question about wildcard nginx entry. In-Reply-To: References: Message-ID: <20180220080210.GQ3063@daoine.org> On Mon, Feb 19, 2018 at 09:55:25PM +0100, bukow bukowiec wrote: Hi there, > Anyway, can anyone tell me why nginx does not support wildcard type like: > > "*.domain.tld" ? Why do you think that nginx does not support that? Can you show a configuration that tries to use it but fails? In the context of server_name, the documentation is at http://nginx.org/r/server_name f -- Francis Daly francis at daoine.org From alex at samad.com.au Tue Feb 20 08:56:06 2018 From: alex at samad.com.au (Alex Samad) Date: Tue, 20 Feb 2018 19:56:06 +1100 Subject: Optimizing nginx mail proxy In-Reply-To: <6dfde98d8df78ec2e81443929e27929b.NginxMailingListEnglish@forum.nginx.org> References: <6dfde98d8df78ec2e81443929e27929b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Silly question: why not use Postfix for this? On 20 February 2018 at 18:56, Azusa Taroura wrote: > Hi everyone, > > I'm trying to optimize mail-proxy. > My performance test is 1 client sends many requests to 1 nginx server. 
> > These are my current settings: > > worker_processes auto; > worker_rlimit_nofile 100000; > > #error_log /var/log/nginx/error.log debug; > #error_log /var/log/nginx/error.log warn; > error_log /var/log/nginx/error.log crit; > events { > worker_connections 1024; > #worker_connections 4000; > #multi_accept on; > #use epoll; > } > > mail { > auth_http localhost:80/auth/smtp; > proxy_pass_error_message on; > proxy on; > smtp_auth login plain; > xclient on; > server { > listen 25; > protocol smtp; > } > server { > listen 465; > protocol smtp; > ssl on; > ssl_certificate /etc/nginx/ssl/server.crt; > ssl_certificate_key /etc/nginx/ssl/server.key; > > ssl_session_cache shared:SSL:20m; > ssl_session_timeout 180m; > > #ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > #ssl_prefer_server_ciphers on; > #ssl_ciphers > ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5; > #ssl_dhparam /etc/nginx/cert/dhparam.pem; > #ssl_stapling on; > #ssl_stapling_verify on; > #ssl_trusted_certificate /etc/nginx/cert/trustchain.crt; > #resolver 8.8.8.8 8.8.4.4; > } > } > > > Question>> > Low cpu usage, but the performance result is not good. > Do you know how to take full advantage of nginx? > > Thank you for your time. > Azusa Taroura > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,278656,278656#msg-278656 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Feb 20 13:02:50 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Feb 2018 16:02:50 +0300 Subject: Optimizing nginx mail proxy In-Reply-To: <6dfde98d8df78ec2e81443929e27929b.NginxMailingListEnglish@forum.nginx.org> References: <6dfde98d8df78ec2e81443929e27929b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180220130250.GI24410@mdounin.ru> Hello! 
On Tue, Feb 20, 2018 at 02:56:40AM -0500, Azusa Taroura wrote: > I'm trying to optimize mail-proxy. > My performance test is 1 client sends many requests to 1 nginx server. [...] > Low cpu usage, but the performance result is not good. > Do you know how to take full advantage of nginx? It is not clear what you mean by "request", as there are no requests in SMTP, and what you mean by "performance result". In general, there is no need to optimize anything in nginx mail proxy except very basic things like worker_connections and ssl_session_cache if you use SSL. The most critical parts from a performance point of view are your auth_http backend and your SMTP backend. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Feb 20 14:25:00 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Feb 2018 17:25:00 +0300 Subject: nginx-1.13.9 Message-ID: <20180220142500.GK24410@mdounin.ru> Changes with nginx 1.13.9 20 Feb 2018 *) Feature: HTTP/2 server push support; the "http2_push" and "http2_push_preload" directives. *) Bugfix: "header already sent" alerts might appear in logs when using cache; the bug had appeared in 1.9.13. *) Bugfix: a segmentation fault might occur in a worker process if the "ssl_verify_client" directive was used and no SSL certificate was specified in a virtual server. *) Bugfix: in the ngx_http_v2_module. *) Bugfix: in the ngx_http_dav_module. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Feb 20 19:09:29 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 20 Feb 2018 14:09:29 -0500 Subject: [nginx-announce] nginx-1.13.9 In-Reply-To: <20180220142505.GL24410@mdounin.ru> References: <20180220142505.GL24410@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.13.9 for Windows https://kevinworthington.com/nginxwin1139 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. 
Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Feb 20, 2018 at 9:25 AM, Maxim Dounin wrote: > Changes with nginx 1.13.9 20 Feb > 2018 > > *) Feature: HTTP/2 server push support; the "http2_push" and > "http2_push_preload" directives. > > *) Bugfix: "header already sent" alerts might appear in logs when using > cache; the bug had appeared in 1.9.13. > > *) Bugfix: a segmentation fault might occur in a worker process if the > "ssl_verify_client" directive was used and no SSL certificate was > specified in a virtual server. > > *) Bugfix: in the ngx_http_v2_module. > > *) Bugfix: in the ngx_http_dav_module. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pythoncontrol at gmail.com Tue Feb 20 21:21:10 2018 From: pythoncontrol at gmail.com (bukow bukowiec) Date: Tue, 20 Feb 2018 22:21:10 +0100 Subject: Fwd: Question about wildcard nginx entry. In-Reply-To: <20180220080210.GQ3063@daoine.org> References: <20180220080210.GQ3063@daoine.org> Message-ID: I am sorry, it looks like I misunderstood some basic concepts of nginx. Sorry again, my bad. Thank you for nginx and best regards! On Tue, Feb 20, 2018 at 9:02 AM, Francis Daly wrote: > On Mon, Feb 19, 2018 at 09:55:25PM +0100, bukow bukowiec wrote: > > Hi there, > > > Anyway, can anyone tell me why nginx does not support wildcard type like: > > > > "*.domain.tld" ? > > Why do you think that nginx does not support that? 
> > Can you show a configuration that tries to use it but fails? > > In the context of server_name, the documentation is at > http://nginx.org/r/server_name > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shankerwangmiao at gmail.com Wed Feb 21 04:18:27 2018 From: shankerwangmiao at gmail.com (Wang Shanker) Date: Wed, 21 Feb 2018 12:18:27 +0800 Subject: DTLS patches Message-ID: Hi, Vladimir `ngx_stream_ssl_init_connection` tries to set tcp_nodelay on the given connection. The following patch adds a test for the type of connection before setting it. Cheers, Miao Wang diff --git a/src/stream/ngx_stream_ssl_module.c b/src/stream/ngx_stream_ssl_module.c index f85bbb6..36f7fdd 100644 --- a/src/stream/ngx_stream_ssl_module.c +++ b/src/stream/ngx_stream_ssl_module.c @@ -369,7 +369,7 @@ ngx_stream_ssl_init_connection(ngx_ssl_t *ssl, ngx_connection_t *c) cscf = ngx_stream_get_module_srv_conf(s, ngx_stream_core_module); - if (cscf->tcp_nodelay && ngx_tcp_nodelay(c) != NGX_OK) { + if (cscf->tcp_nodelay && c->type == SOCK_STREAM && ngx_tcp_nodelay(c) != NGX_OK) { return NGX_ERROR; } > Hello all, > > For all those interested in testing DTLS support, experimental patch > is now available at > http://nginx.org/patches/dtls/ > > > Check the README.txt for details. > > If you have any feedback, please report to this thread. > From nginx-forum at forum.nginx.org Wed Feb 21 07:50:39 2018 From: nginx-forum at forum.nginx.org (mslee) Date: Wed, 21 Feb 2018 02:50:39 -0500 Subject: What kind of problems will happen to nginx when updated from centos 6 to 7 ? Message-ID: <6baab204eac4c9302ab2a638ac80749b.NginxMailingListEnglish@forum.nginx.org> Hello. I am preparing to migrate from CentOS 6 to 7 on my servers. My servers are now providing web services to people. 
There must be no problems with nginx when updating from CentOS 6 to 7. I've been trying to see if there are any such cases, but I haven't seen anything yet. If anyone's ever had a problem, let me know. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278691,278691#msg-278691 From lucas at lucasrolff.com Wed Feb 21 07:56:02 2018 From: lucas at lucasrolff.com (Lucas Rolff) Date: Wed, 21 Feb 2018 07:56:02 +0000 Subject: What kind of problems will happen to nginx when updated from centos 6 to 7 ? In-Reply-To: <6baab204eac4c9302ab2a638ac80749b.NginxMailingListEnglish@forum.nginx.org> References: <6baab204eac4c9302ab2a638ac80749b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <116571BA-A70C-440E-8FF2-FD1B42504032@lucasrolff.com> You do not update from CentOS 6 to CentOS 7 - you install a new server - so you'll have proper time to perform tests on a new box. On 21/02/2018, 08.51, "nginx on behalf of mslee" wrote: Hello. I am preparing to migrate from CentOS 6 to 7 on my servers. My servers are now providing web services to people. There must be no problems with nginx when updating from CentOS 6 to 7. I've been trying to see if there are any such cases, but I haven't seen anything yet. If anyone's ever had a problem, let me know. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278691,278691#msg-278691 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Feb 21 08:06:28 2018 From: nginx-forum at forum.nginx.org (mslee) Date: Wed, 21 Feb 2018 03:06:28 -0500 Subject: What kind of problems will happen to nginx when updated from centos 6 to 7 ? In-Reply-To: <116571BA-A70C-440E-8FF2-FD1B42504032@lucasrolff.com> References: <116571BA-A70C-440E-8FF2-FD1B42504032@lucasrolff.com> Message-ID: My explanation was too short. I am testing as you say. 
But I want to know in advance what problems may occur when it comes to providing services to actual users, and when not. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278691,278693#msg-278693 From nginx-forum at forum.nginx.org Wed Feb 21 08:56:29 2018 From: nginx-forum at forum.nginx.org (Azusa Taroura) Date: Wed, 21 Feb 2018 03:56:29 -0500 Subject: Optimizing nginx mail proxy In-Reply-To: References: Message-ID: Thank you for your reply! I would like to authenticate each connection. If I use Postfix for mail proxy, it authenticates each e-mail, not each connection. alexsamad Wrote: ------------------------------------------------------- > Silly question: why not use Postfix for this? > > > On 20 February 2018 at 18:56, Azusa Taroura > > wrote: > > > Hi everyone, > > > > I'm trying to optimize mail-proxy. > > My performance test is 1 client sends many requests to 1 nginx > server. > > > > > > These are my current settings: > > > > worker_processes auto; > > worker_rlimit_nofile 100000; > > > > #error_log /var/log/nginx/error.log debug; > > #error_log /var/log/nginx/error.log warn; > > error_log /var/log/nginx/error.log crit; > > events { > > worker_connections 1024; > > #worker_connections 4000; > > #multi_accept on; > > #use epoll; > > } > > > > mail { > > auth_http localhost:80/auth/smtp; > > proxy_pass_error_message on; > > proxy on; > > smtp_auth login plain; > > xclient on; > > server { > > listen 25; > > protocol smtp; > > } > > server { > > listen 465; > > protocol smtp; > > ssl on; > > ssl_certificate /etc/nginx/ssl/server.crt; > > ssl_certificate_key /etc/nginx/ssl/server.key; > > > > ssl_session_cache shared:SSL:20m; > > ssl_session_timeout 180m; > > > > #ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > > #ssl_prefer_server_ciphers on; > > #ssl_ciphers > > ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5; > > #ssl_dhparam /etc/nginx/cert/dhparam.pem; > > #ssl_stapling on; > > #ssl_stapling_verify on; > > #ssl_trusted_certificate 
/etc/nginx/cert/trustchain.crt; > > #resolver 8.8.8.8 8.8.4.4; > > } > > } > > > > > > Question>> > > Low cpu usage, but the performance result is not good. > > Do you know how to take full advantage of nginx? > > > > Thank you for your time. > > Azusa Taroura > > > > Posted at Nginx Forum: https://forum.nginx.org/read. > > php?2,278656,278656#msg-278656 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278656,278694#msg-278694 From nginx-forum at forum.nginx.org Wed Feb 21 09:11:12 2018 From: nginx-forum at forum.nginx.org (Azusa Taroura) Date: Wed, 21 Feb 2018 04:11:12 -0500 Subject: Optimizing nginx mail proxy In-Reply-To: <20180220130250.GI24410@mdounin.ru> References: <20180220130250.GI24410@mdounin.ru> Message-ID: <6e6727736af3242bbd882cecbfb6dd14.NginxMailingListEnglish@forum.nginx.org> Hello! The "request" means sending a mail request over the SMTP/SMTPS connection. The "performance result" means the mail sending speed per minute. So you're right about the following point. > The most critical parts from a > performance point of view are your auth_http backend and your SMTP > backend. I understand that worker_connections and ssl_session_cache seem to be useful for mail performance. Thank you for your answer :) Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Tue, Feb 20, 2018 at 02:56:40AM -0500, Azusa Taroura wrote: > > > I'm trying to optimize mail-proxy. > > My performance test is 1 client sends many requests to 1 nginx > server. > > [...] > > > Low cpu usage, but the performance result is not good. > > Do you know how to take full advantage of nginx? 
> > It is not clear what you mean by "request", as there are no > requests in SMTP, and what you mean by "performance result". > > In general, there is no need to optimize anything in nginx mail > proxy except very basic things like worker_connections and > ssl_session_cache if you use SSL. The most critical parts from a > performance point of view are your auth_http backend and your SMTP > backend. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278656,278695#msg-278695 From nginx-forum at forum.nginx.org Wed Feb 21 09:56:54 2018 From: nginx-forum at forum.nginx.org (entpneur) Date: Wed, 21 Feb 2018 04:56:54 -0500 Subject: Mail Proxy for two domains behind NAT In-Reply-To: <20180212125031.GV24410@mdounin.ru> References: <20180212125031.GV24410@mdounin.ru> Message-ID: <356bb48b601c024a75d310d5b9da7227.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim, how should I put all in one server block? Because I thought the logic would be as follows: mail { auth_http 127.0.0.1/auth.php; imap_capabilities "IMAP4rev1" "UIDPLUS"; server { listen 0.0.0.0:143; server_name mail.domainA.com; protocol imap; proxy on; } server { listen 0.0.0.0:143; server_name mail.domainB.com; protocol imap; proxy on; } } Thanks for helping. Regards, YSC Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278489,278698#msg-278698 From nginx-forum at forum.nginx.org Wed Feb 21 09:59:51 2018 From: nginx-forum at forum.nginx.org (beatnut) Date: Wed, 21 Feb 2018 04:59:51 -0500 Subject: ngx_http_geo_module vs allow/deny performance Message-ID: <77116ae7eb109d0d71bbbba0b49c9269.NginxMailingListEnglish@forum.nginx.org> Hello all, What is the best approach in relation to performance when I want to block a few hundred or a few thousand IP addresses? 
I often read that ngx_http_geo_module is better for many IP addresses. What does it depend on? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278699,278699#msg-278699 From vl at nginx.com Wed Feb 21 10:18:58 2018 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 21 Feb 2018 13:18:58 +0300 Subject: DTLS patches In-Reply-To: References: Message-ID: <20180221101858.GA12440@vlpc> On Wed, Feb 21, 2018 at 12:18:27PM +0800, Wang Shanker wrote: > Hi, Vladimir > > `ngx_stream_ssl_init_connection` tries to set tcp_nodelay on the given connection. > The following patch adds a test for the type of connection before setting it. > > Cheers, > > Miao Wang > > diff --git a/src/stream/ngx_stream_ssl_module.c b/src/stream/ngx_stream_ssl_module.c > index f85bbb6..36f7fdd 100644 > --- a/src/stream/ngx_stream_ssl_module.c > +++ b/src/stream/ngx_stream_ssl_module.c > @@ -369,7 +369,7 @@ ngx_stream_ssl_init_connection(ngx_ssl_t *ssl, ngx_connection_t *c) > > cscf = ngx_stream_get_module_srv_conf(s, ngx_stream_core_module); > > - if (cscf->tcp_nodelay && ngx_tcp_nodelay(c) != NGX_OK) { > + if (cscf->tcp_nodelay && c->type == SOCK_STREAM && ngx_tcp_nodelay(c) != NGX_OK) { > return NGX_ERROR; > } > > Hi, Miao The change is indeed correct, it is required since http://hg.nginx.org/nginx/rev/29c6d66b83ba Have you tried the patches in practice? From nginx-forum at forum.nginx.org Wed Feb 21 13:47:37 2018 From: nginx-forum at forum.nginx.org (shankerwangmiao) Date: Wed, 21 Feb 2018 08:47:37 -0500 Subject: DTLS patches In-Reply-To: <20180221101858.GA12440@vlpc> References: <20180221101858.GA12440@vlpc> Message-ID: <44a1e93687ac4365614d825bd2650699.NginxMailingListEnglish@forum.nginx.org> Vladimir Homutov Wrote: ------------------------------------------------------- > On Wed, Feb 21, 2018 at 12:18:27PM +0800, Wang Shanker wrote: > > Hi, Vladimir > > > > `ngx_stream_ssl_init_connection` tries to set tcp_nodelay on the > given connection. 
> > The following patch adds a test for the type of connection before > setting it. > > > > Cheers, > > > > Miao Wang > > > > diff --git a/src/stream/ngx_stream_ssl_module.c > b/src/stream/ngx_stream_ssl_module.c > > index f85bbb6..36f7fdd 100644 > > --- a/src/stream/ngx_stream_ssl_module.c > > +++ b/src/stream/ngx_stream_ssl_module.c > > @@ -369,7 +369,7 @@ ngx_stream_ssl_init_connection(ngx_ssl_t *ssl, > ngx_connection_t *c) > > > > cscf = ngx_stream_get_module_srv_conf(s, > ngx_stream_core_module); > > > > - if (cscf->tcp_nodelay && ngx_tcp_nodelay(c) != NGX_OK) { > > + if (cscf->tcp_nodelay && c->type == SOCK_STREAM && > ngx_tcp_nodelay(c) != NGX_OK) { > > return NGX_ERROR; > > } > > > > > > Hi, Miao > > The change is indeed correct, it is required since > http://hg.nginx.org/nginx/rev/29c6d66b83ba > > Have you tried the patches in practice? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I have tested this patch in my environment. Before the patch is applied, `tcp_nodelay off` needs to be placed in every `server` clause with DTLS enabled to work around the problem. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274289,278704#msg-278704 From vl at nginx.com Wed Feb 21 14:12:33 2018 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 21 Feb 2018 17:12:33 +0300 Subject: DTLS patches In-Reply-To: <44a1e93687ac4365614d825bd2650699.NginxMailingListEnglish@forum.nginx.org> References: <20180221101858.GA12440@vlpc> <44a1e93687ac4365614d825bd2650699.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180221141232.GA10488@vlpc> On Wed, Feb 21, 2018 at 08:47:37AM -0500, shankerwangmiao wrote: > > I have tested this patch in my environment. Before the patch is applied, > `tcp_nodelay off` needs to be placed in every `server` clause with DTLS > enabled to work around the problem. > 
Do you proxy DTLS stream directly to backend, or you perform DTLS offload ? What protocol are you using and which server/client software before/behind nginx? I'm attaching refreshed patch against nginx-1.13.9 for those who are interested to test. -------------- next part -------------- # HG changeset patch # User Vladimir Homutov # Date 1519222093 -10800 # Wed Feb 21 17:08:13 2018 +0300 # Node ID b4b14f20123598d6c4bdff01e3c421e4f180f526 # Parent 88aad69eccef0422719698b54c82e3a020c0fe93 Stream: experimental DTLS support. With the patch, the "listen" directive in the "stream" block now accepts both "udp" and "ssl" directives. The "ssl_protocols" and "proxy_ssl_protocols" directives now accepts "DTLSv1" and "DTLSv1.2" parameters that enable support of corresponding protocols. DTLS termination: stream { # please enable debug log error_log logs/error.log debug; server { # add 'udp' and 'ssl' simultaneously to the listen directive listen 127.0.0.1:4443 udp ssl; # enable DTLSv1 or DTLSv1.2 or both protocols ssl_protocols DTLSv1; # setup other SSL options as usually ssl_certificate ...; ssl_certificate_key ...; proxy_pass ...; } } DTLS to backends: stream { # please enable debug log error_log logs/error.log debug; server { listen 127.0.0.1:5555 udp; # enable SSL to proxy proxy_ssl on; # enable DTLSv1 or DTLSv1.2 or both protocols proxy_ssl_protocols DTLSv1; # setup other proxy SSL options as usually proxy_ssl_certificate ...; proxy_ssl_certificate_key ...; # the backend is a DTLS server proxy_pass 127.0.0.1:4433; } diff --git a/auto/lib/openssl/conf b/auto/lib/openssl/conf --- a/auto/lib/openssl/conf +++ b/auto/lib/openssl/conf @@ -132,4 +132,16 @@ END exit 1 fi + ngx_feature="OpenSSL DTLS support" + ngx_feature_name="NGX_OPENSSL_DTLS" + ngx_feature_run=no + ngx_feature_incs="#include " + ngx_feature_path= + ngx_feature_libs="-lssl -lcrypto $NGX_LIBDL" + ngx_feature_test="DTLSv1_listen(NULL, NULL)" + . auto/feature + + if [ $ngx_found = yes ]; then + have=NGX_SSL_DTLS . 
auto/have + fi fi diff --git a/src/event/ngx_event.h b/src/event/ngx_event.h --- a/src/event/ngx_event.h +++ b/src/event/ngx_event.h @@ -507,6 +507,7 @@ void ngx_event_accept(ngx_event_t *ev); #if !(NGX_WIN32) void ngx_event_recvmsg(ngx_event_t *ev); #endif +ngx_int_t ngx_event_udp_accept(ngx_connection_t *c); ngx_int_t ngx_trylock_accept_mutex(ngx_cycle_t *cycle); u_char *ngx_accept_log_error(ngx_log_t *log, u_char *buf, size_t len); diff --git a/src/event/ngx_event_accept.c b/src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c +++ b/src/event/ngx_event_accept.c @@ -644,6 +644,81 @@ ngx_event_recvmsg(ngx_event_t *ev) ngx_int_t +ngx_event_udp_accept(ngx_connection_t *c) +{ + int on, rc; + ngx_socket_t fd; + + fd = ngx_socket(c->listening->sockaddr->sa_family, SOCK_DGRAM, 0); + if (fd == (ngx_socket_t) -1) { + ngx_log_error(NGX_LOG_ALERT, c->log, ngx_socket_errno, + ngx_socket_n " failed"); + return NGX_ERROR; + } + + if (ngx_nonblocking(fd) == -1) { + ngx_log_error(NGX_LOG_ALERT, c->log, ngx_socket_errno, + ngx_nonblocking_n " failed"); + goto failed; + } + + on = 1; + rc = setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (char *) &on, sizeof(int)); + if (rc == -1) { + ngx_log_error(NGX_LOG_ALERT, c->log, ngx_socket_errno, + "setsockopt(SO_REUSEADDR, 1) failed"); + goto failed; + } + +#if (NGX_HAVE_REUSEPORT && NGX_FREEBSD) + on = 1; + rc = setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, (char *) &on, sizeof(int)); + if (rc == -1) { + ngx_log_error(NGX_LOG_ALERT, c->log, ngx_socket_errno, + "setsockopt(SO_REUSEPORT, 1) failed"); + goto failed; + } +#endif + + rc = bind(fd, c->listening->sockaddr, c->listening->socklen); + if (-1 == rc) { + ngx_log_error(NGX_LOG_EMERG, c->log, ngx_socket_errno, + "bind() to %V failed", &c->listening->addr_text); + goto failed; + } + + if (connect(fd, c->sockaddr, c->socklen) == -1) { + ngx_log_error(NGX_LOG_ALERT, c->log, ngx_socket_errno, + "connect() failed"); + goto failed; + } + + c->fd = fd; + c->shared = 0; + c->recv = 
ngx_udp_recv; + + if (ngx_add_conn && (ngx_event_flags & NGX_USE_EPOLL_EVENT) == 0) { + if (ngx_add_conn(c) == NGX_ERROR) { + goto failed; + } + } + + return NGX_OK; + +failed: + + if (ngx_close_socket(fd) == -1) { + ngx_log_error(NGX_LOG_EMERG, c->log, ngx_socket_errno, + ngx_close_socket_n " failed"); + } + + c->fd = (ngx_socket_t) -1; + + return NGX_ERROR; +} + + +ngx_int_t ngx_trylock_accept_mutex(ngx_cycle_t *cycle) { if (ngx_shmtx_trylock(&ngx_accept_mutex)) { diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -69,6 +69,22 @@ static void *ngx_openssl_create_conf(ngx static char *ngx_openssl_engine(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static void ngx_openssl_exit(ngx_cycle_t *cycle); +#if defined(NGX_HAVE_DTLS) +static int ngx_dtls_client_hmac(SSL *ssl, u_char res[EVP_MAX_MD_SIZE], + unsigned int *rlen); +static int ngx_dtls_generate_cookie_cb(SSL *ssl, unsigned char *cookie, + unsigned int *cookie_len); +static int ngx_dtls_verify_cookie_cb(SSL *ssl, +#if OPENSSL_VERSION_NUMBER >= 0x10100000L + const +#endif +unsigned char *cookie, unsigned int cookie_len); +static ngx_int_t ngx_dtls_handshake(ngx_connection_t *c); + + +#define COOKIE_SECRET_LENGTH 32 +static u_char ngx_dtls_cookie_secret[COOKIE_SECRET_LENGTH]; +#endif static ngx_command_t ngx_openssl_commands[] = { @@ -232,13 +248,71 @@ ngx_ssl_init(ngx_log_t *log) ngx_int_t ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data) { - ssl->ctx = SSL_CTX_new(SSLv23_method()); + if (protocols & NGX_SSL_DTLSv1 || protocols & NGX_SSL_DTLSv1_2) { + +#if defined(NGX_HAVE_DTLS) + +#if OPENSSL_VERSION_NUMBER < 0x10100000L + + if (protocols & NGX_SSL_DTLSv1_2) { + + /* DTLS 1.2 is only supported since 1.0.2 */ + + /* DTLSv1_x_method() functions are deprecated in 1.1.0 */ + +#if OPENSSL_VERSION_NUMBER < 0x10002000L + + /* ancient ... 
1.0.2 */ + ngx_log_error(NGX_LOG_EMERG, ssl->log, 0, + "DTLSv1.2 is not supported by " + "the used version of OpenSSL"); + return NGX_ERROR; + +#else + /* 1.0.2 ... 1.1 */ + ssl->ctx = SSL_CTX_new(DTLSv1_2_method()); +#endif + } + + /* note: either 1.2 or 1.1 methods may be initialized, not both, + * preferred is 1.2 if both specified in ssl_protocols + */ + + if (protocols & NGX_SSL_DTLSv1 && ssl->ctx == NULL) { + ssl->ctx = SSL_CTX_new(DTLSv1_method()); + } +#else + ssl->ctx = SSL_CTX_new(DTLS_method()); +#endif + +#else + ngx_log_error(NGX_LOG_EMERG, ssl->log, 0, + "OpenSSL is built without DTLS support"); + return NGX_ERROR; +#endif + + } else { + ssl->ctx = SSL_CTX_new(SSLv23_method()); + } if (ssl->ctx == NULL) { ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, "SSL_CTX_new() failed"); return NGX_ERROR; } +#if defined(NGX_HAVE_DTLS) + if (protocols & NGX_SSL_DTLSv1 || protocols & NGX_SSL_DTLSv1_2) { + + SSL_CTX_set_cookie_generate_cb(ssl->ctx, ngx_dtls_generate_cookie_cb); + SSL_CTX_set_cookie_verify_cb(ssl->ctx, ngx_dtls_verify_cookie_cb); + + /* TODO: probably this should be rotated regularly */ + if (!RAND_bytes(ngx_dtls_cookie_secret, COOKIE_SECRET_LENGTH)) { + return NGX_ERROR; + } + } +#endif + if (SSL_CTX_set_ex_data(ssl->ctx, ngx_ssl_server_conf_index, data) == 0) { ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, "SSL_CTX_set_ex_data() failed"); @@ -1191,6 +1265,7 @@ ngx_ssl_create_connection(ngx_ssl_t *ssl if (flags & NGX_SSL_CLIENT) { SSL_set_connect_state(sc->connection); + sc->client = 1; } else { SSL_set_accept_state(sc->connection); @@ -1227,6 +1302,19 @@ ngx_ssl_handshake(ngx_connection_t *c) int n, sslerr; ngx_err_t err; +#if defined(NGX_HAVE_DTLS) + ngx_int_t rc; + + if (c->type == SOCK_DGRAM && !c->ssl->client + && !c->ssl->dtls_cookie_accepted) + { + rc = ngx_dtls_handshake(c); + if (rc != NGX_OK) { + return rc; + } + } +#endif + ngx_ssl_clear_error(c->log); n = SSL_do_handshake(c->ssl->connection); @@ -1328,6 +1416,17 @@ 
ngx_ssl_handshake(ngx_connection_t *c) return NGX_ERROR; } + if (c->ssl->bio_is_mem) { + SSL_set_rfd(c->ssl->connection, c->fd); + c->ssl->bio_is_mem = 0; + + /* buffer is consumed by openssl, we don't want to proxy it */ + c->buffer->pos = c->buffer->last; + + /* continue with handshake with socket */ + return ngx_ssl_handshake(c); + } + return NGX_AGAIN; } @@ -1391,6 +1490,215 @@ ngx_ssl_handshake_handler(ngx_event_t *e } +#if defined(NGX_HAVE_DTLS) + +/* + * RFC 6347, 4.2.1: + * + * When responding to a HelloVerifyRequest, the client MUST use the same + * parameter values (version, random, session_id, cipher_suites, + * compression_method) as it did in the original ClientHello. The + * server SHOULD use those values to generate its cookie and verify that + * they are correct upon cookie receipt. + */ + +static int +ngx_dtls_client_hmac(SSL *ssl, u_char res[EVP_MAX_MD_SIZE], unsigned int *rlen) +{ + u_char *p; + size_t len; + ngx_connection_t *c; + + u_char buffer[64]; + + c = ngx_ssl_get_connection(ssl); + + p = buffer; + + p = ngx_cpymem(p, c->addr_text.data, c->addr_text.len); + p = ngx_sprintf(p, "%d", ngx_inet_get_port(c->sockaddr)); + + len = p - buffer; + + HMAC(EVP_sha1(), (const void*) ngx_dtls_cookie_secret, + COOKIE_SECRET_LENGTH, (const u_char*) buffer, len, res, rlen); + + return NGX_OK; +} + + +static int +ngx_dtls_generate_cookie_cb(SSL *ssl, unsigned char *cookie, + unsigned int *cookie_len) +{ + unsigned int rlen; + u_char res[EVP_MAX_MD_SIZE]; + + if (ngx_dtls_client_hmac(ssl, res, &rlen) != NGX_OK) { + return 0; + } + + ngx_memcpy(cookie, res, rlen); + *cookie_len = rlen; + + return 1; +} + + +static int +ngx_dtls_verify_cookie_cb(SSL *ssl, +#if OPENSSL_VERSION_NUMBER >= 0x10100000L + const +#endif + unsigned char *cookie, unsigned int cookie_len) +{ + unsigned int rlen; + u_char res[EVP_MAX_MD_SIZE]; + + if (ngx_dtls_client_hmac(ssl, res, &rlen) != NGX_OK) { + return 0; + } + + if (cookie_len == rlen && ngx_memcmp(res, cookie, rlen) == 0) { + 
return 1; + } + + return 0; +} + + +static ngx_int_t +ngx_dtls_handshake(ngx_connection_t *c) +{ + int n, rd; + BIO *rbio, *wbio; + ngx_int_t rc; +#if OPENSSL_VERSION_NUMBER >= 0x10100000L + BIO_ADDR *peer; +#else + SSL *ssl; + struct sockaddr *peer; +#endif + + wbio = BIO_new(BIO_s_mem()); + if (wbio == NULL) { + ngx_log_error(NGX_LOG_EMERG, c->log, 0, "BIO_new"); + return NGX_ERROR; + } + + rbio = BIO_new_mem_buf(c->buffer->pos, c->buffer->last - c->buffer->pos); + if (rbio == NULL) { + ngx_log_error(NGX_LOG_EMERG, c->log, 0, "BIO_new_mem_buf"); + return NGX_ERROR; + } + + BIO_set_mem_eof_return(rbio, -1); + + SSL_set_bio(c->ssl->connection, rbio, wbio); + +#if OPENSSL_VERSION_NUMBER >= 0x10100000L + + peer = BIO_ADDR_new(); + + if (peer == NULL) { + ngx_log_error(NGX_LOG_EMERG, c->log, 0, "BIO_ADDR_new"); + return NGX_ERROR; + } + +#else + + peer = ngx_palloc(c->pool, c->socklen); + if (peer == NULL) { + return NGX_ERROR; + } + + ssl = c->ssl->connection; + SSL_set_options(ssl, SSL_OP_COOKIE_EXCHANGE); + +#endif + + rc = DTLSv1_listen(c->ssl->connection, peer); + +#if OPENSSL_VERSION_NUMBER >= 0x10100000L + BIO_ADDR_free(peer); +#endif + + if (rc < 0) { + ngx_ssl_error(NGX_LOG_EMERG, c->log, 0, + "DTLSv1_listen error %d", rc); + +#if OPENSSL_VERSION_NUMBER >= 0x10100000L + return NGX_ERROR; +#else + /* no way to distinguish SSL error from NBIO */ + if (ERR_peek_last_error() != 0) { + return NGX_ERROR; + } + + /* assume -1 comes from NBIO and act accordingly */ + rc = 0; +#endif + } + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "DTLSv1_listen: %i", rc); + + if (rc == 0) { + /* non-blocking IO: need to send hello-verify request */ + n = BIO_ctrl_pending(wbio); + if (n > 0) { + /* openssl provided some data to send */ + rd = BIO_read(wbio, c->buffer->start, n); + if (rd != n) { + ngx_log_error(NGX_LOG_EMERG, c->log, 0, "DTLS BIO_read failed"); + return NGX_ERROR; + } + + rc = ngx_udp_send(c, c->buffer->start, n); + if (rc != n) { + return NGX_ERROR; + } + + /* 
ok, we sent response, session is over, + * waiting for hello with cookie + */ + + } else { + /* renegotiation or other unexpected result */ + return NGX_ERROR; + } + + /* this session is no longer required, new will be created */ + + return NGX_ABORT; /* drop this session */ + } + + /* rc >= 1: client with a valid cookie */ + + /* DTLSv1_listen PEEK'ed the data, SSL_accept() needs to read from start */ + if (BIO_reset(rbio) != 1) { + ngx_log_error(NGX_LOG_ALERT, c->log, 0, "BIO_reset"); + return NGX_ERROR; + } + + if (c->shared) { + if (ngx_event_udp_accept(c) != NGX_OK) { + return NGX_ERROR; + } + } + + /* to be reset by handshake when mem buf content is consumed */ + c->ssl->bio_is_mem = 1; + + /* write BIO is real socket, to start sending server hello */ + SSL_set_wfd(c->ssl->connection, c->fd); + + c->ssl->dtls_cookie_accepted = 1; + + return NGX_OK; +} + +#endif + + ssize_t ngx_ssl_recv_chain(ngx_connection_t *c, ngx_chain_t *cl, off_t limit) { diff --git a/src/event/ngx_event_openssl.h b/src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h +++ b/src/event/ngx_event_openssl.h @@ -50,6 +50,9 @@ #endif +#if !defined(OPENSSL_NO_DTLS) && OPENSSL_VERSION_NUMBER >= 0x009080dfL +#define NGX_HAVE_DTLS +#endif #define ngx_ssl_session_t SSL_SESSION #define ngx_ssl_conn_t SSL @@ -86,6 +89,9 @@ struct ngx_ssl_connection_s { unsigned no_wait_shutdown:1; unsigned no_send_shutdown:1; unsigned handshake_buffer_set:1; + unsigned dtls_cookie_accepted:1; + unsigned bio_is_mem:1; + unsigned client:1; }; @@ -138,6 +144,8 @@ typedef struct { #define NGX_SSL_TLSv1_1 0x0010 #define NGX_SSL_TLSv1_2 0x0020 #define NGX_SSL_TLSv1_3 0x0040 +#define NGX_SSL_DTLSv1 0x0080 +#define NGX_SSL_DTLSv1_2 0x0200 #define NGX_SSL_BUFFER 1 diff --git a/src/stream/ngx_stream_core_module.c b/src/stream/ngx_stream_core_module.c --- a/src/stream/ngx_stream_core_module.c +++ b/src/stream/ngx_stream_core_module.c @@ -849,12 +849,6 @@ ngx_stream_core_listen(ngx_conf_t *cf, n return "\"backlog\"
parameter is incompatible with \"udp\""; } -#if (NGX_STREAM_SSL) - if (ls->ssl) { - return "\"ssl\" parameter is incompatible with \"udp\""; - } -#endif - if (ls->so_keepalive) { return "\"so_keepalive\" parameter is incompatible with \"udp\""; } diff --git a/src/stream/ngx_stream_proxy_module.c b/src/stream/ngx_stream_proxy_module.c --- a/src/stream/ngx_stream_proxy_module.c +++ b/src/stream/ngx_stream_proxy_module.c @@ -27,6 +27,7 @@ typedef struct { size_t upload_rate; size_t download_rate; ngx_uint_t responses; + ngx_uint_t requests; ngx_uint_t next_upstream_tries; ngx_flag_t next_upstream; ngx_flag_t proxy_protocol; @@ -95,6 +96,8 @@ static void ngx_stream_proxy_ssl_handsha static ngx_int_t ngx_stream_proxy_ssl_name(ngx_stream_session_t *s); static ngx_int_t ngx_stream_proxy_set_ssl(ngx_conf_t *cf, ngx_stream_proxy_srv_conf_t *pscf); +static char *ngx_stream_proxy_set_ssl_protocols(ngx_conf_t *cf, + ngx_command_t *cmd, void *conf); static ngx_conf_bitmask_t ngx_stream_proxy_ssl_protocols[] = { @@ -104,6 +107,8 @@ static ngx_conf_bitmask_t ngx_stream_pr { ngx_string("TLSv1.1"), NGX_SSL_TLSv1_1 }, { ngx_string("TLSv1.2"), NGX_SSL_TLSv1_2 }, { ngx_string("TLSv1.3"), NGX_SSL_TLSv1_3 }, + { ngx_string("DTLSv1"), NGX_SSL_DTLSv1 }, + { ngx_string("DTLSv1.2"), NGX_SSL_DTLSv1_2 }, { ngx_null_string, 0 } }; @@ -191,6 +196,13 @@ static ngx_command_t ngx_stream_proxy_c offsetof(ngx_stream_proxy_srv_conf_t, responses), NULL }, + { ngx_string("proxy_requests"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_STREAM_SRV_CONF_OFFSET, + offsetof(ngx_stream_proxy_srv_conf_t, requests), + NULL }, + { ngx_string("proxy_next_upstream"), NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -237,7 +249,7 @@ static ngx_command_t ngx_stream_proxy_c { ngx_string("proxy_ssl_protocols"), NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_1MORE, - ngx_conf_set_bitmask_slot, + ngx_stream_proxy_set_ssl_protocols, 
NGX_STREAM_SRV_CONF_OFFSET, offsetof(ngx_stream_proxy_srv_conf_t, ssl_protocols), &ngx_stream_proxy_ssl_protocols }, @@ -398,7 +410,9 @@ ngx_stream_proxy_handler(ngx_stream_sess return; } - if (c->type == SOCK_STREAM) { + if (c->type == SOCK_STREAM || (c->type == SOCK_DGRAM && c->ssl) + || (c->shared && pscf->requests)) + { p = ngx_pnalloc(c->pool, pscf->buffer_size); if (p == NULL) { ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR); @@ -422,6 +436,13 @@ ngx_stream_proxy_handler(ngx_stream_sess } } + if (c->shared && pscf->requests) { + if (ngx_event_udp_accept(c) != NGX_OK) { + ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR); + return; + } + } + if (u->resolved == NULL) { uscf = pscf->upstream; @@ -754,14 +775,16 @@ ngx_stream_proxy_init_upstream(ngx_strea #if (NGX_STREAM_SSL) - if (pc->type == SOCK_STREAM && pscf->ssl) { - - if (u->proxy_protocol) { - if (ngx_stream_proxy_send_proxy_protocol(s) != NGX_OK) { - return; + if (pscf->ssl) { + + if (pc->type == SOCK_STREAM) { + if (u->proxy_protocol) { + if (ngx_stream_proxy_send_proxy_protocol(s) != NGX_OK) { + return; + } + + u->proxy_protocol = 0; } - - u->proxy_protocol = 0; } if (pc->ssl == NULL) { @@ -1044,6 +1067,8 @@ static void ngx_stream_proxy_ssl_handshake(ngx_connection_t *pc) { long rc; + u_char *p; + ngx_connection_t *c; ngx_stream_session_t *s; ngx_stream_upstream_t *u; ngx_stream_proxy_srv_conf_t *pscf; @@ -1083,6 +1108,29 @@ ngx_stream_proxy_ssl_handshake(ngx_conne ngx_del_timer(pc->write); } + c = s->connection; + + if (c->shared && pscf->requests) { + + if (ngx_event_udp_accept(c) != NGX_OK) { + ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR); + return; + } + + p = ngx_pnalloc(c->pool, pscf->buffer_size); + if (p == NULL) { + ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR); + return; + } + + u = s->upstream; + + u->downstream_buf.start = p; + u->downstream_buf.end = p + pscf->buffer_size; + u->downstream_buf.pos = p; + u->downstream_buf.last 
= p; + } + ngx_stream_proxy_init_upstream(s); return; @@ -1584,8 +1632,11 @@ ngx_stream_proxy_process(ngx_stream_sess } } - if (c->type == SOCK_DGRAM && ++u->responses == pscf->responses) - { + if (c->type == SOCK_DGRAM && from_upstream) { + u->responses++; + } + + if (c->type == SOCK_DGRAM && u->responses == pscf->responses) { src->read->ready = 0; src->read->eof = 1; } @@ -1848,6 +1899,7 @@ ngx_stream_proxy_create_srv_conf(ngx_con conf->upload_rate = NGX_CONF_UNSET_SIZE; conf->download_rate = NGX_CONF_UNSET_SIZE; conf->responses = NGX_CONF_UNSET_UINT; + conf->requests = NGX_CONF_UNSET_UINT; conf->next_upstream_tries = NGX_CONF_UNSET_UINT; conf->next_upstream = NGX_CONF_UNSET; conf->proxy_protocol = NGX_CONF_UNSET; @@ -1893,6 +1945,9 @@ ngx_stream_proxy_merge_srv_conf(ngx_conf ngx_conf_merge_uint_value(conf->responses, prev->responses, NGX_MAX_INT32_VALUE); + ngx_conf_merge_uint_value(conf->requests, + prev->requests, NGX_MAX_INT32_VALUE); + ngx_conf_merge_uint_value(conf->next_upstream_tries, prev->next_upstream_tries, 0); @@ -2019,6 +2074,34 @@ ngx_stream_proxy_set_ssl(ngx_conf_t *cf, return NGX_OK; } + +static +char *ngx_stream_proxy_set_ssl_protocols(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf) +{ + ngx_stream_proxy_srv_conf_t *pscf = conf; + + char *rv; + + rv = ngx_conf_set_bitmask_slot(cf, cmd, conf); + + if (rv != NGX_CONF_OK) { + return rv; + } + + /* DTLS protocol requires corresponding TLS version to be set */ + + if (pscf->ssl_protocols & NGX_SSL_DTLSv1) { + pscf->ssl_protocols |= NGX_SSL_TLSv1; + } + + if (pscf->ssl_protocols & NGX_SSL_DTLSv1_2) { + pscf->ssl_protocols |= NGX_SSL_TLSv1_2; + } + + return NGX_CONF_OK; +} + #endif diff --git a/src/stream/ngx_stream_ssl_module.c b/src/stream/ngx_stream_ssl_module.c --- a/src/stream/ngx_stream_ssl_module.c +++ b/src/stream/ngx_stream_ssl_module.c @@ -34,6 +34,8 @@ static char *ngx_stream_ssl_merge_conf(n static char *ngx_stream_ssl_password_file(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static 
char *ngx_stream_set_ssl_protocols(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); static char *ngx_stream_ssl_session_cache(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static ngx_int_t ngx_stream_ssl_init(ngx_conf_t *cf); @@ -46,6 +48,9 @@ static ngx_conf_bitmask_t ngx_stream_ss { ngx_string("TLSv1.1"), NGX_SSL_TLSv1_1 }, { ngx_string("TLSv1.2"), NGX_SSL_TLSv1_2 }, { ngx_string("TLSv1.3"), NGX_SSL_TLSv1_3 }, + { ngx_string("DTLSv1"), NGX_SSL_DTLSv1 }, + { ngx_string("DTLSv1.2"), NGX_SSL_DTLSv1_2 }, + { ngx_null_string, 0 } }; @@ -105,7 +110,7 @@ static ngx_command_t ngx_stream_ssl_com { ngx_string("ssl_protocols"), NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_1MORE, - ngx_conf_set_bitmask_slot, + ngx_stream_set_ssl_protocols, NGX_STREAM_SRV_CONF_OFFSET, offsetof(ngx_stream_ssl_conf_t, protocols), &ngx_stream_ssl_protocols }, @@ -365,7 +370,8 @@ ngx_stream_ssl_init_connection(ngx_ssl_t cscf = ngx_stream_get_module_srv_conf(s, ngx_stream_core_module); - if (cscf->tcp_nodelay && ngx_tcp_nodelay(c) != NGX_OK) { + if (c->type == SOCK_STREAM + && cscf->tcp_nodelay && ngx_tcp_nodelay(c) != NGX_OK) { return NGX_ERROR; } @@ -389,6 +395,11 @@ ngx_stream_ssl_init_connection(ngx_ssl_t return NGX_AGAIN; } + if (rc == NGX_ABORT) { + /* DTLS handshake sent the cookie to client */ + return NGX_ERROR; + } + /* rc == NGX_OK */ return NGX_OK; @@ -721,6 +732,33 @@ ngx_stream_ssl_password_file(ngx_conf_t static char * +ngx_stream_set_ssl_protocols(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_stream_ssl_conf_t *scf = conf; + + char *rv; + + rv = ngx_conf_set_bitmask_slot(cf, cmd, conf); + + if (rv != NGX_CONF_OK) { + return rv; + } + + /* DTLS protocol requires corresponding TLS version to be set */ + + if (scf->protocols & NGX_SSL_DTLSv1) { + scf->protocols |= NGX_SSL_TLSv1; + } + + if (scf->protocols & NGX_SSL_DTLSv1_2) { + scf->protocols |= NGX_SSL_TLSv1_2; + } + + return NGX_CONF_OK; +} + + +static char * ngx_stream_ssl_session_cache(ngx_conf_t *cf, 
ngx_command_t *cmd, void *conf) { ngx_stream_ssl_conf_t *scf = conf; @@ -835,8 +873,13 @@ invalid: static ngx_int_t ngx_stream_ssl_init(ngx_conf_t *cf) { - ngx_stream_handler_pt *h; - ngx_stream_core_main_conf_t *cmcf; + ngx_uint_t i; + ngx_stream_listen_t *ls; + ngx_stream_handler_pt *h; + ngx_stream_conf_ctx_t *sctx; + ngx_stream_ssl_conf_t **sscfp, *sscf; + ngx_stream_core_srv_conf_t **cscfp, *cscf; + ngx_stream_core_main_conf_t *cmcf; cmcf = ngx_stream_conf_get_module_main_conf(cf, ngx_stream_core_module); @@ -847,5 +890,52 @@ ngx_stream_ssl_init(ngx_conf_t *cf) *h = ngx_stream_ssl_handler; + cmcf = ngx_stream_conf_get_module_main_conf(cf, ngx_stream_core_module); + + ls = cmcf->listen.elts; + + for (i = 0; i < cmcf->listen.nelts; i++) { + if (ls[i].ssl) { + sctx = ls[i].ctx; + + sscfp = (ngx_stream_ssl_conf_t **)sctx->srv_conf; + cscfp = (ngx_stream_core_srv_conf_t **)sctx->srv_conf; + + sscf = sscfp[ngx_stream_ssl_module.ctx_index]; + cscf = cscfp[ngx_stream_core_module.ctx_index]; + + if (sscf->certificates == NULL) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "no \"ssl_certificate\" is defined " + "in server listening on SSL port at %s:%ui", + cscf->file_name, cscf->line); + return NGX_ERROR; + } + + if (ls[i].type == SOCK_DGRAM) { + if (!(sscf->protocols & NGX_SSL_DTLSv1 + || sscf->protocols & NGX_SSL_DTLSv1_2)) + { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "\"ssl_protocols\" does not enable DTLS in a " + "server listening on UDP SSL port at %s:%ui", + cscf->file_name, cscf->line); + return NGX_ERROR; + } + + } else { + if (sscf->protocols & NGX_SSL_DTLSv1 + || sscf->protocols & NGX_SSL_DTLSv1_2 ) + { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "\"ssl_protocols\" includes DTLS in a server " + "listening on SSL port at %s:%ui", + cscf->file_name, cscf->line); + return NGX_ERROR; + } + } + } + } + return NGX_OK; } From shankerwangmiao at gmail.com Wed Feb 21 14:30:14 2018 From: shankerwangmiao at gmail.com (Wang Shanker) Date: Wed, 21 Feb 2018 22:30:14 
+0800 Subject: DTLS patches In-Reply-To: <20180221141232.GA10488@vlpc> References: <20180221101858.GA12440@vlpc> <44a1e93687ac4365614d825bd2650699.NginxMailingListEnglish@forum.nginx.org> <20180221141232.GA10488@vlpc> Message-ID: <3CCD82F6-DBE2-4F7F-8BAB-A2A0CEDB80F5@gmail.com> Hi, of course. I'm implementing RFC 8094, which specifies transmitting DNS queries over DTLS. Nginx is used for offloading DTLS encryption, and the software behind nginx is bind9. Cheers, Miao Wang > On Feb 21, 2018, at 22:12, Vladimir Homutov wrote: > > On Wed, Feb 21, 2018 at 08:47:37AM -0500, shankerwangmiao wrote: >> >> I have tested this patch in my environment. Before the patch is applied, >> `tcp_nodelay off` needs to be placed in every `server` clause with DTLS >> enabled to work around the problem. >> > > Hello, > can you please elaborate about your environment? Do you proxy the DTLS > stream directly to the backend, or do you perform DTLS offload? > What protocol are you using, and which server/client software is > before/behind nginx? > > I'm attaching a refreshed patch against nginx-1.13.9 for those who are > interested to test. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From shankerwangmiao at gmail.com Wed Feb 21 14:44:00 2018 From: shankerwangmiao at gmail.com (Wang Shanker) Date: Wed, 21 Feb 2018 22:44:00 +0800 Subject: DTLS patches In-Reply-To: <3CCD82F6-DBE2-4F7F-8BAB-A2A0CEDB80F5@gmail.com> References: <20180221101858.GA12440@vlpc> <44a1e93687ac4365614d825bd2650699.NginxMailingListEnglish@forum.nginx.org> <20180221141232.GA10488@vlpc> <3CCD82F6-DBE2-4F7F-8BAB-A2A0CEDB80F5@gmail.com> Message-ID: <5EB029C4-4D68-4E1A-B452-EB28DF6EC0A9@gmail.com> Hi, I noticed that you have introduced `ngx_event_udp_accept()`, which can create a separate socket for receiving datagrams from a specific client. I understand that it is necessary for DTLS servers. However, I wonder why it is also called for normal UDP servers.
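For context, the offload deployment described earlier in this thread (DTLS termination for DNS, per RFC 8094, in front of a plain-DNS backend) would look roughly like the following once the patch is applied. This is a sketch, not configuration taken from the thread: the certificate paths and the backend address are assumptions, and port 853 is the DNS-over-DTLS port registered by RFC 8094.

```nginx
# Sketch of DTLS offload for DNS (RFC 8094), assuming nginx built with this patch.
# Paths and addresses are hypothetical; adjust to your deployment.
stream {
    server {
        # "ssl" on a udp listener is what the patch makes possible
        listen 853 udp ssl;

        # DTLS must be enabled explicitly for a UDP listener
        ssl_protocols       DTLSv1.2;
        ssl_certificate     /etc/nginx/ssl/dns.crt;
        ssl_certificate_key /etc/nginx/ssl/dns.key;

        # plain-UDP DNS backend, e.g. bind9 on localhost
        proxy_pass 127.0.0.1:53;

        # one DNS response is expected per query
        proxy_responses 1;
    }
}
```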
For UDP servers listening on a port below 1024, such a call will fail if the worker processes drop privileges to a non-root user. The following patch solves this problem by retaining CAP_NET_BIND_SERVICE after the worker processes change UID. Cheers, Miao Wang -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Retain-CAP_NET_BIND_SERVICE-capability-for-udp-privi.patch Type: application/octet-stream Size: 3605 bytes Desc: not available URL: -------------- next part -------------- > On Feb 21, 2018, at 22:30, Wang Shanker wrote: > > Hi, of course. I'm implementing RFC 8094, which specifies transmitting DNS > queries over DTLS. Nginx is used for offloading DTLS encryption, and > the software behind nginx is bind9. > > Cheers, > > Miao Wang > >> On Feb 21, 2018, at 22:12, Vladimir Homutov wrote: >> >> On Wed, Feb 21, 2018 at 08:47:37AM -0500, shankerwangmiao wrote: >>> >>> I have tested this patch in my environment. Before the patch is applied, >>> `tcp_nodelay off` needs to be placed in every `server` clause with DTLS >>> enabled to work around the problem. >>> >> >> Hello, >> can you please elaborate about your environment? Do you proxy the DTLS >> stream directly to the backend, or do you perform DTLS offload? >> What protocol are you using, and which server/client software is >> before/behind nginx? >> >> I'm attaching a refreshed patch against nginx-1.13.9 for those who are >> interested to test.
>> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > From vl at nginx.com Wed Feb 21 15:34:50 2018 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 21 Feb 2018 18:34:50 +0300 Subject: DTLS patches In-Reply-To: <5EB029C4-4D68-4E1A-B452-EB28DF6EC0A9@gmail.com> References: <20180221101858.GA12440@vlpc> <44a1e93687ac4365614d825bd2650699.NginxMailingListEnglish@forum.nginx.org> <20180221141232.GA10488@vlpc> <3CCD82F6-DBE2-4F7F-8BAB-A2A0CEDB80F5@gmail.com> <5EB029C4-4D68-4E1A-B452-EB28DF6EC0A9@gmail.com> Message-ID: <20180221153449.GA17008@vlpc> On Wed, Feb 21, 2018 at 10:44:00PM +0800, Wang Shanker wrote: > Hi, > > I noticed that you have introduced `ngx_event_udp_accept()`, which can > create a separate socket for receiving datagrams from a specific client. > I understand that it is necessary for DTLS servers. However, I wonder > why it is also called for normal UDP servers. For a normal UDP server this is beneficial if you need to process a bidirectional stream, i.e. when proxying DTLS or similar protocols without offloading them. Probably this should at least be configurable. > For UDP servers listening on a port below 1024, such a call will fail if > the worker processes drop privileges to a non-root user. > The following patch solves this problem by retaining CAP_NET_BIND_SERVICE > after the worker processes change UID. Yes, there is an issue in such a case, and retaining (partial) permissions is a possible (but ugly) solution.
From shankerwangmiao at gmail.com Wed Feb 21 15:49:37 2018 From: shankerwangmiao at gmail.com (Wang Shanker) Date: Wed, 21 Feb 2018 23:49:37 +0800 Subject: DTLS patches In-Reply-To: <20180221153449.GA17008@vlpc> References: <20180221101858.GA12440@vlpc> <44a1e93687ac4365614d825bd2650699.NginxMailingListEnglish@forum.nginx.org> <20180221141232.GA10488@vlpc> <3CCD82F6-DBE2-4F7F-8BAB-A2A0CEDB80F5@gmail.com> <5EB029C4-4D68-4E1A-B452-EB28DF6EC0A9@gmail.com> <20180221153449.GA17008@vlpc> Message-ID: <9128218F-2B7C-404D-A385-CC4E1B4B66F8@gmail.com> > On Feb 21, 2018, at 23:34, Vladimir Homutov wrote: > >> On Wed, Feb 21, 2018 at 10:44:00PM +0800, Wang Shanker wrote: >> Hi, >> >> I noticed that you have introduced `ngx_event_udp_accept()`, which can >> create a separate socket for receiving datagrams from a specific client. >> I understand that it is necessary for DTLS servers. However, I wonder >> why it is also called for normal UDP servers. > > For a normal UDP server this is beneficial if you need to process > a bidirectional stream, i.e. when proxying DTLS or similar protocols without > offloading them. Probably this should at least be configurable. > >> For UDP servers listening on a port below 1024, such a call will fail if >> the worker processes drop privileges to a non-root user. >> The following patch solves this problem by retaining CAP_NET_BIND_SERVICE >> after the worker processes change UID. > > Yes, there is an issue in such a case, and retaining (partial) permissions > is a possible (but ugly) solution. You can see from the code that this is not the first time that solution has been used. I wonder if there is a better solution for this issue.
Cheers, Miao Wang From nginx-forum at forum.nginx.org Thu Feb 22 09:26:15 2018 From: nginx-forum at forum.nginx.org (sonpg) Date: Thu, 22 Feb 2018 04:26:15 -0500 Subject: Error: Too many redirect when redirect sharepoint site Message-ID: <87776d6828e53ffc240e38f243b3f498.NginxMailingListEnglish@forum.nginx.org> I have a problem when I redirect a SharePoint site: it gets the error "redirected you too many times". I tried clearing cookies, but that did not work. Code: user nginx; worker_processes 4; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { # Redirect sharepoint site server { listen 80; server_name ecm.test.com; rewrite ^(.*) http://ecm.test.com permanent; } # Redirect http://www.test.com -> https://www.test.com server { listen 80; server_name www.test.com; #rewrite ^(.*) https://www.test.com permanent; return 301 https://$host$request_uri; } # Redirect http://test.com -> https://www.test.com server { listen 80; server_name test.com; rewrite ^(.*) https://www.test.com permanent; #return 301 https://$host$request_uri; } ### Reverse Proxy for WEB02 server { listen 443 ssl; server_name www.test.com; ssl on; ### SSL cert files ### ssl_certificate /etc/nginx/ssl/cert.pem; ssl_certificate_key /etc/nginx/ssl/cert.key; ### ssl_ciphers HIGH:!aNULL:!MD5; #ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4"; location / {
proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass https://web02.test.com; proxy_set_header host test.com; } } ################### } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278716,278716#msg-278716 From nginx-forum at forum.nginx.org Thu Feb 22 13:17:54 2018 From: nginx-forum at forum.nginx.org (imrickysingh) Date: Thu, 22 Feb 2018 08:17:54 -0500 Subject: Redirection Message-ID: Hi guys, I am new to nginx and facing a problem with my setup. In my setup I have nginx and Tomcat, with the application running on Tomcat as http://tomcatdomain/application_name. I want to redirect to the application if someone hits http://nginxdomain/app. I am able to do the redirection using a location block as: location /app { proxy_pass $tomcatdomain; proxy_set_header Host $host; proxy_pass_request_headers on; } http://nginxdomain/app gives the default Tomcat page, but I am not able to reach the application. If I go to http://nginxdomain/app/application_name, it doesn't go anywhere either; it also gives me the default Tomcat page. Regards, Ricky Singh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278722,278722#msg-278722 From arozyev at nginx.com Thu Feb 22 14:40:44 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Thu, 22 Feb 2018 17:40:44 +0300 Subject: Redirection In-Reply-To: References: Message-ID: Hi, show your full config; usually there is no need to set a variable like $tomcatdomain: proxy_pass http://tomcatdomain; is enough. br, Aziz. > On 22 Feb 2018, at 16:17, imrickysingh wrote: > > Hi guys, > > I am new to nginx and facing a problem with my setup. > > In my setup I have nginx and Tomcat, with the application running on Tomcat > as http://tomcatdomain/application_name. I want to redirect to the application > if someone hits http://nginxdomain/app.
I am able to do the redirection using > location block as: > > location /app { > proxy_pass $tomcatdomain; > proxy_set_header Host $host; > proxy_pass_request_headers on; > } > > http://nginxdomain/app gives the default Tomcat page, but I am not able to > reach the application. > If I go to http://nginxdomain/app/application_name, it doesn't go > anywhere either; it also gives me the default Tomcat page. > > > > Regards, > Ricky Singh > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278722,278722#msg-278722 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From Jason.Whittington at equifax.com Thu Feb 22 15:55:47 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Thu, 22 Feb 2018 15:55:47 +0000 Subject: Redirection Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432B0E061@STLEISEXCMBX3.eis.equifax.com> One easy newbie mistake to make is leaving out trailing slashes in location and proxy_pass blocks. I'd expect the location block to look something like this: location /app/ { proxy_pass http://tomcatdomain/application_name/; } Note the trailing slashes after /app/ and /application_name/. Without them the routing is not going to do what you want. Jason -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of imrickysingh Sent: Thursday, February 22, 2018 7:18 AM To: nginx at nginx.org Subject: [IE] Redirection Hi guys, I am new to nginx and facing a problem with my setup. In my setup I have nginx and Tomcat, with the application running on Tomcat as http://tomcatdomain/application_name. I want to redirect to the application if someone hits http://nginxdomain/app. I am able to do the redirection using a location block as: location /app { proxy_pass $tomcatdomain; proxy_set_header Host $host; proxy_pass_request_headers on; } http://nginxdomain/app gives the default Tomcat page, but I am not able to reach the application.
If I go to http://nginxdomain/app/application_name, it doesn't go anywhere either; it also gives me the default Tomcat page. Regards, Ricky Singh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278722,278722#msg-278722 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved. From lists at lazygranch.com Fri Feb 23 02:40:12 2018 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 22 Feb 2018 18:40:12 -0800 Subject: Flush access log buffer Message-ID: <20180222184012.645a3152.lists@lazygranch.com> When I was using FreeBSD, the access log was written in real time. Since I moved to CentOS, that doesn't seem to be the case. Is there some way to flush the buffer? From nginx-forum at forum.nginx.org Fri Feb 23 08:52:36 2018 From: nginx-forum at forum.nginx.org (sonpg) Date: Fri, 23 Feb 2018 03:52:36 -0500 Subject: NTLM sharepoint when use nginx reverse proxy Message-ID: Hi everyone, I have an issue with authentication when using nginx as a reverse proxy.
It always requires entering user/pass. My config file: ##### upstream test.com { server test.com; keepalive 16; } server { listen 80; server_name test.com; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://test.com; proxy_set_header host test.com; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278737,278737#msg-278737 From nginx-forum at forum.nginx.org Fri Feb 23 09:15:31 2018 From: nginx-forum at forum.nginx.org (sonpg) Date: Fri, 23 Feb 2018 04:15:31 -0500 Subject: NTLM sharepoint when use nginx reverse proxy In-Reply-To: References: Message-ID: <47388cf96eb57f11308121dc5585f807.NginxMailingListEnglish@forum.nginx.org> myserver requires NTLM authentication. I access myserver through the nginx proxy and provide the correct auth info, but the browser prompts for auth again. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278737,278738#msg-278738 From francis at daoine.org Fri Feb 23 12:25:25 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 23 Feb 2018 12:25:25 +0000 Subject: Error: Too many redirect when redirect sharepoint site In-Reply-To: <87776d6828e53ffc240e38f243b3f498.NginxMailingListEnglish@forum.nginx.org> References: <87776d6828e53ffc240e38f243b3f498.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180223122525.GA3280@daoine.org> On Thu, Feb 22, 2018 at 04:26:15AM -0500, sonpg wrote: Hi there, > I have a problem when I redirect a SharePoint site: it gets the error "redirected > you too many times". I tried clearing cookies, but that did not work. I suspect that this has been overtaken by a later mail thread, but just in case...
A request from the client for http://ecm.test.com/anything will get a response of a http redirect to http://ecm.test.com; that request will get a response of a redirect to http://ecm.test.com; and you are in a loop that the client should report. f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Feb 23 12:32:11 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 23 Feb 2018 12:32:11 +0000 Subject: NTLM sharepoint when use nginx reverse proxy In-Reply-To: <47388cf96eb57f11308121dc5585f807.NginxMailingListEnglish@forum.nginx.org> References: <47388cf96eb57f11308121dc5585f807.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180223123211.GB3280@daoine.org> On Fri, Feb 23, 2018 at 04:15:31AM -0500, sonpg wrote: Hi there, > myserver requires NTLM authentication. I access myserver through nginx proxy > and provide correct auth info,but the browser prompt auth again. http://nginx.org/r/ntlm nginx does not support NTLM authentication. If you need something to reverse-proxy a http server that uses NTLM, you must write the code to make your nginx do it, or you must use something that is not stock-nginx. If you choose the latter, "NGINX Plus" is one thing that does advertise NTLM support. Other things probably exist too. f -- Francis Daly francis at daoine.org From pchychi at gmail.com Fri Feb 23 14:05:00 2018 From: pchychi at gmail.com (Payam Chychi) Date: Fri, 23 Feb 2018 14:05:00 +0000 Subject: NTLM sharepoint when use nginx reverse proxy In-Reply-To: <20180223123211.GB3280@daoine.org> References: <47388cf96eb57f11308121dc5585f807.NginxMailingListEnglish@forum.nginx.org> <20180223123211.GB3280@daoine.org> Message-ID: On Fri, Feb 23, 2018 at 4:32 AM Francis Daly wrote: > On Fri, Feb 23, 2018 at 04:15:31AM -0500, sonpg wrote: > > Hi there, > > > myserver requires NTLM authentication. I access myserver through nginx > proxy > > and provide correct auth info,but the browser prompt auth again. 
> > http://nginx.org/r/ntlm
>
> nginx does not support NTLM authentication.
>
> If you need something to reverse-proxy a http server that uses NTLM, you
> must write the code to make your nginx do it, or you must use something
> that is not stock-nginx.
>
> If you choose the latter, "NGINX Plus" is one thing that does advertise
> NTLM support. Other things probably exist too.
>
> f
> --
> Francis Daly francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Pass it to squid for NTLM auth

--
Payam Tarverdyan Chychi
Network Security Specialist / Network Engineer

From Jason.Whittington at equifax.com Fri Feb 23 15:22:14 2018
From: Jason.Whittington at equifax.com (Jason Whittington)
Date: Fri, 23 Feb 2018 15:22:14 +0000
Subject: NTLM sharepoint when use nginx reverse proxy
Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432B0E6EB@STLEISEXCMBX3.eis.equifax.com>

I posted this a few weeks ago; I hope it helps you. I did this with nginx plus, so it may not work if you are using the open-source product.

NTLM authentication authenticates connections instead of requests, and this somewhat contradicts the HTTP protocol, which is expected to be stateless. As a result it doesn't generally work through proxies, including nginx. NGINX can support it, though; you need to use the "ntlm" directive. Below is a [stripped down] example of how I have it set up in front of TFS. I would think Sharepoint would be very similar. This has worked very reliably for about a year.
upstream MyNtlmService {
    zone backend;
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;

    #See http://stackoverflow.com/questions/10395807/nginx-close-upstream-connection-after-request
    keepalive 64;

    #See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm
    ntlm;
}

server {
    listen 80;

    location / {
        proxy_read_timeout 60s;

        #http://stackoverflow.com/questions/21284935/nginx-reverse-proxy-with-windows-authentication-that-uses-ntlm
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://MyNtlmService/;
    }
}

From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Payam Chychi
Sent: Friday, February 23, 2018 8:05 AM
To: nginx at nginx.org
Subject: [IE] Re: NTLM sharepoint when use nginx reverse proxy

On Fri, Feb 23, 2018 at 4:32 AM Francis Daly wrote:
On Fri, Feb 23, 2018 at 04:15:31AM -0500, sonpg wrote:

Hi there,

> myserver requires NTLM authentication. I access myserver through nginx proxy
> and provide correct auth info, but the browser prompts for auth again.

http://nginx.org/r/ntlm

nginx does not support NTLM authentication.

If you need something to reverse-proxy a http server that uses NTLM, you
must write the code to make your nginx do it, or you must use something
that is not stock-nginx.

If you choose the latter, "NGINX Plus" is one thing that does advertise
NTLM support. Other things probably exist too.

f
--
Francis Daly francis at daoine.org
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Pass it to squid for NTLM auth
--
Payam Tarverdyan Chychi
Network Security Specialist / Network Engineer

This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax®
is a registered trademark of Equifax Inc. All rights reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Sat Feb 24 02:54:48 2018 From: lists at lazygranch.com (lists at lazygranch.com) Date: Fri, 23 Feb 2018 18:54:48 -0800 Subject: Flush access log buffer In-Reply-To: <20180222184012.645a3152.lists@lazygranch.com> References: <20180222184012.645a3152.lists@lazygranch.com> Message-ID: <20180223185448.7b0bf9f8.lists@lazygranch.com> On Thu, 22 Feb 2018 18:40:12 -0800 "lists at lazygranch.com" wrote: > When I was using FreeBSD, the access log was real time. Since I went > to Centos, that doesn't seem to be the case. Is there some way to > flush the buffer? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I found a flush=x option on the command line. I set it for 1m for testing. Note that you need to specify a buffer size else nginx will choke. From nginx-forum at forum.nginx.org Sat Feb 24 10:37:45 2018 From: nginx-forum at forum.nginx.org (matke) Date: Sat, 24 Feb 2018 05:37:45 -0500 Subject: nginScript send POST resquest Message-ID: <3f2bf1b620d4437619d4d33625804dfa.NginxMailingListEnglish@forum.nginx.org> Hi, I tried to send some POST request to remote server (as part of external authentication), but I got message that XMLHttpRequest is undefined (or similar error). I saw later in this post https://forum.nginx.org/read.php?2,275459,275459#msg-275459 answer that this is not possible. Can it help if NodeJS is installed, or nginScript it completely independent? 
(I don't have much experience in any javascript stuff)

This is how my script looks:

[code]
function baz(req, res) {
    res.headers.foo = 1234;
    res.status = 200;
    res.contentType = "text/plain; charset=utf-8";
    res.contentLength = 22;
    res.sendHeader();
    res.send("Enginx ");
    res.send("java");
    res.send("script\r\n");
    var response = "resp";

    var data = "{\n \"page\": 0,\n \"count\": 1,\n \"order\": 2,\n \"sort\": \"0\",\n \"headers\": {\n \"Access-Control-Allow-Origin\": [\n \"*\"\n ]\n }\n}";

    var xhr = new XMLHttpRequest();
    xhr.withCredentials = true;

    xhr.addEventListener("readystatechange", function () {
        if (this.readyState === 4) {
            // console.log(this.responseText);
        }
    });

    xhr.open("POST", "https://remote.host/api/login", false);

    xhr.setRequestHeader("origin", "https://localhost");
    xhr.setRequestHeader("content-type", "application/json");
    xhr.setRequestHeader("referer", "https://localhost/");
    xhr.setRequestHeader("accept-encoding", "gzip, deflate, br");
    xhr.setRequestHeader("accept-language", "sr-RS,sr;q=0.9,en-US;q=0.8,en;q=0.7,hr;q=0.6,bs;q=0.5");
    xhr.setRequestHeader("cookie", "securitytoken=eyJraWQiOiJ.......Swnq3xjEvXodQ");

    xhr.send(data);

    xhr.onreadystatechange = processRequest;

    function processRequest(e) {
        if (xhr.readyState == 4 && xhr.status == 200) {
            response = JSON.parse(xhr.responseText);
            // alert(response.ip);
        }
    }

    res.send(response);
    res.finish();
}
[/code]

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278746,278746#msg-278746

From nginx-forum at forum.nginx.org Sun Feb 25 08:08:28 2018
From: nginx-forum at forum.nginx.org (kefiras@gmail.com)
Date: Sun, 25 Feb 2018 03:08:28 -0500
Subject: Jenkins reverse proxy on single domain with multiple apps
Message-ID: <92ef7976cccafcee03adef68b1126eda.NginxMailingListEnglish@forum.nginx.org>

Hello,

I am trying to setup a reverse proxy on a single domain to host multiple apps separated by URI, for example:
http://localhost/app1
http://localhost/app2
etc.
Right now I am having problems with reverse-proxying jenkins, which sends an HTML page in the HTTP reply with relative paths, e.g. /static/abc/css/common.css

So far I have rewritten the content to the login screen as per below, but I want to know if there is any other way to rewrite this automatically:

location /jenkins {
    proxy_pass http://jenkins:8080/;
    sub_filter 'url=/login?from=%2F' 'url=/jenkins/login?from=%2F';
    sub_filter "('/login?from=%2F')" "('/jenkins/login?from=%2F')";
    sub_filter_once off;
}

Once I hit http://localhost/jenkins I am getting a whole bunch of 404s because of relative links, as per the HTML response below:

["Jenkins [Jenkins]" login-page HTML, garbled in the archive]
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278748,278748#msg-278748

From nginx-forum at forum.nginx.org Sun Feb 25 09:40:36 2018
From: nginx-forum at forum.nginx.org (sonpg)
Date: Sun, 25 Feb 2018 04:40:36 -0500
Subject: NTLM sharepoint when use nginx reverse proxy
In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A432B0E6EB@STLEISEXCMBX3.eis.equifax.com>
References: <995C5C9AD54A3C419AF1C20A8B6AB9A432B0E6EB@STLEISEXCMBX3.eis.equifax.com>
Message-ID: <02270391417551b5c98c37bafa5e9575.NginxMailingListEnglish@forum.nginx.org>

I tried it and it works, but I have a new issue. Some sites I need to redirect from port 80 to 443, and they use the same port 80 as the sharepoint site. My code is:

events {
    worker_connections 1024;
}

stream {
    upstream ecm.test.com {
        hash $remote_addr consistent;
        server ecm.test.com:81 weight=5;
    }

    server {
        listen 81; #Line 27
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass ecm.test.com;
    }
}

http {
    server_tokens off;
    proxy_buffering off;
    expires 12h;
    proxy_redirect off;

    # Redirect http://www.test.com -> https://www.test.com
    server {
        listen 80;
        server_name www.test.com;
        #rewrite ^(.*) https://www.test.com permanent;
        return 301 https://$host$request_uri;
    }

    # Redirect http://test.com -> https://www.test.com
    server {
        listen 80;
        server_name test.com;
        rewrite ^(.*) https://www.test.com permanent;
        #return 301 https://$host$request_uri;
    }

    ### Reverse Proxy for WEB02
    server {
        listen 443 ssl;
        server_name www.test.com;
        ssl on;

        ### SSL cert files ###
        ssl_certificate /etc/nginx/ssl/cert.pem;
        ssl_certificate_key /etc/nginx/ssl/cert.key;
        ###
        #ssl_ciphers HIGH:!aNULL:!MD5;
        #ssl_ciphers RC4:HIGH:!aNULL:!MD5;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_ciphers
"EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4"; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass https://web02.test.com; proxy_set_header host test.com; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278737,278749#msg-278749 From arozyev at nginx.com Sun Feb 25 13:11:40 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Sun, 25 Feb 2018 16:11:40 +0300 Subject: Jenkins reverse proxy on single domain with multiple apps In-Reply-To: <92ef7976cccafcee03adef68b1126eda.NginxMailingListEnglish@forum.nginx.org> References: <92ef7976cccafcee03adef68b1126eda.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4AC89E77-21C4-4C73-ADF5-1618B96E51C2@nginx.com> Hi, compare the output of curl -ivvv http://jenkins:8080 curl -ivvv http://localhost/jenkins then curl -iLvvv http://jenkins:8080 curl -iLvvv http://localhost/jenkins pay attention on the cookie headers. java based applications usually may set session cookies and you should handle them accordingly. and it?s not clear (at least for me) why you?ve mentioned localhost/app1|2, is that app1/app2 are different jenkins applications servers? br, Aziz. > On 25 Feb 2018, at 11:08, kefiras at gmail.com wrote: > > Hello, > > I am trying to setup a reverse proxy on a single domain to host multiple > apps separated by URI, for example: > http://localhost/app1 > http://localhost/app2 > etc. 
>
> Right now I am having problems with reverse-proxying jenkins, which sends an HTML page
> in the HTTP reply with relative paths, e.g. /static/abc/css/common.css
>
> So far I have rewritten the content to the login screen as per below, but I
> want to know if there is any other way to rewrite this automatically:
>
> location /jenkins {
> proxy_pass http://jenkins:8080/;
> sub_filter 'url=/login?from=%2F' 'url=/jenkins/login?from=%2F';
> sub_filter "('/login?from=%2F')" "('/jenkins/login?from=%2F')";
> sub_filter_once off;
>
> }
>
> Once I hit http://localhost/jenkins I am getting a whole bunch of 404s because
> of relative links, as per the HTML response below
>
> [quoted Jenkins setup-wizard page HTML, garbled in the archive; its
> stylesheet, favicon and script links all use absolute /static/a4190cd9/...
> and /images/... paths on jenkins-2.60.3]
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278748,278748#msg-278748 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Sun Feb 25 15:33:33 2018 From: francis at daoine.org (Francis Daly) Date: Sun, 25 Feb 2018 15:33:33 +0000 Subject: Jenkins reverse proxy on single domain with multiple apps In-Reply-To: <92ef7976cccafcee03adef68b1126eda.NginxMailingListEnglish@forum.nginx.org> References: <92ef7976cccafcee03adef68b1126eda.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180225153333.GC3280@daoine.org> On Sun, Feb 25, 2018 at 03:08:28AM -0500, kefiras at gmail.com wrote: Hi there, > I am trying to setup a reverse proxy on a single domain to host multiple > apps separated by URI, for example: > http://localhost/app1 > http://localhost/app2 > etc. You will be much happier if your nginx/app1/ application believes that it is installed at appserver/app1/, and not at appserver/. > Right know having problems with reverse proxy jenkins which sends HTML page > in HTTP reply with relative paths eg. /static/abc/css/common.css For jenkins, use the jenkins "prefix" argument. For other apps, use whatever their config/install options are. > So far I have rewritten the content to the login screen as per below but I > want to know if there is any other way to rewrite this automatically: The easiest way is not to have to rewrite anything. If jenkins is installed at /jenkins, then your nginx config will be pretty much "proxy_pass http://jenkins:8080;", unless you specifically want nginx to handle some of the static files instead of jenkins. 
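A minimal sketch of the arrangement described above, assuming Jenkins itself is started with its prefix option so that every URL it generates already begins with /jenkins (the jenkins:8080 host name is from the thread; the extra proxy headers are illustrative):

```nginx
# Assumes the backend was launched with a prefix, e.g.:
#   java -jar jenkins.war --prefix=/jenkins
# so Jenkins generates /jenkins/... links itself and nothing
# needs to be rewritten on the way through.
location /jenkins {
    # No URI part on proxy_pass: the /jenkins prefix from the
    # client request is passed to the backend unchanged.
    proxy_pass http://jenkins:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

With the prefixes matching on both sides, no sub_filter rewriting of the response body is needed.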
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Feb 25 15:34:49 2018 From: francis at daoine.org (Francis Daly) Date: Sun, 25 Feb 2018 15:34:49 +0000 Subject: NTLM sharepoint when use nginx reverse proxy In-Reply-To: <02270391417551b5c98c37bafa5e9575.NginxMailingListEnglish@forum.nginx.org> References: <995C5C9AD54A3C419AF1C20A8B6AB9A432B0E6EB@STLEISEXCMBX3.eis.equifax.com> <02270391417551b5c98c37bafa5e9575.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180225153449.GD3280@daoine.org> On Sun, Feb 25, 2018 at 04:40:36AM -0500, sonpg wrote: Hi there, > i try and it work but have new issue. Some site i need redirect from port 80 > to 443 and it use same port 80 with sharepoint site What request do you make? What response do you get? What response do you want to get instead? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Feb 25 16:34:17 2018 From: nginx-forum at forum.nginx.org (sonpg) Date: Sun, 25 Feb 2018 11:34:17 -0500 Subject: NTLM sharepoint when use nginx reverse proxy In-Reply-To: <20180225153449.GD3280@daoine.org> References: <20180225153449.GD3280@daoine.org> Message-ID: here is my issue, i using nginx to reverse proxy for sharepoint site: ecm.test.com:80 and redirect test.com:80 to https://test.com:443. it show "nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)" Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278737,278753#msg-278753 From wiktor at metacode.biz Sun Feb 25 19:16:18 2018 From: wiktor at metacode.biz (Wiktor Kwapisiewicz) Date: Sun, 25 Feb 2018 20:16:18 +0100 Subject: Routing based on ALPN In-Reply-To: References: Message-ID: <8febc163-7206-e530-7ea1-019d476a2271@metacode.biz> >> Is there a way to access and save ALPN value to a variable? > > It should possible to parse the incoming buffer with https://nginx.org/r/js_filter and create a variable to make a routing decision on. 
> Excellent idea for quickly solving this problem, thanks! Would a long term solution involve creating a new, additional variable in the ssl_preread module (e.g. ssl_preread_alpn)? I've seen something similar being done by HAProxy (ssl_fc_alpn [1]). Kind regards, Wiktor [1]: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.4-ssl_fc_alpn -- */metacode/* From francis at daoine.org Sun Feb 25 21:17:54 2018 From: francis at daoine.org (Francis Daly) Date: Sun, 25 Feb 2018 21:17:54 +0000 Subject: NTLM sharepoint when use nginx reverse proxy In-Reply-To: References: <20180225153449.GD3280@daoine.org> Message-ID: <20180225211754.GE3280@daoine.org> On Sun, Feb 25, 2018 at 11:34:17AM -0500, sonpg wrote: Hi there, > i using nginx to reverse proxy for sharepoint site: ecm.test.com:80 and > redirect test.com:80 to https://test.com:443. > it show "nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in > use)" You have something other than this nginx running, which is listening on port 80. One nginx with two server{} blocks which each "listen 80" is ok. Two separate nginxs which each "listen 80" is not ok. Maybe you have an old nginx running, maybe you have another web server running. Make sure nothing is listening on port 80 before you start this nginx. f -- Francis Daly francis at daoine.org From pchychi at gmail.com Mon Feb 26 00:36:36 2018 From: pchychi at gmail.com (Payam Chychi) Date: Mon, 26 Feb 2018 00:36:36 +0000 Subject: NTLM sharepoint when use nginx reverse proxy In-Reply-To: <20180225211754.GE3280@daoine.org> References: <20180225153449.GD3280@daoine.org> <20180225211754.GE3280@daoine.org> Message-ID: On Sun, Feb 25, 2018 at 1:18 PM Francis Daly wrote: > On Sun, Feb 25, 2018 at 11:34:17AM -0500, sonpg wrote: > > Hi there, > > > i using nginx to reverse proxy for sharepoint site: ecm.test.com:80 and > > redirect test.com:80 to https://test.com:443. 
> > it show "nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address > already in > > use)" > > You have something other than this nginx running, which is listening on > port 80. > > One nginx with two server{} blocks which each "listen 80" is ok. Two > separate nginxs which each "listen 80" is not ok. > > Maybe you have an old nginx running, maybe you have another web server > running. > > Make sure nothing is listening on port 80 before you start this nginx. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx You can?t just install software and expect it to work. Have you drawn up a design document at all that covers how the connections and handled and forwarded? You can?t have multiple processes listening to the same ip:port. You can get away but changing the ports for different backend applications or use different ip addresses. Draw out your design and give it a bit of thought. > > -- Payam Tarverdyan Chychi Network Security Specialist / Network Engineer -------------- next part -------------- An HTML attachment was scrubbed... URL: From pritamc99 at gmail.com Mon Feb 26 07:05:15 2018 From: pritamc99 at gmail.com (pritam chavan) Date: Mon, 26 Feb 2018 12:35:15 +0530 Subject: No subject In-Reply-To: References: Message-ID: Hi All, I am using open source NGINX as reverse proxy. There are certain URL which have URL parameters. I am getting following error while accessing this URL. 
2018/02/22 15:11:08 [error] 1606#0: *21 upstream sent invalid chunked response while reading upstream, client: 10.109.1.4, server: XXX.XXXXXXXX.com, request: "GET /bsg/scrips HTTP/1.1", upstream: "http://127.0.0.1:8042/bsg/scrips", host: "XXX.XXXXXXXX.com:8030"

2018/02/22 15:11:47 [error] 1606#0: *24 upstream sent invalid chunked response while reading upstream, client: 10.109.1.4, server: XXX.XXXXXXXX.com, request: "GET /bsg/scrips HTTP/1.1", upstream: "http://127.0.0.1:8042/bsg/scrips", host: "XXX.XXXXXXXX.com:8030"

The nginx.conf for the above is:

# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    client_body_buffer_size 10M;

    server {
        listen 8030;
        server_name xxx.xxxxxxxx.com;

        location /bsg/ltp/live { proxy_pass http://localhost:8041/bsg/ltp/live; }
        location /bsg/ltp/ { proxy_pass http://localhost:8041/bsg/ltp/; }
        location /bsg/ltp/live/$arg_name { proxy_pass http://localhost:8041; }
        location /bsg/ltp/live/$arg_name/$arg_name { proxy_pass http://localhost:8041; }
        location /bsg/ltp/closing { proxy_pass http://localhost:8041/bsg/ltp/closing; }
        location /bsg/ltp/closing/$arg_name { proxy_pass http://localhost:8041; }
        location /bsg/ltp/closing/$arg_name/$arg_name { proxy_pass http://localhost:8041; }
        location /bsg/scrips { proxy_pass http://localhost:8042/bsg/scrips; }
        location /bsg/scrips/find/isin/$arg_name { proxy_pass http://localhost:8042; }
        location /bsg/scrips/find/bse-code/$arg_name { proxy_pass http://localhost:8042; }
        location /bsg/scrips/find/nse-symbol/$arg_name { proxy_pass http://localhost:8042; }
        location /bsg/scrips/find/group/$arg_name { proxy_pass http://localhost:8042; }
        location
/bsg/ucm/pincode/$arg_name { proxy_pass http://localhost:8043; }
        location /bsg/ucm/ifscode/$arg_name { proxy_pass http://localhost:8043; }
        location ucic/customerid { proxy_pass http://localhost:8044/ucic/customerid; }
    }

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information
    include /etc/nginx/conf.d/*.conf;
}

Thanks & Regards,
Pritam Chavan.

From nginx-forum at forum.nginx.org Mon Feb 26 08:32:39 2018
From: nginx-forum at forum.nginx.org (Andrzej Walas)
Date: Mon, 26 Feb 2018 03:32:39 -0500
Subject: Files still on disc after inactive time
In-Reply-To: <20180215143853.GV24410@mdounin.ru>
References: <20180215143853.GV24410@mdounin.ru>
Message-ID:

Hi,

I don't use 3rd party modules. After the update to 1.13.8 I still have many logs like this:

[alert] 704#704: ignore long locked inactive cache entry 1ee9fd62b649e731d69f56b98e3e58a5, count:28

Sometimes:

[error] 5466#5466: *168740 readv() failed (104: Connection reset by peer) while reading upstream

Nothing else in the error log. How can I add more debug logs? I see that files older than the 1-day inactive time still exist in the cache and tmp directories.
Br
Andrzej Walas

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,278758#msg-278758

From arut at nginx.com Mon Feb 26 11:53:46 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 26 Feb 2018 14:53:46 +0300
Subject: nginScript send POST resquest
In-Reply-To: <3f2bf1b620d4437619d4d33625804dfa.NginxMailingListEnglish@forum.nginx.org>
References: <3f2bf1b620d4437619d4d33625804dfa.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180226115346.GA3177@Romans-MacBook-Air.local>

Hi,

On Sat, Feb 24, 2018 at 05:37:45AM -0500, matke wrote:
> Hi,
> I tried to send some POST request to remote server (as part of external
> authentication), but I got message that XMLHttpRequest is undefined (or
> similar error).
> I saw later in this post
> https://forum.nginx.org/read.php?2,275459,275459#msg-275459 answer that this
> is not possible.

Exactly. It's not possible. This page describes the current state of njs:

http://nginx.org/en/docs/njs_about.html

> Can it help if NodeJS is installed, or nginScript it completely independent?
> (I don't have much experience in any javascript stuff)

NodeJS and nginScript are completely independent. We are currently working on a solution which allows making HTTP requests from njs.
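Until njs can issue HTTP requests itself, the stock nginx mechanism for driving an external authentication call is the auth_request module. A minimal sketch, where the https://remote.host/api/login endpoint is taken from the thread's script, and the /app location and "backend" upstream are purely illustrative:

```nginx
# Requires nginx built with --with-http_auth_request_module.
location /app {
    # Each request first triggers an internal subrequest to /_auth;
    # a 2xx response lets the request through, 401/403 rejects it.
    auth_request /_auth;
    proxy_pass http://backend;  # hypothetical upstream
}

location = /_auth {
    internal;
    proxy_pass https://remote.host/api/login;  # endpoint from the thread's script
    # The auth subrequest should not forward the client body.
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

This covers the "external authentication" use case without any JavaScript, at the cost of less flexibility than scripted request handling.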
> This is how my script looks: > > > [code] > function baz(req, res) { > res.headers.foo = 1234; > res.status = 200; > res.contentType = "text/plain; charset=utf-8"; > res.contentLength = 22; > res.sendHeader(); > res.send("Enginx "); > res.send("java"); > res.send("script\r\n"); > var response = "resp"; > > > var data = "{\n \"page\": 0,\n \"count\": 1,\n \"order\": 2,\n > \"sort\": \"0\",\n \"headers\": {\n \"Access-Control-Allow-Origin\": [\n > \"*\"\n ]\n }\n}"; > > var xhr = new XMLHttpRequest(); > xhr.withCredentials = true; > > xhr.addEventListener("readystatechange", function () { > if (this.readyState === 4) { > // console.log(this.responseText); > } > }); > > xhr.open("POST", "https://remote.host/api/login", false); > > xhr.setRequestHeader("origin", "https://localhost"); > xhr.setRequestHeader("content-type", "application/json"); > xhr.setRequestHeader("referer", "https://localhost/"); > xhr.setRequestHeader("accept-encoding", "gzip, deflate, br"); > xhr.setRequestHeader("accept-language", > "sr-RS,sr;q=0.9,en-US;q=0.8,en;q=0.7,hr;q=0.6,bs;q=0.5"); > xhr.setRequestHeader("cookie", > "securitytoken=eyJraWQiOiJ.......Swnq3xjEvXodQ"); > > xhr.send(data); > > xhr.onreadystatechange = processRequest; > > function processRequest(e) { > if (xhr.readyState == 4 && xhr.status == 200) { > response = JSON.parse(xhr.responseText); > // alert(response.ip); > } > } > > res.send(response); > res.finish(); > } > [/code] > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278746,278746#msg-278746 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From mdounin at mdounin.ru Mon Feb 26 13:05:16 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 26 Feb 2018 16:05:16 +0300 Subject: your mail In-Reply-To: References: Message-ID: <20180226130516.GA89840@mdounin.ru> Hello! 
On Mon, Feb 26, 2018 at 12:35:15PM +0530, pritam chavan wrote: > Hi All, > > I am using open source NGINX as reverse proxy. There are certain URL which > have URL parameters. > > I am getting following error while accessing this URL. > > 2018/02/22 15:11:08 [error] 1606#0: *21 upstream sent invalid chunked > response while reading upstream, client: 10.109.1.4, server: > XXX.XXXXXXXX.com, request: "GET /bsg/scrips HTTP/1.1", upstream: " > http://127.0.0.1:8042/bsg/scrips", host: "XXX.XXXXXXXX.com:8030" > > 2018/02/22 15:11:47 [error] 1606#0: *24 upstream sent invalid chunked > response while reading upstream, client: 10.109.1.4, server: > XXX.XXXXXXXX.com, request: "GET /bsg/scrips HTTP/1.1", upstream: " > http://127.0.0.1:8042/bsg/scrips", host: "XXX.XXXXXXXX.com:8030" The message suggests the backend is broken and returns an invalid response with broken chunked transfer encoding. Check your backend. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Feb 26 14:58:26 2018 From: nginx-forum at forum.nginx.org (kefiras@gmail.com) Date: Mon, 26 Feb 2018 09:58:26 -0500 Subject: Jenkins reverse proxy on single domain with multiple apps In-Reply-To: <4AC89E77-21C4-4C73-ADF5-1618B96E51C2@nginx.com> References: <4AC89E77-21C4-4C73-ADF5-1618B96E51C2@nginx.com> Message-ID: <5421dd4a8e8bcdaa2286fa4d82fbee66.NginxMailingListEnglish@forum.nginx.org> It was just an example, app1 or app2 may be any app, eg. zabbix or another web server. 
I have left the basic config for troubleshooting; my proxy_pass is just as per below:

location /jenkins {
    proxy_pass http://jenkins:8080;
}

The very first GET is:
---
curl http://localhost:8123/jenkins
Authentication required
---
which tells the browser to go to http://localhost/login?from=%2Fjenkins but it should be rewritten to http://localhost/jenkins/login?from=%2Fjenkins

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278748,278769#msg-278769

From nginx-forum at forum.nginx.org Mon Feb 26 15:03:20 2018
From: nginx-forum at forum.nginx.org (kefiras@gmail.com)
Date: Mon, 26 Feb 2018 10:03:20 -0500
Subject: Jenkins reverse proxy on single domain with multiple apps
In-Reply-To: <20180225153333.GC3280@daoine.org>
References: <20180225153333.GC3280@daoine.org>
Message-ID: <04dc1109a3e8d23e42f94f5eb9f747cc.NginxMailingListEnglish@forum.nginx.org>

The setup is a simple set of docker containers.

A 'localhost' is a docker container with nginx acting as a reverse proxy.

It has a /jenkins location defined to reverse-proxy connections to the jenkins container, which exposes the java app at port 8080, so:

location /jenkins {
    proxy_pass http://jenkins:8080;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278748,278771#msg-278771

From arozyev at nginx.com Mon Feb 26 16:01:46 2018
From: arozyev at nginx.com (Aziz Rozyev)
Date: Mon, 26 Feb 2018 19:01:46 +0300
Subject: Jenkins reverse proxy on single domain with multiple apps
In-Reply-To: <04dc1109a3e8d23e42f94f5eb9f747cc.NginxMailingListEnglish@forum.nginx.org>
References: <20180225153333.GC3280@daoine.org> <04dc1109a3e8d23e42f94f5eb9f747cc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0D75CF75-1D9D-4A00-8A91-965C878901C1@nginx.com>

well, as I've said, try checking headers first, as per the following doc:

https://wiki.jenkins.io/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy

it can be much more complicated than just proxy_pass'ing.
regarding sub_filter, try getting rid of one of the sub_filters, probably the second one:

proxy_pass http://jenkins:8080/;
sub_filter 'url=/login?from=%2F' 'url=/jenkins/login?from=%2F';
# sub_filter "('/login?from=%2F')" "('/jenkins/login?from=%2F')";
sub_filter_once off;

br,
Aziz.

> On 26 Feb 2018, at 18:03, kefiras at gmail.com wrote:
>
> The setup is a simple set of docker containers.
>
> a 'localhost' is a docker container with nginx acting as a reverse proxy
>
> it has defined /jenkins location to reverse proxy connection to jenkins
> container which exposes the java app at port 8080 so:
>
> location /jenkins {
> proxy_pass http://jenkins:8080;
> }
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278748,278771#msg-278771
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Mon Feb 26 16:09:49 2018
From: nginx-forum at forum.nginx.org (kefiras@gmail.com)
Date: Mon, 26 Feb 2018 11:09:49 -0500
Subject: Jenkins reverse proxy on single domain with multiple apps
In-Reply-To: <0D75CF75-1D9D-4A00-8A91-965C878901C1@nginx.com>
References: <0D75CF75-1D9D-4A00-8A91-965C878901C1@nginx.com>
Message-ID: <2ef6b24ae6a7f04187d9bb877f0c24cb.NginxMailingListEnglish@forum.nginx.org>

Thanks for the help on this. I've fixed it by moving the jenkins application to the /jenkins path on the application server.
Now it's much simpler, and it appears to be working without hiccups: location /jenkins { proxy_pass http://jenkins:8080; proxy_redirect http://jenkins:8080/ http://nginx/; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278748,278776#msg-278776 From lists at lazygranch.com Tue Feb 27 02:32:12 2018 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 26 Feb 2018 18:32:12 -0800 Subject: Flush access log buffer In-Reply-To: <20180223185448.7b0bf9f8.lists@lazygranch.com> References: <20180222184012.645a3152.lists@lazygranch.com> <20180223185448.7b0bf9f8.lists@lazygranch.com> Message-ID: <20180226183212.583bd1c8.lists@lazygranch.com> On Fri, 23 Feb 2018 18:54:48 -0800 "lists at lazygranch.com" wrote: > On Thu, 22 Feb 2018 18:40:12 -0800 > "lists at lazygranch.com" wrote: > > > When I was using FreeBSD, the access log was real time. Since I went > > to Centos, that doesn't seem to be the case. Is there some way to > > flush the buffer? > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > I found a flush=x option on the command line. I set it for 1m for > testing. Note that you need to specify a buffer size else nginx will > choke. > > _______________________________________________ This flush=time option isn't working. I'm at a loss here. Here is part of an ls -l: -rw-r----- 1 nginx adm 12936 Feb 27 02:17 access.log -rw-r--r-- 1 nginx root 4760 Feb 24 03:06 access.log-20180224.gz -rw-r----- 1 nginx adm 1738667 Feb 26 03:21 access.log-20180226 This is the ls -l on /var/log/nginx: drwxr-xr-x. 2 root root 4096 Feb 27 02:11 nginx I'm not requesting a compressed log, so I assume CentOS is creating the gzipped files. Usually the access.log file has content, but sometimes it is empty and the log data is in the access.log-"date" file, which I suspect is a rollover from access.log. That is, maybe CentOS rotates the log but doesn't zip it right away.
http { log_format main '$status $remote_addr - $remote_user [$time_local] "$request" ' '$body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main buffer=32k flush=1m; uname -a Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux nginx -V nginx version: nginx/1.12.2 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) built with OpenSSL 1.0.2k-fips 26 Jan 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_perl_module=dynamic --add-dynamic-module=njs-1c50334fbea6/nginx --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-http_v2_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=-Wl,-E From Pritam.Chavan at edelweissfin.com 
Tue Feb 27 06:59:54 2018 From: Pritam.Chavan at edelweissfin.com (Pritam Chavan) Date: Tue, 27 Feb 2018 12:29:54 +0530 Subject: your mail In-Reply-To: <20180226130516.GA89840@mdounin.ru> References: <20180226130516.GA89840@mdounin.ru> Message-ID: <000901d3af98$92440f20$b6cc2d60$@Chavan@edelweissfin.com> Hi Maxim, First of all, thanks for your reply. I have checked my backend system; it is working fine. If I hit the same URL without the nginx reverse proxy, it gives proper output. Example: for the following configuration location /bsg/ltp/live/$arg_name/$arg_name { proxy_pass http://localhost:8041; } If I hit the above URL without the nginx reverse proxy, it gives proper output. Thanks & Regards, Pritam Chavan. Pritam Chavan ----------------------------------------------------------------------------------------------------------------------------------------------- Disclaimer: This e-mail message may contain confidential, proprietary or legally privileged information. It should not be used by anyone who is not the original intended recipient. If you have erroneously received this message, please delete it immediately and notify the sender. http://www.edelweissfin.com/Portals/0/documents/miscellaneous/Disclaimer.pdf -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin Sent: 26 February 2018 18:35 To: nginx at nginx.org Subject: Re: your mail Hello! On Mon, Feb 26, 2018 at 12:35:15PM +0530, pritam chavan wrote: > Hi All, > > I am using open source NGINX as a reverse proxy. There are certain URLs > which have URL parameters. > > I am getting the following error while accessing this URL.
> > 2018/02/22 15:11:08 [error] 1606#0: *21 upstream sent invalid chunked > response while reading upstream, client: 10.109.1.4, server: > XXX.XXXXXXXX.com, request: "GET /bsg/scrips HTTP/1.1", upstream: " > http://127.0.0.1:8042/bsg/scrips", host: "XXX.XXXXXXXX.com:8030" > > 2018/02/22 15:11:47 [error] 1606#0: *24 upstream sent invalid chunked > response while reading upstream, client: 10.109.1.4, server: > XXX.XXXXXXXX.com, request: "GET /bsg/scrips HTTP/1.1", upstream: " > http://127.0.0.1:8042/bsg/scrips", host: "XXX.XXXXXXXX.com:8030" The message suggests the backend is broken and returns an invalid response with broken chunked transfer encoding. Check your backend. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From oscaretu at gmail.com Tue Feb 27 08:00:22 2018 From: oscaretu at gmail.com (oscaretu .) Date: Tue, 27 Feb 2018 09:00:22 +0100 Subject: Flush access log buffer In-Reply-To: <20180226183212.583bd1c8.lists@lazygranch.com> References: <20180222184012.645a3152.lists@lazygranch.com> <20180223185448.7b0bf9f8.lists@lazygranch.com> <20180226183212.583bd1c8.lists@lazygranch.com> Message-ID: Hello! If you have installed sysdig [https://www.sysdig.org/] (a kind of strace, but for the whole computer, not just a single process) you can run commands like: sysdig fd.name contains .gz and it will show information about who is accessing any file whose name contains ".gz".
root at veve0410:/home/oscar# *sysdig proc.name =nginx and fd.name contains access* 2828 08:45:18.248862970 1 nginx (28325) > write fd=75(/html/logs/nginx/produccion/portal/access.log) size=331 2829 08:45:18.248867711 1 nginx (28325) < write res=331 data=66.249.79.51 - - [27/Feb/2018:08:45:18 +0100] \"GET /diario/1991/04/10/internacio 15081 08:45:19.538002590 1 nginx (28325) > write fd=75(/html/logs/nginx/produccion/portal/access.log) size=124 15082 08:45:19.538007576 1 nginx (28325) < write res=124 data=104.199.186.40 - - [27/Feb/2018:08:45:19 +0100] \"GET /elpais/portada_america.htm 19211 08:45:19.718872876 1 nginx (28325) > write fd=75(/html/logs/nginx/produccion/portal/access.log) size=332 19212 08:45:19.718877388 1 nginx (28325) < write res=332 data=66.249.79.45 - - [27/Feb/2018:08:45:19 +0100] \"GET /diario/2005/08/23/catalunya/ 22775 08:45:20.215718840 1 nginx (28325) > write fd=75(/html/logs/nginx/produccion/portal/access.log) size=330 22776 08:45:20.215723447 1 nginx (28325) < write res=330 data=66.249.79.42 - - [27/Feb/2018:08:45:20 +0100] \"GET /diario/2009/05/23/babelia/12 ^Croot at veve0410:/home/oscar# sysdig can be a great help to watch what is happening in your Linux computer. Here are some other examples of what you can do with sysdig / csysdig: csysdig # 'top'-style view, from which you can enable tracing for each process sysdig -h # help sysdig -l sysdig -cl # list the available chisels. Look in /usr/share/sysdig/chisels/ for the ones shipped by default. sysdig -L # list the events that can be captured sysdig "proc.name=httpd and evt.type=open and fd.num<0 and evt.dir =<" # check for errors when opening files sysdig -c spy_ip 10.168.1.100 # watch the conversation taking place with that IP # run from a front-end server, it shows the HTTP requests # made by browsers and the server's responses sudo sysdig -c echo_fds "fd.name not contains /dev/" # show file accesses, with some additional filtering sysdig fd.name contains sitemap # watch accesses to sitemap files sysdig proc.name=httpd and proc.pid = 23216 sysdig proc.pid = 23216 sysdig proc.apid = 23216 # processes whose parent is the process with PID 23216 sysdig proc.name=httpd sysdig -w apache-durante-atasco-nanosleep-al-recibir-SIGHUP.scap proc.name=httpd # stored in /html/tmp on veve0223 sysdig -r apache-durante-atasco-nanosleep-al-recibir-SIGHUP.scap # replay the operations saved with -w sysdig -p"%evt.time %evt.arg.name" evt.type=open # show the timestamp sysdig -p"%evt.num %evt.arg.name" evt.type=open # show the event number; useful for later filtering a range by number sysdig -r apache-durante-atasco-nanosleep-al-recibir-SIGHUP.scap -p"%evt.num %evt.arg.name" evt.type=open # show the event number sysdig -r apache-durante-atasco-nanosleep-al-recibir-SIGHUP.scap "evt.num > 3362620" | less # ignore events before a given one sysdig "not evt.type in ('select', 'switch', 'clock_gettime', 'rt_sigprocmask', 'ioctl')" # this may not work on the servers, but it does on my laptop (more recent version) sysdig proc.name=searchd and evt.type=recvfrom # show the IPs and ports that connect to the Sphinx search daemon sysdig -c lsof "fd.type=ipv4" # equivalent to lsof -i, which lists all network connections, # although lsof also indicates whether each is TCP or UDP. To separate the TCP ones from the UDP ones, run the following two commands separately sysdig -c lsof "fd.l4proto=tcp" # TCP-only version of the previous command, equivalent to lsof -i tcp sysdig -c lsof "fd.l4proto=udp" # UDP-only version of the previous command, equivalent to lsof -i udp csysdig -v files # files being accessed, with screen refresh csysdig -v file_opens # files being accessed, as a cumulative list Kind regards, Oscar On Tue, Feb 27, 2018 at 3:32 AM, lists at lazygranch.com wrote: > On Fri, 23 Feb 2018 18:54:48 -0800 > "lists at lazygranch.com" wrote: > > > On Thu, 22 Feb 2018 18:40:12 -0800 > > "lists at lazygranch.com" wrote: > > > > > When I was using FreeBSD, the access log was real time. Since I went > > > to Centos, that doesn't seem to be the case. Is there some way to > > > flush the buffer? > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > I found a flush=x option on the command line. I set it for 1m for > > testing. Note that you need to specify a buffer size else nginx will > > choke. > > > > _______________________________________________ > > This flush=time option isn't working. I'm at a loss here. > > Here is some of a ls -l: > -rw-r----- 1 nginx adm 12936 Feb 27 02:17 access.log > -rw-r--r-- 1 nginx root 4760 Feb 24 03:06 access.log-20180224.gz > -rw-r----- 1 nginx adm 1738667 Feb 26 03:21 access.log-20180226 > > This is the ls -l on /var/log/nginx: > drwxr-xr-x. 2 root root 4096 Feb 27 02:11 nginx > > I'm not requesting a compressed log, so I assume centos is creating the > gunzip files. Usually the access.log file has content, but sometimes it > is empty and the log data is on the access.log-"date" file, which I > suspect is a roll over from access.log. That is maybe centos rolls it > but doesn't zip it right away.
> > > http { > log_format main '$status $remote_addr - $remote_user [$time_local] > "$request" ' > '$body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > access_log /var/log/nginx/access.log main buffer=32k flush=1m; > > > uname -a > Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 > x86_64 x86_64 x86_64 GNU/Linux > > nginx -V > nginx version: nginx/1.12.2 > built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) > built with OpenSSL 1.0.2k-fips 26 Jan 2017 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --modules-path=/usr/lib64/nginx/modules > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-http_auth_request_module > --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic > --with-http_geoip_module=dynamic --with-http_perl_module=dynamic > --add-dynamic-module=njs-1c50334fbea6/nginx --with-threads > --with-stream --with-stream_ssl_module --with-http_slice_module > --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 > --with-http_v2_module --with-cc-opt='-O2 -g -pipe -Wall > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong > --param=ssp-buffer-size=4 
-grecord-gcc-switches -m64 -mtune=generic' > --with-ld-opt=-Wl,-E > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Feb 27 08:19:10 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 27 Feb 2018 08:19:10 +0000 Subject: your mail In-Reply-To: <000901d3af98$92440f20$b6cc2d60$@Chavan@edelweissfin.com> References: <20180226130516.GA89840@mdounin.ru> <000901d3af98$92440f20$b6cc2d60$@Chavan@edelweissfin.com> Message-ID: <20180227081910.GF3280@daoine.org> On Tue, Feb 27, 2018 at 12:29:54PM +0530, Pritam Chavan wrote: Hi there, > I have checked my backend system It is working fine. If I hit same url > without nginx reverse proxy its giving proper output. > Example: For following configuration > > location /bsg/ltp/live/$arg_name/$arg_name { > proxy_pass http://localhost:8041; > } For info, you are almost certainly not actually using that location{} block. $variables are not expanded in "location" directives. It may well be that your :8041 backend is working fine; but the error message you showed was about your :8042 backend; and as far as nginx was concerned: upstream sent invalid chunked response > If I hit above url without nginx reverse proxy then its giving proper > output. How do you know? Can you use something like "tcpdump" to show the traffic from the backend when nginx reports a problem, and when things work correctly? Perhaps that will show what the difference is. In the logs you provided, the request was "GET /bsg/scrips"; does nginx always report a problem when you make that request? 
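To make the point concrete: because $variables are not expanded in "location" directives, the block in the original message matches only a URI containing the literal text "$arg_name". A sketch of a block that would actually match those requests (the /bsg/ltp/live prefix and port 8041 are taken from the thread; everything else is an assumption):

```nginx
# Prefix match: catches /bsg/ltp/live/<whatever>?name=...
# The query string (and hence $arg_name) is passed through
# untouched, so the backend can read the "name" argument itself.
location /bsg/ltp/live/ {
    proxy_pass http://localhost:8041;
}
```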
Good luck with it, f -- Francis Daly francis at daoine.org From bra at fsn.hu Tue Feb 27 10:21:23 2018 From: bra at fsn.hu (Nagy, Attila) Date: Tue, 27 Feb 2018 11:21:23 +0100 Subject: fsync()-in webdav PUT Message-ID: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> Hi, I would like to make sure that when a WebDAV (with ngx_http_dav) PUT returns, the file is reliably on storage. This needs an fsync() on the file. It would be easy to put that into the module, but it would block the whole nginx process. Now that nginx supports running threads, are there plans to convert at least DAV PUTs into their own thread (pool), to make it possible to do a non-blocking (from nginx's event loop PoV) fsync on the uploaded file? Thanks, From mdounin at mdounin.ru Tue Feb 27 13:24:27 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Feb 2018 16:24:27 +0300 Subject: fsync()-in webdav PUT In-Reply-To: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> Message-ID: <20180227132427.GF89840@mdounin.ru> Hello! On Tue, Feb 27, 2018 at 11:21:23AM +0100, Nagy, Attila wrote: > I would like to make sure that when a WebDAV (with ngx_http_dav) PUT returns, > the file is reliably on storage. This needs an fsync() on the file. > It would be easy to put that into the module, but it would block the > whole nginx process. > > Now that nginx supports running threads, are there plans to convert at > least DAV PUTs into their own thread (pool), to make it possible to do a > non-blocking (from nginx's event loop PoV) fsync on the uploaded file? No, there are no such plans. (Also, trying to do fsync() might not be the best idea even in threads. A reliable server might be a better option.)
-- Maxim Dounin http://mdounin.ru/ From bra at fsn.hu Wed Feb 28 09:30:08 2018 From: bra at fsn.hu (Nagy, Attila) Date: Wed, 28 Feb 2018 10:30:08 +0100 Subject: fsync()-in webdav PUT In-Reply-To: <20180227132427.GF89840@mdounin.ru> References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> Message-ID: On 02/27/2018 02:24 PM, Maxim Dounin wrote: > >> Now that nginx supports running threads, are there plans to convert at >> least DAV PUTs into their own thread (pool), to make it possible to do a >> non-blocking (from nginx's event loop PoV) fsync on the uploaded file? > No, there are no such plans. > > (Also, trying to do fsync() might not be the best idea even in > threads. A reliable server might be a better option.) > What do you mean by a reliable server? I want to make sure that when the HTTP operation returns, the file is on the disk, not just in a buffer waiting an indefinite amount of time to be flushed. This is what fsync is for. Why is doing this in a thread not a good idea? It wouldn't block nginx that way. From arozyev at nginx.com Wed Feb 28 10:04:52 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Wed, 28 Feb 2018 13:04:52 +0300 Subject: fsync()-in webdav PUT In-Reply-To: References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> Message-ID: <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com> While it's not clear why one may need to flush the data on each HTTP operation, I can imagine what performance degradation that may lead to. If it's not some kind of funny clustering among nodes, I wouldn't care much where the actual data is; RAM should still be much faster than disk I/O. br, Aziz.
> On 28 Feb 2018, at 12:30, Nagy, Attila wrote: > > On 02/27/2018 02:24 PM, Maxim Dounin wrote: >> >>> Now that nginx supports running threads, are there plans to convert at >>> least DAV PUTs into their own thread (pool), to make it possible to do a >>> non-blocking (from nginx's event loop PoV) fsync on the uploaded file? >> No, there are no such plans. >> >> (Also, trying to do fsync() might not be the best idea even in >> threads. A reliable server might be a better option.) >> > What do you mean by a reliable server? > I want to make sure that when the HTTP operation returns, the file is on the disk, not just in a buffer waiting an indefinite amount of time to be flushed. > This is what fsync is for. > > Why is doing this in a thread not a good idea? It wouldn't block nginx that way. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Feb 28 10:22:14 2018 From: nginx-forum at forum.nginx.org (Andrzej Walas) Date: Wed, 28 Feb 2018 05:22:14 -0500 Subject: Files still on disc after inactive time In-Reply-To: References: <20180215143853.GV24410@mdounin.ru> Message-ID: <3c47abc3cdb25b40080823af219fecf2.NginxMailingListEnglish@forum.nginx.org> Can you answer? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,278826#msg-278826 From rainer at ultra-secure.de Wed Feb 28 13:39:23 2018 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Wed, 28 Feb 2018 14:39:23 +0100 Subject: Client certificates and check for DN? Message-ID: <5fd95a205f8800ef60700349a718a279@ultra-secure.de> Hi, most examples, even for Apache, seem to assume that the client certificates are issued by your own CA. In this case, you just need to check whether your certificates were issued by this CA - and if they're not, it's game over.
However, I may have a case where the CA is a public CA and the client certificates need to be verified down to the correct O and OU. How do you do this with nginx? Something along these lines: https://www.tbs-certificates.co.uk/FAQ/en/183.html Best Regards Rainer From mdounin at mdounin.ru Wed Feb 28 13:41:47 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Feb 2018 16:41:47 +0300 Subject: Files still on disc after inactive time In-Reply-To: <3c47abc3cdb25b40080823af219fecf2.NginxMailingListEnglish@forum.nginx.org> References: <20180215143853.GV24410@mdounin.ru> <3c47abc3cdb25b40080823af219fecf2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180228134147.GN89840@mdounin.ru> Hello! On Wed, Feb 28, 2018 at 05:22:14AM -0500, Andrzej Walas wrote: > Can you answer? The last recommendation you were given was to find out who killed the nginx worker process, and why; see here: http://mailman.nginx.org/pipermail/nginx/2018-February/055648.html If you think nginx processes are no longer killed, please make sure it is indeed the case: that is, make sure you've stopped nginx (make sure "ps -ef | grep nginx" shows no nginx processes), started it again (record "ps -ef | grep nginx" output here), and you've started to see "ignore long locked" messages after it, with no critical / alert messages in between. Additionally, compare "ps -ef | grep nginx" output with what you got right after start - to make sure these are the same worker processes, and no processes were lost or restarted. You may want to share all intermediate results of these steps here for us to make sure you did it right. If any of these steps indicate that nginx processes are still killed, consider further investigating the reasons. If these steps demonstrate that "ignore long locked" messages appear without any crashes, consider testing various other things to further isolate the cause. In particular, if you use http2 - try disabling it to see if it helps.
If it does, we need debug logs of all requests to a particular resource from nginx start until the "ignore long locked" message. Further information on how to configure debug logging can be found here: http://nginx.org/en/docs/debugging_log.html Note though that enabling debug logging will result in a lot of logs, and obtaining the required logs with "inactive" set to 1d might not be trivial, as you'll have to store at least a whole day of debug logs until the "ignore long locked" message appears. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Feb 28 14:08:32 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Feb 2018 17:08:32 +0300 Subject: fsync()-in webdav PUT In-Reply-To: References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> Message-ID: <20180228140831.GO89840@mdounin.ru> Hello! On Wed, Feb 28, 2018 at 10:30:08AM +0100, Nagy, Attila wrote: > On 02/27/2018 02:24 PM, Maxim Dounin wrote: > > > >> Now that nginx supports running threads, are there plans to convert at > >> least DAV PUTs into their own thread (pool), to make it possible to do a > >> non-blocking (from nginx's event loop PoV) fsync on the uploaded file? > > No, there are no such plans. > > > > (Also, trying to do fsync() might not be the best idea even in > > threads. A reliable server might be a better option.) > > > What do you mean by a reliable server? > I want to make sure that when the HTTP operation returns, the file is on the > disk, not just in a buffer waiting an indefinite amount of time to > be flushed. > This is what fsync is for. The question here is - why do you want the file to be on disk, and not just in a buffer? Because you expect the server to die in a few seconds without flushing the file to disk? How probable is that, compared to the probability of the disk dying? A more reliable server can make this probability negligible, hence the suggestion.
(Also, another question is what "on the disk" means from a physical point of view. In many cases this in fact means "somewhere in the disk buffers", and a power outage can easily result in the file not being accessible even after fsync().) > Why is doing this in a thread not a good idea? It wouldn't block nginx > that way. Because even in threads, fsync() is likely to cause performance degradation. It might be a better idea to let the OS manage buffers instead. -- Maxim Dounin http://mdounin.ru/ From iippolitov at nginx.com Wed Feb 28 15:41:29 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Wed, 28 Feb 2018 18:41:29 +0300 Subject: Client certificates and check for DN? In-Reply-To: <5fd95a205f8800ef60700349a718a279@ultra-secure.de> References: <5fd95a205f8800ef60700349a718a279@ultra-secure.de> Message-ID: <4a8b7048-a393-32fd-5554-effaf2d4ba8e@nginx.com> Hello. I'm not sure what you really need, but it looks like you can get almost the same result using a combination of map{} blocks and conditionals. Something like this: map $ssl_client_s_dn $ou_matched { ~OU=whatever 1; default 0; } map $ssl_client_s_dn $cn_matched { ~CN=whatever 1; default 0; } map $ou_matched$cn_matched $unauthed { ~0 1; default 0; } server { .... ssl_trusted_certificate path/to/public/certs; ssl_verify_client on; if ($unauthed) {return 403;} } On 28.02.2018 16:39, rainer at ultra-secure.de wrote: > Hi, > > it seems most examples, even for apache, seem to assume that the > client certificates are issued by your own CA. > In this case, you just need to check if your certificates were issued > by this CA - and if they're not, it's game over. > > > However, I may have a case where the CA is a public CA and the client > certificates need to be verified down to the correct O and OU. > > How do you do this with nginx?
> Something along these lines: > > https://www.tbs-certificates.co.uk/FAQ/en/183.html > > > Best Regards > Rainer > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rainer at ultra-secure.de Wed Feb 28 16:04:06 2018 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Wed, 28 Feb 2018 17:04:06 +0100 Subject: Client certificates and check for DN? In-Reply-To: <4a8b7048-a393-32fd-5554-effaf2d4ba8e@nginx.com> References: <5fd95a205f8800ef60700349a718a279@ultra-secure.de> <4a8b7048-a393-32fd-5554-effaf2d4ba8e@nginx.com> Message-ID: <7fc78688f8824f3bbd7c64beb6dcdd20@ultra-secure.de> On 2018-02-28 16:41, Igor A. Ippolitov wrote: > Hello. > > I'm not sure what you really need, but it looks like you can > get almost the same result using a combination of map{} blocks and > conditionals. > > Something like this: > > map $ssl_client_s_dn $ou_matched { > ~OU=whatever 1; > default 0; > } > map $ssl_client_s_dn $cn_matched { > ~CN=whatever 1; > default 0; > } > map $ou_matched$cn_matched $unauthed { > ~0 1; > default 0; > } > server { > .... > ssl_trusted_certificate path/to/public/certs; > ssl_verify_client on; > if ($unauthed) {return 403;} > } OK, thanks a lot. I'll look into it. Currently, the exact details are still a bit murky. Customer was very vague... I'll know more Friday next week. Regards, Rainer From luciano at vespaperitivo.it Wed Feb 28 18:03:22 2018 From: luciano at vespaperitivo.it (Luciano Mannucci) Date: Wed, 28 Feb 2018 19:03:22 +0100 Subject: Nginx Directory Autoindex Message-ID: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> Hello all, I have a directory served by nginx via autoindex (that works perfectly, as documented :). I need to show the content in reverse order (ls -r); is there a reasonably simple method? Thanks in advance, Luciano. -- /"\ /Via A.
Salaino, 7 - 20144 Milano (Italy) \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250 X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG / \ AND POSTINGS / WWW: http://www.lesassaie.IT/ From valery+nginxen at grid.net.ru Wed Feb 28 18:24:18 2018 From: valery+nginxen at grid.net.ru (Valery Kholodkov) Date: Wed, 28 Feb 2018 19:24:18 +0100 Subject: fsync()-in webdav PUT In-Reply-To: <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com> References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com> Message-ID: It's completely clear why someone would need to flush a file's data and metadata upon a WebDAV PUT operation. That is because many architectures expect a PUT operation to be completely settled before a reply is returned. Without fsyncing a file's data and metadata, a client will receive a positive reply before the data has reached storage, leaving a non-zero probability that the states of the two systems involved in a web transaction end up inconsistent. Further, the exact moment when the data of a specific file reaches storage depends on numerous factors, for example I/O contention. Consequently, the exact moment when the data of a file being uploaded reaches storage can only be determined by executing fsync. val On 28-02-18 11:04, Aziz Rozyev wrote: > While it's not clear why one may need to flush the data on each HTTP operation, > I can imagine what performance degradation that may lead to. > > If it's not some kind of funny clustering among nodes, I wouldn't care much > where the actual data is; RAM should still be much faster than disk I/O. > > > br, > Aziz.
> On 28 Feb 2018, at 12:30, Nagy, Attila wrote: >> >> On 02/27/2018 02:24 PM, Maxim Dounin wrote: >>> >>>> Now that nginx supports running threads, are there plans to convert at >>>> least DAV PUTs into their own thread (pool), to make it possible to do a >>>> non-blocking (from nginx's event loop PoV) fsync on the uploaded file? >>> No, there are no such plans. >>> >>> (Also, trying to do fsync() might not be the best idea even in >>> threads. A reliable server might be a better option.) >>> >> What do you mean by a reliable server? >> I want to make sure that when the HTTP operation returns, the file is on the disk, not just in a buffer waiting an indefinite amount of time to be flushed. >> This is what fsync is for. >> >> Why is doing this in a thread not a good idea? It wouldn't block nginx that way. From valery+nginxen at grid.net.ru Wed Feb 28 18:53:57 2018 From: valery+nginxen at grid.net.ru (Valery Kholodkov) Date: Wed, 28 Feb 2018 19:53:57 +0100 Subject: fsync()-in webdav PUT In-Reply-To: <20180228140831.GO89840@mdounin.ru> References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> Message-ID: <7e5b677c-692c-82d4-05b7-cf7391c1f4a7@grid.net.ru> On 28-02-18 15:08, Maxim Dounin wrote: >> What do you mean by a reliable server? >> I want to make sure when the HTTP operation returns, the file is on the >> disk, not just in a buffer waiting for an indefinite amount of time to >> be flushed. >> This is what fsync is for. > > The question here is - why you want the file to be on disk, and > not just in a buffer? Because you expect the server to die in a > few seconds without flushing the file to disk? How probable it > is, compared to the probability of the disk to die? A more > reliable server can make this probability negligible, hence the > suggestion. I think the point here is that the lack of fsync leaves some questions unanswered. Adding fsync would simply dot the i's.
> (Also, another question is what "on the disk" means from a physical
> point of view. In many cases this in fact means "somewhere in the
> disk buffers", and a power outage can easily result in the file
> being inaccessible even after fsync().)
>
>> Why is doing this in a thread not a good idea? It wouldn't block nginx
>> that way.
>
> Because even in threads, fsync() is likely to cause performance
> degradation. It might be a better idea to let the OS manage
> buffers instead.

fsync does not cause performance degradation. fsync simply instructs the OS to ensure the consistency of a file. What causes performance degradation is the expenditure of resources necessary to ensure that consistency.

val

From gk at leniwiec.biz Wed Feb 28 20:08:14 2018
From: gk at leniwiec.biz (Grzegorz Kulewski)
Date: Wed, 28 Feb 2018 21:08:14 +0100
Subject: ngx_stream_auth_request?
Message-ID: <7ef360ff-30f5-7f04-e4f2-b01bf5c78800@leniwiec.biz>

Hello,

Could you add something similar to the HTTP auth_request module for stream?

Basically I want to allow or deny access to a TCP stream proxy based on the result of an HTTP request. I want to pass to this request the source and destination IP addresses and ports, and possibly some more information (results from TLS negotiation/preread and similar).

--
Grzegorz Kulewski

From nginx-forum at forum.nginx.org Wed Feb 28 20:28:56 2018
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Wed, 28 Feb 2018 15:28:56 -0500
Subject: fsync()-in webdav PUT
In-Reply-To: <7e5b677c-692c-82d4-05b7-cf7391c1f4a7@grid.net.ru>
References: <7e5b677c-692c-82d4-05b7-cf7391c1f4a7@grid.net.ru>
Message-ID: <19400fe39acb5f66e34b328571c819c0.NginxMailingListEnglish@forum.nginx.org>

Not waiting for fsync to complete makes calling fsync pointless, and waiting for fsync is blocking, thread-based or otherwise. The only midway solution is to implement fsync out of band, i.e. a non-blocking (background) fsync call in combination with an OS resource lock.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278788,278847#msg-278847

From arozyev at nginx.com Wed Feb 28 21:41:47 2018
From: arozyev at nginx.com (Aziz Rozyev)
Date: Thu, 1 Mar 2018 00:41:47 +0300
Subject: fsync()-in webdav PUT
In-Reply-To:
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com>
Message-ID: <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com>

Valery,

could you please explain how you came to the conclusion that "fsync simply instructs the OS to ensure consistency of a file"? As far as I understand, simply instructing the OS comes at no cost, right?

> Without fsyncing the file's data and metadata a client will receive a positive reply before the data has reached the storage, thus leaving a non-zero probability that the states of the two systems involved in a web transaction end up inconsistent.

I understand why one may need consistency, but doing so with fsync is nonsense. Here is what the man page says in that regard:

    fsync() transfers ("flushes") all modified in-core data of (i.e., modified buffer cache pages for) the file referred to by the file descriptor fd to the disk device (or other permanent storage device) so that all changed information can be retrieved even after the system crashed or was rebooted. This includes writing through or flushing a disk cache if present. The call blocks until the device reports that the transfer has completed. It also flushes metadata information associated with the file (see stat(2)).

br,
Aziz.

> On 28 Feb 2018, at 21:24, Valery Kholodkov wrote:
>
> It's completely clear why someone would need to flush a file's data and metadata upon a WebDAV PUT operation. That is because many architectures expect a PUT operation to be completely settled before a reply is returned.
>
> Without fsyncing the file's data and metadata a client will receive a positive reply before the data has reached the storage, thus leaving a non-zero probability that the states of the two systems involved in a web transaction end up inconsistent.
>
> Further, the exact moment when the data of a specific file reaches the storage depends on numerous factors, for example I/O contention. Consequently, the exact moment when the data of a file being uploaded reaches the storage can only be determined by executing fsync.
>
> val
>
> On 28-02-18 11:04, Aziz Rozyev wrote:
>> While it's not clear why one may need to flush the data on each HTTP operation,
>> I can imagine what performance degradation that may lead to.
>> If it's not some kind of funny clustering among nodes, I wouldn't care much
>> where the actual data is; RAM should still be much faster than disk I/O.
>> br,
>> Aziz.
>>> On 28 Feb 2018, at 12:30, Nagy, Attila wrote:
>>>
>>> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>>>>
>>>>> Now that nginx supports running threads, are there plans to convert at
>>>>> least DAV PUTs into their own thread (pool), to make it possible to do a
>>>>> non-blocking (from nginx's event-loop PoV) fsync on the uploaded file?
>>>> No, there are no such plans.
>>>>
>>>> (Also, trying to do fsync() might not be the best idea even in
>>>> threads. A reliable server might be a better option.)
>>>>
>>> What do you mean by a reliable server?
>>> I want to make sure that when the HTTP operation returns, the file is on the disk, not just in a buffer waiting an indefinite amount of time to be flushed.
>>> This is what fsync is for.
>>>
>>> Why is doing this in a thread not a good idea? It wouldn't block nginx that way.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From arozyev at nginx.com Wed Feb 28 22:07:22 2018
From: arozyev at nginx.com (Aziz Rozyev)
Date: Thu, 1 Mar 2018 01:07:22 +0300
Subject: fsync()-in webdav PUT
In-Reply-To: <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com> <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com>
Message-ID:

Here is a synthetic test on a VM - not perfect, but representative:

[root at nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=30960
30960+0 records in
30960+0 records out
253624320 bytes (254 MB) copied, 0.834861 s, 304 MB/s

[root at nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=30960 conv=fsync
30960+0 records in
30960+0 records out
253624320 bytes (254 MB) copied, 0.854208 s, 297 MB/s

[root at nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=61960
61960+0 records in
61960+0 records out
507576320 bytes (508 MB) copied, 1.71833 s, 295 MB/s

[root at nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=61960 conv=fsync
61960+0 records in
61960+0 records out
507576320 bytes (508 MB) copied, 1.74482 s, 291 MB/s

br,
Aziz.

> On 1 Mar 2018, at 00:41, Aziz Rozyev wrote:
>
> Valery,
>
> could you please explain how you came to the conclusion that "fsync simply instructs the OS to ensure consistency of a file"?
>
> As far as I understand, simply instructing the OS comes at no cost, right?
>
>> Without fsyncing the file's data and metadata a client will receive a positive reply before the data has reached the storage, thus leaving a non-zero probability that the states of the two systems involved in a web transaction end up inconsistent.
>
> I understand why one may need consistency, but doing so with fsync is nonsense.
>
> Here is what the man page says in that regard:
>
> fsync() transfers ("flushes") all modified in-core data of (i.e., modified buffer cache pages for) the file referred to by the file descriptor fd to the disk device (or other permanent storage device) so that all changed information can be retrieved even after the system crashed or was rebooted. This includes writing through or flushing a disk cache if present. The call blocks until the device reports that the transfer has completed. It also flushes metadata information associated with the file (see stat(2)).
>
> br,
> Aziz.
>
>> On 28 Feb 2018, at 21:24, Valery Kholodkov wrote:
>>
>> It's completely clear why someone would need to flush a file's data and metadata upon a WebDAV PUT operation. That is because many architectures expect a PUT operation to be completely settled before a reply is returned.
>>
>> Without fsyncing the file's data and metadata a client will receive a positive reply before the data has reached the storage, thus leaving a non-zero probability that the states of the two systems involved in a web transaction end up inconsistent.
>>
>> Further, the exact moment when the data of a specific file reaches the storage depends on numerous factors, for example I/O contention. Consequently, the exact moment when the data of a file being uploaded reaches the storage can only be determined by executing fsync.
>>
>> val
>>
>> On 28-02-18 11:04, Aziz Rozyev wrote:
>>> While it's not clear why one may need to flush the data on each HTTP operation,
>>> I can imagine what performance degradation that may lead to.
>>> If it's not some kind of funny clustering among nodes, I wouldn't care much
>>> where the actual data is; RAM should still be much faster than disk I/O.
>>> br,
>>> Aziz.
>>>> On 28 Feb 2018, at 12:30, Nagy, Attila wrote:
>>>>
>>>> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>>>>>
>>>>>> Now that nginx supports running threads, are there plans to convert at
>>>>>> least DAV PUTs into their own thread (pool), to make it possible to do a
>>>>>> non-blocking (from nginx's event-loop PoV) fsync on the uploaded file?
>>>>> No, there are no such plans.
>>>>>
>>>>> (Also, trying to do fsync() might not be the best idea even in
>>>>> threads. A reliable server might be a better option.)
>>>>>
>>>> What do you mean by a reliable server?
>>>> I want to make sure that when the HTTP operation returns, the file is on the disk, not just in a buffer waiting an indefinite amount of time to be flushed.
>>>> This is what fsync is for.
>>>>
>>>> Why is doing this in a thread not a good idea? It wouldn't block nginx that way.
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx

From peter_booth at me.com Wed Feb 28 22:33:54 2018
From: peter_booth at me.com (Peter Booth)
Date: Wed, 28 Feb 2018 17:33:54 -0500
Subject: fsync()-in webdav PUT
In-Reply-To: <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com> <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com>
Message-ID: <3EA526B9-EA09-4304-86BD-1A341CBBD5B8@me.com>

This discussion is interesting, educational, and thought-provoking. Web architects only learn "the right way" by first doing things "the wrong way" and seeing what happens. Attila and Valery asked questions that sound logical, and I think there's value in exploring what would happen if their suggestions were implemented.

First caveat: nginx is deployed in all manner of different scenarios on different hardware and operating systems. Physical servers and VMs behave very differently, as do local and remote storage.
When an application writes to NFS-mounted storage there's no guarantee that even an fsync will correctly enforce a write barrier.

Still, if we consider real numbers: on current-model quad-socket hosts, nginx can support well over 1 million requests per second (see the TechEmpower benchmarks). On the same hardware, a web app that writes to a PostgreSQL DB can do at least a few thousand writes per second. A SATA drive might support 300 write IOPS, whilst an SSD will support 100x that.

What this means is that doing fully synchronous writes can reduce your potential throughput by a factor of 100 or more. So it's not a great way to ensure consistency. But there are cheaper ways to achieve the same consistency and reliability characteristics:

If you are using Linux then your reads and writes will occur through the page cache - so the actual disk itself really doesn't matter (whilst your host is up).

If you want to protect against loss of a physical disk then use RAID.

If you want to protect against a random power failure then use drives with battery-backed caches, so writes will get persisted when a server restarts after a power failure.

If you want to protect against a crazy person hitting your server with an axe then write to two servers ...

But the bottom line is separation of concerns. Nginx should not use fsync because it isn't nginx's business.

My two cents,

Peter

> On Feb 28, 2018, at 4:41 PM, Aziz Rozyev wrote:
>
> Hello!
>
> On Wed, Feb 28, 2018 at 10:30:08AM +0100, Nagy, Attila wrote:
>
>> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>>>
>>>> Now that nginx supports running threads, are there plans to convert at
>>>> least DAV PUTs into their own thread (pool), to make it possible to do a
>>>> non-blocking (from nginx's event-loop PoV) fsync on the uploaded file?
>>> No, there are no such plans.
>>>
>>> (Also, trying to do fsync() might not be the best idea even in
>>> threads. A reliable server might be a better option.)
>>>
>> What do you mean by a reliable server?
>> I want to make sure that when the HTTP operation returns, the file is on the
>> disk, not just in a buffer waiting an indefinite amount of time to be flushed.
>> This is what fsync is for.
>
> The question here is - why do you want the file to be on disk, and
> not just in a buffer? Because you expect the server to die in a
> few seconds without flushing the file to disk? How probable is that,
> compared to the probability of the disk dying? A more
> reliable server can make this probability negligible, hence the
> suggestion.
>
> (Also, another question is what "on the disk" means from a physical
> point of view. In many cases this in fact means "somewhere in the
> disk buffers", and a power outage can easily result in the file
> being inaccessible even after fsync().)
>
>> Why is doing this in a thread not a good idea? It wouldn't block nginx
>> that way.
>
> Because even in threads, fsync() is likely to cause performance
> degradation. It might be a better idea to let the OS manage
> buffers instead.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From miguelmclara at gmail.com Wed Feb 28 23:30:35 2018
From: miguelmclara at gmail.com (Miguel C)
Date: Wed, 28 Feb 2018 23:30:35 +0000
Subject: Nginx Directory Autoindex
In-Reply-To: <3zs3Mb0pjZz3jYkR@baobab.bilink.it>
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it>
Message-ID:

I'm unsure if that's possible without a 3rd-party module... I've used fancyindex before when I wanted sorting.

On Wednesday, February 28, 2018, Luciano Mannucci wrote:

> Hello all,
>
> I have a directory served by nginx via autoindex (that works perfectly
> as documented :). I need to show the content in reverse order (ls -r);
> is there any rather simple method?
>
> Thanks in advance,
>
> Luciano.
> --
> /"\ /Via A.
Salaino, 7 - 20144 Milano (Italy)
> \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
> X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG
> / \ AND POSTINGS / WWW: http://www.lesassaie.IT/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
Miguel Clara, IT Consulting

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
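[Editorial note appended to this archive chunk: the "Nginx Directory Autoindex" exchange above mentions the third-party ngx-fancyindex module as a way to get sorted listings. A minimal sketch of that approach follows; it assumes nginx was compiled with ngx-fancyindex, the paths are illustrative, and directive names should be checked against the module's README before use.]

```nginx
# Sketch only: requires the third-party ngx-fancyindex module.
# Stock autoindex has no sort-order option, which is what prompted
# the question about "ls -r"-style reverse ordering.
server {
    listen 80;
    root /srv/files;        # illustrative path

    location / {
        fancyindex on;                      # takes the place of "autoindex on"
        fancyindex_default_sort name_desc;  # reverse name order by default
    }
}
```

Clients can still click the column headers to re-sort; the directive only sets the initial order.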
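[Editorial note appended to this archive chunk: the pattern debated in the "fsync()-in webdav PUT" thread above - write the upload, fsync(), and only then reply - can be sketched in a few lines of Python. This is an illustrative aside, not anyone's posted code; the names durable_put and buffered_put are made up, and the measured gap depends entirely on the device and any write-back cache in front of it.]

```python
import os
import tempfile
import time

def durable_put(path, data):
    """Write data and push it through the OS buffers to stable storage,
    mirroring what an fsync()-ing WebDAV PUT would do before replying."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # blocks until the device reports the transfer completed
    finally:
        os.close(fd)

def buffered_put(path, data):
    """Write data but let the OS flush it whenever it likes."""
    with open(path, "wb") as f:
        f.write(data)

payload = b"x" * (1 << 20)      # 1 MiB of dummy upload data
tmp = tempfile.mkdtemp()

t0 = time.perf_counter()
buffered_put(os.path.join(tmp, "buffered"), payload)
t_buf = time.perf_counter() - t0

t0 = time.perf_counter()
durable_put(os.path.join(tmp, "durable"), payload)
t_sync = time.perf_counter() - t0

# The fsync'd write typically costs more wall-clock time, which is the
# throughput penalty discussed in the thread; how much more varies by device.
print(f"buffered: {t_buf:.6f}s  fsync'd: {t_sync:.6f}s")
```

Both sides of the thread are visible here: the reply only goes out after the data is past the OS buffers (Valery's point), but the caller is stalled for the duration of the device flush (Maxim's and Aziz's point).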